Mahshad Shariatnasab

Florida International University

Lecture Information:
  • October 20, 2023
  • 2:00 PM
  • CASE 349 & Zoom

Speaker Bio

Mahshad Shariatnasab is a Ph.D. candidate at the Knight Foundation School of Computing and Information Sciences at Florida International University, where she is an active member of the Π-CoLab research group supervised by Dr. Farhad Shirani. She completed her B.Sc. at Shahid Beheshti University in 2015 and, after obtaining her M.Sc. from Khajeh Nasir Toosi University of Technology in Iran in 2017, gained practical experience as a project advisor at the Niroo Research Institute. In 2019, Mahshad began her Ph.D., focusing her research on information theory, graph matching techniques, and data privacy. Her work in these areas has led to several publications in respected academic journals and conference proceedings. Expanding her practical skill set, she served as a software engineering intern at a startup in 2022, where she was instrumental in debugging ventilator software during the COVID-19 pandemic. This role not only enriched her industry experience but also demonstrated her ability to apply academic concepts in a critical real-world setting.


Abstract

As tracking technologies grow more sophisticated, there is a crucial need to quantify and understand the associated privacy risks. While users expect their online identities and activities to remain private, they are often monitored and tracked. This proposal presents an information-theoretic framework for quantifying and evaluating network privacy. The framework considers the problem from the perspectives of both the attacker and the defender. In particular, user anonymity under active and passive attacks is considered, and fundamental tradeoffs between utility and privacy are explored. Practical attack and defense strategies with theoretical guarantees are provided under a wide range of statistical scenarios. For passive attacks, network deanonymization scenarios involving graph matching techniques are studied; these techniques leverage graph correlations and other available side information for successful deanonymization. For active fingerprinting deanonymization attacks, privacy limits are studied under a general class of stochastic models, in terms of the number of queries and the time required for deanonymization. The associated defensive mechanisms for dataset obfuscation are also considered, focusing on rank-preservation utility and anonymity protection under active fingerprinting attacks. These obfuscation mechanisms are applicable to the training of search engines and recommendation systems, as well as to social network analysis.
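To give a flavor of the passive graph-matching attacks mentioned above, the toy sketch below matches nodes of a "public" graph to a relabeled ("anonymized") copy purely from graph structure, using simple color refinement (Weisfeiler-Leman-style signatures). This is an illustrative heuristic of my own construction, not the speaker's algorithms; real attacks must additionally handle noisy, correlated graphs and side information, whereas this noiseless example only shows the core idea that structure alone can undo relabeling.

```python
def adjacency(n, edges):
    """Undirected adjacency sets for nodes 0..n-1."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    return adj

def refine_colors(adj, rounds=3):
    """1-dimensional color refinement: start from node degrees, then
    repeatedly combine each node's color with the sorted colors of
    its neighbors to build a structural signature."""
    colors = {v: len(nbrs) for v, nbrs in adj.items()}
    for _ in range(rounds):
        colors = {
            v: hash((colors[v], tuple(sorted(colors[u] for u in adj[v]))))
            for v in adj
        }
    return colors

def match_by_signature(adj_public, adj_anon):
    """Map each public node to the anonymized node carrying the same
    structural signature, whenever that signature is unique."""
    c_pub = refine_colors(adj_public)
    c_anon = refine_colors(adj_anon)
    by_color = {}
    for v, c in c_anon.items():
        by_color.setdefault(c, []).append(v)
    return {v: by_color[c][0]
            for v, c in c_pub.items()
            if len(by_color.get(c, [])) == 1}

# Toy example: a small asymmetric graph and a relabeled copy.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (1, 3)]
perm = {0: 3, 1: 5, 2: 0, 3: 2, 4: 4, 5: 1}   # hidden relabeling
anon_edges = [(perm[u], perm[v]) for u, v in edges]

recovered = match_by_signature(adjacency(6, edges), adjacency(6, anon_edges))
print(recovered == perm)   # True: the full relabeling is recovered
```

Because the example graph has no nontrivial symmetries, every node acquires a unique signature and the hidden relabeling is recovered exactly; on symmetric or noisy graphs the matching would be partial, which is precisely where the correlation-exploiting techniques discussed in the talk come in.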