Florida International University
Mozhgan Azimpourkivi is a Ph.D. candidate in the School of Computing and Information Science at Florida International University, co-advised by Bogdan Carbunar and Umut Topkara. She received an M.S. in Information Technology from the Sharif University of Technology, Iran, in 2011, and an M.S. in Computer Science from Florida International University in 2015. Mozhgan’s research interests include usable security, mobile authentication, image data protection, social media verification, and deep neural networks. Her current research focuses on improving security mechanisms to ensure that they are usable and effective in practice.
Mobile and wearable devices are popular platforms for accessing sensitive online services such as e-mail, social networks, and banking. Providing a secure yet practical user authentication experience on such devices is challenging: their small form factor, especially for wearables (e.g., smartwatches and smartglasses), complicates the input of commonly used text-based passwords, while the memorability of passwords already imposes a significant burden on users. Conversely, it is difficult for mobile device users to verify the authenticity of the online services they access or the identity of the users with whom they communicate.

In the first part of this dissertation, we introduce Pixie, a camera-based two-factor authentication solution for mobile and wearable devices. Pixie combines the advantages of graphical password and physical token-based authentication, yet does not require any expensive or uncommon hardware. Further, we propose ai.lock, a system that improves on Pixie’s secret image-based authentication approach. ai.lock reliably extracts biometrics-like credentials from images, thus enabling their secure storage and matching even against adversaries with physical access to the victim device.

In the second part of this dissertation, we introduce a novel approach to developing a usable and efficient CEAL (CrEdential Assurance Labeling) system that builds visual representations of identity credentials (e.g., public keys, hashes, IP addresses). We describe early progress using deep generative models trained to generate images that humans can easily distinguish from one another during verification.