Sheila Alemany

Ph.D. Candidate


Lecture Information:
  • October 25, 2022
  • 1:00 PM
  • Zoom

Speaker Bio

Sheila Alemany is a Ph.D. candidate in the Knight Foundation School of Computing and Information Sciences at FIU under the supervision of Dr. Niki Pissinou. She is a 2021 NSF Graduate Research Fellow and a 2019 National GEM Consortium Full Fellow with MIT Lincoln Laboratory.

She graduated magna cum laude from Florida International University in 2019 with a B.Sc. in mathematics and a B.Sc. in computer science, and she will receive her M.Sc. in Telecommunications and Networking in the Fall of 2022. During her undergraduate studies, Sheila co-authored five peer-reviewed papers in IEEE and ACM conference proceedings on data behavior modeling and trend prediction for mobile wireless sensor networks, one of which received a best paper award. She was also a finalist for the 2018 Computing Research Association (CRA) Outstanding Undergraduate Researcher award, and her work was funded through an Army Educational Outreach Program award and an NSF REU Site award. Before beginning her graduate studies, she completed two summer research internships at MIT Lincoln Laboratory.

Sheila’s current research focuses on adversarial machine learning, and she has published two peer-reviewed articles in the IEEE Big Data and AAAI SafeAI proceedings. Since 2020, she has also served as a graduate research mentor for the NSF-sponsored Research Experiences for Undergraduates (REU) and Research Experiences for Teachers (RET) programs.

Description

Machine learning (ML) models have achieved significant performance because of their ability to generalize information from training data. This generalization power stems from creating unique abstractions that capture patterns and establish relationships between features in the data. At the same time, this generalization makes ML models vulnerable to carefully crafted, imperceptible noise created by adversaries. The vulnerability exists because training an ML model on datasets that are incomplete, instantaneous representations of information produces a probability distribution with low-confidence regions, reflecting the limits of the underlying algorithm’s ability to extract correct output values. Adversaries can exploit these low-confidence areas in the machine learning pipeline by making minimal input changes that skew the model’s predictions, leading to wrong or inaccurate results. An adversary’s malicious exploitation of these areas of uncertainty in a trained model is similar to how propaganda can sway a person’s opinion when the person is least familiar with the propaganda’s subject matter. Larger datasets with more features produce more complex relationships and increase the likelihood of low-confidence regions in the generalized model, which can enable more effective attacks.
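As a concrete illustration of how such minimal, gradient-guided perturbations are crafted, consider the fast gradient sign method (FGSM) introduced by Goodfellow et al.; the sketch below is a generic PyTorch rendering of that standard attack, not the speaker's own method, and `model`, `x`, and `y` are hypothetical placeholders:

    # Illustrative FGSM sketch: perturb an input by a small step in the
    # direction that most increases the loss. `model` is assumed to be a
    # classifier mapping inputs to logits.
    import torch
    import torch.nn as nn

    def fgsm_attack(model, x, y, epsilon=0.03):
        """Return an adversarial copy of x, perturbed by epsilon."""
        x = x.clone().detach().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x), y)
        loss.backward()
        # A tiny step along the gradient sign is often enough to push a
        # low-confidence prediction across the decision boundary.
        x_adv = x + epsilon * x.grad.sign()
        return x_adv.clamp(0.0, 1.0).detach()  # keep inputs in valid range

The key point the sketch makes is that the attack needs only the local gradient direction: wherever the model's confidence surface is flat or uncertain, a small epsilon suffices.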

This research takes one of the first steps toward identifying, exploring, and providing novel ML modeling techniques and architectures that restrict an adversary’s ability to optimize ML attacks in the presence of model uncertainty, without being computationally intensive. In particular, the proposed research aims to restrict an adversary’s ability to optimize attacks through the development and experimental demonstration of methods that: (1) reduce the areas of uncertainty created by the incomplete, instantaneous representations of information in the training data; (2) increase the non-linearity of the abstraction surface so that adversaries require more iterations to converge to efficient local minima or maxima for their adversarial examples; and (3) suppress the impact of minor adversarial perturbations on the feature characteristics that adversaries exploit. Our preliminary results show that by applying varying data transformations or model parameters, we can restrict the directions an adversary can exploit when optimizing their attacks, increasing resilience to adversarial examples.
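To give a flavor of the defense direction described above, the minimal sketch below applies a randomly varying input transformation before each prediction, so the gradient directions available to an attacker change from query to query. This is a generic illustration under assumed names (`random_transform`, `robust_predict`), not the specific technique presented in the talk:

    # Sketch of a randomized input-transformation defense: varying the
    # transformation per query restricts the directions an attacker can
    # reliably exploit when optimizing adversarial examples.
    import random
    import torch

    def random_transform(x):
        """Apply one of several mild, randomly chosen transformations."""
        choice = random.choice(["shift", "scale", "noise"])
        if choice == "shift":
            return torch.roll(x, shifts=random.randint(-2, 2), dims=-1)
        if choice == "scale":
            return x * (1.0 + 0.05 * (torch.rand(1).item() - 0.5))
        return x + 0.01 * torch.randn_like(x)  # small Gaussian noise

    def robust_predict(model, x, n_views=8):
        """Average class probabilities over several randomized views."""
        with torch.no_grad():
            probs = torch.stack([
                model(random_transform(x)).softmax(dim=-1)
                for _ in range(n_views)
            ])
        return probs.mean(dim=0)

Because each query sees a slightly different input surface, a gradient estimated on one view is a noisier guide to the next, which is one simple way varying data transformations can raise the cost of attack optimization.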

Click here to join the Zoom meeting.

Meeting ID: 945 1793 2807
Passcode: 4FB3vy