Samira Pouyanfar

School of Computing and Information Sciences


Lecture Information:
  • June 6, 2019
  • 9:00 AM
  • CASE 349

Speaker Bio

Samira Pouyanfar is a Ph.D. candidate at the School of Computing and Information Sciences (SCIS), Florida International University (FIU), under the supervision of Professor Shu-Ching Chen. She received her Master’s degree in Artificial Intelligence from Sharif University of Technology (SUT), Iran, in 2012, and her Bachelor’s degree in Computer Engineering from the University of Isfahan, Iran, in 2008. Her research interests include data science, machine learning, deep learning, and multimedia big data. She has published over 25 research papers in top-tier international journals and conference proceedings. During her Ph.D. studies, Samira received a number of awards, including the Dissertation Year Fellowship (DYF), the Overall Outstanding Graduate Student award at SCIS, an FIU SGA graduate scholarship, an FIU GSAW award, and several student travel grants. She also completed an internship at the Volkswagen Electronics Research Lab as a machine learning researcher. After graduation, she will join the COSINE Data & Intelligence team at Microsoft as a Data Scientist II.

Description

With the proliferation of online services and mobile technologies, the world has stepped into a multimedia big data era, in which new opportunities and challenges arise from the high diversity of multimedia data together with the enormous volume of social data. Multimedia data, consisting of audio, text, images, and video, has grown tremendously. Given this growth, the central question is how to analyze such high-volume, high-variety data efficiently and effectively. A vast amount of research in the multimedia area has targeted different aspects of big data analytics, such as the capture, storage, indexing, mining, and retrieval of multimedia big data. However, little of this research provides a comprehensive framework for multimedia big data analytics and management.

To address the major challenges in this area, a new framework is proposed based on deep neural networks for multimedia semantic concept detection, with a focus on spatio-temporal information analysis and rare event detection. The proposed framework discovers patterns and knowledge in multimedia data using both static deep data representations and temporal semantics, and it is specifically designed to handle data with skewed distributions. The framework includes the following components:
  • a synthetic data generation component based on simulation and adversarial networks for data augmentation and deep learning training;
  • an automatic sampling model to overcome the imbalanced-data issue in multimedia data;
  • a deep representation learning model that leverages novel deep learning techniques to generate the most discriminative static features from multimedia data;
  • an automatic hyper-parameter learning component for faster training and convergence of the learning models;
  • a spatio-temporal deep learning model to analyze dynamic features from multimedia data; and
  • a multimodal deep learning fusion model to integrate different data modalities.
The whole framework has been evaluated on various large-scale multimedia datasets, including a newly collected disaster-events video dataset and other public datasets.
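
The abstract stays at a high level, so the following PyTorch sketch is a rough illustration only: one hypothetical way the sampling, spatio-temporal, and fusion components might fit together, using an LSTM over pre-extracted per-frame features for temporal dynamics, an average-pooled static branch, late fusion with a second modality (here, audio embeddings), and a class-weighted loss as a simple stand-in for imbalance handling. Every module name, dimension, and the two-modality setup below are assumptions made for illustration, not details from the talk.

import torch
import torch.nn as nn

class MultimodalFusionSketch(nn.Module):
    """Hypothetical sketch: fuse static (pooled per-frame) features with
    temporal (LSTM) dynamics plus an audio modality, then classify
    semantic concepts. Dimensions are illustrative, not from the talk."""

    def __init__(self, frame_feat_dim=2048, audio_feat_dim=128,
                 hidden_dim=256, num_concepts=10):
        super().__init__()
        # Temporal branch: model frame-to-frame dynamics with an LSTM.
        self.temporal = nn.LSTM(frame_feat_dim, hidden_dim, batch_first=True)
        # Static branch: project average-pooled per-frame features.
        self.static = nn.Linear(frame_feat_dim, hidden_dim)
        # Second modality branch (e.g., audio embeddings).
        self.audio = nn.Linear(audio_feat_dim, hidden_dim)
        # Late fusion: concatenate the three branch outputs, then classify.
        self.classifier = nn.Sequential(
            nn.Linear(hidden_dim * 3, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_concepts),
        )

    def forward(self, frame_feats, audio_feats):
        # frame_feats: (batch, time, frame_feat_dim)
        # audio_feats: (batch, audio_feat_dim)
        _, (h_n, _) = self.temporal(frame_feats)
        temporal_repr = h_n[-1]                              # last hidden state
        static_repr = self.static(frame_feats.mean(dim=1))   # pooled frames
        audio_repr = self.audio(audio_feats)
        fused = torch.cat([temporal_repr, static_repr, audio_repr], dim=1)
        return self.classifier(fused)

if __name__ == "__main__":
    model = MultimodalFusionSketch()
    frames = torch.randn(4, 16, 2048)   # 4 clips, 16 frames each
    audio = torch.randn(4, 128)
    logits = model(frames, audio)
    # A class-weighted loss is one common way to handle skewed concept
    # distributions; the weights here are arbitrary placeholders.
    weights = torch.tensor([10.0] + [1.0] * 9)  # upweight a rare concept
    loss = nn.CrossEntropyLoss(weight=weights)(logits, torch.randint(0, 10, (4,)))
    print(logits.shape, loss.item())

This late-fusion design is only one option; the talk's framework may instead fuse modalities earlier or learn the fusion weights, which the abstract does not specify.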