Haiman Tian

Florida International University

Lecture Information:
  • June 22, 2018
  • 11:30 AM
  • ECS 349

Speaker Bio

Haiman Tian is a Ph.D. candidate in Computer Science at Florida International University (FIU), where she is currently working as a research assistant in the Distributed Multimedia Information Systems (DMIS) Lab under the supervision of Dr. Shu-Ching Chen. Her research interests include Multimedia Data Mining, Machine Learning, Multimodal Deep Learning, Data Science, Big Data, Multimedia Systems, and Image and Video Processing. She received the B.S. degree in Computer Science from Sun Yat-Sen University, Guangzhou, China, in 2009, and the M.S. degree in Computer Engineering from FIU in 2014. She is working on several multidisciplinary projects, including the Multimedia-Aided Disaster Information Integration System (MADIS) and the Florida Public Hurricane Loss Model (FPHLM).


Abstract

Advances in technology are now generating roughly a zettabyte of new data every two years. This huge amount of data has a powerful impact on many areas of science and engineering and creates enormous research opportunities, which call for the design and development of advanced approaches in data analytics. Given such demands, data science has become an emerging hot topic in both industry and academia, with applications ranging from basic business solutions, technological innovations, and multidisciplinary research to political decisions, urban planning, and policymaking.

Within the scope of this dissertation, a multimodal data analytics and fusion framework is proposed for data-driven knowledge discovery and cross-modality semantic concept detection. The proposed framework can uncover useful knowledge hidden in different formats of data and incorporates representation learning from data in multiple modalities, especially for disaster information management. First, a Feature Affinity-based Multiple Correspondence Analysis (FA-MCA) method is presented to analyze the correlations between low-level features from different feature sets, and an MCA-based Neural Network (MCA-NN) is proposed to capture high-level features from individual FA-MCA models and seamlessly integrate the semantic data representations for video concept detection. Next, a genetic algorithm-based approach is presented for deep neural network selection. Furthermore, the improved genetic algorithm is integrated with deep neural networks to generate populations that produce optimal deep representation learning models. Then, a multimodal deep representation learning framework is proposed to efficiently incorporate the semantic representations from data in multiple modalities. Finally, fusion strategies are applied to accommodate multiple modalities. Within this framework, cross-modal mapping strategies are also proposed to organize the features in a better structure and improve the overall performance.
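To give a flavor of the genetic algorithm-based model selection described above, the sketch below evolves a population of candidate network configurations via selection, crossover, and mutation. This is a minimal illustration, not the dissertation's actual method: the search space (layer count, hidden units, learning rate, dropout) and the toy fitness surrogate are assumptions; in practice, fitness would come from training each candidate network and measuring its validation accuracy.

```python
import random

# Hypothetical search space: each gene is one network hyperparameter.
# These names and ranges are illustrative assumptions only.
SEARCH_SPACE = {
    "num_layers": [1, 2, 3, 4],
    "hidden_units": [64, 128, 256, 512],
    "learning_rate": [1e-4, 1e-3, 1e-2],
    "dropout": [0.0, 0.25, 0.5],
}

def random_individual(rng):
    """Sample one candidate configuration uniformly from the search space."""
    return {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}

def fitness(ind):
    # Placeholder surrogate: in a real system this would train the
    # candidate deep network and return its validation accuracy.
    return (ind["num_layers"] * ind["hidden_units"]
            - 1000 * ind["dropout"]) / (1 + 100 * ind["learning_rate"])

def crossover(a, b, rng):
    # Uniform crossover: each gene is inherited from either parent.
    return {k: rng.choice([a[k], b[k]]) for k in SEARCH_SPACE}

def mutate(ind, rng, rate=0.2):
    # Each gene is resampled from the search space with probability `rate`.
    return {k: (rng.choice(SEARCH_SPACE[k]) if rng.random() < rate else v)
            for k, v in ind.items()}

def evolve(generations=10, pop_size=8, seed=0):
    rng = random.Random(seed)
    pop = [random_individual(rng) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # truncation selection
        children = [mutate(crossover(rng.choice(parents),
                                     rng.choice(parents), rng), rng)
                    for _ in range(pop_size - len(parents))]
        pop = parents + children                # elitism: keep the best half
    return max(pop, key=fitness)

best = evolve()
```

Because the best half of each generation survives unchanged, the best fitness in the population never decreases, which is the property that lets such a search converge toward strong configurations.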
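As one illustration of the fusion step for multiple modalities, a simple late-fusion scheme combines per-modality concept scores with a weighted average. The modality names, scores, and weights below are illustrative assumptions, not the dissertation's actual fusion strategy or configuration.

```python
# Hypothetical late fusion: combine per-modality confidence scores for a
# semantic concept using a weighted average.

def late_fusion(scores, weights):
    """Weighted average of per-modality scores (keys must match)."""
    total = sum(weights[m] for m in scores)
    return sum(scores[m] * weights[m] for m in scores) / total

# Example: visual, audio, and text classifiers each score the concept
# "flooding" for one video; weights reflect assumed per-modality reliability.
scores = {"visual": 0.85, "audio": 0.40, "text": 0.70}
weights = {"visual": 0.5, "audio": 0.2, "text": 0.3}

fused = late_fusion(scores, weights)  # 0.5*0.85 + 0.2*0.40 + 0.3*0.70 = 0.715
```

Late fusion of this kind keeps each modality's model independent, so a modality can be added or dropped without retraining the others; the cross-modal mapping strategies mentioned above address the complementary problem of aligning features across modalities before fusion.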