Dr. Lijun Yin

SUNY Binghamton Department of Computer Science

Lecture Information:
  • April 22, 2016
  • 2:00 PM
  • ECS 241

Speaker Bio

Dr. Lijun Yin is a Professor of Computer Science, Director of the Graphics and Image Computing Laboratory, and Co-director of the Seymour Kunis Media Core at the T. J. Watson School of Engineering and Applied Science, State University of New York at Binghamton. He received his Ph.D. in Computer Science from the University of Alberta, Canada, and his Master's degree in Electrical Engineering from Shanghai Jiao Tong University, China. Dr. Yin's research focuses on computer vision, graphics, HCI, and multimedia, specifically face and gesture modeling, analysis, recognition, animation, and expression understanding. His research has been funded by the National Science Foundation, the Air Force Research Lab, the Air Force Office of Scientific Research (AFOSR), SUNY Upstate Medical Center, the SUNY Health Network of Excellence, and the New York State Science and Technology Office (NYSTAR). Dr. Yin received the prestigious James Watson Investigator Award from NYSTAR (2006) and the SUNY Chancellor's Award for Excellence in Scholarship & Creative Activities (2014). He received Best Paper Awards at ICPR 2006 and in the Journal of Computers and Graphics (2005). He holds two US patents. His group has released three 3D facial expression databases, which have become benchmark databases for the research community and have been used by over 500 groups worldwide in computer science, engineering, psychology, medicine, the arts, and other areas. Dr. Yin served as a program co-chair of FG 2013 and as an area chair and program committee member for dozens of IEEE/ACM conferences in the field. He currently serves on the editorial boards of the journals IVC and PRL.


Abstract

A facial surface is a three-dimensional, time-varying "wave" associated with the movement of facial expressions. Tracing the behavior of 3D primitive features in the spatio-temporal domain can reveal valuable information about the nature of the underlying physical process. In this talk, I will introduce recent work in 3D face-related information processing, including 3D dynamic face modeling, 3D spatio-temporal facial expression analysis, and multimodal database development. A new study on 3D facial surface tracking, feature analysis, and classification will also be discussed. In addition, the topic of hand gesture computing will be introduced with some demonstrations. Finally, future developments and possible extensions of the work will be discussed.