Kevin Desai
Department of Computer Science, The University of Texas at San Antonio
Lecture Information:
- October 2, 2020
- 2:00 PM
- Zoom: Contact the SCIS coordinator (Hadi Amini) for Zoom credentials if you did not receive the email.

Speaker Bio
Dr. Kevin Desai is an Assistant Professor in Practice in the Computer Science Department at The University of Texas at San Antonio. He received his PhD in Computer Science from The University of Texas at Dallas in May 2019, with a dissertation titled “Quantifying Experience and Task Performance in 3D Serious Games.” He also received his MS in Computer Science from The University of Texas at Dallas in May 2015 and his Bachelor of Technology in Computer Engineering from Nirma University (India) in June 2013. Dr. Desai’s research interests include 3D Computer Vision, Multimedia Systems, Mixed, Virtual and Augmented Reality, Human-Computer Interaction, Machine Learning, and Deep Learning, with applications in the domains of healthcare, rehabilitation, virtual training, and serious gaming.
His research focuses on capturing and generating 3D models of humans and placing them in collaborative virtual environments, mainly for serious game applications such as exergames (exercise and gaming) and virtual STEM experiments. Problems on which Dr. Desai has published include multi-camera calibration, 3D model combination, mesh generation and simplification, AR-based exergames, and virtual laboratories. He is currently working on two research problems: (1) real-time hand pose estimation for full-body capture systems, and (2) uneven light removal for efficient depth map generation from stereo images. Dr. Desai’s work has been published in international conferences (IEEE/ACM), mainly in the field of multimedia, namely MMSys, ISM, BigMM, and ICME. He has served as a program committee member and reviewer for multiple peer-reviewed international journals and conferences in IEEE, ACM, and Springer.
Description
Mixed reality systems allow the development of interactive 3D tele-immersive applications. A live 3D model of a person is captured and immersed in a virtual environment, thereby enabling interaction and collaboration among geographically distributed people. Serious gaming is one application domain in which games are developed with a specific objective beyond entertainment, such as STEM education, rehabilitation, or military training. These experiences should narrow the gap between the virtual world and the corresponding real-world situation. To achieve a good overall user experience, we need to consider the visual appearance as well as the quality of interaction and immersion.
In this talk, we showcase our interactive 3D tele-immersion system through specific use cases: soccer penalty training, exergames (exercise and gaming), and virtual chemistry laboratory experiments. We discuss the following questions: (1) Can we objectively quantify the visual quality of a single-camera 3D human open mesh? (2) Can we create high-quality 3D human meshes that adapt to changes in network bandwidth? (3) Can we efficiently calibrate multiple cameras and generate a combined 3D human model to improve interaction and visual quality? (4) Can we map the user's task performance in a real-world scenario to the corresponding virtual world and perform automated assessment, and if so, how? For each question, we give an overview of the problem and describe, at a high level, our preliminary approach to addressing it.