Sajjan Shiva
Distinguished Professor, University of Memphis
Lecture Information:
- March 28, 2023
- 4:00 PM
- CASE 135

Speaker Bio
Dr. Sajjan Shiva is the First Horizon Foundation Distinguished Professor of Computer Science at the University of Memphis. He is an IEEE Life Fellow and served as the founding chairman of the Computer Science Department from 2002 to 2015. He is the Director of the Game Theory and Cyber Security Laboratory (https://gtcs.cs.memphis.edu). He has served on the Computer Science faculties of the University of Alabama in Huntsville and Alabama A&M University. He has served as a Software Quality Manager, Technical Project Manager, and Senior Software Engineer in industry and has been a consultant to industry and government. His current research spans game theory applications to cyber security, cloud security, secure software development, SCADA security, machine learning-based intrusion detection, and frameworks for security and privacy assessment of cloud and Internet of Medical Things systems. His research has been supported by NASA, NSF, the U.S. DoD, and ONR. He has taught courses on security testing of systems and software, cyber security, and cloud security. He has authored four books (ten editions) on computer architecture, now used in more than 120 universities around the world.
Description
Machine Learning (ML) has become the preferred method for building intelligent applications. Correspondingly, since ML systems are predominantly software applications, their development has shifted from the traditional behavior-oriented process (programming) to a data-oriented one. Ideally, the ML system development workflow should follow the Software Development Life Cycle (SDLC) processes to take advantage of the lessons learned in traditional systems development. However, the data-centric nature of ML systems does not fully lend itself to this integration. This talk presents the state of the art in the Machine Learning System Development Life Cycle (ML SDLC) and gives a brief overview of Machine Learning Operations (MLOps). Although industry has taken significant steps to streamline ML system development and deployment, and researchers have proposed several new processes, there is no accepted standard for the ML SDLC. In addition, with the current emphasis on building trustworthy ML/AI systems, their security mechanisms and the explainability of their predictions have also become important, especially for critical applications. Accordingly, we propose a security- and explainability-augmented ML system development life cycle.
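
For readers unfamiliar with what "augmenting" the life cycle might look like in practice, the minimal Python sketch below illustrates the general idea of adding explicit security and explainability gates around an ordinary train/evaluate loop. It is only an illustration under assumed stage names (validate_data, security_check, explainability_report are hypothetical helpers), built on scikit-learn's standard APIs; it does not represent the speaker's proposed framework.

```python
# Illustrative sketch only: a data-oriented pipeline with explicit security and
# explainability stages, loosely mirroring an augmented ML SDLC. Stage names and
# checks are assumptions for illustration, not the talk's actual framework.

import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler


def validate_data(X, y):
    """Data-stage gate: basic sanity checks before any training happens."""
    assert not np.isnan(X).any(), "unexpected missing values"
    assert len(np.unique(y)) > 1, "labels must contain more than one class"


def security_check(model, X_test, y_test, noise=0.05, max_drop=0.10):
    """Security-stage gate: a crude robustness probe.

    Compares accuracy on clean inputs against inputs perturbed with small
    Gaussian noise; a large drop flags potential brittleness.
    """
    clean_acc = model.score(X_test, y_test)
    rng = np.random.default_rng(0)
    X_noisy = X_test + rng.normal(scale=noise * X_test.std(axis=0),
                                  size=X_test.shape)
    noisy_acc = model.score(X_noisy, y_test)
    return (clean_acc - noisy_acc) <= max_drop, clean_acc, noisy_acc


def explainability_report(model, X_test, y_test, feature_names, top_k=5):
    """Explainability-stage artifact: top features by permutation importance."""
    result = permutation_importance(model, X_test, y_test,
                                    n_repeats=10, random_state=0)
    ranked = np.argsort(result.importances_mean)[::-1][:top_k]
    return [(feature_names[i], float(result.importances_mean[i])) for i in ranked]


if __name__ == "__main__":
    data = load_breast_cancer()
    X, y = data.data, data.target
    validate_data(X, y)                                   # data stage
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

    model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    model.fit(X_tr, y_tr)                                 # build/train stage

    ok, clean_acc, noisy_acc = security_check(model, X_te, y_te)  # security gate
    print(f"clean acc={clean_acc:.3f}, noisy acc={noisy_acc:.3f}, gate passed={ok}")

    for name, score in explainability_report(model, X_te, y_te, data.feature_names):
        print(f"{name}: {score:.4f}")                     # explainability artifact
```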