Dionny Santiago

Florida International University

Lecture Information:
  • March 19, 2018
  • 11:00 AM
  • ECS 134

Speaker Bio

Dionny Santiago earned his B.Sc. in Computer Science from Florida International University and is currently a Master's student in Computer Science at FIU. He is a Software Test Architect at Ultimate Software, focused on R&D efforts to innovate in the software testing field through Artificial Intelligence and Machine Learning. He provides expertise and leadership in software testing and contributes to hands-on testing, technical problem-solving training, and internal tool development. He has also published work on, and contributed to the design of, test specification languages and various test automation frameworks.


Abstract

Achieving high software quality today involves manual analysis, test planning, documentation of testing strategy and test cases, and development of automated test scripts to support regression testing. To keep up with software evolution, test artifacts must also be updated frequently. While current test automation technology helps mitigate the cost of regression testing, a large gap remains between the current paradigm and truly automated testing.

Recent advances in Artificial Intelligence (AI) and Machine Learning (ML) have shown that machines can match or surpass human performance across a variety of problem domains; mastering the game of Go, surpassing human speech recognition performance, and developing self-driving cars are just a few examples. Our research is motivated by the opportunity to bridge this gap by investigating learning-based solutions to software testing. A more automated way of testing software is needed to reduce cost and shorten cycle times for shipping production software.

The primary goal of this research is to develop a prototype system that leverages Machine Learning to generate system tests, trained across various systems from disparate domains. The proposed approach involves (1) developing a trainable classifier that perceives application state, (2) creating a language specification and grammar for describing test cases, and (3) learning from existing test cases to generalize patterns and generate new test cases applicable across different applications.
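To make the grammar idea concrete, the following is a minimal, purely illustrative sketch of what a test-case description grammar and a simple generator might look like. The rule names, actions, and targets are all hypothetical examples, not the actual specification language developed in this research:

```python
import random

# Hypothetical, minimal grammar for UI-level test cases, loosely in the
# spirit of a test specification language. Non-terminals map to lists of
# possible productions; anything not in the table is a terminal string.
GRAMMAR = {
    "test_case": [["step"], ["step", "test_case"]],
    "step":      [["action", "target"]],
    "action":    [["Click"], ["Enter text into"], ["Verify"]],
    "target":    [["the login button"],
                  ["the username field"],
                  ["the error message"]],
}

def expand(symbol, rng):
    """Recursively expand a grammar symbol into a list of terminal strings."""
    if symbol not in GRAMMAR:  # terminal symbol: emit as-is
        return [symbol]
    production = rng.choice(GRAMMAR[symbol])
    tokens = []
    for sym in production:
        tokens.extend(expand(sym, rng))
    return tokens

def generate_test_case(seed=0):
    """Generate one reproducible test case by expanding the start symbol."""
    rng = random.Random(seed)
    return " ".join(expand("test_case", rng))
```

In a learning-based setting, the generator's choices would be driven by a trained model rather than uniform random sampling, so that the patterns observed in existing test cases bias which productions are selected.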