Dionny Santiago

School of Computing and Information Sciences


Lecture Information:
  • November 9, 2018
  • 2:00 PM
  • ECS 349

Speaker Bio

Dionny Santiago earned a B.S. in Computer Science from Florida International University and is currently pursuing a Master's degree in Computer Science at FIU. He is a Software Test Architect at Ultimate Software, where he focuses on R&D efforts to advance the software testing field through Artificial Intelligence and Machine Learning. He provides expertise and leadership in software testing and contributes to hands-on testing, technical problem solving, training, and internal tool development. He has also published work on, and contributed to the design of, test specification languages and various test automation frameworks.

Description

Achieving high software quality today involves manual analysis, test planning, documentation of testing strategy and test cases, and development of automated test scripts to support regression testing. To keep up with software evolution, these test artifacts must also be updated frequently. A more automated approach to software testing is needed to reduce cost and shorten the cycle time for shipping production software. This thesis is motivated by the opportunity to bridge the gap between current test automation and true test automation by investigating learning-based solutions to software testing. We describe a prototype system that leverages Machine Learning to generate system tests, trained across various systems from disparate domains. Our approach combines three parts: a trainable classifier that perceives application state, a language specification and grammar for describing test cases, and learning from existing test cases in order to generalize patterns and generate new test cases applicable across different applications.
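As a rough illustration of writing tests against component abstractions rather than raw HTML, the Python sketch below encodes one small flow. The Step structure, action names, and component roles are hypothetical placeholders, not the specification language developed in the thesis.

    # Minimal sketch: a test flow as a sequence of abstract steps.
    # Action names and component roles are illustrative placeholders.
    from dataclasses import dataclass

    @dataclass
    class Step:
        action: str      # e.g. "enter_text", "click", "observe"
        component: str   # abstract role, e.g. "username_field"
        value: str = ""  # optional input data

    # The flow references abstract roles instead of a concrete DOM,
    # so the same flow can be replayed against different applications.
    invalid_login = [
        Step("enter_text", "username_field", "alice"),
        Step("enter_text", "password_field", "wrong-password"),
        Step("click", "login_button"),
        Step("observe", "error_message"),
    ]

    for step in invalid_login:
        print(step)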

To train the Machine Learning models, data from a total of 7 systems, comprising 95 web pages and 17,360 elements, was manually collected and labelled. In addition, a total of 250 test flows were constructed; each captured a real test that could feasibly be performed by a human tester against a web application. A variety of learning algorithms were evaluated as part of this work.

Findings suggest that Random Forest classifiers perform well on most web component classification problems, and that Long Short-Term Memory (LSTM) neural networks can model and generate test flows. Results suggest that the learning algorithms are able to generalize patterns from the data. Findings also suggest that it is possible to raise the level of abstraction when working with webpage components, to model human testing behavior as test flows that operate against those component abstractions, and to automatically generate and execute test cases using this approach.
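As a rough sketch of the sequence-modeling finding, the snippet below trains a small Keras LSTM to predict the next step of a tokenized test flow; the vocabulary size, layer sizes, and dummy training data are illustrative assumptions, not the configuration used in the thesis. Repeatedly sampling from the predicted next-step distribution is one way such a model can generate new flows.

    # Minimal sketch: an LSTM that predicts the next step of a test flow.
    # Vocabulary, shapes, and hyperparameters are illustrative only.
    import numpy as np
    from tensorflow.keras import layers, models

    vocab_size = 50   # number of distinct (action, component) tokens
    seq_len = 10      # steps of context used to predict the next step

    model = models.Sequential([
        layers.Embedding(vocab_size, 32),
        layers.LSTM(64),
        layers.Dense(vocab_size, activation="softmax"),
    ])
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

    # Train on windows of tokenized flows (dummy data shown here).
    X = np.random.randint(0, vocab_size, size=(200, seq_len))
    y = np.random.randint(0, vocab_size, size=(200,))
    model.fit(X, y, epochs=1, verbose=0)

    # Probability distribution over the next step, given one context window.
    next_step_probs = model.predict(X[:1])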