Dionny Santiago

Knight Foundation School of Computing and Information Sciences


Lecture Information:
  • June 7, 2021
  • 10:00 AM
  • Zoom

Speaker Bio

Dionny Santiago is a Director of Engineering at test.ai, where he leads the development of the core AI-driven testing platform. His goal is to advance the state of the art in software testing through the application of Artificial Intelligence and Machine Learning. Dionny holds an M.S. degree and is currently pursuing a Ph.D. in Computer Science at the Knight Foundation School of Computing and Information Sciences (KF-SCIS), Florida International University (FIU). He has published research on, and contributed to the design of, test specification languages and AI-driven test generation approaches. Dionny also enjoys attending software testing conferences, both to learn and to share. He is a member of the IEEE Computer Society and the ACM.

Description

Achieving high software quality today involves manual analysis, test planning, documentation of testing strategies and test cases, and the development of automated test scripts to support regression testing. To keep up with software evolution, test artifacts must also be updated frequently. A more automated approach to testing is needed to reduce cost and shorten the cycle time for shipping production software. Advances in Artificial Intelligence (AI) and Machine Learning (ML) have yielded viable solutions to significant problems, including computer vision, language understanding, and translation. Generative techniques can now produce text and images that closely resemble artifacts created by humans. There is an opportunity to bridge the gap between current test automation and true test automation by investigating learning-based solutions for generating abstract and concrete test cases.

We propose an approach to the automatic generation of test cases that leverages three major applications of AI and ML: (1) hierarchical supervised ML classification models to perceive intra-page application state; (2) hierarchical supervised ML sequential models to support the generation of intra-page and inter-page test cases; and (3) a knowledge base and an inference engine to generate specific test inputs. We frame recognizing application components as a supervised classification problem and generating abstract test cases for an application as a supervised sequence learning problem. The abstract test cases are then transformed into executable concrete test cases via a deductive database capable of generating specific test inputs. To build our training and validation datasets, we leverage Rico, a publicly available dataset of mobile screens, UI elements, and semantic annotations mined from about 9,300 Android applications across 27 categories.
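
As a concrete illustration of component (1), the sketch below shows the hierarchical classification idea in Python with scikit-learn: a screen-level classifier whose prediction is fed as a feature into an element-level classifier. The features, labels, and models here are hypothetical stand-ins chosen for brevity, not the actual models used in this work.

    # Minimal sketch of hierarchical UI classification (illustrative only).
    from sklearn.feature_extraction import DictVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Level 1: classify the screen type from coarse, screen-wide features.
    screens = [
        {"num_inputs": 2, "num_buttons": 1, "has_password_field": 1},
        {"num_inputs": 0, "num_buttons": 5, "has_password_field": 0},
    ]
    screen_labels = ["login", "browse"]
    screen_clf = make_pipeline(DictVectorizer(sparse=False), LogisticRegression())
    screen_clf.fit(screens, screen_labels)

    # Level 2: classify each element's semantic role, conditioned on the
    # predicted screen type (appended here as an extra feature).
    elements = [
        {"screen_type": "login", "tag": "input", "text_len": 0},
        {"screen_type": "login", "tag": "button", "text_len": 7},
    ]
    element_labels = ["username_field", "submit_button"]
    element_clf = make_pipeline(DictVectorizer(sparse=False), LogisticRegression())
    element_clf.fit(elements, element_labels)

    # Perceive a new screen: first its type, then the role of one element.
    new_screen = {"num_inputs": 2, "num_buttons": 1, "has_password_field": 1}
    screen_type = screen_clf.predict([new_screen])[0]
    new_element = {"screen_type": screen_type, "tag": "input", "text_len": 0}
    print(element_clf.predict([new_element])[0])  # e.g., "username_field"

In this toy setup, the screen-level prediction conditions the element-level model, mirroring the hierarchical structure described above.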

Our approach aims to achieve context-awareness by tracking the history of actions and the current application state while generating test cases; the learning-based models support this context tracking. The approach also supports online test generation during a run-time exploration and testing session. We implement a prototype of the approach and evaluate it by selecting a set of open-source Android applications, instrumenting them, and measuring code and activity coverage after executing the prototype against them. We also perform a qualitative evaluation of the generated test cases, measuring the number of valid generated tests relative to the total number of desired test cases per application.
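
To give a flavor of the online generation loop, the following sketch simulates one short exploration episode: a stub sequence model picks the next abstract action from the action history and the perceived state, and a small lookup table stands in for the deductive database that concretizes inputs. All names here (functions, the input table, the state representation) are hypothetical and only meant to illustrate the control flow.

    # Illustrative sketch of online, context-aware test generation.
    import random

    INPUT_KB = {  # stand-in for the deductive database of concrete inputs
        "username_field": "user@example.com",
        "password_field": "s3cret!",
    }

    def predict_next_action(history, state):
        """Stub sequence model: choose an abstract action from the action
        history and the perceived state (a list of element roles)."""
        visited = {target for _, target in history}
        untouched = [e for e in state if e not in visited]
        target = untouched[0] if untouched else random.choice(state)
        verb = "enter" if target.endswith("_field") else "click"
        return (verb, target)

    def concretize(action):
        """Turn an abstract action into a concrete, executable one by
        looking up a specific input value in the knowledge base."""
        verb, target = action
        value = INPUT_KB.get(target) if verb == "enter" else None
        return (verb, target, value)

    history = []
    state = ["username_field", "password_field", "submit_button"]
    for _ in range(3):  # one short exploration episode
        abstract = predict_next_action(history, state)
        print("execute:", concretize(abstract))
        history.append(abstract)  # context-awareness: remember past actions

A real session would replace the stubs with the learned sequence model, execute each concrete action against the instrumented application, and re-perceive the state after every step.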