In telco R&D, we work with a very large existing system that includes many legacy functionalities. Having learned the capabilities of machine learning (ML) based AI systems, we expect they can be applied to improve our regression testing.
When we receive a new requirement, we typically implement it on top of the existing system. From a testing point of view, we then perform new functional testing with the following procedure:
- A new requirement defines a new solution;
- To test the solution, we define new test cases;
- We execute the test cases to determine whether the solution meets the requirement.
On the other hand, we spend enormous effort and hardware (HW) resources on regression testing, verifying that the new change does not break any existing functionality. Yet only a few of the regression cases actually catch bugs.
With test automation, we have already categorized the regression case sets into layers with different tags, running from the most basic tests to the most advanced features. However, feedback on case quality, analysis of test results, selection of the regression case set, and scheduling of the different case sets are still manual, and depend heavily on the skill level of our testing experts.
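To make the categorization concrete, here is a minimal sketch of how such layered, tagged case sets could be represented and filtered. The class and field names are our own illustration, not our actual test framework:

```python
from dataclasses import dataclass, field

# Hypothetical representation of a layered, tagged regression case,
# mirroring the manual categorization described above.
@dataclass
class RegressionCase:
    name: str
    layer: int                          # 0 = most basic, higher = more advanced
    tags: set[str] = field(default_factory=set)

def select_by_tags(cases, wanted_tags):
    """Pick cases matching any wanted tag, ordered basic-first by layer."""
    matched = [c for c in cases if c.tags & wanted_tags]
    return sorted(matched, key=lambda c: c.layer)

# Example: select all 'routing'-related cases, basic layers first.
cases = [
    RegressionCase("boot_smoke", layer=0, tags={"smoke"}),
    RegressionCase("route_basic", layer=1, tags={"routing"}),
    RegressionCase("route_failover", layer=3, tags={"routing", "ha"}),
]
print([c.name for c in select_by_tags(cases, {"routing"})])
# -> ['route_basic', 'route_failover']
```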
We already have: a set of test cases with their corresponding code coverage; historical test records for the regression case sets and their execution results (execution time, logs, pass/fail); and bug report data.
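These three data sources could be modelled roughly as follows; the schemas and field names are assumptions for illustration, not our production formats:

```python
from dataclasses import dataclass

# Hypothetical schemas for the three data sources we already have.
@dataclass
class TestCase:
    case_id: str
    covered_files: frozenset[str]   # code coverage, at file granularity

@dataclass
class ExecutionRecord:
    case_id: str
    commit: str
    duration_s: float
    passed: bool
    log_path: str

@dataclass
class BugReport:
    bug_id: str
    found_by_case: str              # case_id that caught the bug
    faulty_files: frozenset[str]
```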
By feeding these data into the SMART regression testing system, we expect that, for each new code commit, the system can automatically output a set of regression cases together with their execution order.
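As one plausible baseline for what such a system might compute, the sketch below (building on the hypothetical schemas above) ranks cases by their coverage overlap with the files changed in the commit, using historical failure rate as a tie-breaker. This is not the SMART algorithm itself, only a simple heuristic to illustrate the expected input and output:

```python
from collections import defaultdict

def prioritize(commit_files, test_cases, history):
    """Rank regression cases for a commit.

    commit_files: set of file paths changed by the commit.
    test_cases:   list of TestCase (coverage data per case).
    history:      list of ExecutionRecord (past pass/fail results).
    Returns cases ordered by coverage overlap, then failure rate.
    """
    failures = defaultdict(int)
    runs = defaultdict(int)
    for rec in history:
        runs[rec.case_id] += 1
        failures[rec.case_id] += 0 if rec.passed else 1

    def score(case):
        overlap = len(case.covered_files & commit_files)
        fail_rate = (failures[case.case_id] / runs[case.case_id]
                     if runs[case.case_id] else 0.0)
        return (overlap, fail_rate)

    ranked = sorted(test_cases, key=score, reverse=True)
    # Drop cases with no overlap and no failure history to save HW resources.
    return [c for c in ranked if score(c) > (0, 0.0)]
```

A real system would of course learn richer signals (case quality feedback, log analysis, scheduling constraints) from the same data, but even this heuristic shows how coverage and history can replace manual case selection for each commit.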