The surface area for testing software has never been so broad. Applications today interact with other applications through APIs, they leverage legacy systems, and they grow in complexity from one day to the next in a nonlinear fashion. What does that mean for testers?
The 2016-17 World Quality Report suggests that AI will help. “We believe that the most important solution to overcome increasing QA and Testing Challenges will be the emerging introduction of machine-based intelligence,” the report states.
How will we as testers leverage AI to verify these ever-growing code suites? And what will happen as AI works its way into our production applications? How will testing change?
Here are five ways experts see the introduction of AI changing testing.
1. OUR TOOLS WILL CHANGE
Jason Arbon is the CEO and founder of AppDiff, a company that uses AI to test mobile apps. He’s also a developer and a tester, having worked at Google and Microsoft. He co-authored the book How Google Tests Software. Who better than he to comment on how AI will affect testers?
Arbon shared a funny anecdote to answer that question. He said his kids giggle at him for “making the gesture to manually roll down a car window.” He related this to the next generation of testers: They will soon “laugh at the notion of selecting, managing, and driving systems under test (SUT)—AI will do it faster, better, and cheaper.”
2. WE’LL TRASH DETERMINISM
When studying AI, the biggest “A-ha!” moment for me came when I realized that the problems we solve with AI are not deterministic. If they were, we wouldn’t need AI to solve them! What’s more, the solutions to these problems change as our systems incorporate new data. Talk about a moving goalpost.
Moshe Milman and Adam Carmi, co-founders of Applitools, which makes an application meant to “enhance tests with AI-powered visual verifications,” say there will be “a range of possible outcomes. A test engineer would need to run a test many times and make sure that statistically the conclusion is correct. The test infrastructure would need to support learning expected test results from the same data that trains the decision-making AI.”
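The idea of statistically verifying a nondeterministic system can be sketched in a few lines. Everything here is hypothetical: `flaky_predict` stands in for an AI-backed component that is only right most of the time, and the 80% threshold is an arbitrary illustration, not a recommendation from Milman and Carmi.

```python
import random

def flaky_predict(x):
    """Hypothetical stand-in for a nondeterministic AI component:
    classifies the sign of x correctly about 90% of the time."""
    correct = x > 0
    return correct if random.random() < 0.9 else not correct

def observed_accuracy(predict, cases, runs=200):
    """Run every test case many times and measure the aggregate pass
    rate, rather than demanding a single deterministic pass/fail."""
    correct = total = 0
    for _ in range(runs):
        for x, expected in cases:
            total += 1
            correct += predict(x) == expected
    return correct / total

cases = [(5, True), (-3, False), (12, True), (-1, False)]
accuracy = observed_accuracy(flaky_predict, cases)
# Accept the build only if the pass rate clears a statistical threshold.
assert accuracy >= 0.8
```

The test’s verdict is a probability statement, not a certainty, which is exactly the shift the Applitools founders describe: the infrastructure judges distributions of outcomes, not single runs.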
This varies greatly from our current work with systems under test. It sounds more experimental, more thought-provoking, and more mathematical.
One of the best views into how testers will work with AI as our software becomes less deterministic is an experience report from Angie Jones, Senior Software Engineer in Test at Twitter. In a recent Testing Trapeze article called “Test Automation for Machine Learning: An Experience Report,” Jones systematically isolates the learning algorithms of the system from the system itself. She isolates the current data in order to expose how the system learns and what it concludes based on data she gives it.
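Jones’s approach of isolating the learner and feeding it controlled data can be illustrated with a toy model. This is a minimal sketch under my own assumptions, not her actual system: `MajorityLearner` is an invented stand-in for any trainable component.

```python
from collections import Counter

class MajorityLearner:
    """Toy learner (hypothetical): for each feature, predicts the
    label it has seen most often during training."""
    def __init__(self):
        self.seen = {}

    def train(self, examples):
        for feature, label in examples:
            self.seen.setdefault(feature, Counter())[label] += 1

    def predict(self, feature):
        counts = self.seen.get(feature)
        return counts.most_common(1)[0][0] if counts else None

# Isolate the learner, feed it data we control, then assert on what
# it concluded from exactly that data.
learner = MajorityLearner()
learner.train([("red", "stop"), ("red", "stop"),
               ("green", "go"), ("red", "go")])
assert learner.predict("red") == "stop"   # majority of the controlled data
assert learner.predict("blue") is None    # no data, so no conclusion
```

Because the tester chose the training data, the expected conclusions are known in advance, and the learning behavior itself becomes testable.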
Will processes such as these become best practices? Will they be incorporated into methodologies we’ll all be using to test systems?
3. AI WILL BE YOUR BFF
If AI will change our perspective the same way power windows forced giggles out of Arbon’s kids, maybe our lives as testers are about to get a whole lot easier.
“AI’s interactions with the system multiply the results you’d have with manual testing,” says Jeremias Rößler, who holds a PhD in computer science and has spent the last three years building an AI-based testing tool called ReTest. Currently in beta, ReTest can automatically generate test cases for Java Swing applications.
If generating test cases isn’t enough to commit to BFF status with AI, Infosys now has an offering for “artificial intelligence-led quality assurance.” The idea is that the Infosys system uses data from your existing QA systems (defects, resolutions, source code repositories, test cases, logs, etc.) to help identify problem areas in the product.
Citing the same vision toward AI-as-testing-assistant projected by Rößler and Infosys, Milman and Carmi claim, “First, we’ll see a trend where humans will have less and less mechanical dirty work to do with implementing, executing, and analyzing test results, but they will be still integral and necessary part of the test process to approve and act on the findings. This can already be seen today in AI-based testing products like Applitools Eyes.”
When AI can make less work for a tester and help identify where to test, we’ll have to consider BFF status.
4. WE’LL BECOME MYSTICS
What happens when both testing applications and systems under test use AI?
Rößler immediately brought up “The Oracle Problem,” which surfaces in any attempt to fully automate testing. Automation may know how to interact with the system, but it lacks “a procedure that distinguishes between the correct and incorrect behaviors of the SUT.”
In other words, how would an AI that tests know that the system under test is correct?
Humans do this by finding a source of truth—a product owner, a stakeholder, a customer. But what would the source of truth for the testing AI be?
While AI may give us mystic insight into what a system will do, the Oracle problem would have to be resolved for testing AIs to test AI-based SUTs.
How will AI testing AI affect us as testers? As Milman and Carmi point out, “Test engineers would need a different set of skills in order to build and maintain AI-based test suites that test AI-based products. The job requirements would include more focus on data science skills, and test engineers would be required to understand some deep learning principles.”
5. WE’LL BECOME EXTINCT
If you want hope for testing, though, it comes from Arbon: “I frankly can’t recall a single testing activity I’ve done in the past that couldn’t eventually be done better by an AI with enough training data.” Eventually sounds like a long time away. But I still feel the need to cue the tension-filled cliffhanger music …
Maybe there is hope in the length of the runway between here and where AI takes off. It’s easy to get stuck on our own importance, telling ourselves we’re irreplaceable because we can do this or that. But make no mistake: Like the asteroid that slew the dinosaurs, AI is coming.