This is a guest post by Peter G. Walen.
Companies and teams often struggle to get the right mix of test automation into their testing toolbox. There are some, perhaps many, that struggle to determine what needs to be tested. The next question is the hard one.
Are we as good at figuring out what needs testing as we think we are? Are we as good as we’d like to be? Even many of the best testers are not as good at this as they would like to be. And even if we are that good, are we good enough?
There is a problem here. If we are not as good as we would like to be when it comes to testing, how good are we when we need to figure out what to automate? Is there a way we can find balance in test automation?
The problem with good testing is that it can be slow, cumbersome, and time-consuming. At least from the outside. Managers want metrics that show regular progress. Sometimes they want to see it monthly, weekly or daily. But they want to be able to see and share progress.
Instead, many substitute what appears to be testing for deep testing. The challenge is to identify the best testing possible within the boundaries and needs of the project and the organization. This might be another aspect of what good testing looks like.
We need to be able to identify what it is that is most important. We need to be able to find a way to balance what is most important to test with what we can test. The challenge we have is to find a way to do that.
There are some ideas I have found to help.
Everyone in the project is involved in some form of testing, whether they know it or not. Developers need to test their own code. Simple unit tests do wonders to make later testing go smoother and faster. When done well, they also deliver the metrics managers want in an acceptable way. In every project and engagement I’ve been in, testing in a local environment cuts down the number of later problems.
Even when doing paired or mob programming, local testing finds all sorts of small, annoying things that likely would slow things down if found later.
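As a minimal sketch of the kind of local unit test meant here (the `apply_discount` function and its rules are hypothetical examples, not from the original post):

```python
# test_discount.py: a small unit test a developer could run locally
# (e.g. with pytest) before the code ever reaches a shared environment.
# The apply_discount function and its rules are hypothetical.

def apply_discount(price, percent):
    """Apply a percentage discount, clamping percent to the 0-100 range."""
    percent = max(0, min(100, percent))
    return round(price * (1 - percent / 100), 2)

def test_normal_discount():
    assert apply_discount(100.0, 20) == 80.0

def test_discount_is_clamped():
    # Out-of-range inputs must not produce negative or inflated prices.
    assert apply_discount(50.0, 150) == 0.0
    assert apply_discount(50.0, -10) == 50.0
```

Tests this small catch exactly the kind of annoying defect that is cheap to fix now and expensive to chase later.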
There need to be conversations between developers and the people testing their work about how the code was tested. Whatever the level of unit or local-environment testing, the people writing the code and the people exercising it should talk. Once the code makes it to the “test” environment, repeating these tests as a sanity check often gives a good level of confirmation of the behavior.
This gives a solid foundation to move forward and launch a deeper evaluation. The tester can hit obvious failure points by checking links, drop-down lists, communication with other modules, response codes, and messages in the logs.
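A quick pass over those obvious failure points can itself be scripted. Here is one possible sketch, assuming a plain web application; the page paths and expected status codes are placeholders for your own environment:

```python
# smoke_check.py: a quick sanity pass over obvious failure points,
# here limited to HTTP response codes. Paths and codes are placeholders.
from urllib.request import urlopen
from urllib.error import HTTPError

PAGES = {
    "/login": 200,
    "/reports": 200,
}

def check_page(base_url, path, expected_status):
    """Return True if the page answers with the expected HTTP status."""
    try:
        with urlopen(base_url + path) as resp:
            return resp.status == expected_status
    except HTTPError as err:
        return err.code == expected_status

def run_smoke(base_url):
    """Return the list of pages that failed the sanity check."""
    return [path for path, code in PAGES.items()
            if not check_page(base_url, path, code)]
```

The same idea extends to drop-down contents, inter-module calls, and log messages; the point is a fast, repeatable confirmation, not deep coverage.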
Then they can check the acceptance criteria and requirements, making sure those are handled properly along with the possible exception conditions. Testers can exercise exceptions to show they are handled deliberately. If the requirements were tested in advance, the possible holes have likely been limited, if not closed altogether. This is the opportunity to look for any that may have been missed. What if something unexpected happens? There is often great value in finding the unexpected.
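Exercising exception conditions deliberately might look like the following sketch (the `parse_quantity` function and its rules are hypothetical):

```python
# A hypothetical example of checking both the happy path from the
# acceptance criteria and the exception conditions around it.

def parse_quantity(text):
    """Parse a positive integer quantity; reject anything else."""
    value = int(text)  # raises ValueError on non-numeric input
    if value <= 0:
        raise ValueError(f"quantity must be positive, got {value}")
    return value

# Happy path from the acceptance criteria...
assert parse_quantity("3") == 3

# ...and the exception conditions, shown to be handled, not swallowed.
for bad in ("0", "-2", "abc"):
    try:
        parse_quantity(bad)
    except ValueError:
        pass  # expected: bad input is rejected with a clear error
    else:
        raise AssertionError(f"{bad!r} should have been rejected")
```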
By working through these steps and engaging with these ideas, you can begin the quest to find balance in test automation. The set of conditions you have identified gives you a foundation to move forward: a list of basic items to consider automating.
With each build after the first, you have a snapshot of known behavior. Once you are certain these behaviors are correct, you can build reliable automation scripts that free people to focus on the next level of testing.
These might not be the most interesting tests to automate, but they provide value simply by saving time and effort going forward. You have a basis for implementing them in a CI/CD environment as well as starting an automated regression suite.
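One lightweight way to turn a "snapshot of known behavior" into an automated check, suitable for a CI/CD pipeline, is sketched below. The file name and the `describe_behavior` stand-in are hypothetical; in practice that function would capture whatever observable output the build produces:

```python
# snapshot_check.py: record known-good behavior on the first run,
# then flag any change on later runs. Names here are hypothetical.
import json
from pathlib import Path

def describe_behavior():
    """Stand-in for whatever observable output the build produces."""
    return {"status": "ok", "items": [1, 2, 3]}

def check_against_snapshot(snapshot_path):
    """Record the first run; on later runs, flag any behavior change."""
    path = Path(snapshot_path)
    current = describe_behavior()
    if not path.exists():
        path.write_text(json.dumps(current, indent=2))
        return True  # first build: this becomes the known behavior
    known = json.loads(path.read_text())
    return current == known
```

A CI job that runs this on every build gives exactly the boring-but-valuable regression signal described above.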
As with any test effort, the next challenge is to delve a bit deeper. If you have exercised the acceptance criteria and are looking beyond them, look for the overall purpose of the project. In some environments, a single set of requirements defines the full extent of what gets tested. In organizations doing some form of iterative development, these foundations are vitally important.
Build on strong foundations
With each iteration, with each new layer of development, there is a new set of conditions to consider in testing. Depending on how the work is structured, this can build on what was already done. Most will look at the code as the main artifact to build on and iterate from, but testing and test automation do the same.
Look at the purpose of each iteration. The change introduced at each level is another piece of information about what can be tested. By repeating the same process each iteration, you build on your earlier foundations.
Each iteration also gives you more to compare against the larger purpose of the changes. By carefully evaluating intended behavior against actual behavior, and both against the purpose of the change, you broaden the range of testing. Importantly, you discover the areas likely to have issues that impact the customer.
When you do that, you are finding areas of interest to test beyond the formal acceptance criteria and looking at “suitability to purpose.” This helps create scenarios close to how the software will be used in production, emulating how customers, external or internal, reasonably intend to use the software.
Can real customer scenarios be emulated?
At one point in my working life, I would have said “no.” A wise woman gently asked me once, “Have you tried asking anyone?” I hadn’t. That was a lesson I have never forgotten.
How the software actually gets used often isn’t in the requirements or the acceptance criteria, and it is rarely addressed in the “justification,” “statement of business purpose,” or “problem/need” statement. Most of the time those are not prepared by the people who use the software to do what needs to be done. Ask the people who need it for their jobs, if at all possible. It may not be, I get that. But someone can likely describe how the software gets used.
Talk with them. Build scenarios to exercise what they describe. Review the scenarios with them. Show them what the software does to make sure you understand the need being addressed.
These scenarios, combined with your scripts based on acceptance criteria, give you a broad range of things to test. Test those things which need to be consistent. Automate these scenarios.
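An automated customer scenario can be as simple as walking the steps the user described, in the order they described them. A hypothetical sketch, where `CustomerDirectory` stands in for the real application layer and the scenario is invented for illustration:

```python
# A scenario built from a user's own description of their work:
# "I look up a customer, correct their address, and confirm the change."
# CustomerDirectory is a hypothetical stand-in for the real application.

class CustomerDirectory:
    def __init__(self):
        self._records = {"C100": {"name": "Ada", "address": "old street 1"}}

    def find(self, customer_id):
        return self._records[customer_id]

    def update_address(self, customer_id, address):
        self._records[customer_id]["address"] = address

def test_correct_address_scenario():
    """Walk the user's steps in the order they described them."""
    directory = CustomerDirectory()
    record = directory.find("C100")
    assert record["name"] == "Ada"          # found the right customer
    directory.update_address("C100", "new street 9")
    assert directory.find("C100")["address"] == "new street 9"
```

Reviewing a script like this with the user who described the scenario closes the loop: they confirm you are testing what they actually do.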
This can give you the balance you need in your automation.