This is a guest posting by Carol Brands
Lately, my test team has been thinking much more about automated testing. We recently started working with browser-based projects, and with a corporate push toward a DevOps culture, we were considering how we could use automation to drive our testing strategy forward.
To help get us started, I attended a tutorial on automation in testing presented at the Conference of the Association for Software Testing, or CAST. I knew that I wanted to start thinking about what we should and shouldn’t automate. The ideas of “too many tests” and having tests that don’t bring value were worrying me, but I didn’t have a clear picture of how to avoid the problems.
We used a demonstration hotel website during the course. We explored how booking a room on the demo website would work, and by doing a task analysis, we revealed some less than intuitive behavior. For example, we found that rather than simply moving from the browser to the database and back, the booking scenario included a booking API talking to an authorization API, which had its own path to the database and back before the booking could be completed.
Once we completed the task analysis, it was easier to decide what we wanted to check and what should be included in each scenario. For example, if you wanted to check that the booking API works, you may not want to run an end-to-end scenario, because the end-to-end scenario would also exercise the authorization API. If the authorization API failed, your check would fail without telling you anything about whether the booking API worked.
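To make that distinction concrete, here is a minimal Python sketch. The function names and the token value are illustrative assumptions, not the demo site's real API; the point is that a targeted check injects a known-good token, while the end-to-end path inherits any authorization failure.

```python
def authorize(user):
    # Stands in for the authorization API; imagine it is down today.
    raise RuntimeError("authorization API unavailable")

def book_room(user, room, token):
    # Stands in for the booking API under test.
    if token != "valid-token":
        return {"status": "denied"}
    return {"status": "booked", "room": room}

def end_to_end_booking(user, room):
    # End-to-end path: an authorization failure fails this check too.
    return book_room(user, room, authorize(user))

# Targeted check: inject a known-good token, so a failure here can only
# point at the booking API itself.
result = book_room("carol", 101, "valid-token")
assert result["status"] == "booked"

# The end-to-end check, by contrast, fails for an unrelated reason:
try:
    end_to_end_booking("carol", 101)
    e2e_outcome = "passed"
except RuntimeError:
    e2e_outcome = "failed: authorization API down"
```

The targeted check passes while the end-to-end check fails, even though the booking API itself is working — exactly the ambiguity we wanted to avoid.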
Taking it Back to the Team
As soon as we completed the exercise, I knew it was the one I wanted to take back to the office to run with the team. The first reason was that modeling the stack for our project was something I hadn't done before. I assumed I had a basic understanding of how the stack worked, but I suspected that creating an explicit model with the development team would be revealing. Also, we'd been talking about how to incorporate an understanding of the developers' unit tests into our testing strategy. The task analysis could show us which tasks are covered by unit tests and spark conversations about that coverage, helping the test team make better decisions about which scenarios to focus on in our automated checks.
As soon as we could schedule it, the development and test teams met to carry out the exercise. I started by explaining the exercise as it occurred in the tutorial. Then I asked if we could try creating a stack model that represented our current project. As expected, the model for our stack wasn't quite as simple as the demo website used in the tutorial. Because we're using a domain-driven design, instead of a simple line from the APIs to the back end and then the database, we have two lower branches — one for the business logic and one for the view cache.
Next, we tried going through a task analysis for an example scenario. I asked the testers to think of the simplest scenario possible. The project we are working on now allows us to view the status of equipment, so they decided on searching for a common piece of equipment. As we worked through the task, we realized that what seemed simple to the testers was considered highly complex by the developers. A common question from the development team was, "How much detail do you really want?" I suggested we start with maximum detail and then leave out whatever didn't help with making a test decision.
As we worked through the task analysis, we discovered that our original stack model was incomplete. On the view cache side of the model, we added an additional layer — which, after much discussion, was named “super secret query model,” and we had a good laugh.
Finally, we got into a discussion about how we can handle authorization during our automated checks. The way we handle authorization is distinctly different from how it was handled in the example from the tutorial. We were able to talk about how we can get authorization tokens that let us write our tests without every test becoming an authorization test.
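One way to apply that idea is to fetch a token once and reuse it across checks, so the authorization API is exercised only when you actually intend to test it. The sketch below is a minimal illustration under assumed names (`get_auth_token`, `check_equipment_status` are hypothetical, not our project's real functions); a real suite might do the same thing with a session-scoped fixture.

```python
import functools

calls = {"auth": 0}  # counts how often the "authorization API" is hit

@functools.lru_cache(maxsize=1)
def get_auth_token():
    # In a real suite this would call the authorization API once;
    # lru_cache hands back the same token on every later call.
    calls["auth"] += 1
    return "token-abc123"

def check_equipment_status(equipment_id):
    token = get_auth_token()  # reused, not re-requested
    # ... here a real check would call the equipment API with the token ...
    return {"id": equipment_id, "status": "ok", "token": token}

results = [check_equipment_status(i) for i in range(3)]
```

Three checks run, but the authorization path is exercised exactly once — the checks stay focused on the equipment behavior rather than each becoming an authorization test.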
A New Framework Going Forward
Thinking about getting involved in automation is exciting. It was great having a conversation with development that allowed us to explore our understanding of the project and reveal new information. It was also enlightening to find that the simplest scenario from a black box standpoint was one of the least straightforward scenarios from a task analysis standpoint.
The best part is that we now have a framework for our discussions going forward, as well as a direct path through task analysis for asking about how the unit tests written by development will influence the checks we choose for automation.
Carol Brands is a Software Tester at DNV GL Software. Originally from New Orleans, she is now based in Oregon, where she has lived for about 13 years. Carol is also a volunteer at the Association for Software Testing.