In my last blog post Changing Locations and Changing Strategies, I talked about how moving into the development team room caused me to change my testing strategy. This article will reveal how I learned the hard way that changing strategies isn’t enough. You need to be able to adjust your behavior to implement the new strategy as well.
Testing Scope – What I Expected to Happen
One of the key strategic decisions the developers and I agreed to early in the project was that stories did not need to be tested comprehensively as they were finished. The project was in its earliest stages, and for the initial development phase, it was expected that the stories would change over time. This meant that testing initially consisted of tester review before and after development, followed by some basic validation to allow the story to move to complete. We planned to do deeper, scenario-level testing later in the development cycle to make up for the shallow testing we did upfront. This way, we could avoid a lot of testing rework.
Testing Scope – What Actually Happened
The main drawback of the “just enough” testing approach was that I didn’t have the experience to recognize what “just enough” was. Early in the development cycle, I felt uncomfortable closing stories that I hadn’t tested exhaustively, because I knew that they were going to need retesting as changes came through the pipeline. Doing shallow testing to keep things moving felt very unnatural, so it was difficult to consider these stories ‘tested’. Instead of finding another way to track which stories would need retesting later in the process, the stories remained in limbo. They were tested, but not closed and not accepted. This list of unclosed stories quickly became overwhelming.
There were other stories that were over-tested. One notable instance was the importing/exporting of a data library. I thought this kind of story needed more thorough testing, and it did, but it ended up taking far more time than I could afford to spend given the bottleneck that was already developing. After I spent three full days testing the story, the underlying technology for the import process changed later in development, and the story needed to be retested anyway.
Testing Scope – Resolution
The main way I’m addressing my scope problem is by constantly reassessing what ‘good enough’ testing looks like. I’m re-evaluating each story that hasn’t yet been moved to complete and asking: What parts of this story have already been tested through other stories? What testing would be most relevant knowing what I now know about how the system has evolved? How has changing our expectation of who will use this program affected what should be tested?
This reassessment of what is acceptable testing should help me identify which stories I can consider completed with a broad-stroke approach to testing. Now that the development cycle has moved closer to the release, we expect fewer changes, and I can start examining groups of stories together to identify testing that cuts across stories. That will allow me to close more stories with less time spent testing. As I complete that testing, I’ll keep my eyes open for risk so I can decide what still needs to be deeply explored before we can release.
Test Reviews – What I Expected to Happen
One of the better changes we made involved adding developer/tester reviews before and after story development. These reviews gave us the chance to point out potential problems prior to acceptance testing. We also added a product owner/tester review to the acceptance testing process. This allowed me to share current story behavior and product status with the product owner, and together we were able to identify missing scope and potential defects that needed to be fixed as part of the story. This meant that instead of wasting time writing up defects, we could move stories to ‘completed’ faster by simply adding information to the story and returning it to development. Returned stories were queued up for development with a high priority and moved through the development cycle faster than a defect would have.
Test Reviews – What Actually Happened
The test reviews seemed like a good idea, and they were consistently mentioned in retrospectives as being particularly helpful. The problem was that they could only be done with the product owner, and the product owner was very busy. It was difficult to make time for what often became lengthy reviews, as we took time during each one to decide whether an issue should be written up as a defect to be addressed later, or as a story amendment to be addressed immediately. When the product owner was busy, testing that was waiting for review began to pile up. This meant reviews took even longer, because with each one I needed to reorient myself and recreate data for testing I had already moved on from. This created yet another bottleneck preventing us from moving stories to completed status.
Test Reviews – Resolution
While I still think the testing reviews are helpful, I’ve changed how I work with the product owner to maximize the time we spend on them. I used to wait until our review to let the product owner decide whether each potential issue was a defect or a story problem. Now, I make that decision ahead of time, and write up defects and edit stories, but we review each decision. Often, I’ve taken the action that would have been suggested anyway. If my decision doesn’t satisfy the product owner, I just correct it after the review. I’ve also started setting aside specific meeting times to work with the product owner, instead of waiting for him to be available like I used to. This means reviews are happening more regularly, instead of being deprioritized by default.
We’ve seen both successes and failures with the new test strategy, but that seems right in line with the experimental nature of what we’ve changed. The feeling around the office is that our old method would likely have produced just as many bottlenecks, but with the new test strategy, even ‘untested’ stories are of higher quality thanks to the multiple reviews they receive during development. The way I see it, every failure we discover in the testing process is a chance to keep learning and keep adapting.
This is a guest posting by Carol Brands. Carol is a Software Tester at DNV GL Software. Originally from New Orleans, she is now based in Oregon and has lived there for about 13 years. Carol is also a volunteer at the Association for Software Testing.