A while back, I talked about how testing looked before I moved out of the test team room and new ways I am able to participate in the development process from the development team room. I want to talk a little more about how my approach to testing has changed since moving into the team room to work on a brand new project.
The Old Way
When I worked in the test team room, user stories and defects fell into my lap. All of our products were established, and the product was generally considered “done” by the time it entered testing. Once stories or defects were marked “Ready for Test”, I claimed them in whatever order seemed convenient at the time. After claiming a story, I read it, asked a few questions if I felt I needed to, and then decided how it needed to be tested.
In general, testing was as comprehensive as possible. It was important to catch any known defects at this stage, because there would be no revisiting of completed features once they were tested. My goal was to be sure that we were fully aware of all knowable behavior associated with the completed story/defect. After carrying out my tests, I would write up any defects that arose from the testing, and mark the item as “Complete” if it seemed good enough, regardless of the defects found, or “Ready for Development” if it seemed too broken. Prior to release, a product manager reviewed the status of all the stories and defects that were part of the release, but there was no real review of what was tested or how it was done.
The New Way
When I first moved into the development room, the architect, the lead developer, and I talked about how being in the team room and working on a brand new product might change my approach to testing. Instead of testing completed stories, testing would happen throughout the development process, but each story and defect would still have a final “testing” stage prior to being accepted.
We expected the behavior of the program to change over time. This meant that testing each story comprehensively would likely produce waste as new stories emerged and the behavior created by older stories became invalid. Instead of constantly retesting, we decided to test judiciously. When we expected big changes to come, stories would be tested broadly to confirm the acceptance criteria were met. Deeper integration testing could wait until larger epics were completed and the program had gained some stability. This way, fundamental flaws could be addressed while design decisions were still being made in other parts of the program.
The questioning that we introduced during story reviews made it easier to decide whether the time was right for deeper testing, or whether the features were still fluctuating too much to make deep testing valuable.
Increased Communication Means Fewer Defects
In addition to changing the testing approach, we added a step to the end of the testing process. Instead of just marking stories as “Accepted” once the testing was done, I began debriefing the test results with the product owner. Initially, I literally showed him all the steps I had taken, and all the notes I had written, so that he could get a better understanding of how our story reviews translated into test ideas.
Over time, these debriefs became more focused. I would give the basic idea of what I’d tried to test, and we reviewed all the test results that seemed wrong or unusual. We decided together whether to write up a defect, return a story to development, or do nothing at all.
At first, so many things were expected to change that we would often take a ‘wait and see’ approach to writing defects. If the coming changes would alter the behavior anyway, there was no value in writing a defect that wouldn’t be there in a week or two. As we got closer to the end of the project and expected more stability, we were more willing to write defects to be addressed or documented in the final phases of development.
Change is Hard
The hardest part about making these changes wasn’t deciding to do something different. It was forcing myself to actually go through with the changes. I discovered that it can be really hard to avoid testing deeply when that’s all you’ve ever done, even when you know it’s not valuable in the moment. Not writing a defect when something seemed wrong felt almost sacrilegious. But in the end, because of the testing we’ve built into the development process, I know that we’re still producing a high-quality product, just in a different way.
This is a guest posting by Carol Brands. Carol is a Software Tester at DNV GL Software. Originally from New Orleans, she is now based in Oregon and has lived there for about 13 years. Carol is also a volunteer at the Association for Software Testing.