I wish I could tell you that testing stories with developers was just as fruitful and rewarding as testing defects with them. In some ways it was, but we also hit a few unexpected snags along the way.
We were working with a tight deadline. In a meeting with the product owner, the test team was clear that we would not be able to test as much as we would have liked if we were to meet the deadline we had been given. We decided on a strategy: Test calculation-based defects and features thoroughly, and test the remaining features just enough to identify the major risks.
The product was already being tested by our intended users, who were internal consultants. Because the product was being used by them regularly, we believed that all the major workflows through the product had been reported by them during their testing. Our goal in testing the remaining features was to evaluate any risks through less common workflows. Deeper, more thorough investigation would have to wait for the next release, to a broader audience.
I explained our testing scope to the developers who would be testing with me. It seemed like we agreed, but I soon discovered that agreement was shallow. Shortly after the developers started working on testing features, I started to see new builds rolling in. These builds included fixes for defects that didn’t have defect reports written in our tracking system. Ack!
I explained to the developers that at this phase in testing, we really needed to write up a report for any defects that we found. It’s important to let the product owner make the decision about what to fix and what to leave alone so that we don’t waste effort and can minimize risk introduced by new changes.
Also, we had already decided not to test any defect fixes that didn’t affect calculations. The changes introduced by these undocumented fixes were therefore unlikely to be tested, which meant new risk had been introduced and could go unevaluated. This was the opposite of what we wanted at this stage in development, immediately prior to release. We decided that the developer would test the new fixes, but I asked them to be a little more mindful in the future.
Crisis averted. Things were going smoothly — until a developer stopped by my desk to mention that our integration with a database wasn’t working well. I was a little surprised, because that database type wasn’t part of our initial scope.
I took a quick look at the testing notes for the feature the developer was testing. First, I was impressed by the level of detail in his notes. This was evidence of some serious work. As I read through them, I realized that I hadn’t made my expectations clear. He had done much more testing than I had expected from my description of the “happy path” testing strategy for features.
The product we were testing will ultimately need to integrate with three database types used by our flagship program, but for this release, we were focusing on only one of them. Knowing how the feature worked across all three database types was useful information, but it was more than we needed to make a decision for a release scoped to a single database type, and gathering it cost time we couldn’t spare.
I let the developer know that I was glad he had discovered that future risk, but that it was acceptable for now not to integrate with the broken database type. I suggested that we write a feature request and continue to focus only on the database type scoped for this release.
We continued to make steady progress together in testing the remaining features. Thanks to the developers’ assistance, we were able to meet our testing deadline for features and defects. We are still working through some of the details of the release, but testing is no longer the bottleneck.
Working with the developers has been an eye-opening experiment. Now that this project is finally complete, I can start working on some of my personal goals as a new project ramps up on its way to entering development. I can’t wait to use what I learned from working with developers on this project to make working on the next project even better.
This is a guest posting by Carol Brands. Carol is a Software Tester at DNV GL Software. Originally from New Orleans, she is now based in Oregon and has lived there for about 13 years. Carol is also a volunteer at the Association for Software Testing.