This is a guest post by Jim Holmes
Bug reports and the triage process are waste. In every sense, but especially the Lean sense. What’s more valuable to your users and customers? High-quality software being used in production, or a bug database that’s filled with hundreds or thousands of bugs dating back a decade or more?
Yes, this is a provocative opening. Intentionally so! But tamp down your righteous outrage, bear with me for a moment, and let me see if I can lay out a reasonable case for you to consider.
Some Simple Numbers
Let’s start with a simple budget example to get a dollar cost on the bug processes. Below are some figures for a team that’s handling five bugs a week. Testers find a bug, spend time writing up reports and documentation, a developer has to investigate those reports as part of preparing for a triage meeting, then there’s time around fixing, validating, and closing out the reports.
Please don’t fixate on the exact numbers–this is meant as an illustration! (Although the numbers are pretty darn close to what I’ve seen over many years.)
Traditional Find, Report, Triage, Fix, Validate model
| Role | Hourly Rate | Task | Hours | Cost |
|---|---|---|---|---|
| Tester | $80 | Writing up and validating reports | 5 | $400 |
| Dev | $100 | Investigating report (prep for triage meeting) | 5 | $500 |
That's 28 hours and $2,520 spent in one week on the bug-fix process alone, and those are only the immediate, direct costs. There are a number of indirect costs as well, such as opportunity costs, delay costs incurred between each of the numerous phases, and a significant hit to focus and morale.
Now consider the following numbers for a much more streamlined approach where a tester finds a bug and works with a developer to immediately fix and validate.
Modern Find, Fix, Validate model
| Role | Hourly Rate | Task | Hours | Cost |
|---|---|---|---|---|
| Tester | $80 | Collaborate with dev on fix and validation of fix | 5 | $400 |
| Dev | $100 | Collaborate with tester on fix and validation of fix | 5 | $500 |
| PM | $80 | Three Amigos collaboration on bug fixes | 1 | $80 |
11 hours for $980. That's an extraordinary difference, and it doesn't even take into account the huge indirect cost savings around delays, opportunity cost, and morale.
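The arithmetic behind these tables is simply hours times hourly rate, summed per model. A minimal sketch, using the figures from the "Modern Find, Fix, Validate" table above (the rates themselves are illustrative, as noted earlier):

```python
# Each row: (role, hourly_rate_in_dollars, hours_spent_per_week).
# Figures taken from the "Modern Find, Fix, Validate" table above.
modern = [
    ("Tester", 80, 5),
    ("Dev", 100, 5),
    ("PM", 80, 1),
]

def totals(rows):
    """Return (total hours, total dollar cost) for a list of rows."""
    hours = sum(h for _, _, h in rows)
    cost = sum(rate * h for _, rate, h in rows)
    return hours, cost

print(totals(modern))  # -> (11, 980)
```

Plug your own team's rates and hours into the same shape to see what your current bug process costs per week.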
Over the years I’ve heard plenty of objections to this approach. Here are common ones and my usual responses:
“We lose tracking of what’s been fixed!” Why do you care? Seriously. Your users don’t care what you fixed, only that what’s in production meets their needs.
“We lose data on bad areas of the codebase!” Bug counts are a secondary measurement. Use better metrics like complexity, dependencies/coupling, and other output from static analysis tools. Source code repository churn is another great indicator of unhealthy parts of the codebase.
“We can’t compare the performance of different teams!” You shouldn’t be using such metrics to compare teams. Period. Full stop. This is perhaps one of the most egregious misuses of metrics and it absolutely will drive unhealthy behaviors such as focusing on trivial bug reports over delivered value.
“We will lose the ability to predict expected defect rates!” Again, just stop. Please. Predicting defect rates makes complete sense in manufacturing processes, where you certainly can get accuracy around mechanical processes. Thought work such as software development in a complex business rule environment is a completely different beast, however, and can’t be so easily modeled. Moreover, using data gathered from a backend data warehouse project makes no sense whatsoever when applied to a mobile app consuming microservices.
“We’ll suffer too much disruption to delivery and velocity!” No, no you won’t! The disruption may seem worse at first, but in reality it’s far less. Additionally, you’re actually improving delivery cadence as you are cutting wasted time and rework.
Putting It Into Practice
Moving to a Fix Versus File mindset takes a little adjustment on the part of everyone involved. Here's a typical flow of how this looks:
- A tester finds a bug. The tester jots down anything pertinent required to reproduce the bug.
- The tester reaches out to a developer and program manager (or business analyst or whatever similar role is on the project) for a quick Three Amigos meeting. This meeting should happen quickly, within a few hours at most. The three roles discuss if the bug is valid. If so, the developer agrees to fix it as soon as they can close out any work in progress. (This requires working in small pieces, for example, finishing out one test in a red-green-refactor cycle.)
- The developer and tester collaborate, either by direct pairing or remote if necessary, to fix the bug and validate it.
- The fix (including automated tests verifying the fix!) is pushed into source control.
Getting to this stage of highly efficient maturity requires a number of things to be in place.
First, the team's culture needs to be focused on value delivery: high-quality software in production and in use by the customers. Second, the team needs to be working in small chunks. Not only is small work size critical for good value delivery flow and throughput, it's critical for enabling quick pivots to high-value work like fixing bugs. Third, teams need to be proficient at automated testing, as it's important to lock down bug fixes with automated tests. (This also explicitly requires the codebase to be supportive of automated testing.) Finally, this requires a collaborative team where testers, devs, and BAs/PMs can quickly discuss issues and reach decisions, even if they're a geographically distributed team.
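Locking down a fix with an automated test can be very lightweight. A minimal pytest-style sketch follows; the `parse_order_date` function and its empty-string bug are hypothetical stand-ins for whatever your real defect was:

```python
from datetime import date

def parse_order_date(raw):
    """Hypothetical function that originally crashed on empty input."""
    if not raw:  # the fix: guard the empty-string case instead of crashing
        return None
    year, month, day = (int(part) for part in raw.split("-"))
    return date(year, month, day)

# Regression tests pinning the fix so the bug can't silently return.
def test_empty_order_date_returns_none():
    assert parse_order_date("") is None

def test_valid_order_date_parses():
    assert parse_order_date("2024-05-01") == date(2024, 5, 1)
```

Because these tests ship in the same commit as the fix, any future change that reintroduces the crash fails the build immediately, which is exactly the "lock down" this step calls for.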
Are there places it makes sense to file bug reports? Yes! Absolutely. There are indeed some issues that should be documented. I agree with writing up bug reports that relate to security, architecture, and performance. These are complex enough areas that it makes sense to have more documentation around them.
Focus on Working Software In Production
Bug reports and the heavy processes around them provide little or no value to your end users. Rather, they act as a drag on productivity. Moreover, it's frankly a bit dishonest of us to file away bug reports we never intend to resolve, as evidenced by the many bug databases around the world full of reports that are years old with no resolution in sight.
Instead, focus on preventing bugs by better collaboration before a line of code gets written. If a bug does make it out, fix it, verify it, and push it out so you can focus on the next piece of value!
Jim Holmes is an Executive Consultant at Pillar Technology where he works with organizations trying to improve their software delivery process. He’s also the owner/principal of Guidepost Systems which lets him engage directly with struggling organizations. He has been in various corners of the IT world since joining the US Air Force in 1982. He’s spent time in LAN/WAN and server management roles in addition to many years helping teams and customers deliver great systems. Jim has worked with organizations ranging from start ups to Fortune 10 companies to improve their delivery processes and ship better value to their customers. When not at work you might find Jim in the kitchen with a glass of wine, playing Xbox, hiking with his family, or banished to the garage while trying to practice his guitar.