This is a guest posting by Justin Rohrman.
I was working on a pretty tough project early in my career. This was back in the bad old days when the waterfall was the go-to project management tool. We were at the end of a release, in the testing phase, and all the testers were working evenings and some weekend days to get through the queue of work. We were all left feeling burned out at the end of that release.
The Monday after the release, I asked if we could hire a new tester for our team. The answer I got from my manager was that I needed to show how that would make sense financially. I’m not an accountant, so I never did the analysis.
What is the right test-to-dev ratio, and how can you figure out what’s right for your team without working in the accounting department?
Analyze the problem
Software development is often a process of handoffs — even in agile, and even when we do two-week sprints.
Developers might start a sprint with a handful of tickets to work on apiece. They work for a day or two, then I check in and ask how things are going. The developer tells me that everything seems to be going well, but they need a little more time. They don’t really have anything testable just yet.
After another day I get a small change from one developer and start working on that. I find a problem where entering a bad date in a date field causes an exception when I submit the page, and another where a button overlaps a text field in older versions of Internet Explorer. I report those issues to the developer, and he lets me know he’ll get them into his queue of work.
New changes ready to test start rolling in a couple of days before the end of the sprint and our inevitable deploy to production. We have to test a new report, a new management workflow, and a new page for managing those workflows — oh, and the two bugs I found a few days before. All of that is a lot for a team of two testers with two days to go before a release.
Presto: We now have too much work and not enough testers to do a good job. If we did have enough testers, we would have the new problem of fixing and retesting all the problems they find in time to release.
We had several problems. First, we started the sprint with large swaths of work; a single card in our bug-tracking system took around five days to complete, on average. Next, we had varying amounts of developer testing: some developers wrote unit tests, and some did nothing beyond quickly opening a browser after they finished writing code. Last, we had a physical and mental separation between roles on the development team.
Our tester-to-developer ratio was completely inappropriate for how we were developing software, especially right before we wanted to put out new software. We had three options: recruit and hire an additional tester, work to make our development process more effective and efficient, or both.
Here is where I would start.
Break work into smaller chunks
The first thing I would do in this scenario is learn to break work down into smaller pieces. In my experience, a day or two is a good upper limit for completing a change. Smaller changes are easier to understand, isolate, and, more importantly, test.
When I look back at longer-running changes I have worked on, ranging from a week to several months, there were inevitably surprises: we discovered new scope, forgot to test things, or had to rework parts of the code that were poorly implemented. If we aren’t careful, this philosophy can easily start to smell like big, upfront design.
The goal here isn’t 100 percent reduction of uncertainty; it is understanding a change just enough that you can get it done in a day or two.
Developers tend to start doing some testing of their own once the team gets in the habit of making very small changes. Because very small changes are easier to understand, I see developers begin to consider how and what to test.
I like pairing with developers. We start by thinking about what we would test, and then we write that test. After that, we write the product code that will make the test pass. On a normal change, we do this back-and-forth process of writing tests and then production code until we get a gut feeling. That feeling lets us know that we are probably far enough along to create a new test environment and do some exploring.
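As a minimal sketch of that test-first loop, here is what a pairing session might produce for the bad-date bug mentioned earlier. The function name `parse_entry_date` and its exact behavior are hypothetical, invented for illustration; the point is the order of work: the failing test is written first, then just enough product code to make it pass.

```python
from datetime import date

# Step 2: the product code, written only after the tests below existed.
# Hypothetical helper -- returns None for bad input instead of letting
# an exception reach the submit handler.
def parse_entry_date(text):
    """Parse a 'YYYY-MM-DD' string; return None if it isn't a valid date."""
    try:
        year, month, day = map(int, text.split("-"))
        return date(year, month, day)
    except (ValueError, AttributeError):
        return None

# Step 1: the tests, written first. They fail (red) until the
# function above is implemented, then pass (green).
def test_rejects_impossible_date():
    assert parse_entry_date("2023-13-40") is None

def test_rejects_garbage():
    assert parse_entry_date("not a date") is None

def test_accepts_valid_date():
    assert parse_entry_date("2023-06-15") == date(2023, 6, 15)

if __name__ == "__main__":
    test_rejects_impossible_date()
    test_rejects_garbage()
    test_accepts_valid_date()
    print("all tests pass")
```

Each red-green cycle like this is small enough to finish in minutes, which is what makes the "gut feeling" checkpoint possible: the pair always knows exactly which behavior is covered so far.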
This isn’t about pairing, though; this is about managing the tester-to-developer ratio. So why does this matter?
Software that is written in very small chunks and is built by people who care about quality tends to be pretty good. I don’t generally see dead-on-arrival builds, back-and-forth bug reports and bug fixes, or time spent in bug triage meetings trying to figure out what matters, what is a bug, and what is a “feature request.” There are fewer bugs, and the bugs that are there tend to be more interesting and harder to find.
This workflow requires fewer testers — they just also need to be more highly skilled.
Figure out your ratio
It is easy to look at a bottleneck — getting a large amount of testing done right before a release, for example — and automatically think you need to hire another person. But the tester-to-developer ratio tends to solve itself if you examine the actual problems you are having.
How long does it take to get one code change done? How much developer testing and quality emphasis do you see in the development process? How closely do the different roles on your development team work together?
If you are working on small changes that last somewhere around a day, test-driving where you can and pairing sometimes, and you still have a bottleneck, then maybe you do need another tester. By that point, though, it should be apparent that the next step is hiring a new person; in this situation, there’s no accountant needed.