This is a guest posting by Johanna Rothman
Software product development is rarely a straight line from idea to code-and-test to delivery. We take ideas, massage them, build something, measure the results (often with tests), and learn from those results, which then feed into new, evolved ideas. We might veer off our original plans to learn and develop something even more valuable.
Sometimes we get stuck; we fail. If we get stuck in the building, we might change what we build: a smaller or different idea. If a test fails, that's feedback that what we built didn't work as we expected.
The build-measure-learn loop helps us assess our product ideas, our product implementation, and how well we understood the results we expected the product to produce. In short, the loop helps us learn. And the smaller the piece we build, the faster we can learn.
Testers are key to helping a team learn: they articulate uncertainty, help gather information about the product under test, and help the team assess what it has learned.
Testers Explain Uncertainties
Darcy, a tester, was worried. During the team's discussions about a new search report feature, she expressed her concerns about several parts:
The user interface confused her, and she thought it might confuse the customers.
She wasn’t sure the search itself would be fast enough from what she knew about how they stored the data.
And, she wasn’t sure the data was tagged correctly to return the best results.
As Darcy explained her concerns and uncertainties, Peter, the product owner, interrupted.
“Look, I’m not buying it,” he said. “Have a little faith in me and in the rest of the team.”
“It’s not about faith,” Darcy said. “It’s about experimentation. I’d really like it if we ran some spikes for performance and got some early feedback about the search results and the user interface.”
Peter shook his head. "Nope, there's no time for us to experiment. We need to finish this feature, and right now." After a couple of minutes' discussion with the team, he agreed that getting the search results right was the top priority.
The team discussed the design, their general approach to generating search results. Everyone except Darcy was convinced their tests would show that the feature, as they'd discussed it, would just work.
Darcy created test data and some automated tests. The rest of the team reviewed the tests and the data. They all agreed that the tests and data were correct.
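To make the scene concrete, here is a hedged sketch of what a reviewable system-level test with shared test data might look like. The `search_report` function and the sample records are invented for illustration; the article does not show the team's actual code.

```python
# Hypothetical sketch of a system-level search test. Both the function
# under test and the data are stand-ins, not the team's real code.

def search_report(query, records):
    """Toy stand-in for the real search: return records whose tags
    contain the query term."""
    return [r for r in records if query in r.get("tags", [])]

# Test data the whole team could review and agree on.
RECORDS = [
    {"id": 1, "title": "Q3 revenue", "tags": ["finance", "quarterly"]},
    {"id": 2, "title": "Onboarding guide", "tags": ["hr"]},
]

def test_search_returns_tagged_records():
    results = search_report("finance", RECORDS)
    assert [r["id"] for r in results] == [1]

def test_search_with_no_matches_returns_empty():
    assert search_report("legal", RECORDS) == []
```

Because the team reviewed both the tests and the data together, a failure later could not be waved away as "bad test data"; the tests measured the product, not the team's assumptions.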
Darcy ran the tests. All of them failed.
“Oh no,” Peter said, “We failed. Now what?”
Steve, a developer said, “But we shouldn’t have failed at all. Now we need to go back to the drawing board and figure out what happened.”
Peter said, “Hold on. Does that mean we won’t get this entire feature set next week?”
“Probably not,” Steve said. “First, we need to learn what’s going on.”
Test “Failure” is Information
The team entered what they called "huddle" mode. They started reviewing the code as a team, using a projector to show the code large on the wall. They wanted to make sure they had written the code correctly.
While the code was syntactically correct, Darcy and another developer pointed out that it didn't reflect the design they had discussed. They all sat back, looking at the code.
Steve was the first to talk. “I think we have several problems.” He listed them on the whiteboard, next to the code. “Does anyone see other problems that we haven’t covered?”
One developer added another item to the list.
Steve asked, “Okay, how are we going to get to the bottom of this?”
Darcy said, “If we create more tests, we can test our way out of it.”
Steve asked, “What do you mean?”
Darcy explained that with only system level tests, the team didn’t have enough information to diagnose and fix the problems. They didn’t have enough access points to learn what the product was doing. They needed unit level and some combination tests.
“I’m happy to write those combo tests, but you folks need to write the unit tests we don’t have yet,” Darcy said.
Steve nodded. “You’re right,” he said. He turned to the team and they all agreed to write various tests to gain a better understanding of the current code.
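As a hedged illustration of Darcy's point, assuming two hypothetical helpers, `tag_record` and `build_index`, that are not from the article: a unit test exercises one piece in isolation, while a combination test exercises the hand-off between pieces, so a failure points at a specific place instead of "somewhere in the system."

```python
# Sketch of unit vs. combination tests. All names are invented; the
# point is that each test gives the team one more access point.

def tag_record(title):
    """Unit under test: derive search tags from a title (toy rule)."""
    return [word.lower() for word in title.split() if len(word) > 3]

def build_index(records):
    """Second unit: map each tag to the record ids that carry it."""
    index = {}
    for r in records:
        for tag in r["tags"]:
            index.setdefault(tag, []).append(r["id"])
    return index

# Unit test: tagging alone. If this fails, the bug is in tag_record.
def test_tagging():
    assert tag_record("Quarterly Revenue Report") == \
        ["quarterly", "revenue", "report"]

# Combination test: tagging feeding the index. If test_tagging passes
# but this fails, the bug is in build_index or in the hand-off.
def test_tagging_feeds_index():
    records = [{"id": 7, "tags": tag_record("Quarterly Revenue Report")}]
    index = build_index(records)
    assert index["revenue"] == [7]
```

With only the system-level tests, "all of them failed" was the team's entire diagnostic; tests at these lower levels turn one big failure into several small, localized answers.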
Assess the Test Results to Increase Learning
The team decided to try something a little different, to learn earlier. They knew they had a problem somewhere in either the new code, how the new code connected to the old code, or in the data. They decided to create very small tests to narrow down where the problems were.
They fully expected many of these tests to fail. The failure wasn't the point. The more small, useful tests they could create, the faster they would learn where the problems were. In fact, fast failures would help the team learn early (and often).
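A minimal sketch of what such "very small tests" can look like, with each test checking exactly one assumption: the data's shape, the new code's behavior, and a boundary case. All names here are hypothetical, invented for illustration.

```python
# Each tiny test isolates one assumption, so a failure immediately
# names the suspect area. Function and data names are stand-ins.

DATA = [{"id": 1, "tags": ["finance"]}, {"id": 2, "tags": []}]

def lookup(tag, data):
    """New code under suspicion: find the ids carrying a tag."""
    return [r["id"] for r in data if tag in r["tags"]]

# Assumption 1: every record actually has a tags field.
def test_data_shape():
    assert all("tags" in r for r in DATA)

# Assumption 2: lookup finds a tag we know is present.
def test_lookup_known_tag():
    assert lookup("finance", DATA) == [1]

# Assumption 3: lookup copes with records that have no tags at all.
def test_lookup_untagged_records():
    assert lookup("finance", [{"id": 3, "tags": []}]) == []
```

If, say, the second test fails while the first passes, the problem is in the new lookup code, not the data; that is the narrowing-down the team was after.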
To increase their learning, the team decided to swarm around the tests. They created a one-day plan to add small tests in specific areas of the code. Each developer took certain areas of the code. Darcy had the overall perspective of the product.
Every hour, they planned to reconnect to see what they had created for tests, and what they had learned by running the tests.
After the first hour, they had all made significant progress on the tests. However, they hadn’t learned anything new yet. After the second hour, two people were stuck on how to generate tests. The team decided they would now work in developer-pairs, to learn how to test those areas. By the third hour, they started to learn more about where their assumptions were wrong.
The team continued through about six hours of test generation. By the end of that time, they all learned how the code actually worked, instead of how they thought it worked.
Darcy automated the feature-based tests and end-to-end system tests. Best of all, the entire team understood the internals much better.
Learning is the Goal
Darcy’s team learned something quite important. Instead of “failing fast,” they created ways to learn early using tests. In this case, they didn’t use paper prototypes or other customer-based tests. They used tests that made sense for the problems they saw.
Product owners and managers don't like the idea of failure. Yet we rarely succeed the first time we try something in software, because we need to learn. Instead of "failing fast," consider reframing that idea as "learning early." The faster you can learn, the faster the team can complete the work for the customer. And that's the real goal.
This article was written by Johanna Rothman. Johanna, known as the “Pragmatic Manager,” provides frank advice for your tough problems. Her most recent book is “Create Your Successful Agile Project: Collaborate, Measure, Estimate, Deliver.”