This is a guest posting by Justin Rohrman.
Every company I have worked with in the past decade has claimed some amount of agility. These companies all practiced Scrum, had sprints, and did retrospectives after each sprint to discuss what was learned and what could be improved. All the signals and rituals of agile were firmly in place.
Despite all of that, testers usually spent a significant amount of time waiting around — they usually didn’t have any work related to the sprint until the last few days. Development may have been able to respond to change, but testers were stuck in the dark ages.
How can testers be more adaptive to changes in software development?
Handoffs happen when there are strong divisions between roles on a development team. The business analyst talks to the customers, the product owner takes that information and makes requirements, the developer takes those requirements and turns them into software, and then the tester finds where things may have gone wrong.
This is the stuff boring computer science books are made of. Each handoff is a point that adds time to the process — and a point where it can become harder to respond to change.
I’ve had much better experiences working closely with developers instead.
A couple of weeks ago I was working with a developer to change a key concept in our product. We have files that get scanned into the product that are categorized with a label at scan time. The list of labels had active and inactive labels. Our task was to remove the concept of inactive labels. We ended up going down a route that wasn’t very good. Midway through the change, we talked with another developer and decided to do something completely different.
I was able to change my testing approach in real time with the developer because I was pairing with him on the change and was part of the conversation when we decided to take another design strategy in the code. This new testing strategy included different unit tests, browser automation and some subtly different exploration.
In other projects, I found out about large changes in direction days later and had to amend my approach based on secondhand information.
Being there when the change happened made me far more adaptable to the change, which is the heart of agile.
Normally when I read about testers having a strong presence in an agile process, it’s about their testing ideas. Testers attend design sessions and help to surface important ways the customer might use the product, they go to code reviews and ask about code coverage or what happens when a certain usage scenario occurs, or they generally talk with people and try to find software problems through conversation.
Conversation is powerful, but at some point you actually need to interact with the product. The question for testers working in agile is, what is the earliest point you can test software, not ideas and conversation? I don’t think you can get very early in the process without being comfortable with tooling and being able to string together a few lines of code.
Having some sort of API underneath the user interface is a popular way to develop software. This gives access to data manipulation (CRUD) and also conveniently offers a place to begin serious testing work before a user interface exists, or even to facilitate faster testing of more data permutations when the UI is in place.
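To make the idea concrete, here is a minimal sketch of driving create/read/update/delete checks through an API layer instead of the UI, so many data permutations can be exercised quickly. `LabelStore` and its methods are invented stand-ins for a real service client; the technique is the point, not the names.

```python
# Hypothetical sketch: CRUD checks run against an API layer, not the UI.
# LabelStore is an invented in-memory stand-in for a real service client.
class LabelStore:
    def __init__(self):
        self._labels = {}
        self._next_id = 1

    def create(self, name):
        label_id = self._next_id
        self._next_id += 1
        self._labels[label_id] = name
        return label_id

    def read(self, label_id):
        return self._labels[label_id]

    def update(self, label_id, name):
        self._labels[label_id] = name

    def delete(self, label_id):
        del self._labels[label_id]


def exercise_crud(store, name):
    """One full create-read-update-delete cycle for a single input."""
    label_id = store.create(name)
    assert store.read(label_id) == name
    store.update(label_id, name.upper())
    assert store.read(label_id) == name.upper()
    store.delete(label_id)


# Permutations that are tedious to enter through a UI but cheap via an API.
for candidate in ["invoice", "a" * 255, "émile", "name with spaces"]:
    exercise_crud(LabelStore(), candidate)
```

Once a UI exists, the same loop can feed edge-case data faster than any click-through, which is what makes the API a good first place to test.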
One of my last full-time employers was a company that was building a marketing platform. They had been building this platform for about a year and had a single tester on staff until I joined. Their tester was overworked and not able to keep up with the changes that the team of seven developers was putting out each week.
The entire product was built on a REST API platform that wasn’t very well tested or explored — scary. But it was also an opportunity. I spent my first few days building a proof-of-concept test and getting that running in continuous integration. After that, we isolated risk in the product and built tests in those areas.
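A proof-of-concept test of this sort can be small. The sketch below, using only the Python standard library, shows the shape of a smoke check that could run in CI: hit an endpoint, assert the status code, and assert the payload has the expected structure. The `/labels` endpoint and its payload are invented for illustration; a real suite would target the product's actual API, and the in-process fake server exists only so the example is self-contained.

```python
# Hypothetical sketch of a proof-of-concept REST API smoke check.
# The /labels endpoint and payload are invented; FakeLabelApi stands in
# for the real service so this example runs anywhere, including CI.
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen


class FakeLabelApi(BaseHTTPRequestHandler):
    """In-process stand-in for the product API."""

    def do_GET(self):
        if self.path == "/labels":
            body = json.dumps({"labels": ["invoice", "receipt"]}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):  # keep CI logs quiet
        pass


def check_labels_endpoint(base_url):
    """Smoke check: the endpoint answers 200 with a non-empty label list."""
    with urlopen(f"{base_url}/labels") as resp:
        assert resp.status == 200
        data = json.loads(resp.read())
    assert isinstance(data["labels"], list) and data["labels"]
    return data["labels"]


if __name__ == "__main__":
    server = HTTPServer(("127.0.0.1", 0), FakeLabelApi)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    base = f"http://127.0.0.1:{server.server_address[1]}"
    print(check_labels_endpoint(base))
    server.shutdown()
```

Wired into a CI job, a check like this fails the build the moment the endpoint breaks, which is exactly the early feedback a team with an untested API is missing.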
We identified a need on the team, discovered a place where I could be useful as a tester, selected a tool and pivoted after each set of tests based on what was relevant at the time. This is another good example of responding to change and being agile.
When I look back at the past 10 years of testing, the majority of the advancements I see come from improvements in development. I regularly see developers doing things that result in fantastic first-time quality: pairing, test-driving, using build deploy pipelines and containerization. There is also a strong culture of contributing to the craft by creating libraries for public usage and sharing what was learned.
This software is both more testable and harder to test at the same time. The easy-to-find bugs are often designed out of the code through developer testing.
Most testers I work with, however, are still focused on the same problems we were working on a decade ago: how we can test effectively in reduced timelines, how we can display the value of our work, how we can build useful automation, how we can effectively report coverage, and why we missed that bug.
A few months ago, the project I am working on was hit with a hard and fast-approaching deadline. We had a large queue of work to complete and not enough staff to get it all done. The normal path for management in these situations is to require staff to work overtime until the deadline hits or the work is complete, whichever comes first.
We had a meeting with the entire development team to figure out how we could become more efficient. We came up with a long list of things, like reducing pairing when it makes sense, having testers float between projects, and having several hour-long blocks where there are no meetings.
Each day during our standup, we would review how our changes were affecting the process, what was working and what needed changing. We also spent some time during our sprint retros talking about how our changes were going and where we could go next.
This sort of thing is a daily discussion now. The topic of how we could become more useful is always on the table, and we don’t have to wait for a meeting to implement a new change.
Nearly 20 years after the creation of the Agile Manifesto, we still see companies trying to “be agile” or buying into frameworks that allow them to stay as rigid as possible while pretending to respond to change. The successes I have seen are the companies or projects where there is a cultural desire to hunt down specific areas where a team could improve, try something new, make observations about what happened and then try something else. Testers can be involved in the process too, and not just on the receiving end of the development process.
What part of your job can you do better, more efficiently or more effectively? Try something new and see how it works. And keep doing that.
Justin Rohrman has been a professional software tester in various capacities since 2005. In his current role, Justin is a consulting software tester and writer working with Excelon Development. Outside of work, he is currently serving on the Association for Software Testing Board of Directors as President, helping to facilitate and develop various projects.