I have heard more than one person say that if testing isn’t interesting and fun, then you aren’t doing it right. Those people probably haven’t spent time working on a sprint cycle, taking new features and delivering new software every week or two. Testing is a wonderful and interesting craft, but repetition can make me forget that. Some features are interesting and some are not. Some testing projects are fun and exciting, and others are routine. When a new feature isn’t interesting, or a project feels routine, my focus starts to slip and that means I risk missing something important.
Not everything in testing is a Eureka moment; sometimes things feel slow. What does it mean when testing isn’t exciting, and what can you do about it right now?
Testing some software can be an exercise in concentration. A couple of years ago, I was working on the API for a platform that helped people build advertisements. We had a new API endpoint for creating a new type of advertisement that included video content. That endpoint accepted the video URL, whether or not the video should auto-play, the starting and ending dates for the ad once it was published, and various labels and descriptions. Some of these fields could be tested independently, meaning that the value I used in one field would not, in theory, affect the value I used in another. Others, such as the start and end dates, had dependencies on one or more other fields.
This was a fairly large, but not complicated, testing problem. I started by making a few calls to the API using a command line tool to get a feel for how things worked and to get authentication and required headers sorted out. I began by creating a new video ad using just the required data, then moved on to seeing what bad things might happen with that data. Once I knew an ad could be created, I focused on the fields themselves. This usually comes down to a few questions: what does the user want to do here, what is good data, what is bad data, and what should happen when someone sends bad data? I was keeping track of data and data combinations in a spreadsheet. We were looking at quite a bit of test data, even starting with only the required fields, and my eyes were starting to glaze over.
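To make the independent-versus-dependent distinction concrete, here is a minimal sketch of the kind of check I was doing by hand. The field names and rules are illustrative, not the real API schema: a dependent-field rule (the ad must end after it starts) sits next to an independent one (the video URL must be absolute).

```python
from datetime import date

def build_payload(video_url, autoplay, start, end, label="test ad"):
    """Build a hypothetical video-ad payload; field names are illustrative."""
    return {
        "video_url": video_url,
        "autoplay": autoplay,
        "start_date": start.isoformat(),
        "end_date": end.isoformat(),
        "label": label,
    }

def validate(payload):
    """Collect validation errors; ISO date strings compare correctly as text."""
    errors = []
    # Dependent fields: end_date only makes sense relative to start_date.
    if payload["end_date"] <= payload["start_date"]:
        errors.append("end_date must be after start_date")
    # Independent field: valid on its own, regardless of other values.
    if not payload["video_url"].startswith(("http://", "https://")):
        errors.append("video_url must be an absolute URL")
    return errors

good = build_payload("https://cdn.example.com/ad.mp4", True,
                     date(2024, 1, 1), date(2024, 2, 1))
bad = build_payload("ftp://cdn.example.com/ad.mp4", False,
                    date(2024, 2, 1), date(2024, 1, 1))
print(validate(good))  # []
print(validate(bad))   # both rules fail
```

Encoding the rules this way makes the spreadsheet of combinations easier to reason about: each row either violates a rule or it doesn’t.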
This is where defocusing becomes useful.
I could have defocused by looking at other aspects of testing this change. How was the customer going to use this flow once the product was in their hands? What did they need to be productive? What would make them happy? This technique is like using a camera lens. I turn the dial one way and get a very detailed view of what is going on in the world. At this level of detail, I see specific fields one at a time, data, and maybe relationships to one or two other parts of the page. Turning the dial the other way to zoom out gives perspective. You see how the small piece of software you are working on relates to the rest of the product and, more importantly, to the people who are using it.
Sometimes I defocus with a specific mission in mind, sometimes I just step away to chat and get a cup of coffee. Either of these will usually get me back on track.
A big part of software testing work is entering data into a field, clicking submit, observing what happens, and then making a judgement about what you see. In the early 2000s I was working on a product that helped salespeople figure out the best price for a product. This was calculated by taking the current going price for something, a barrel of oil for example, then adding fees such as shipping and tax, and then applying discounts. The ‘best’ price for a product was the one that provided the most profit to the seller while staying low enough that the customer was willing to pay it. Testing this the first time around was interesting and exciting. Testing it after each code change in the user interface, or in the calculation engine, or in the workflow was usually not. The repetition made me lose interest in what was happening, and sometimes miss important problems.
Using a small amount of code and testing from the API would have kept me interested. What I would do today is use an allpairs or combinatorial tool to create a CSV file with different data combinations. After that, I would write a small amount of code that loops through the data and submits it to the API. This would allow me to design useful tests quickly, run them in minutes, and then run them again in Continuous Integration every time there is a change to that code repository.
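The CSV-and-loop approach can be sketched in a few lines. This example uses the full cartesian product from `itertools` for simplicity; a real pairwise tool (PICT, or the `allpairspy` library) would prune that set down. The field values and the `submit` stub are made up for illustration; in practice `submit` would be an HTTP POST to the endpoint under test.

```python
import csv
import io
import itertools

# Candidate values per field. A pairwise tool would select a smaller
# covering subset; itertools.product generates every combination.
values = {
    "autoplay": ["true", "false"],
    "video_url": ["https://cdn.example.com/ad.mp4", ""],
    "label": ["spring sale", ""],
}

# Write the combinations to CSV (in memory here; a file in real use).
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=values.keys())
writer.writeheader()
for combo in itertools.product(*values.values()):
    writer.writerow(dict(zip(values.keys(), combo)))

def submit(row):
    """Stand-in for the real API call; returns a fake HTTP status."""
    return 201 if row["video_url"] else 400

# Loop through the data set and submit each row, recording results.
buf.seek(0)
results = [(row, submit(row)) for row in csv.DictReader(buf)]
print(len(results))  # 8 combinations (2 x 2 x 2)
```

Because the data lives in a CSV rather than in the test code, adding a new value to a field regenerates the whole suite, and the same loop can run unattended in Continuous Integration.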
Now that there is a base data set being used consistently, I would explore each change based on risk without worrying so much about the core functionality.
Pairing is a not-so-new technique that is becoming fashionable again, and for good reason.
I run out of ideas sometimes. We all do. A normal sprint in my experience is a list of features. A few get moved into the testing queue and I work on them one at a time. I might pick up a new change, one that adds the ability for customers to have multiple forms of payment in an e-commerce site. I start by adding a PayPal account and a few credit cards, then Google Pay and Apple Pay. I make some purchases using each of the payment options. I make a couple of purchases from accounts that are disabled or do not have a high enough balance, to see what happens when a failure occurs. And then, when I don’t know what to do next or have gotten tired of sitting there and thinking, I move on to the next task in the queue.
When we test software, it is always done from our perspective. Some people like to focus on usability. Some enjoy looking underneath the user interface and testing closer to the application code. Others add value by discovering ways a piece of software may not help the customer do their job. Relying only on a single perspective can be a shortcoming. Pairing up with another person — a developer, a product person, a designer, another tester — is a way to force perspective expansion. A developer will ask questions about the software I never think to ask because they are familiar with the code base. Product people will ask questions that can show how the customer might not be happy.
Pairing with another person on the team, even if it is just for a few minutes, keeps the testing conversation interesting and moving forward.
Pick a New Strategy
Back in the 90s and early 2000s, and in some software companies today, regression testing started with gathering up all of the test documentation from previous releases. We divided it all up among the testers, and each person would spend the days and weeks before a release working through the stack one test at a time. When managers realized that the number of tests wasn’t shrinking at the same rate as the number of days left before the release, they would ask the test team to work weekends and add junior developers to the horde of people running the regression suite.
Regression testing was terrible, and terribly boring. No one wanted to do it.
It was boring because it wasn’t intellectually engaging. It wasn’t even very useful. A strategy based on risk and communication made pre-release testing much more interesting. Toward the end of each release, a few testers would get together with the development team to take a look at the changes. The goal was to figure out which parts of the software had changed, and which parts might be affected by those changes. Once we categorized changes and dependencies, we started talking with sales and product people. This exposed the technical team to information we weren’t usually privy to: product demos, new customers going live, customers who were unhappy, and changes that were more important than we had imagined. The intersection of these two bits of information gave us a list of areas in the product to focus on, and a rough order of importance.
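The intersection described above is simple enough to sketch. The area names here are invented for illustration: one set comes from the development team (what changed), the ordered list comes from sales and product (what matters most right now).

```python
# Areas touched by this release, from the development conversation.
changed = {"pricing", "checkout", "reports", "login"}

# Areas the business flagged, in rough priority order, from sales/product.
important = ["checkout", "demo-flow", "pricing", "invoicing"]

# The intersection, kept in business-priority order, is the test focus
# list; remaining changed areas fall lower on the risk ranking.
focus = [area for area in important if area in changed]
lower_risk = sorted(changed - set(focus))

print(focus)       # ['checkout', 'pricing']
print(lower_risk)  # ['login', 'reports']
```

Two short conversations replace weeks of scripted regression runs, and the output is a ranked list rather than a stack of test documents.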
Testing the next release was fun again. I had to think about what I was doing, and wasn’t operating based on a procedure.
If testing feels boring, you might be doing it wrong. There are plenty of strategies available to get yourself engaged again — focus and defocus, use code and tools where appropriate, pair up with other people to expand perspective, and refactor your strategy.
What tricks do you use when you start to feel bored?
This is a guest posting by Justin Rohrman. Justin has been a professional software tester in various capacities since 2005. In his current role, Justin is a consulting software tester and writer working with Excelon Development. Outside of work, he is currently serving on the Association For Software Testing Board of Directors as President helping to facilitate and develop various projects.