How Was This Tested?

This is a guest post by Peter G. Walen.

There’s never enough time to really exercise a piece of software. Bosses want it done yesterday. Project Managers and Scrum Masters want to know the progress and what is left. So, hey, it worked: just mark it as PASS and move on to the next set of tests. That is fine until something critical fails and everyone looks at the people who tested it. How do we, as testers, answer the question “How was this tested?”

How was this tested?

It is a question every tester will be asked at some point. It might come as a “Great job!” type of comment. But it often gets asked when something significant is found late in the project, or in production, or, worse, makes the news. Then it becomes “Why didn’t you find this problem?”

In a well-functioning organization, discussions focus on avoiding the problem in the future, learning, and moving forward. Most of us do not work in well-functioning environments. Even those that are generally good can suffer from moments of dysfunction.

Whether learning from events or defending yourself from accusations of incompetence, testers need to be able to provide professional, robust, meaningful answers and information. Of course, we may get a little defensive when asked about a bug that was not detected, but we can stay focused and lead an informative discussion where everyone learns something. To do that, we need some level of records, formal or informal, for our own use and, perhaps, as a teaching tool for others.

Test scripts

Scripts can be useful and provide clues about what we, or others on the project team, thought was important to test. Do we have any records on how the scripts, either manual or automated, were developed?

Documented test scripts can give insight into the thought patterns and intentions behind them. That, in turn, sheds light on how the expected behavior of the software was understood.

Many testers working from a script operate under the belief that if a test “passes,” the only thing that needs to be recorded is that it passed. If there are problems, the test is “failed” and at least some level of information is recorded. We hope every “pass” really was a pass, but without a record of what was actually done, we cannot be certain.
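If the scripts are automated, keeping a record of passing runs costs almost nothing. As a minimal sketch, assuming pytest and a log file name of my own invention, a hook in conftest.py can append the outcome and captured output of every test, passed or failed:

    # conftest.py - a sketch of recording evidence for every test, not
    # just failures. The log file name is an assumption, not a standard.
    import datetime

    EVIDENCE_LOG = "test_evidence.log"  # hypothetical location

    def pytest_runtest_logreport(report):
        # "call" is the phase in which the test body actually ran;
        # skip the setup and teardown reports.
        if report.when != "call":
            return
        stamp = datetime.datetime.now().isoformat(timespec="seconds")
        with open(EVIDENCE_LOG, "a", encoding="utf-8") as log:
            log.write(f"{stamp} {report.nodeid} -> {report.outcome}\n")
            # Keep what the test printed even when it passed.
            if report.capstdout:
                log.write(f"  stdout: {report.capstdout.strip()}\n")

Even one line per test answers “was this ever run, and when?” months later.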

Other artifacts

Many involved in software focus their understanding of testing around test artifacts. The collections of test plans, scripts, and cases tend to dominate their discussions around testing. What many testers and their managers fail to understand is that these artifacts are not testing itself. They are models representing how testing can be done.

The most important set of artifacts around testing are the records kept during testing. Test plans and strategy documents describe what the testing should look like. Test cases and scripts describe the scenarios that need testing. They can also describe the configuration and other external factors that can impact how the software operates.

It is the information around the actual testing, the execution of the tests, that is of greatest interest. Whether the test was judged to have “passed” is of interest, but so is how the tester reached that judgment.


Evidence

When working with less experienced test organizations, I often use the analogy that their testing notes should be thorough enough to convict them of “committing testing” in a court of law.

What do I mean by that? Consider this: Rigorous testing requires tracking actions taken and results observed. Part of this involves taking notes so you or anyone else can recreate and understand what was done weeks or months after testing.

All of us are under pressure to get things done fast. No matter what environment software is being developed in, the actual hands-on testing of a feature or function always seems to be under some form of time pressure. The challenge with rigorous note- and record-keeping is the time it takes, and not letting the note-taking itself get in the way of finishing testing “on time.”

Here are some ideas on how I try to address that.

Don’t go it alone

Have a partner/paired tester working with you to make notes on where and how you navigate, what values you enter, what options you select – even how long it takes between entering data or clicking on the dropdown.

This partner might notice things you miss: responses or results you are not paying attention to because you are focused on something else. These are often worth investigating. A partner can also serve as a sounding board; you can exchange ideas while working through a scenario. If a behavior is noted as unusual but less important than what you are working through, a partner can help you remember it as an additional path to exercise in the next iteration.

Importantly, another set of eyes on the screen can help you “stay honest” and focus on what needs to be done next. Those extra eyes can also reduce the inattentional blindness that comes from being so deep into one function that other important things are missed.

Record everything

A common phrase among medical professionals is “if it isn’t written down, it didn’t happen.” Why? Because memories are flawed. “Eyewitness” testimony in criminal cases is coming under critical review because people don’t remember things as accurately as once believed.

Write down everything as you do it and as it happens. Anything that seems “obvious” now might not be so “obvious” in a week or a month, so make a note of it. Screen recording tools can also keep track of what the user does: where you go, what you enter, and the responses, screen displays, and messages that come back. This gives you a straightforward way of recording “what happened” when you tested.
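Written notes do not require elaborate tooling. Here is a minimal sketch in Python; the session file name, the entry format, and the example observations are all my own illustrative assumptions:

    # session_notes.py - a sketch of timestamped note-taking during a
    # test session. File name and entry format are assumptions.
    import datetime
    from pathlib import Path

    NOTES = Path("session-notes.md")  # hypothetical session file

    def note(text: str) -> None:
        """Append a timestamped observation to the session notes."""
        stamp = datetime.datetime.now().strftime("%H:%M:%S")
        with NOTES.open("a", encoding="utf-8") as f:
            f.write(f"- {stamp} {text}\n")

    # Example entries made while testing (illustrative values):
    note("entered a negative quantity on the order form")
    note("UNEXPECTED: total shown as $0.00, no validation message")

One-line entries like these take seconds to make and keep the testing moving.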

There are other options as well. Tools to facilitate note-taking while testing are available: keep the application under test in one window and the note-taking tool in another. Screen snags can be copied into the notes so the tester can show precisely what is meant.
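Capturing a screen snag can be scripted, too. This sketch assumes the third-party mss package (pip install mss) and reuses the note() helper from above; both are illustrations, not requirements:

    # A sketch of capturing the screen and referencing it in the notes.
    from mss import mss

    from session_notes import note  # the helper sketched earlier

    def snag(label: str) -> str:
        """Capture monitor 1 to a PNG and log it in the session notes."""
        filename = f"snag_{label}.png"
        with mss() as screen:
            screen.shot(mon=1, output=filename)  # save the capture
        note(f"screenshot: {filename}")
        return filename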

Other forms of evidence

Record simple, yet often overlooked, information like the build or sprint in which the testing was done, and possibly the version of the database or schema, as those can change. Sometimes, when database environments change, there are unexpected consequences that might not show up until later.
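Capturing that context can be a single call at the start of a session. In this sketch the version sources (a git commit, two environment variables) are stand-ins for whatever your project actually records:

    # A sketch of recording build and environment details in the notes.
    import os
    import subprocess

    from session_notes import note  # the helper sketched earlier

    def record_environment() -> None:
        commit = subprocess.run(
            ["git", "rev-parse", "--short", "HEAD"],
            capture_output=True, text=True,
        ).stdout.strip()
        note(f"build commit: {commit or 'unknown'}")
        # SPRINT and SCHEMA_VERSION are hypothetical variables; use
        # whatever identifies the build and schema in your shop.
        note(f"sprint: {os.environ.get('SPRINT', 'unknown')}")
        note(f"schema version: {os.environ.get('SCHEMA_VERSION', 'unknown')}")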

Depending on the type of software you are testing, you may have evidence you can identify and capture with minimal work on your part. For example, logs.

Application logs. Database logs. System logs, meaning logs on the device you are executing the tests on. Host logs, meaning logs on the system host.

These can contain valuable information about what is being tested, including information that is not readily apparent to an observer. Both have value to the tester, and both belong in the records retained as possible evidence around testing.
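At the end of a session, sweeping those logs into a dated archive takes only a few lines. The paths below are placeholders; point them at wherever your application, database, and host actually write their logs:

    # A sketch of bundling logs and notes into a dated evidence archive.
    import datetime
    import zipfile
    from pathlib import Path

    LOG_SOURCES = [                          # placeholder paths
        Path("/var/log/myapp/app.log"),
        Path("/var/log/postgresql/postgresql.log"),
        Path("session-notes.md"),            # the notes file from earlier
    ]

    def archive_evidence() -> Path:
        archive = Path(f"evidence_{datetime.date.today()}.zip")
        with zipfile.ZipFile(archive, "w") as bundle:
            for src in LOG_SOURCES:
                if src.exists():             # skip logs this host lacks
                    bundle.write(src, arcname=src.name)
        return archive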

The question

Why do we need to keep this information? What is the point?

When unexpected results are found a few iterations or builds after a feature was tested, or in production, the question of how that feature was tested will almost certainly arise.

In my experience the true purpose of keeping this evidence, these artifacts, is quite simple: it is a gift your current self is giving to your future self. It might be a future self in a few weeks, possibly a sprint or two later, or months. It might be longer. It might also be someone else who will make use of your gift. Whoever it is, when that person goes looking, they will appreciate the effort you put into explaining how this was tested.

I get that this level of detail and record-keeping might not be needed for many organizations. Still, if testing has been problematic or the organization has a low level of trust in testing, countering that with hard evidence of what testing occurred can and will change minds. When that happens, the question “How was this tested?” becomes something very different indeed.



Peter G. Walen has over 25 years of experience in software development, testing, and agile practices. He works hard to help teams understand how their software works and interacts with other software and the people using it. He is a member of the Agile Alliance, the Scrum Alliance, and the American Society for Quality (ASQ), an active participant in software meetups, and a frequent conference speaker.
