I occasionally get stuck on one test technique and forget about everything else. I get a new change and immediately see a series of fields and buttons. My mind leaps, almost automatically, to the technique we use to analyze variables and stays there. I start performing some tests on a field — enter a ‘good’ value and submit, a value that is out of range, a ‘bad’ value — and then move on to the next field. It happens to everyone, or at least that’s what I tell myself to feel better when I get caught in that trap.
In my experience, most testers don't think much about test design techniques. They might be proficient with, or really enjoy, a handful of techniques, such as domain testing or usability testing, and use those almost exclusively. This pattern lends itself to heavy coverage in some parts of the product. Over time, the rest of the product becomes a mystery. Testers lose their handle on how features work, how they interact with other parts of the product, or what risks might be hiding there.
I want to talk about a handful of test techniques in terms of the types of product and project risk they expose, and how to balance these techniques during a testing session so we don’t end up with too much missing coverage.
Domain Testing

The essence of domain testing is this: you have access to one or more variables in the software you are testing. A variable is something you can change in a piece of software, such as a text field, date field, or number field. These variables might be independent, or the value of one might affect the values another field can accept. Any value you enter into these fields can be classified somehow. For example, a birth date field might treat dates more than 110 years in the past as invalid, and any date in the future as invalid, since time travel isn't currently possible. Everything between those two boundaries is valid. Each of those date ranges can be thought of as a domain, and any value within a given domain should, in theory, give you an identical test result. If you want to study domain testing, there is no better reference than The Domain Testing Workbook by Kaner, Padmanabhan, and Hoffman.
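The birth date example above can be sketched in a few lines. The classifier below is hypothetical, invented purely to illustrate the partitions described: one invalid domain older than 110 years, one invalid domain in the future, and a valid domain in between.

```python
from datetime import date, timedelta

# Hypothetical classifier for a birth-date field, mirroring the ranges
# described above: more than 110 years in the past is invalid, any
# future date is invalid, everything in between is valid.
def classify_birth_date(value: date, today: date) -> str:
    oldest_allowed = date(today.year - 110, today.month, today.day)
    if value < oldest_allowed:
        return "invalid: too old"
    if value > today:
        return "invalid: in the future"
    return "valid"

# One representative value per domain is enough, because any value
# inside a domain is expected to produce the same result.
today = date(2024, 6, 1)
assert classify_birth_date(date(1900, 1, 1), today) == "invalid: too old"
assert classify_birth_date(date(1990, 5, 17), today) == "valid"
assert classify_birth_date(today + timedelta(days=1), today) == "invalid: in the future"
```

Testing one value from each domain, plus the boundaries between domains, is the core move of the technique.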
Domain testing is a great way to uncover risk around data and data dependencies. For companies that have an API or some sort of microservice architecture, it is almost trivial to make a CRUD (Create, Read, Update, Delete) test against an endpoint that quickly runs through a set of data. Testers who don't have access to the service layer might take a slower approach: create a list of values they want to explore, enter them one at a time, and hit submit to observe what happens. This gives coverage of the various kinds of text fields and exposes the risk that the product might not accept a value you think it should, or that it fails ungracefully when you try to submit a 'bad' value.
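A data-driven CRUD check of this kind might look like the sketch below. The service here is an invented in-memory stand-in; a real test would send the same table of values to an HTTP endpoint, but the shape of the loop is the same.

```python
# A minimal in-memory stand-in for a CRUD endpoint. In a real test the
# same table of values would be sent to the actual service instead.
class FakeUserService:
    def __init__(self):
        self.users = {}

    def create(self, user_id, name):
        # Reject names that are empty or unreasonably long.
        if not name or len(name) > 50:
            return False
        self.users[user_id] = name
        return True

    def read(self, user_id):
        return self.users.get(user_id)

    def update(self, user_id, name):
        if user_id not in self.users or not name:
            return False
        self.users[user_id] = name
        return True

    def delete(self, user_id):
        return self.users.pop(user_id, None) is not None

# A table of values and whether we expect the service to accept them.
cases = [("Ada", True), ("", False), ("x" * 51, False), ("José", True)]
svc = FakeUserService()
for i, (name, expected) in enumerate(cases):
    assert svc.create(i, name) == expected, repr(name)
```

The interesting part is the table of values, which is exactly where domain analysis pays off.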
Testers who use API testing frameworks, and those who use BDD, might get stuck in domain testing land because the tools lend themselves to this type of testing. When I look at these tools, I often see only a data input mechanism paired with a feedback collector. It is easy for that to become a set of blinders. Stopping after some amount of domain testing means ignoring most of the context your software will be used in. Customers probably won't be thinking about software in terms of variables.
Domain coverage is important, but narrow.
Scenario Testing

Scenario testing is used to expose problems in workflows a real customer might perform. For example, an online banking customer might want to check their savings account balance, check their checking account balance, open an investment account, make a transfer from the savings account to the investment account, and then purchase shares of a mutual fund, all in one go. This test design technique is optimized to expose problems customers might encounter in normal, day-to-day product usage. If something bad happens in the flow of checking an account balance, making a money transfer, opening an account, and then making another transfer, it will hopefully be discovered by a scenario test.
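The banking scenario above can be sketched as an ordered sequence of steps with a check after each one. The toy bank, account names, and amounts below are all invented for illustration; the point is that a scenario test strings operations together instead of probing one field at a time.

```python
# A toy in-memory bank used to walk through the scenario above.
class Bank:
    def __init__(self):
        self.accounts = {"savings": 1000, "checking": 500}

    def balance(self, name):
        return self.accounts[name]

    def open_account(self, name):
        self.accounts[name] = 0

    def transfer(self, src, dst, amount):
        if self.accounts[src] < amount:
            raise ValueError("insufficient funds")
        self.accounts[src] -= amount
        self.accounts[dst] += amount

# The scenario: check both balances, open an investment account,
# then fund it from savings, checking state at each step.
bank = Bank()
assert bank.balance("savings") == 1000
assert bank.balance("checking") == 500
bank.open_account("investment")
bank.transfer("savings", "investment", 400)
assert bank.balance("savings") == 600
assert bank.balance("investment") == 400
```

A failure partway through tells you not just that something broke, but which step in a realistic workflow broke it.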
I most often see scenario testing performed by testers who think of themselves as customer advocates. These testers view themselves as the first customer, or the last line of defense. The customer advocate tester spends their day doing a wonderful thing: hunting value for the customer. They talk with product managers, who are often in close contact with customers, and sometimes with the customers themselves, to learn explicit examples of what a user might do in day-to-day usage. These scenarios have a workflow, but they also have a person in mind. This isn't a vague concept of "value to some person", but a concrete display of value for Jan, the Manager of Product Engineering, and how she will use a piece of software.
The other group of people I see spending a lot of time with scenario testing are testers who create user interface automation in tools like WebDriver. Automators will often start from either a test case or a workflow explained by someone on the technical team. They turn that into lines of code that open browsers, navigate through pages, enter text, submit forms, and assert that certain values displayed on those pages are what is expected.
Both the customer advocate tester and the people creating UI automation serve a useful role. The advocates ask questions that other testers forget to ask, and they understand that a real person will be using the software they are helping to create. The automators help teams discover as quickly as possible when those scenarios are broken, and help improve test coverage in each build. What both tend to leave out are the deeper questions and the harder-to-find bugs. What happens when the customer does something unexpected? What happens when we go off the map and try longer-running, more complex test scenarios that are hard to understand or explain?
Risk Based Testing
Risk based testing is a test design technique that focuses on things that might go wrong. For example, a text field might have a risk of inserting bad data into the database, or submitting long strings might expose a buffer overflow. This technique requires imagination and some understanding of categories of bugs, or knowledge about how your product tends to fail. I have used this technique most often when working with bug reports submitted by a customer. Customers don’t generally put a lot of thought into how they report bugs, so I might get something that says “order submit page is broken”.
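The long-string and bad-data risks mentioned above translate naturally into a small probe. Everything here is hypothetical, `normalize_order_note` is a stand-in for whatever code handles the input, and the risky values come from common bug categories: empty input, very long strings, and control characters.

```python
# A hypothetical input handler standing in for the code under test.
def normalize_order_note(text: str) -> str:
    # Strip non-printable characters and cap the length defensively.
    cleaned = "".join(ch for ch in text if ch.isprintable())
    return cleaned[:1000]

# Risk-based inputs: each value represents a category of failure
# we imagine the product might have, not a normal customer action.
risky_inputs = ["", "A" * 100_000, "line1\x00line2", "\t\n\r"]
for value in risky_inputs:
    result = normalize_order_note(value)
    # The risks under test: the handler must never raise, and must
    # never return something longer than the documented limit.
    assert len(result) <= 1000
```

The assertions here encode the risk itself, a crash or an oversized value reaching the database, rather than a specific expected output.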
I did a lot of this early in my career. We were in the process of getting our first customer up and running. An implementations team was on site with the customer, getting the product configured correctly and doing some training. Each morning, I came in to an email listing various failures in that style. There was no mention of reproduction steps or environment information, and certainly no trace of helpful things like screenshots or log files. The customer had reported a very real risk, and it was my job to go from that description to an understanding of how it happened.
I like risk based testing for special projects like this but don’t spend a lot of time with it otherwise.
These are only three test design techniques; there are hundreds to choose from, and it is probably a good idea to get familiar with as many of them as you can. Each design technique is best used to expose specific types of problems in software. If we use one far more than the others, our test coverage suffers. What techniques do you use most? What techniques do you want to learn?
This is a guest posting by Justin Rohrman. Justin has been a professional software tester in various capacities since 2005. In his current role, Justin is a consulting software tester and writer working with Excelon Development. Outside of work, he is currently serving as President on the Association for Software Testing Board of Directors, helping to facilitate and develop various projects.