3 Big Questions to Ask About Your Specifications

This is a guest post by Peter G. Walen.

Besides the ‘functional’ aspects of software, what “non-functional” things need to be tested? Based on the number of times I have encountered this question, this is something that seems hard for many people. Maybe the issue is in the very concept of “functional” testing.

Most people I know define functional testing something like this:

Functional testing is testing based on the specifications of the software component under test.

I generally agree with that definition. It works well enough as far as it goes. The issues I run into are in how people interpret it. The great challenge is not the concept of testing based on specifications, but what constitutes a “specification” and how it is exercised.

Here are three questions to ask about your specifications.

1. What do the specifications cover – and what’s missing?

Several years ago, when I was less nuanced and more impatient with people than I am now, I was working with a team whose leadership was proud of how well the “specifications” were documented. They produced copies of the “specifications” for several recent projects.

These were massive 3-ring binders, three inches thick and printed with small font, single-spaced. I began flipping through the binder on the top of the stack while the manager explained how these “specification documents” were absolutely complete; there was no possibility of a missed specification or of anyone involved misunderstanding any of the specifications.

I asked, and this is where the nuance would likely have helped, “Where is the specification stating any add or change to a record in the database would not result in corrupting the rest of the records in that table, if not the entire database?”

A moment of silence followed by exasperation. “Everyone knows that is a requirement!” At which point I gently said “So there isn’t one here, right? Because it is presumed that everyone knows?” I had his attention at this point.

I also asked about several requirements that stated something like “When Field X has a value of Y, this should happen…” I asked what if it had a value other than Y? How should the code handle a variable not listed? There was a blank stare.
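A spec clause like “When Field X has a value of Y, this should happen…” can be made honest in code by refusing to guess about the values the spec never mentions. The sketch below is hypothetical (the field name, values, and behavior strings are mine, not from any real spec) and shows one defensible default: fail loudly so the question goes back to the spec’s authors.

```python
def handle_field_x(value: str) -> str:
    """Hypothetical handler for the spec line
    'When Field X has a value of Y, this should happen...'."""
    if value == "Y":
        return "documented behavior for Y"
    # The spec never says what to do for any other value.
    # Raising forces an explicit decision instead of silently
    # doing nothing -- or doing the wrong thing.
    raise ValueError(f"Field X has an unspecified value: {value!r}")
```

A tester reading the spec can ask the same question the code does: what is supposed to happen here for anything other than “Y”?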

Many would point out that these concerns and potential problems should be covered “elsewhere.” That is true. There can certainly be standards and guidelines in place for how software is developed. That would be a good place to cover the questions and expectations like transactions not corrupting the database. My question remains, are they actually included somewhere and exercised by someone?
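The “a change to one record must not corrupt the rest” expectation is usually met with database transactions: either every statement in a unit of work commits, or none of them do. Here is a minimal sketch using Python’s `sqlite3` module (the table and the simulated failure are invented for illustration); the connection’s context manager rolls everything back when an exception escapes.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER NOT NULL)"
)
conn.execute("INSERT INTO accounts VALUES (1, 100), (2, 50)")
conn.commit()

try:
    with conn:  # one transaction: all statements commit, or none do
        conn.execute("UPDATE accounts SET balance = balance - 70 WHERE id = 1")
        # Simulate a crash mid-change; the debit above must not persist.
        raise RuntimeError("mid-transaction failure")
except RuntimeError:
    pass

balances = dict(conn.execute("SELECT id, balance FROM accounts"))
# balances == {1: 100, 2: 50} -- the partial update was rolled back
```

Whether such a guarantee is actually in force is exactly the kind of “everyone knows that” assumption worth verifying rather than presuming.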

I understand that is an extreme example made in response to an extreme assertion. But I have done enough development work to remember many confident statements like that one which, because of who made them, were never examined critically, and which later led to significant problems for the organization and its customers.

When everyone presumes something to be “covered,” I have learned that it often is not.


2. Are your functional specifications clear and free of assumptions?

I remember one engagement where a specification spelled out very precise instructions. Things like:

  1. When field-1 has a value of X, do these things;
  2. When field-2 has a value of Y, do these things;
  3. When field-3 has a value of M, do these things.

Pretty straightforward. What should happen if none of those fields had those specific values? What should the software do? Were these the only values possible for those fields? What happens when all the conditions are true?

The developer presumed these were independent conditions. That seems reasonable, based on how it was written. A careful look at the business problem the software was intended to solve revealed that they were dependent.

When field-1 had a value of X, specific things happened. When field-2 had a value of Y and field-1 had a value of X, other specific things happened. The check for field-3 should only occur if field-1 and field-2 had the values defined above.
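The gap between the two readings is easy to see in code. The sketch below is illustrative only (the field names and action strings are placeholders): the first function is what the spec’s wording invited the developer to build; the second is what the business actually meant, with each check gated by the ones before it.

```python
def independent_reading(f1: str, f2: str, f3: str) -> list[str]:
    """What the spec's wording suggested: three unrelated checks."""
    actions = []
    if f1 == "X":
        actions.append("do field-1 things")
    if f2 == "Y":
        actions.append("do field-2 things")
    if f3 == "M":
        actions.append("do field-3 things")
    return actions


def intended_reading(f1: str, f2: str, f3: str) -> list[str]:
    """What the business meant: each check depends on the prior ones."""
    actions = []
    if f1 == "X":
        actions.append("do field-1 things")
        if f2 == "Y":
            actions.append("do field-2 things")
            if f3 == "M":
                actions.append("do field-3 things")
    return actions
```

The two functions agree whenever all three fields hold the expected values, which is why the misreading can survive a happy-path demo; they diverge the moment field-1 holds something other than “X” while field-3 still holds “M.”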

Everyone “knew” what was meant except the developer assigned to build the code. When he was asked, after problems were uncovered, he responded, “This is precisely what the spec said.”

You can certainly test and code “to the requirements” or “to the specification.” But when the specifications do not explicitly state something, there is a huge chance for misunderstanding and miscommunication.

Human language is fallible. It is messy and imprecise. Expecting people to have an absolute shared understanding of intent, based on a written document and nothing else, is a huge leap toward failed understanding and lost meaning and intent.

When you make the leap that the specifications as written are complete, correct, and need no clarification, it is likely there will be problems lurking in the shadows.

Of course, for specifications to have problematic language issues, they need to be present in the document. There are other areas where even precise language is likely to create uncertainty.

There may be team, department, or organizational standards around things like UI design and screen behavior. If they exist, are they referenced in the specifications? If so, do they give good information on their importance? In my experience, such “extra” requirements or specifications might be referenced during code reviews or other inspections before the code is included in a build. Teams will often presume they are familiar with the expectations and handle them “the way we always do.”

This might translate to following the guidelines. In many teams I have seen, it often does not.

3. Are non-functional specifications included?

There is another set of possibilities to consider. Do those team norms cover questions of security, data integrity, accessibility, and user experience? Most would argue that “quality” cannot be added later. The hard fact is that these areas cannot be added later either; they need to be part of building the software from the beginning.

(Security concerns are varied and many. I often direct people with general questions to OWASP or, for more specific questions, to the plethora of ideas and articles on Kiuwan’s blog.)

Oftentimes the concepts of “usability” and “accessibility” get lumped together into a generic block of “UX” and assigned to a “UX Analyst” or some other role who is not part of the development team and is not involved in creating the software in the first place. Like “quality,” it is impossible to “add UX” after the software has been designed, written, and tested.

The real challenge, however, is recognizing the difference between the ideas of “usability” and “accessibility.” One is concerned with a comfortable flow for people working with the software. The other simply looks at the question “can anyone/everyone use our software?”

Rachel Kibler recently wrote about her challenges in approaching accessibility. Her article describes what many people looking at accessibility in software have encountered: a good part of the complexity comes from the organization itself.

If the ideas encompassed in usability and accessibility are not part and parcel of the development of software, and included from the very beginning of each project, they are likely never going to be considered for testing at all – neither “functional” nor “non-functional” testing. Even if they are documented in some set of standards, are they considered or even referenced?

The nature of the software under development is a huge part of these “extra” considerations. These questions can inform us about other considerations. What presumptions are being made when defining the requirements, specifications, and acceptance criteria? Are they even considered or are they implicit biases of the team or the organization?

If “everyone” knows and “everyone” is responsible, most often this means no one is paying attention. When that is the case, no matter how well the “functional requirements” are fulfilled, if people find your software cumbersome to use (if not difficult or impossible), they won’t use it.

For me, the difference between “functional” and “non-functional” testing is irrelevant. If something has the potential to impact how the software meets customer needs, it should be evaluated.


Peter G. Walen has over 25 years of experience in software development, testing, and agile practices. He works hard to help teams understand how their software works and interacts with other software and the people using it. He is a member of the Agile Alliance, the Scrum Alliance and the American Society for Quality (ASQ) and an active participant in software meetups and frequent conference speaker.
