This is a guest post by Peter G Walen.
Want to get some interesting, knee-jerk reactions? Tell people their End to End tests are not telling them what they think they are. End to End tests are often seen as the gold standard of what testing should be. Is that accurate? Is it even reasonable?
There is a long-lived belief that testing is too granular and that massive amounts of time are spent on narrowly focused test efforts. The solution I often see presented is to forgo function-level testing and integration testing and focus on “end to end” (E2E) tests.
I see where this might make sense in some circumstances. In the right situation, it is an excellent idea. However, in my experience, giving E2E testing precedence over all other testing often leads to quality problems and delayed delivery.
No two projects are the same. Even projects working on the same software can have significantly different results for any number of reasons, not simply differences in test approach. The reasons are varied. Let us focus on the testing.
Simplicity of Tracking
A common argument made in support of E2E is that there is a single point of reference for testing. All the related steps can be placed in sequence in a single location. This can make a great deal of sense and often is helpful.
Where I get nervous is considering the complexity of the application under test. If the possibilities are limited and straightforward, I get this approach. I have used it myself from time to time with good success.
As complexity grows, variations around possible paths within the same scenario can make things murky. A test script of 15 or 20 steps has a certain attraction. That attraction diminishes if there are five or six possible combinations resulting in 80 or 100 or more steps, all part of the same E2E test.
By forcing all related paths into a single “test” the concept of “End” gets lost — or at least diminished.
I prefer to break each of these combinations of the same test into their own 15- or 20-step script. This way, I can track each variation, or possible path, individually. If there is a problem, the broken path can be handled independently of other paths that may not show any obvious problems.
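The idea of one short, independently trackable script per variation can be sketched in code. This is a hypothetical example, not the author's actual test suite: the function under test (`validate_order`) and its rules are stand-ins invented for illustration.

```python
# Hypothetical example: each path through the same scenario gets its own
# short test instead of one combined 80-step script.

def validate_order(quantity, coupon):
    """Stand-in for the function under test (invented for illustration)."""
    if quantity <= 0:
        return "rejected"
    if coupon == "SAVE10":
        return "discounted"
    return "accepted"

# One variation per test: a failure in one path does not obscure the others.
def test_standard_order():
    assert validate_order(quantity=2, coupon=None) == "accepted"

def test_discounted_order():
    assert validate_order(quantity=2, coupon="SAVE10") == "discounted"

def test_invalid_quantity():
    assert validate_order(quantity=0, coupon=None) == "rejected"

if __name__ == "__main__":
    # Run each variation independently and report per-path results,
    # so a broken path can be handled on its own.
    for test in (test_standard_order, test_discounted_order, test_invalid_quantity):
        test()
        print(f"{test.__name__}: passed")
```

With a test runner such as pytest, each function above would show up as its own pass/fail line, which is exactly the per-path tracking described here.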
Simplicity Becoming Complexity
The challenge I see is how people define “simplicity.” A test script that exercises the same steps four or five (or more) times while changing some of the variables in use may be a time-saving approach. It can give a simple presentation of “these are the possible scenarios for this function.” It makes tracking progress very easy and obvious. It is very simple and straightforward.
I can see the appeal of a single test exercising many scenarios for the same function. I have sometimes used this approach when I was familiar with the software and with the expected behavior for each set of variables used.
It did make things a bit easier to manage and control. I could show, in a single test, the range of behaviors one could expect from that portion of the software. It worked well, except when something unexpected happened and the test failed along the way. Then it often became very complex to decipher, very quickly.
When exercising software that is new or has been significantly changed, the odds of fully understanding how it will behave in different circumstances decrease dramatically. I have yet to see an instance where that was not the case.
The more variables a piece of software needs to process, the greater the number of value combinations. The greater the number of combinations, the greater the need to exercise sets of combinations of similar value patterns.
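How quickly those combinations multiply is easy to demonstrate. The variable names below are invented for illustration; the point is only the arithmetic of the cross product.

```python
import itertools

# Illustrative only: three variables with a handful of values each already
# produce dozens of distinct paths that a single E2E script would need to cover.
payment_methods = ["card", "paypal", "invoice"]   # 3 values
shipping_options = ["standard", "express"]         # 2 values
customer_types = ["new", "returning", "guest"]     # 3 values

# Every combination of values is one potential path through the scenario.
combinations = list(itertools.product(payment_methods,
                                      shipping_options,
                                      customer_types))

print(len(combinations))  # 3 * 2 * 3 = 18 distinct paths
```

Add a fourth variable with four values and the count jumps to 72; the growth is multiplicative, which is why cramming every combination into one script gets unmanageable so fast.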
Certain proponents of E2E tests claim that a key advantage is that a single test will exercise every possible variable combination. To make that happen, the variables need to be controlled in such a manner as to limit the odds of entering them incorrectly in the first place. The challenge here is that each full set of variables needs a full set of test steps to execute it.
It is possible, in theory, using some automated test tools. In manual testing of any complex set of tests, the odds against this working are astounding.
In my experience, the chance of missing conditions that need to be tested increases each time data set combinations are added to a single script.
Complexity Toward Simplicity
I remember being told to “focus on the E2E testing and don’t worry about the individual pieces.” It took me some time to realize the idea could work as long as each component was tested on its own.
The next step to make this work was that each hand-off, the integration points between components, got tested before starting E2E testing.
I also remember the massive disapproval I encountered when I reported problems with the testing: everything from invalid data handling to values that should have been numeric but weren’t. Then there were the “end to end” test runs that failed to complete after 36 hours of non-stop execution of a process that should run nightly.
I found the solution to be more direct than what I had been doing. Each E2E test I was supposed to execute made perfect sense on its own. Each took complex models and ran through various conditions in order to make sure the scenarios worked as needed. That is the intent behind each of these tests.
What I encountered was a combination of problems from the nature of the test itself. We, the organization, did not have a firm understanding of the readiness of the individual components and each of the touchpoints involved. The integration points often turned up problems.
Sometimes problems were found far downstream of where they were introduced. Data could be pulled from one component and passed on to another far downstream, where it would be used. Except it was often invalid.
I realized that the problems I was seeing were not with the theory of running E2E tests, but how people were trying to use them. The challenge seemed to be taking everything in one large bite. The solution seemed simple: Test one bite at a time.
One Bite at a Time
If each development team responsible for each section or function in the software has tested their piece carefully, it stands to reason that we don’t need to worry about the individual functions themselves. The question, of course, is how well does each team test their work?
Setting that aside, if we can understand the points where the components come together, we can exercise each of those points individually and make sure everyone involved understands how this impacts the software’s “end to end” operations.
Take two functions and make sure they, and the touchpoint between them, work; then look at two more functions. When those two work, all four can be combined.
By gradually growing the scope of the test, you will build a valuable knowledge base about each component and how they interact. Less time will be wasted on teams waiting for other teams to resolve their problems.
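The incremental approach above can be sketched as code. The three components and their behaviors here are hypothetical stand-ins; the point is the ordering: each hand-off is verified on its own before the full chain is run end to end.

```python
# A sketch (hypothetical component names) of growing an integration test
# incrementally: verify each hand-off before chaining everything together.

def extract(raw):
    """Component 1: parse raw input into values."""
    return [int(x) for x in raw.split(",")]

def transform(values):
    """Component 2: normalize the values."""
    return [v * 2 for v in values]

def load(values):
    """Component 3: produce the final total."""
    return sum(values)

# Step 1: exercise the extract -> transform touchpoint on its own.
assert transform(extract("1,2,3")) == [2, 4, 6]

# Step 2: exercise the transform -> load touchpoint on its own.
assert load(transform([1, 2, 3])) == 12

# Step 3: only now chain all components "end to end".
assert load(transform(extract("1,2,3"))) == 12
print("all touchpoints and the full chain verified")
```

If step 1 or step 2 fails, the broken hand-off is identified immediately, instead of surfacing as an invalid value far downstream in a 36-hour run.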
The information gained and problems encountered along the way can inform everyone involved in precisely how the software works. This gives the project team the opportunity to determine if the software is doing what the organization needs to address the problem the project is intended to solve.
End to End testing is a powerful tool for any team to utilize. However, using it too soon, before individual functions are understood, will waste valuable time and resources and ultimately slow the project down.
Peter G. Walen has over 25 years of experience in software development, testing, and agile practices. He works hard to help teams understand how their software works and interacts with other software and the people using it. He is a member of the Agile Alliance, the Scrum Alliance, and the American Society for Quality (ASQ), an active participant in software meetups, and a frequent conference speaker.