What Testers Should Know About Microservices

This is a guest post by Cameron Laird.

Microservices arrange an application as a collection of independent, modular services. It’s just another architecture, so it doesn’t truly impact requirements, meaning it doesn’t affect testing — right? As sensible as that logic sounds, it has holes.

If you’re a tester working with microservices, you need familiarity with these basics.

Organizational Impact

Microservices are often more consequential organizationally than technologically. When a development group adopts a microservice architecture, testers don't necessarily have to learn new techniques or tools.

Requirements are likely to follow architecture, though, so the requirements documented for testers to verify become the requirements of individual microservices, rather than the application as a whole, as customers experience it. Testers are likely to have more conversations about individual application programming interface (API) methods, and fewer about the user experience.

One responsibility of testers during microservice adoption, then, is to insist on receipt of user-level requirements and workflow. It’s certainly healthy to confirm that each microservice fulfills its “contract.” This needs to be balanced at all times with the operation of the application or library as a whole, though, as end-users see it.

Even better is for testers to be involved before a migration to microservices begins, so they can help verify that someone speaks for the end-user, that the technical architecture is compatible with implementation roles, and that all expressed requirements are indeed testable.

Push Left

Microservice implementation generally shifts the balance of documented requirements from user interactions to API invocations. The latter are easier to automate. More precisely, API requirements are expressible in objective terms that a computer can readily manage, such as strings in and strings out. User interactions tend to be at a more human level, mediated by a user interface, and their widget selections, button pushes, and visual results are at least a bit more challenging to automate.
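To illustrate how "strings in and strings out" makes API requirements objectively checkable, here is a minimal sketch in Python. The greeting endpoint and its handler are hypothetical, invented for this example; a real contract test would invoke the deployed service over HTTP rather than call a local function.

```python
import json

# Hypothetical handler for a "greeting" microservice endpoint.
# The contract is modeled as strings in, strings out: a JSON request
# body goes in, and a JSON response body comes out.
def handle_request(body: str) -> str:
    payload = json.loads(body)
    name = payload.get("name", "world")
    return json.dumps({"greeting": f"Hello, {name}!"})

def test_contract():
    # An objective, machine-checkable requirement: given this input
    # string, the service must return exactly this output.
    response = handle_request('{"name": "Ada"}')
    assert json.loads(response) == {"greeting": "Hello, Ada!"}

test_contract()
```

Because the whole exchange is expressible as string comparisons, a test like this is trivial to run on every commit, with no human pointing and clicking.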

In favorable circumstances, test automation can be so successful that it doesn’t have to wait for a designated testing team: Automation of testing can shift left so that it becomes a routine practice for developers. In fact, continuous testing (CT) makes it a goal that neither testers nor programmers have to choose to test, and instead tests run automatically as soon as it makes sense for them to run.

Does all this automation put testers out of work? Not at all; it does a better job of turning testers' efforts into enhancements of the finished product's quality. As testers spend less time tediously pointing and clicking their way through simulations of end-user behavior, they're better able to help configure CT that catches problems days or weeks earlier, improve automation of user interactions, and focus on requirements that resist automation.

Even a highly refined CT workflow still deserves expert testing. The difference is that the results, at least for the parts expressible in CT, become far more likely to succeed. That success frees testers' time for the harder dimensions of the final product.


Containerization

Microservices naturally fit in containers. If the programming team is new to containers, testers can help during the transition by sharing their own experience and knowledge.

Even more concretely, containerization sharpens testing. Containerization changes “Install the microservice, configure an appropriate web server, then verify that the API delivers $RESULT when presented with $INPUT” to “Launch the container and verify that the API delivers $RESULT when presented with $INPUT.”
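A sketch of that "launch and verify" flow follows; the image name, port, endpoint, and expected payload are all hypothetical assumptions, not details from any real service, and the test harness of your choice would replace the bare `assert` here.

```python
import json
import subprocess
import urllib.request

IMAGE = "example/greeting-service:latest"  # hypothetical image name


def docker_run_cmd(image: str, port: int) -> list[str]:
    """Compose the command that launches the container detached,
    publishing the service port to the host."""
    return ["docker", "run", "-d", "-p", f"{port}:8080", image]


def launch_container(image: str, port: int = 8080) -> str:
    """Launch the container; return its id for later cleanup."""
    result = subprocess.run(
        docker_run_cmd(image, port), capture_output=True, text=True, check=True
    )
    return result.stdout.strip()


def verify_api(port: int = 8080) -> None:
    """Verify that the API delivers $RESULT when presented with $INPUT."""
    req = urllib.request.Request(
        f"http://localhost:{port}/greet",
        data=json.dumps({"name": "Ada"}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        assert json.load(resp) == {"greeting": "Hello, Ada!"}


if __name__ == "__main__":
    cid = launch_container(IMAGE)
    try:
        verify_api()
    finally:
        # Tear the container down so every run starts from a clean state.
        subprocess.run(["docker", "rm", "-f", cid], check=True)
```

Notice what is absent: no web-server installation, no runtime configuration. The container image carries all of that, so the test reduces to launch, invoke, and compare.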

Containerization eliminates an entire layer of “it worked on my machine” surprises having to do with web servers. Rather than dissipate effort configuring the microservice’s runtime, testers can focus more of that effort on the reproducible behavior that deserves testing.

API Testing

API testing is a specialty within testing. While existing tooling applies to APIs, and therefore to microservices, several new tools that specifically target microservices are now available.

Are you better off using the API extension from your existing provider or finding a new "best of breed" API tester? This particular market is changing rapidly, and advice that is both true and broadly applicable is rare. My main tip: first make the best use you can of any API testing provisions in your existing toolset. If you find them adequate for your particular situation, that's a good outcome; if not, at least you'll know more about what you need from other tools.

Conclusion

A move to microservices is a great time for testers to review practices. Help the team containerize; shift as much development effort as possible left, especially by taking the opportunity to confirm that the microservices plan is testable and aligns with organizational culture; and start from the expectation that each test can and should be automated.

While microservices bring plenty of challenges and even surprises to different departments, testers ought to assume a leadership role in each move to microservices.
