This is a guest post by Nishi Grover Garg.
Planning and developing new features at the fast pace of agile is a hard game. Knowing when you are really done and ready to deliver is even harder.
Having predetermined exit criteria helps you decide when a feature is truly ready to ship. Here is a list of exit criteria you should add to your user stories to bring conformity and quality to all your features.
The first criterion, completing all tasks within the sprint, sounds obvious, but it may not be. I still see many teams struggling to get their testing done within the sprint. Developers work on a user story and deem it done, while testers are left playing catch-up in the next sprint.
Put that practice to an end once and for all by making sure that no user story can be proclaimed done until every task under it is complete: development tasks, testing tasks, design and review tasks, and any other tasks that were added to the user story at the beginning.
Ensuring all tasks are completed in a sprint also forces you to think through each user story in depth and identify the tasks each activity needs, so that nothing is missed at the end.
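As a rough sketch, this "all tasks complete" rule can be expressed as a simple check. The task names and the `status` schema below are illustrative assumptions, not the format of any particular tool:

```python
# Hypothetical exit-criteria check: a user story is done only when every
# task under it -- development, testing, design, review -- is complete.
def all_tasks_done(tasks):
    """tasks: list of dicts with 'name' and 'status' keys (assumed schema)."""
    return all(task["status"] == "done" for task in tasks)

story_tasks = [
    {"name": "implement feature", "status": "done"},
    {"name": "write test cases", "status": "done"},
    {"name": "automate tests", "status": "in progress"},
]
print(all_tasks_done(story_tasks))  # False: the automation task is still open
```

The point of encoding the rule, even informally, is that it forces the team to enumerate every task up front rather than discover a forgotten one at the sprint review.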
As our agile teams move toward continuous delivery and DevOps, our testing also needs to be automated and made part of our pipelines. It is essential that test automation gets done within the sprint and keeps pace with new features.
By having test automation tasks be a part of a user story delivery, you can keep an eye out for opportunities to automate tests you are creating, allocate time to do that within the sprint, and have visibility of your automation percentages.
I have used exit criteria based on test automation percentages. Depending on what your automation goals are, decide on a meaningful standard to apply to all your user stories.
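For illustration, a percentage-based automation criterion might look like the check below. The 80% threshold is an assumed example, not a prescribed standard; substitute whatever target matches your own automation goals:

```python
# Illustrative exit criterion: a minimum share of a story's automatable
# test cases must be automated within the sprint (80% is an assumed example).
def meets_automation_goal(automated, automatable, threshold=0.8):
    if automatable == 0:
        return True  # nothing to automate, so the criterion is trivially met
    return automated / automatable >= threshold

print(meets_automation_goal(9, 10))  # True: 90% automated
print(meets_automation_goal(5, 10))  # False: only 50% automated
```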
It might be easy to get caught up in the pace of the sprints and keep tackling new features, but without testing as you go, you may end up with a pile of defects that renders those features useless. It is imperative to test features within the sprint and focus on the defects your testing finds.
A good start is to link all issues related to the feature to its user story. When you go to mark the user story complete, you should be able to see the issues found and decide whether, given the open defects, the feature can actually be called “working software.”
To avoid having these defects linger on and pile up, it is good to have exit criteria related to the number and severity of issues that are open against the user story. We used to have a criterion that even one critical-severity issue left open against the user story would mean it could not be deemed done at the end of the sprint. That was a motivator for the team to find and close all major issues earlier in the sprint, saving lower-severity issues to be tackled later.
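That severity-based criterion can be sketched as a small check over the story's linked issues; the dictionary schema and field names below are illustrative assumptions:

```python
# Illustrative defect-based exit criterion: a story cannot be deemed done
# while any critical-severity issue linked to it remains open.
def no_critical_open(issues):
    """issues: list of dicts with 'severity' and 'open' keys (assumed schema)."""
    return not any(i["severity"] == "critical" and i["open"] for i in issues)

linked_issues = [
    {"id": 1, "severity": "critical", "open": True},
    {"id": 2, "severity": "minor", "open": True},
]
print(no_critical_open(linked_issues))  # False: a critical issue is still open
```

Note that the minor issue does not block the story, which mirrors the practice above: close critical defects within the sprint and tackle lower-severity ones later.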
We have all heard that an ounce of prevention is worth a pound of cure. A great way to prevent defects is to incorporate reviews and static tests into your sprint.
You can begin by mandating that every code check-in be reviewed by a buddy developer, with tasks and time for that activity allocated in each user story. You can do the same for testers getting their test cases reviewed by other teammates.
You can also add a criterion that a static analysis tool be run over the code base so you can resolve any formatting issues and refactoring candidates it flags, which eventually improves the production quality of your code.
Whether it is test plans, architectural designs, or help documents, anything that can be reviewed can be added as a review task, ensuring it gets done before the user story is deemed complete.
Adding meaningful exit criteria to a user story is important, and so is following through on them. Once you decide on a set of exit criteria, stick to them and make no exceptions. It may be hard at first to see a user story not meet the criteria and have to spill over into the next sprint, especially if it’s due to a small thing that was missed. But over time, it will make you and your team better at keeping up with the criteria and bring you closer to your quality goals.
Nishi is a corporate trainer, an agile enthusiast and a tester at heart! With 11+ years of industry experience, she currently works with Sahi Pro as an Evangelist and Trainings Head. She is passionate about training, organizing testing community events and meetups, and has been a speaker at numerous testing events and conferences. Check out her blog where she writes about the latest topics in Agile and Testing domains.