This is a guest post by Nishi Grover Garg.
Success and failure are part of any journey. For agile teams, continuous delivery is the expectation, and that can be a hard thing to achieve. As sprints go on and tasks pile up, we may stray from the path.
Whether your team is just beginning its agile journey or is already made up of agile pros, you are bound to encounter a failed sprint at some point.
When do you deem a sprint as failed? Why does a sprint fail? What are the possible reasons, and how can you learn from the mistakes to avoid them in the future? Let’s examine four possible reasons for a failed sprint.
Estimates cannot be completely accurate every time. But when the agile team fails to see the true depth or complexity of a task or a user story, the estimates may go haywire, leading to a big deviation from the planned timelines within the sprint.
Let’s say a developer looked at a user story and thought its implementation would be simple, leading him to give an estimate of 10 hours. But he uncovered many dependencies later, and during implementation the team had to redo the design to take care of security and performance aspects, doubling the time spent to 20 hours. This delays the story’s testing and any defect fixes that follow.
This may be a one-off case, and the team may be able to deal with it. But if these scenarios occur too frequently, the whole sprint format will go for a toss!
Everyone may have a different style of working, and this can include different interpretations. What do people mean when they say their work is “done”? Some developers would only check in the code and say they are done. Others may ensure unit tests run and pass, document the design, and run static analysis on their checked-in code before saying they are done.
What is the real definition of done for each person on the team? Having inaccurate or varied definitions of done may lead to incomplete work, which at the end of the sprint (or later) may become an issue. A sprint can fail if even one task for any of its user stories is not completely done.
To ensure true completeness, we must list coherent, agreed-upon definitions of done for each type of task we undertake within a sprint, be it development, testing, design, review or test automation. This makes it easier to track the quality of work and gets everyone on the same page about what is expected.
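The agreed-upon checklists can even be made explicit in tooling. A minimal sketch (the task types and checklist items below are hypothetical examples, not a prescribed standard): a task counts as done only when every item for its task type is complete.

```python
# Hypothetical per-task-type "definition of done" checklists.
# A task is done only when ALL agreed items for its type are complete.
DEFINITION_OF_DONE = {
    "development": ["code checked in", "unit tests pass", "static analysis clean"],
    "testing": ["test cases executed", "defects logged", "results documented"],
}

def is_done(task_type: str, completed_items: set) -> bool:
    """Return True only if every checklist item for this task type is complete."""
    required = set(DEFINITION_OF_DONE.get(task_type, []))
    return required.issubset(completed_items)

# A developer who has only checked in code is not "done" by the team's definition:
print(is_done("development", {"code checked in"}))                      # False
print(is_done("development", {"code checked in", "unit tests pass",
                              "static analysis clean"}))                # True
```

The point of the sketch is that “done” stops being a matter of personal interpretation and becomes a shared, checkable list.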
More often than not, user stories being developed in the sprint get stuck at some tricky juncture toward the end. You may reach the last day of the sprint only to find that something is still holding up the team.
In such a situation, the user story will be incomplete at the end of the sprint. At this point, that feature cannot be deemed fit for release and cannot be counted as delivered.
It may be hard at first, but we need to enforce discipline by not allowing such user stories to be considered done. In my team, we used to count these stories as spill-over to the next sprint; even though we were 90% or more done with the work, we would still not count the story points in our sprint velocity. This basically meant our sprint had failed, since we did not deliver the promised business value.
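This all-or-nothing rule is easy to express concretely. A minimal sketch (the story names and points are made-up examples): velocity sums points only for fully done stories, so a 90%-complete story contributes nothing this sprint.

```python
# Hypothetical sprint backlog: a story that spills over earns zero points
# this sprint, no matter how close to finished it is.
stories = [
    {"name": "login feature", "points": 5, "done": True},
    {"name": "report export", "points": 8, "done": False},  # 90% done: spill-over
    {"name": "search filter", "points": 3, "done": True},
]

# Velocity counts only stories that meet the definition of done.
velocity = sum(s["points"] for s in stories if s["done"])
print(velocity)  # 8, not 16: the incomplete story earns no points
```

Seeing the velocity drop makes the cost of an incomplete story visible, which is exactly the discipline described above.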
In a fast-paced agile environment, we cannot shirk any part of our work or leave it for later. It becomes technical debt that is hard to pay off: the longer we put off a task, the harder it gets to find the time and effort for it while keeping up the same pace on ongoing work.
If your team falls behind on automating their stories or achieving the promised code coverage from unit tests, that is a debt they will need to repay at some point before release. The sprint in which you pick up that work will then suffer, since the team’s effort will not go into new work items. Consequently, repaying older technical debt can also cause a sprint to fail.
Strive to be constantly aware of where you can improve your processes, your ways of working and your mindset as a team, and use these tips to avoid having failed sprints.
Nishi is a corporate trainer, an agile enthusiast and a tester at heart! With 11+ years of industry experience, she currently works with Sahi Pro as an Evangelist and Trainings Head. She is passionate about training, organizing testing community events and meetups, and has been a speaker at numerous testing events and conferences. Check out her blog where she writes about the latest topics in Agile and Testing domains.