Four Things That Can Sabotage a Sprint

This is a guest post by Nishi Grover Garg.

Success and failure are part of any journey. For agile teams, continuous delivery is the expectation, and that can be hard to achieve. As sprints go on and tasks pile up, we may stray from the path.

Whether your team is just beginning its agile journey or is full of agile pros, you are bound to encounter a failed sprint at some point.

When do you deem a sprint a failure? Why does a sprint fail, and how can you learn from the mistakes to avoid them in the future? Let’s examine four possible reasons for a failed sprint.

Bad Estimation

Estimates cannot be completely accurate every time. But when the agile team misjudges the depth or complexity of a task or a user story, the estimates can go haywire, leading to a big deviation from the planned timelines within the sprint.

Let’s say a developer looked at a user story, thought its implementation would be simple, and estimated it at 10 hours. But many dependencies surfaced later, and during implementation the team had to redo the design to address security and performance concerns, so the work took 20 hours instead. That delay pushes back testing the story and fixing its defects.

This may be a one-off case the team can absorb. But if such scenarios occur too frequently, they will throw the whole sprint plan off track!
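
If these overruns keep happening, it helps to make them visible. Here is a minimal sketch, with purely hypothetical story data and an assumed 1.5x threshold, of how a team might compare estimated versus actual hours and flag the stories that ran well over their estimates, so they can be discussed in the next retrospective:

```python
# Illustrative only: story data, field names and the 1.5x threshold are assumptions.
stories = [
    {"id": "US-101", "estimated_hours": 10, "actual_hours": 20},
    {"id": "US-102", "estimated_hours": 8,  "actual_hours": 9},
    {"id": "US-103", "estimated_hours": 5,  "actual_hours": 12},
]

OVERRUN_THRESHOLD = 1.5  # flag anything 50% or more over its estimate

for story in stories:
    ratio = story["actual_hours"] / story["estimated_hours"]
    if ratio >= OVERRUN_THRESHOLD:
        print(f'{story["id"]}: estimated {story["estimated_hours"]}h, '
              f'spent {story["actual_hours"]}h ({ratio:.1f}x) - revisit sizing')
```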


Incoherent Definition of Done

Everyone may have a different style of working, and with it a different interpretation of terms. What do people mean when they say their work is “done”? Some developers check in the code and call it done. Others ensure unit tests run and pass, document the design, and run static analysis on their checked-in code before saying they are done.

What is the real definition of done for each person on the team? Vague or varied definitions of done lead to incomplete work, which becomes an issue at the end of the sprint (or later). Even one task left incomplete for any of its user stories can cause the sprint to fail.

To ensure true completeness, we must agree on a coherent definition of done for each type of task we undertake within a sprint, be it development, testing, design, review or test automation. This makes it easier to track the quality of work and gets everyone’s understanding of the expected work on the same page.
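
As a rough illustration, the agreed checklist can even be written down and checked programmatically before a story is counted as done. The task types and checklist items in this sketch are assumptions standing in for whatever your team agrees on, not a prescribed list:

```python
# Illustrative sketch of a per-task-type definition of done; items are assumptions.
DEFINITION_OF_DONE = {
    "development": ["code checked in", "unit tests pass", "static analysis clean",
                    "design documented", "code reviewed"],
    "testing": ["test cases added to test management system",
                "exploratory testing done", "all critical defects fixed"],
    "automation": ["regression script created", "script added to nightly run"],
}

def is_done(task_type: str, completed_items: set[str]) -> bool:
    """A task is done only when every agreed checklist item is complete."""
    required = DEFINITION_OF_DONE.get(task_type, [])
    return all(item in completed_items for item in required)

# Checking in code alone does not satisfy the development definition of done.
print(is_done("development", {"code checked in"}))  # False
print(is_done("testing", {"test cases added to test management system",
                          "exploratory testing done",
                          "all critical defects fixed"}))  # True
```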

Incomplete Stories

More often than not, user stories being developed in the sprint get stuck at some tricky juncture toward the end. You may reach the last day of the sprint and find that things are still holding up the team:

  • Development of the story was completed but testing is still underway
  • Developers and testers paired to conduct tests but some critical issues remain in the feature that need fixing
  • Development and testing are completed but the automation script is yet to be created for regression of the feature (and automation was part of the exit criteria for the user story)
  • Code review is pending, although the code is already checked in and working fine
  • Tests for the user story were not added to the test management system even though the tester has performed exploratory tests

For any of these reasons or similar ones, the user story will be incomplete at the end of the sprint. At that point, the feature cannot be deemed fit for release and cannot be counted as delivered.

It may be hard at first, but we need to enforce discipline by not allowing such user stories to be considered done. In my team, we used to count these stories as spill-over to the next sprint; even though we were 90% or more done with the work, we would still not get the story points in our sprint velocity. This basically meant our sprint had failed, since we did not deliver the promised business value.
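
As a simple illustration of that discipline, here is a sketch, with hypothetical story data, of counting velocity strictly so that only fully done stories earn their points and everything else is tracked as spillover:

```python
# Illustrative only: story data and the "done" flag are assumptions.
sprint_backlog = [
    {"id": "US-201", "points": 5, "done": True},
    {"id": "US-202", "points": 8, "done": False},  # testing still underway
    {"id": "US-203", "points": 3, "done": True},
]

# Only fully done stories count toward velocity; the rest spill over.
velocity = sum(s["points"] for s in sprint_backlog if s["done"])
spillover = [s["id"] for s in sprint_backlog if not s["done"]]

print(f"Sprint velocity: {velocity} points")       # 8, not 16
print(f"Spills over to next sprint: {spillover}")  # ['US-202']
```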

Technical Debt

In a fast-paced agile environment, we cannot shirk any part of our work or leave it for later. Whatever we postpone becomes technical debt that is hard to pay off. The longer we put off a task, the harder it gets to find the time and effort for it while keeping up the same pace on ongoing work.

If your team lags in automating its stories or in reaching the promised code coverage from unit tests, that is a debt it will need to repay at some point before release. The sprint in which you pick up that work will then suffer, since the team’s effort will not go into new work items. Consequently, repaying older technical debt can also cause a sprint to fail.

Strive to stay aware of where you can improve your processes, your ways of working and your mindset as a team, and use these tips to avoid failed sprints.



Nishi is a corporate trainer, an agile enthusiast and a tester at heart! With 11+ years of industry experience, she currently works with Sahi Pro as an Evangelist and Trainings Head. She is passionate about training and organizing testing community events and meetups, and has been a speaker at numerous testing events and conferences. Check out her blog where she writes about the latest topics in Agile and Testing domains.
