One of the major advantages of automation that is often touted is that you can use it to perform regression testing. After having implemented, or attempted to implement, an automation strategy for multiple employers, I have learned the hard way that not everything makes sense to automate, especially when it comes to regression. Since regression is one of the areas most commonly targeted as a starting point for automation, I thought it made sense to introduce you to SSX (no, not that one).
The Scope, Scripting, & eXecution (SSX) process is a really simple method that I came up with to explain why that grand automation plan you have will more than likely not turn out the way you expect. While you can use SSX to evaluate other forms of automation, I will be talking about it specifically in terms of regression testing for this article.
The SSX Method
So what exactly is SSX? While some of you may recognize it as a (pretty fun) video game from a few years ago, in reality it is just a simple concept boiled down to an easy to remember mnemonic. Because I am often forced to explain my reasoning for why I approach automation the way I do, I needed a simpler way to explain my thought processes. Explaining SSX is way simpler, and because of the name, it’s much more likely that it will be remembered when it’s needed.
Now that we know what SSX is, how do we apply it to regression testing? Let’s walk through each of the components, and go over how you can use them to ensure you aren’t leading a doomed expedition.
Scope
No matter where you are starting with regression testing, one of the hardest aspects is figuring out what needs to be included. Every organization and environment differs, so a one-size-fits-all approach is hard to come by. Thankfully, there are a few guidelines you can use to help move you in the right direction.
Out of Scope
This is the first place to start. Eliminating tests first lets you focus on the items that are truly important. A few areas I typically flag as out of scope are:
- Tests that check UI elements
- Tests that only take a few minutes to execute manually
- Tests that only need to be run a minimal number of times
- Tests that have multiple results (a pet peeve of mine, but you may be ok with them)
In Scope
Once you eliminate the tests that do not make sense to automate, you are left with a much smaller bucket to work with. Some examples are:
- Tests that validate core functionality
- Tests that take a long time to execute manually
- Tests that need to be executed often
- Tests that have a single expected outcome
As you can see, these lists are essentially opposites of each other. Your lists may look different, but they will more than likely be similar in this regard, and that is what makes starting here so useful.
Scripting
Now that you have a bucket of tests identified, it’s time to begin scripting the actual tests. In general, wait to start scripting a given test until all of the functionality it covers has been completed and marked ready for production (or whatever you use to designate code complete). This does not mean the feature has gone to production, but rather that everything necessary to do so has occurred and the feature will not be changing any more. Working only on items that are not changing reduces the amount of churn (maintenance work, wasted scripting, etc.) you incur. There are certainly exceptions to this rule, and it will be slightly different in each environment, so be sure to consider all aspects of your situation when making this determination.
A key point worth calling out here: I use the term scripting to refer to the process of converting a test case into an automated test. Whether you are writing code, using a record/playback tool, or something else entirely, it’s all covered by that term. I point this out because the methods by which you automate a test case can vary wildly, even within the same organization.
So why do I include this section in my evaluation? Because the time it takes to create a given automated test is one of the key variables in determining whether your ROI will be acceptable. We’ll walk through how to figure out your ROI a little further down, but I wanted to make sure I highlighted it here. This simple metric often gets completely overlooked, yet it is the leading reason most automation strategies fail.
eXecution
When we are discussing regression tests, it’s important to keep in mind that the scripts you are working on will probably not be very useful for your current sprint/release, simply because of when they can be scripted and run. Ideally, you will be able to execute your regression suite on a daily or weekly basis (depending on its size), but the number of times you execute should absolutely be taken into account when determining what makes it into your scope. Once you have an automated regression suite ready to go, put it on a consistent execution schedule, whether that is nightly, weekly, bi-weekly, etc. In addition to the actual execution, make sure you have a place to record results, whether that is your build system, a wiki, a spreadsheet, or your test case management suite. The where isn’t as important as making sure the information is captured and available to everyone.
Newer automation tools and platforms have made this last point less important, but it’s still worth mentioning: the time it takes to execute a single test, as well as the suite as a whole, has to be factored in. As your suite grows (and it should), you’ll struggle to get through it all if each test adds minutes to the execution time. This factor is the primary reason I stay away from UI-based tests when it comes to regression. Due to how some apps are designed, wait times are inevitable, and that time adds up.
Determining your ROI
As I mentioned earlier, what counts as frequent execution differs in each environment. Some teams may consider once a week pretty frequent, while others run daily (or even multiple times a day), and someone else runs once a month. It all depends on what you are testing and how long it takes to execute manually. Let’s take the following scenario:
If a given test case takes 15 minutes to execute manually and 8 hours (480 minutes) to script, you will need to execute that test 32 times just to break even on the scripting time. That does not take into account any maintenance work that might be needed; depending on the maturity of your product, the amount of maintenance time will vary (and it can come at any point in the lifetime of that test). In addition, there is another oft-ignored variable: if your automation team makes more than your manual testers (which is pretty standard), that also needs to be factored in. Once we take these factors into account, you will see why I consider them important.
Let’s say your automation team makes on average 1.5x what the manual testers make, and we expect maintenance to add 50% over the life of the script (which is not unrealistic at all). Using these numbers, we get the following:
(480/(15/1.5)) + 50% = 72 executions
Based on this result, if you are executing your regression suite weekly, it will take about a year and a half before that particular test pays for itself. Granted, these are rough metrics, but the formula is simple and can give you a pretty good idea of what to expect:
(scripting time in minutes/(manual execution time in minutes/salary difference factor)) + expected maintenance % = # of executions before payoff
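The formula above is easy to drop into a few lines of code if you want to evaluate a whole list of candidate tests. Here is a minimal sketch in Python (the function name and parameters are mine, not part of the SSX method itself):

```python
def executions_to_payoff(scripting_min, manual_min,
                         salary_factor=1.0, maintenance_pct=0.0):
    """Number of automated executions needed before scripting effort pays off.

    scripting_min    -- time to script the test, in minutes
    manual_min       -- time to execute the test manually, in minutes
    salary_factor    -- automation-to-manual pay ratio (e.g. 1.5)
    maintenance_pct  -- expected maintenance over the script's life (e.g. 0.5 for 50%)
    """
    # Each manual run is effectively cheaper by the salary factor,
    # then maintenance overhead is added on top of the break-even count.
    base = scripting_min / (manual_min / salary_factor)
    return base * (1 + maintenance_pct)

# The scenario from this article: 8 hours (480 min) of scripting,
# a 15-minute manual test, 1.5x salary factor, 50% maintenance.
print(executions_to_payoff(480, 15))            # simple break-even -> 32.0
print(executions_to_payoff(480, 15, 1.5, 0.5))  # with all factors  -> 72.0
```

Running this against your candidate list quickly shows which tests will never realistically reach their payoff point at your execution cadence.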
Another useful metric is how many executions each additional hour of scripting adds to the payoff point. Applying the same factors as above:
(60 / (manual execution time in minutes / salary difference factor)) + expected maintenance % = executions added per hour of additional scripting
Using this in the scenario above, each additional hour of scripting adds 9 executions to the payoff point (60 / 10 = 6, plus 50% maintenance). And this example uses a fairly long manual execution time; if you drop the manual execution time down, the numbers ramp up pretty quickly.
The SSX method has served me well to this point, and I’m sure you can get some use out of it as well. You may go through this exercise and still decide to move forward, and that is entirely OK: you will be going in armed with an understanding of what you are taking on, and you will be able to manage accordingly. One of the best things you can do for your company is to make sure automation is actually adding value, not just becoming a resource drain. You can show what appears to be a tremendous amount of output, yet it may have cost far more in the long run than simply executing manually. At the end of the day, you have to be saving the company time and/or money; if you are not doing that, then no amount of automation is going to help you.
This is a guest posting by Jon Robinson. Jon has helped build and lead a wide variety of teams across all aspects of QA over the past 11 years, and recently launched The QA Consultancy, which specializes in helping organizations improve their overall Quality. Having worked with organizations like HomeAdvisor, ScrippsNetworks, and Victoria’s Secret in the past, Jon has battle-tested his sometimes unique approach to QA in some incredibly demanding environments. He is based out of Nashville, TN, where he lives with his wife and kids, and is a huge fan of the World Champion Chicago Cubs. He can be contacted on Twitter @jumpmancol or by email at firstname.lastname@example.org.