To begin, let's establish what a regression is. In software terms, a regression is an unwanted or unintended malfunction (a bug) in a feature that formerly worked as expected. This type of bug is usually caused by a systematic error in the software's design, or introduced at the source code level when the software undergoes changes such as new features or bug fixes; it is, in other words, a human error.
Suppose we add a new feature to our application, say a new item in the Options menu which opens a dialog for extra configurations, but that new feature somehow conflicts with the Tools menu, making it invisible or inaccessible, or, if it remains accessible, causing the application to crash every time we try to open it. Imagine we solve that bug and can access the Tools menu again, only this time it's the Options menu itself that stops responding. We try to fix that too, but the Tools menu bug comes back, and so on. That, roughly and clumsily speaking, is what's called a regression: the emergence or re-emergence of bugs in functionality that previously worked.
So, when we say we perform regression testing, what we actually mean is that we're testing our software to verify that new features or fixes don't conflict with its previous stable state; in other words, that our software doesn't suffer a regression.
Poor revision control, bad coding practices, and insufficient testing, among other negligent practices, are all potential triggers for software errors.
Regression test routines can be performed manually, automated, or both, and should be run frequently, especially before a code release.
After an appropriate debugging/fixing phase, we can start regression testing. We select the most relevant cases, usually those which require re-testing based on the module or component of the software where changes have been applied. Obsolete test cases are discarded, and the selection is refined according to characteristics that make cases worth reusing:

- cases with a high frequency of errors;
- cases which verify core functionality of the software;
- cases which exercise features visible to the user;
- cases which cover code that has recently changed at the source level;
- cases which ran successfully in previous cycles;
- cases which are known to have failed in early testing stages.

From this selection we establish which set of tests will serve as smoke tests and which as sanity tests, as sketched below.
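As a rough illustration of what such a selection can look like once the suite is automated, here is a minimal sketch using pytest markers. The test names and the checks they perform are hypothetical placeholders; in practice the custom markers would be registered in pytest.ini, and `pytest -m smoke` or `pytest -m sanity` would run just that subset.

```python
# Minimal sketch: tagging selected regression cases as smoke or sanity tests
# with pytest markers. The checks are placeholder stand-ins for real ones.
import pytest

def tools_menu_is_accessible() -> bool:
    return True  # placeholder for a real check against the application

def options_menu_is_accessible() -> bool:
    return True  # placeholder for a real check against the application

@pytest.mark.smoke
def test_tools_menu_is_accessible():
    # User-visible, core functionality: selected as a smoke test.
    assert tools_menu_is_accessible()

@pytest.mark.sanity
def test_options_menu_is_accessible():
    # Covers recently changed code: selected as a sanity test after the fix.
    assert options_menu_is_accessible()
```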
After performing proper exploratory testing and estimating the time needed to execute the regression tests, we move on to identifying candidates for test automation. Prioritization takes place right after this: high-priority test cases are executed first, since they cover core functionality where failures would be critical, while mid- and low-priority cases are executed last. At this stage we're ready to execute our test case suite.
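To make that ordering concrete, here is a minimal, hypothetical sketch of priority-driven execution in Python. The case names, priority values, and the printed execution plan are illustrative assumptions, not part of any particular tool.

```python
# Minimal sketch: ordering selected regression cases by priority before execution.
from dataclasses import dataclass

@dataclass
class RegressionCase:
    name: str
    priority: int   # 1 = high (core functionality), 2 = mid, 3 = low
    automated: bool

suite = [
    RegressionCase("checkout_flow", priority=1, automated=True),
    RegressionCase("options_menu_dialog", priority=2, automated=True),
    RegressionCase("footer_links", priority=3, automated=False),
]

# High-priority cases run first; manual-only cases are flagged for the team.
for case in sorted(suite, key=lambda c: c.priority):
    mode = "automated" if case.automated else "manual"
    print(f"Run {case.name} ({mode}, priority {case.priority})")
```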
As some may know, there are several technical approaches to regression testing. Here's a varied, though not necessarily complete, list:
Naturally, regression testing is a time- and resource-intensive process. A QA team will usually need to run a large number of test cases in order to cover the most crucial components of the software potentially affected by new changes. For that reason, it's only common sense to consider regression tests a natural candidate for automation.
Compared to manual testing, the possibilities with automation are enormous. Using automated test scripts, which can be written in a variety of programming languages such as Java, JavaScript, or Python, QA testers can parametrize functions, locate and map elements, make assertions, debug errors, generate analysis reports, and more, with notable speed and efficiency. Once the test preparation is ready, the tester can simply execute it, sit back for a minute, and watch the tool do it all without a single click. By implementing certain design patterns, such as the Page Object pattern, testers can also adapt or scale their testware without necessarily changing any of its core scripts.
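As a simple illustration of those capabilities, here is a minimal sketch in Python using pytest and Selenium WebDriver. The application URL and the element IDs are hypothetical placeholders; a real suite would target your own application, and the parametrization lets one script cover both menus from the earlier example.

```python
# Minimal sketch: a parametrized regression check with pytest and Selenium.
# The URL and element locators below are hypothetical placeholders.
import pytest
from selenium import webdriver
from selenium.webdriver.common.by import By

@pytest.fixture
def driver():
    # Start a headless Chrome session for each test and close it afterwards.
    options = webdriver.ChromeOptions()
    options.add_argument("--headless=new")
    drv = webdriver.Chrome(options=options)
    yield drv
    drv.quit()

# Parametrization: the same script verifies both menus in one run.
@pytest.mark.parametrize("menu_id, dialog_id", [
    ("tools-menu", "tools-dialog"),
    ("options-menu", "options-dialog"),
])
def test_menu_opens_dialog(driver, menu_id, dialog_id):
    driver.get("https://example.com/app")           # hypothetical app under test
    driver.find_element(By.ID, menu_id).click()     # locate the element and interact
    dialog = driver.find_element(By.ID, dialog_id)
    assert dialog.is_displayed()                    # assertion: the dialog must be visible
```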
Features which make Autify an intelligent choice for your regression tests:
If you do a brief search online, you will notice there are plenty of potential candidates for our toolbox. Nonetheless, when it comes to pricing, most of them aren't exactly what we would call straightforward. Besides, real, comprehensive customer support is a value that differentiates a good tool or service from an average one. That's worth keeping in mind.
At Autify we take these things very seriously, because our clients' success is both the cause and the effect of our own.
We invite you to check out these amazing customer stories:
You can find more of our clients' success stories here: https://nocode.autify.com/why-autify