Automated Testing... Slower Than What Exactly?

One of the persistent myths in the software development world is that writing automated tests slows down productivity. Is this really the case? We may feel we code faster without tests, but in the end that speed is an illusion. The pity is that many companies fail to see the real added value of software quality.


Let's take an in-depth look at the myth of slow automated testing

Let me ask you this: writing automated tests is slower than what, exactly? Slower than writing code without testing it? A developer must test their code at least once, even if this is done manually.

How do you carry out manual tests?

You draw up a list of possible scenarios, or test cases. You integrate your methods into your software, start the program, and check the results. It may seem quick at first glance, but you've started a server (if applicable), waited for the software to open, then hunted for your test results. If you find a bug, you'll have to go through this time-consuming procedure all over again once it's been fixed.

And with automated tests?

You make the same list of scenarios representing your test cases. You translate your scenarios into code just once. You then write your code and see the results instantly, in a matter of seconds. This is the main advantage of automated testing.
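To make this concrete, here is a minimal sketch of what "translating a scenario into code" can look like. All the names are hypothetical, invented for illustration: a `shipping_fee` method and two scenarios from the list, each written once and re-runnable in seconds.

```python
# Hypothetical method under test: free shipping for orders of 100 or more,
# otherwise a flat fee of 10.
def shipping_fee(order_total: float) -> float:
    return 0.0 if order_total >= 100 else 10.0

# Each scenario from the list, translated into code just once.
def test_small_order_pays_flat_fee():
    assert shipping_fee(40) == 10.0

def test_large_order_ships_free():
    assert shipping_fee(150) == 0.0

# Run the scenarios instantly: no server to start, no screens to click through.
test_small_order_pays_flat_fee()
test_large_order_ships_free()
print("all scenarios passed")
```

In practice a test runner (pytest, JUnit, and the like, depending on your stack) discovers and executes these functions for you; the point is that each scenario is written once and then costs seconds to re-run.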


Why do you need to start your software to find out if your method works? When you find a bug, all you have to do is change your code and you'll see your new results right away. You don't have to repeat the testing process, because it's already been written! So using automated testing allows you to be more agile in your process.



What's really interesting in the end is that your tests will still be around months from now. So you'll always be certain in the future that your methods work, with proof to back them up (yes, all the tests have passed!). What's more, the tests are a great reference for developers to understand your methods, as they are examples of use. Therein lies the return on investment of automated testing.


Validating regression, an added value

Making sure your software works is essential. Making sure it still works after a new feature has been added is just as important. When a modification is made, manually re-running the same old tests is a big waste of time. In fact, validating against regression is one of the most important added values of automated testing. Automated non-regression testing ensures that your software works at all times, without any additional effort.
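A minimal sketch of the idea, with hypothetical names: the suite below was written once when `slugify` shipped, and is simply re-executed after every subsequent modification to catch regressions.

```python
# Hypothetical method shipped months ago; its behavior is guarded by the suite.
def slugify(title: str) -> str:
    return title.strip().lower().replace(" ", "-")

# The non-regression suite: the same old tests, re-run automatically
# after each change, at no additional effort.
def run_regression_suite() -> None:
    assert slugify("Hello World") == "hello-world"
    assert slugify("  Automated Testing ") == "automated-testing"

run_regression_suite()
print("regression suite green")
```

If a later change to `slugify` breaks either expectation, the suite fails immediately, long before the bug reaches a user.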


"It's all interesting, but it doesn't apply to our situation."

In fact, this applies to all companies involved in software development. If world leaders like Google, Microsoft and Facebook can automate their software testing, then you can definitely automate yours. If you're having trouble testing your software, it's simply because your architecture wasn't designed with testability in mind. Software architect Michael Feathers proposes a very interesting approach to this problem in his book Working Effectively with Legacy Code. He explains how to move from a "legacy" system to one that is testable and easy to maintain. I recommend this book to get you started!


I'd like to conclude by sharing with you this excellent quote from Félix-Antoine Bourbonnais, which aptly describes how I feel about testing: "You're not paid to test? You're not paid to create bugs either!"
