
Adopting automated testing



Author(s)

Mike Jackson

Posted on 4 February 2015



By Mike Jackson, Software Architect.

Automated tests provide a way to check both that research software produces scientifically valid results and that it continues to do so as it is extended, refactored, optimised or tidied. Yet one challenge that can face researchers, especially those with large, legacy codes, is this: where to start?

The prospect of having to write dozens of unit tests can be off-putting at the best of times, let alone when one has data to analyse, a paper to write or a conference to prepare for. Our new guide on Adopting automated testing describes an approach for introducing tests by focusing on end-to-end, or system, tests first.

When working on open call consultancy projects with QuBIc and TPLS, I had to refactor their software, FABBER and TPLS respectively. One problem I faced was that, in each case, there were no automated tests available, so there was no straightforward way for me to check that I was not introducing bugs during my refactoring. While I did have sample inputs I could run their software on, I lacked the domain knowledge, in image analysis and computational fluid dynamics simulation, to check whether the outputs were still correct. Similarly, while working with the Distance project, I was asked how they could automatically test one of their analysis components, MCDS, a standalone executable written in Fortran.

In each case the software (FABBER, TPLS and MCDS) can be run from the command line, without interaction. How the software behaves depends upon the input files and command-line parameters it is given, and each produces one or more output files. Many research tools can be used in the same way, and it is this quality that allows automated tests to be adopted without having to develop, at the outset, finer-grained unit tests for each individual component, module, class or function.

We can treat the tool as a standalone component which takes some inputs, namely its input files and command-line parameters, and produces some outputs, in this case a return code and one or more output files. Given a set of valid inputs, and a set of outputs we know to be correct for those inputs, we can automate the process of both running updated versions of the software on these inputs and validating that the outputs match those we know to be correct.
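
As a minimal sketch of this idea in Python (assuming a hypothetical executable, mytool, which reads an input file and writes an output file; the tool name, flags and file names are illustrative, not taken from FABBER, TPLS or MCDS), such a system test might look like:

```python
import filecmp
import subprocess

# Run the tool as a black box: input files and parameters in,
# a return code and output files out.
result = subprocess.run(
    ["./mytool", "--in", "input.dat", "--out", "output.dat"],
    capture_output=True,
    text=True,
)

# A non-zero return code signals outright failure.
assert result.returncode == 0, f"mytool failed: {result.stderr}"

# Compare the output, byte for byte, against a reference output
# known to be correct for input.dat.
assert filecmp.cmp("output.dat", "expected_output.dat", shallow=False), (
    "output.dat differs from the known-correct expected_output.dat")
```

For numerical outputs, a byte-for-byte comparison can be too strict, for example if floating-point rounding varies across platforms; comparing values to within a tolerance is often a better fit.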

By introducing system testing first, researchers can get some of the benefits of automated testing, such as the security to refactor, extend, optimise or tidy their code, without the overhead of having to implement dozens of unit tests at the outset. Unit testing can then be adopted at a later date, research demands permitting.

Our new guide on Adopting automated testing describes this approach to adopting automated testing by introducing system tests first. It uses Python to implement the tests, though the principles are applicable to any language.
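
As a sketch of how such tests might be organised with a test framework (pytest here; the mytool executable and file names are again illustrative assumptions, not taken from the guide):

```python
import filecmp
import subprocess

import pytest

# Each pair maps a valid input file to a reference output known to be
# correct for that input. The names are illustrative assumptions.
TEST_CASES = [
    ("inputs/small.dat", "expected/small.dat"),
    ("inputs/large.dat", "expected/large.dat"),
]

@pytest.mark.parametrize("input_file,expected_file", TEST_CASES)
def test_end_to_end(input_file, expected_file, tmp_path):
    """Run the tool on a known input and compare against its reference output."""
    output_file = tmp_path / "output.dat"
    result = subprocess.run(
        ["./mytool", "--in", input_file, "--out", str(output_file)])
    assert result.returncode == 0
    assert filecmp.cmp(str(output_file), expected_file, shallow=False)
```

Running pytest then exercises every input/reference pair, and extending the suite is just a matter of adding another pair of files.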
