By Andrew Walker, Sam Mangham, Robert Maftei, Adam Jackson, Becky Arnold, Sammie Buzzards, and Eike Mueller
Good practice guides demand that tests be written alongside our software, and continuous integration coupled to version control is supposed to boost productivity. But how do these ideas relate to the development of scientific software? Here we consider five ways in which tests and continuous integration can fail when developing scientific software, in an attempt to illustrate what could work in this context. Two recurring themes are that scientific software is often large and unwieldy (sometimes taking months to run on the biggest computer systems in the world), and that its development often goes hand in hand with scientific research. There isn't much point writing a test that runs for weeks and then produces an answer when you have no idea if the answer is correct. Or is there?
1. Test fails but doesn’t say why
This one kind of explains itself. If the test doesn't tell you what's wrong, how are you supposed to fix it?
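As a sketch of the difference, consider the two tests below. The `simulate` function is a hypothetical stand-in for a scientific model (not from the original article): the first test uses a bare `assert` that reports nothing on failure, while the second compares against an expected value with a tolerance and a message that includes the actual numbers.

```python
import numpy as np

def simulate(n_steps):
    # Hypothetical stand-in for a scientific model:
    # returns an exponentially decaying signal.
    return np.exp(-0.1 * np.arange(n_steps))

def test_unhelpful():
    # A bare assert: if this fails, you only learn that something is wrong,
    # not what the value was or how far off it is.
    assert simulate(10)[-1] < 0.5

def test_helpful():
    result = simulate(10)
    expected = np.exp(-0.9)
    # An explicit tolerance plus a message reporting the actual value,
    # so a failure immediately shows the size of the discrepancy.
    assert np.isclose(result[-1], expected, rtol=1e-6), (
        f"final value {result[-1]:.6f} differs from expected {expected:.6f}"
    )
```

Frameworks such as pytest can introspect plain asserts and show the compared values, which helps, but for floating-point scientific output an explicit tolerance and message still make the failure far easier to diagnose.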