Some software should be sustained, and some shouldn’t. But how can we choose, what is the cost of sustaining it, and what is the cost of letting it pass away?

Posted by s.aragon on 13 December 2018 - 10:00am
Image by Javier Allegue Barros.

By Andrew Edmondson, Mike Zentner, and Cristian A. Marocico

This post is part of the WSSSPE6.1 speed blog posts series.

We’re writing this blog from the perspective of people who are responsible for helping researchers in our institutions develop their own software for their own research purposes. We want to help our communities to make the right decisions about the sustainability of their software – and therefore about their time and money. Whatever decisions we make now will have future consequences. Then there’s the added complexity of the ever-changing community in universities and research institutions. It’s not enough to equip people; we need to encourage a culture that is well-versed in the topic of software sustainability.

The implication here is that some software shouldn’t be sustained. That sounds wrong, doesn’t it? We can look at research software as an ecosystem subject to a kind of natural selection. New software comes into existence all the time, whenever a Master’s or PhD student with some programming skills needs to solve a particular problem and it’s easier for them to write a short application than to find and adopt an existing solution. Some software is written to be run once and then cast aside. A quick script that one writes to re-organise one’s files doesn’t need to be curated. It could be kept around for future reference, but one wouldn’t expect it to work again next time without some modifications. Let’s not try to save it all for posterity.

Then there’s the slightly tricky topic of “bad” software. Let’s face it, some software just isn’t very good. And bad software risks leading to bad research results. Even if the results are correct, using badly written and/or untested software in published research makes it more likely that the research is not reproducible. The Software Sustainability Institute’s maxim “Better Software, Better Research” has an obvious negative converse. So the choice, from reproducibility and sustainability perspectives, isn’t just “let’s preserve it.” The choice is “let’s make it good” or “let’s let it die”. Natural selection again. Even some good software should be allowed to pass away. Consider collaboration: if every research group uses its own software then collaboration between groups is harder.

So should we sustain any research software at all? Yes. At the other end of the spectrum from the untested, quickly written software for a specific situation and problem, we find software that is written carefully, well tested, well documented and used ubiquitously throughout a research field. Clearly this software must be preserved, curated and supported. Most research software lives between these two ends of the spectrum. We can’t just say “only keep the best, mature software”, because we need innovation to support new research. So should you make your software sustainable? That question is probably shorthand for “which principles of software sustainability are appropriate for your software?” Let’s consider some of them one by one:

  • Availability/discoverability: Do other people even need to use your software at all? Will they need/want to use it next year? Will people need to compile it to use it, or will you ship pre-built packages? These questions will help you to decide where to host your code, and how to distribute your software. For distribution, check whether there’s a standard distribution method for the language: if your software is written in Python then use PyPI, if it’s R then use CRAN, etc.
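
  For a Python package destined for PyPI, a minimal packaging configuration might look like the sketch below. The project name, licence, and dependencies here are placeholders for illustration, not a real package:

  ```toml
  # Sketch: a minimal pyproject.toml for publishing a Python package to PyPI.
  # All names and metadata below are placeholders - adapt them to your project.
  [build-system]
  requires = ["setuptools>=61"]
  build-backend = "setuptools.build_meta"

  [project]
  name = "my-research-tool"       # placeholder name
  version = "0.1.0"
  description = "Short description of what the software does"
  license = {text = "MIT"}        # choose the licence that fits your situation
  dependencies = ["numpy"]        # list your runtime dependencies
  ```

  With a file like this in place, standard tools (`python -m build`, then `twine upload`) handle building and publishing, so users can simply `pip install` the result rather than compile from source.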

  • Versions: This could be wrapped up in the previous point, but it’s so important (and a cause of regular frustration, for example for people who install lots of software on HPC systems) that it deserves its own entry. Please make sure you use versions appropriately (e.g. tags in git). If you run an experiment using software then at the very least you should record the exact version of the software (and its dependencies etc.) that you used. It’s surprising how many research papers make no mention of such details. Would you be able to run your own experiment again and get the same result? Of course, the question of “reproducibility” is much more complex than that: what if the OS has been upgraded, or the timezone has changed, or any of a thousand other seemingly inconsequential changes has caused the software to produce different results?
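
  Recording the versions you actually ran with can be automated. A minimal sketch in Python, using only the standard library (`importlib.metadata` requires Python 3.8+); the package list is an assumption you would replace with your experiment’s real dependencies:

  ```python
  # Sketch: record the exact interpreter, OS, and package versions used in an
  # experiment, so the run can be reproduced later. Assumes Python 3.8+.
  import json
  import platform
  import sys
  from importlib import metadata

  def environment_snapshot(packages):
      """Return a dict recording interpreter, OS, and package versions."""
      versions = {}
      for name in packages:
          try:
              versions[name] = metadata.version(name)
          except metadata.PackageNotFoundError:
              versions[name] = "not installed"
      return {
          "python": sys.version,
          "platform": platform.platform(),
          "packages": versions,
      }

  # Write the snapshot next to your results, e.g. as JSON.
  snapshot = environment_snapshot(["pip"])  # list your experiment's packages here
  print(json.dumps(snapshot, indent=2))
  ```

  Saving this JSON alongside each set of results costs almost nothing and answers the “which version did we use?” question definitively.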

  • Testing: Do you want people to be able to make changes to your code (including yourself in the future)? Do they need to know that their changes haven’t broken existing functionality? (That’s a loaded question – the answer is “yes”.) Then you want regression tests of some kind, and the easiest type is often unit tests. And if you’re doing that then you should also look into continuous integration so that your tests are run automatically.
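
  A regression test can be very small. The sketch below uses a hypothetical `normalise` function standing in for whatever behaviour your software needs to preserve; the function and its tests live in one file purely for illustration:

  ```python
  # Sketch: a minimal regression test around a hypothetical analysis function.
  # In a real project the function and its tests would live in separate files.
  def normalise(values):
      """Scale values so they sum to 1 (the behaviour we want to preserve)."""
      total = sum(values)
      return [v / total for v in values]

  def test_normalise_sums_to_one():
      result = normalise([2.0, 3.0, 5.0])
      assert abs(sum(result) - 1.0) < 1e-9

  def test_normalise_preserves_order():
      assert normalise([1.0, 3.0]) == [0.25, 0.75]
  ```

  Running `pytest` on a file like this executes both tests; a continuous integration service can then run the same command on every commit, so any change that breaks the preserved behaviour is caught immediately.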

  • Documentation: There are two distinct communities of people for whom you should write documentation. First there are the researchers who will use your software. If you want such a community then the documentation for them needs to tell them how to install it, and how to use it. It also needs to be clear. The second community is one made up of developers. Do you want people to contribute to the development of the software? Their documentation needs to explain how the software is architected and any design principles you’ve chosen. It needs to contain contribution guidelines. You’ll also need to think about issue tracking and processes for reviewing and merging contributions.

  • Licensing: Under what terms would you want to allow others to use and modify your software? Certain licences may encourage any sort of use and any modification (which may or may not become part of what you consider the sustainable branch of the software). Others may place restrictions on your software’s redistribution (for example, GPL). A related question is whether your employment terms allow such distribution at all, or whether your institution considers your work proprietary.

  • Commitment: Do you love your software enough to put in the energy it takes to gain the mindshare of the community it serves, which is what makes software sustainable? If you don’t have this commitment, who would? Even with all the best practices, people drive sustainability.

Let’s now consider the cost of sustaining software and the cost of letting it die. Commercial software teams know that the cost of the initial development of software is dwarfed by the cost of supporting, maintaining and fixing it through a multi-year lifetime. Will you need to keep researching and developing the software to incorporate newly released libraries, methods and techniques? Will you need an outreach, marketing or sales operation to promote its use? If the revenue (financial or reputational) from the software is important to you then you need to promote it. And if people use it, they will find bugs in it (which you’ll need to fix). And if they like it they’ll ask you to add new features (which you may choose to develop). All of that is to say, the total cost is far more than the initial cost of development.

Could some of these costs be shared across multiple projects? Common practice in sustainable businesses is to consolidate expertise in various functions and focus innovation as the unique item. For example, customer support, outreach, user testing etc., are common activities that address multiple products in a commercial setting… and similar economies should be present in research software.

Now, we talk a lot about how researchers (generally speaking) need to learn better software skills and practices. But perhaps the teams that support these communities, as experts, ought to be creating tools that by virtue of their existence already facilitate best practices among researchers who are not as skilled. In other words, are we doing the community a disservice by helping researchers develop special tools when our expertise could be applied toward toolkits that encourage better software? If an institution, funding council, community, or learned society provided such tools, using them could become part of researchers’ normal culture. Here are some ideas:

  • A “health checker” for research software with proactive notifications. This tool would alert you that you haven’t specified a licence, or included a README file, or met any other “best practice” criteria. There are already examples of such things, but they are not widely used.

  • Automatic code checking and fixing (like flake8 and autopep8 in Python).

  • A tool that automatically creates a unit testing structure (with continuous integration) tailored to the local environment and systems.
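
The first of these ideas, the “health checker”, might start as something very simple. The sketch below scans a project directory for a few best-practice files; the particular file names checked are assumptions, and a real tool would check far more:

```python
# Sketch of the "health checker" idea: scan a project directory for
# best-practice files and report what is missing. The file names checked
# here are illustrative assumptions, not a definitive list.
from pathlib import Path

CHECKS = {
    "LICENSE": "No licence file - others cannot legally reuse your code.",
    "README.md": "No README - users have no starting point.",
    "CITATION.cff": "No citation file - hard for others to credit your work.",
    ".gitignore": "No .gitignore - build artefacts may pollute the repository.",
}

def health_report(project_dir):
    """Return a list of warnings for best-practice files missing from project_dir."""
    root = Path(project_dir)
    return [message for name, message in CHECKS.items()
            if not (root / name).exists()]

for warning in health_report("."):
    print("WARNING:", warning)
```

A proactive version could run on a schedule or on every push, emailing the warnings rather than printing them, which is where the “notifications” part of the idea comes in.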

Ultimately this is a question of time, money, and desire. It costs money to create environments, tools, training courses and cultures that make it easier for researchers to produce higher-quality software more quickly. But the economies of scale from sharing such costs across projects and institutions should lead to savings. Even without that, more widespread use of good software practices from the early days of a research software project will yield higher-quality software at a lower price. Yet this requires a real desire from the software creator to have their software become sustainable. That doesn’t happen automatically. By rethinking how we as research software engineers build assets that help our customers with sustainability, we might even reach the nirvana of “cheaper software, better research”.