A path to the light: stopping ‘secret’ software, managing maintenance and evidencing impact
Posted on 27 May 2021
By Yo Yehudi, Mario Antonioletti, James Graham, Matthew Brown and Shoaib Sufi.
This blog post is part of our Collaborations Workshop 2021 speed blog series.
Research software is a critical part of the research landscape and contributes to scientific discoveries across the full breadth of research. However, when it comes to grant-writing, software maintenance is often treated as taboo - a phrase not to be uttered for fear of invoking criticisms like ‘lacking novelty’ or ‘incremental’. This has driven software maintenance underground, leading to a lack of visibility to funders, a sense of underappreciation among developers, and reduced long-term planning.
We started this blog by discussing how to cost software on grants as a way to move away from the ‘secret’ software development that currently predominates. But we quickly realised that the core issues run deeper than putting numbers in a grant application. How can we better recognise the value of software (by thinking about impact from the start and resisting ‘secret’ software)? Should funders be the default option for software support (it depends on the circumstances), and what other options exist (several)? Should all research software be maintained (probably not)? Below we address several of these points in more depth, with thoughts on how to approach each.
Evidence-based culture shifts away from ‘secret’ software
Software is crucial to research, but in far too many cases it remains invisible and its contribution goes unrecognised. Funded research thus generates a huge amount of software development that is then effectively secret. This software is seen only as part of the process of research, rather than as a valued research outcome worth maintaining and nurturing in its own right.
Evidencing impact might be a way to aid the case for recognising and supporting this research software, bringing this ‘secret’ software to light and providing a compelling case for maintenance funding. This will help raise awareness among funders, who must then act on this new information when assessing grants.
Think about how to evidence the impact of your software from the very beginning
Grant proposals to develop software often provide relatively little information on how reviewers might measure the prior or expected impact of the output. Likewise, funders don't provide ‘success metrics’ to aim for. You should consider how to evidence impact at the earliest stage of planning a grant, articulating what the expected impact of the software is and considering how best to build in measurement of this over the life course of a proposal. This can be included as part of a proposal to show that a plan to develop, expand and interact with new communities is being taken seriously.
This both helps funders see the ‘big picture vision’ of how the software would help enable a field and provides a compelling body of evidence for future applications. There is no universal metric that can be used here - different fields differ in community size and publication habits - so funders must contextualise the proposed impact within the relevant community, usually through expert advice from the relevant field. Experts - both in software engineering and in the relevant scientific domains - help funders understand the implications of starting or stopping a particular project.
Plan to build a community around your software and consider alternative funding sources
You should not think of a funder as your sole long-term revenue stream - they may provide your initial starting capital, but funders are very unlikely to provide a long-term, secure funding line. It is therefore worth spending some time thinking of alternative ways to obtain long-term funding from one or more revenue streams. For instance, you could try to develop an active community around your software that will help support and develop it once the initial funding has finished, although this carries inherent risks, particularly maintainer burnout. Members of this community may themselves have access to grant funding or contribute on a voluntary basis.
Alternatively, if your software is attractive to commercial users, you could develop a dual licensing scheme: a free or low-cost version for academics and a more expensive version for commercial use (e.g. Zegami, Psychopy / Pavlovia and FSL). A long-term commercialisation strategy is also a possibility, and institutions can often help with this (although they may want a cut of the profits), but this isn't for everyone and is a significant time investment in itself. Regardless, if your ultimate aim is a long-term maintained piece of software that may be of use to others, it will pay dividends to think early on about how to take your project beyond the initial funding phase(s).
Plan for a graceful software sunsetting, if appropriate
Much of the software developed as part of the research process can be considered ‘enabling software’ - software that enables a specific research project to be completed within the scope of a single funding cycle. Such software has often served its purpose once the paper has been published, the code archived and the research has moved on.
However, it can be difficult to identify which software fits into this category and which should be maintained over a longer period. This stems, in part, from an emotional attachment to software you may have developed, which leads to the idea that just because something can be maintained, it should be maintained. Your success metrics, and the process through which you designed them, can again be useful here: your project may be best served by allowing the software to gracefully conclude, or even be officially sunsetted, within the time available on the initial funding.
Neither funders nor researchers should be afraid to see the graceful, managed closure of a project when the right time has come. By planning in advance for this eventuality, you know that your software will be left in its best state when the project has run its course. Funders and institutions should be looking to support and enable ‘good practice’ in these cases, ensuring the outputs are robust and reproducible.
Closing remarks
Overall, it is clear that budgeting for software maintenance needs more than just numbers costed in a grant proposal. There are culture shifts within many parts of the academic system that need to be made before ‘secret’ software can emerge into the light (and we haven’t even dealt with credit assignment in publishing!). Funders must recognise software development and maintenance as both critical and legitimate components of research projects, seek the right expert opinions to help inform decisions and acknowledge the risks of discontinuing support for ‘mission critical’ software.
Software developers can put themselves in the best position by thinking about evidencing impact from the start, actively diversifying revenue streams, and thinking hard about the ‘natural lifespan’ of software designed for a time-limited job. Perhaps these additional factors should be included in associated Software Management Plans or Output Management Plans.