
How do we help adapt software best practices to make them more applicable to domain researchers?



Author(s)

Alexander Morley, SSI fellow

Stuart Grieve, SSI fellow

Tania Allard, SSI fellow

Adam Tomkins

James Grant

Posted on 8 February 2018



By Adam Tomkins (Chair), University of Sheffield; James Grant, University of Bath; Alexander Morley, University of Oxford; Stuart Grieve, University College London; and Tania Allard, University of Sheffield.

There is growing interest in the adoption of software best practices in research computing and allied fields. Best practices improve the quality of research software and the efficiency of its development and maintenance, and they have the potential to deliver benefits beyond software development itself. However, this interest is not universal, and a drive for best practice risks widening the divide between those who embrace the change and those who do not. It is therefore vital that Research Software Engineers (RSEs) work closely with domain specialists to bridge this divide and meet the challenges of efficiency and reproducibility:

  • How do we develop code efficiently?

  • How do we make efficient use of the technologies available to us, especially given current funding restrictions?  

  • How do we encourage adoption of methods that put reproducible research at the forefront of our work?

  • How can we advocate the use of tools beyond research software?

In many domains, advocates of better practice already push, quietly or loudly, for cultural changes that can improve efficiency and reproducibility. Identifying these individuals, and working with them, is essential to developing approaches that engage researchers across all disciplines. They can help to deliver training, adapting domain-flavoured courses to demonstrate applications in specific subject areas; using relevant case studies to motivate the learning of methods wins half the battle. The career stage of these individuals is irrelevant, but having a range within departments means that RAs (RSEs) and students can work on producing and delivering training, whilst established researchers push for training adoption and the funding to support it (at council and institutional levels).

Training should focus on minimum viable examples. Whether the training covers programming in a particular language, version control, or reproducible workflows, students must be interested in the application and take away usable examples that are relevant to their work. This is the only way they will use the methods, tools and skills in their research and truly recognise the benefit. Our challenge is to develop training material that achieves this, alongside identifying funding sources to secure its development and ensuring the resources for the hardware and software services that support it.

This training can also leverage additional benefits of software engineering tools. Sites such as Overleaf increasingly offer implicit version control for collaborative writing (for example, research papers or grant proposals). Repositories for data management, together with scripted workflows (e.g. Jupyter Notebooks) for automated analysis, graphing and accurate, up-to-date paper generation, are the simplest way to meet a journal’s requirements for reproducible science. Carrot and stick: why should you use these methods? They will make life easier, and without them you won’t be able to publish your research.
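As a sketch of what such a scripted workflow might look like (the file, column and figure names below are invented for illustration rather than taken from any real project), a single short Python script can regenerate the summary statistics and the figure from the raw data, so every number in the paper stays reproducible:

```python
# Hypothetical end-to-end analysis script: re-running it rebuilds every
# derived number and figure from the raw data. All names are illustrative.
from pathlib import Path

import pandas as pd
import matplotlib.pyplot as plt

Path("results").mkdir(exist_ok=True)

# Load the raw data committed alongside the code.
data = pd.read_csv("data/measurements.csv")

# Recompute the summary statistic quoted in the paper.
summary = data.groupby("condition")["response"].mean()
summary.to_csv("results/summary.csv")

# Regenerate the figure exactly as it appears in the manuscript.
fig, ax = plt.subplots()
summary.plot.bar(ax=ax)
ax.set_ylabel("Mean response")
fig.savefig("results/figure1.pdf")
```

Kept under version control next to the data, the same script doubles as the methods record for the paper.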

Testing of software is essential, but industry standards of unit testing, and increasingly test-driven development, are not always intuitive to developers in research; indeed, there is an argument that the research itself is the test(!?). The principle of validation remains essential, however. Alongside regression tests to validate development, each discipline has some body of analytic theory, literature results or similar that a code must reproduce to demonstrate that it is an accurate implementation. Working with researchers to identify these, and developing the tests to meet them, is a clear opportunity for RSEs to demonstrate their value.
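A minimal sketch, assuming pytest and NumPy (the integrator and the analytic target are illustrative), of what such a validation test can look like: the numerical routine must reproduce a result known exactly from theory, and a regression test pins a previously validated value:

```python
import numpy as np

def integrate_trapezoid(f, a, b, n=10_000):
    """Integrate f over [a, b] with the composite trapezoidal rule."""
    x = np.linspace(a, b, n + 1)
    return np.trapz(f(x), x)

def test_reproduces_analytic_result():
    # Analytic theory: the integral of sin(x) over [0, pi] is exactly 2.
    assert abs(integrate_trapezoid(np.sin, 0.0, np.pi) - 2.0) < 1e-6

def test_regression_against_known_value():
    # Regression test: pin a previously validated output so any future
    # change that alters it is caught immediately.
    assert np.isclose(integrate_trapezoid(np.exp, 0.0, 1.0), np.e - 1.0)
```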

Whatever the application or domain, clear documentation is vital to engaging users. While this is obvious to the users of any software, it is often difficult for its developers to produce. RSEs can contribute hugely by ensuring that code is not just documented, but that the methods employed and the means of their implementation are clearly explained. Here, the combination of understanding software and having a broad research background helps in distinguishing the ‘obvious’ from the specialism and expertise of each domain.
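As a hedged sketch of the distinction (the function and the rationale in its docstring are invented for illustration), documentation should explain the method and why it was chosen, not just the call signature:

```python
import numpy as np

def welch_t_statistic(a, b):
    """Welch's t-statistic for comparing two sample means.

    Method: unlike Student's t-test, Welch's variant does not assume the
    two samples share a common variance, so it remains valid when the
    groups have very different spreads (the situation this analysis was
    designed for).

    Parameters
    ----------
    a, b : array-like
        The two independent samples to compare.
    """
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    # Standard error under unequal variances; ddof=1 gives the unbiased
    # sample variance.
    se = np.sqrt(a.var(ddof=1) / a.size + b.var(ddof=1) / b.size)
    return (a.mean() - b.mean()) / se
```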

RSEs do not (just) write software, and they certainly cannot do your research. They can help to deliver good practice and identify the benefits of these tools and methods across a range of disciplines, improving the efficiency and quality of research. They add value by improving the sustainability of software and the reproducibility of the research it produces, across all domains.
