Software and research: the Institute's Blog

Automated GUI testing with AutoHotKey

Robot in front of a screen

By Mike Jackson, Software Architect.

As part of a recent open call collaboration with the Distance project at the University of St. Andrews, I was asked about open source tools to automatically test GUI-based applications on Windows.

By coincidence, an EPCC colleague had recently asked me the same question. So, I hit Google and Wikipedia, tracked down some candidates and decided to try the free open source AutoHotKey toolkit. In this blog post, I describe my experiences with this "scriptable desktop automation" tool.
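AutoHotKey drives an application by sending it the same keystrokes and mouse clicks a user would, then checking the result. To give a flavour of that style of scripted GUI test, here is a minimal sketch using Python's pyautogui library rather than an AutoHotKey script; the target application (Notepad), key sequence and timings are my own assumptions, for illustration only.

    # Illustrative sketch of scripted GUI automation on Windows using
    # pyautogui (not an AutoHotKey script). The application, keystrokes
    # and timings are assumptions made for this example.
    import time
    import pyautogui

    pyautogui.PAUSE = 0.5            # short pause between actions

    pyautogui.hotkey("win", "r")     # open the Run dialog
    pyautogui.write("notepad\n")     # launch Notepad
    time.sleep(1)                    # give the window time to appear
    pyautogui.write("Hello from an automated GUI test")

    pyautogui.hotkey("ctrl", "s")    # open the Save dialog
    pyautogui.write("test_output.txt\n")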

Freeware and open source GUI test tools

Wikipedia lists a number of GUI test tools and Google revealed a couple of others. However, only a few of these are free or open source:

Project funding and economic sustainability in historical research

By Adam Crymble, Institute Fellow 2013

This is the first in a series of articles by the Institute's Fellows, each covering an area of interest that relates directly both to their own work and to the wider issue of software's role in research.

If the Internet went down, all historical software would cease to function, except for Microsoft Word. For an academic historian, a grant to build a high-profile web-based project is likely the biggest pot of money they will ever receive during their career. That is, if they ever receive it at all, since few historians will even apply. Instead, most are content to work in much the same way as they did before the Internet came along. They go to the archives, read books and manuscripts, and write up their findings. This is their tried and tested mode of research, with costs limited to a few new books now and again, a train ticket or two to get to the archives, and refreshments while they're there.

Historical research is still largely a solo intellectual pursuit rather than a technical, team-based one. There is nothing wrong with that. Not all discovery needs to be expensive, and as a taxpayer, I find it refreshing that there are still corners of the academic world in which spending more money isn't the easiest route to career progression. For the ambitious few who rise to the challenge and put in a proposal, meanwhile, the website that results, and in some cases the hundreds of thousands of pounds of funding that comes with it, have made project leaders celebrities within the field. That celebrity brings with it all the accolades and resentment one might expect from fame.

A currency for peer review: PubCreds and Academic Karma

By Lachlan Coin, Academic Karma and Associate Professor, University of Queensland.

In 2010, Jeremy Fox and Owen Petchey proposed an innovative idea – fix peer review by introducing a peer review currency, which they called PubCreds[1]. Fox and Petchey noted that peer review suffers from a tragedy of the commons, in which "individuals have every incentive to exploit the reviewer commons by submitting manuscripts, but little or no incentive to contribute reviews. The result is a system increasingly dominated by cheats (individuals who submit papers without doing proportionate reviewing), with increasingly random and potentially biased results as more and more manuscripts are rejected without external review." Their solution was to privatise the commons by introducing a currency which is earned by reviewing and spent by getting reviewed.
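To make that mechanism concrete, here is a toy sketch of my own (not from Fox and Petchey's proposal) of a review-currency ledger in which credits are earned per review and spent per submission; the exchange rates are invented.

    # A toy PubCred-style ledger: authors earn credits by reviewing and
    # spend them when they submit. The rates below are invented for
    # illustration; the actual proposal sets its own exchange rates.
    REVIEW_EARNS = 1      # credits earned per completed review (assumed)
    SUBMISSION_COSTS = 2  # credits spent per submission (assumed)

    balances = {}

    def complete_review(reviewer: str) -> None:
        """Credit a reviewer for a completed review."""
        balances[reviewer] = balances.get(reviewer, 0) + REVIEW_EARNS

    def submit_manuscript(author: str) -> bool:
        """Debit an author for a submission; refuse if credits are short."""
        if balances.get(author, 0) < SUBMISSION_COSTS:
            return False          # not enough credits: review more first
        balances[author] -= SUBMISSION_COSTS
        return True

    complete_review("alice")
    complete_review("alice")
    print(submit_manuscript("alice"))  # True: two reviews pay for one submission
    print(submit_manuscript("bob"))    # False: bob has not reviewed anything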

Symptoms of the tragedy of the commons in peer review

One of the main symptoms is a slowing down of the communication of science. Fox and Petchey describe other symptoms, including an increasing tendency for journals to peer review only a small fraction of the papers they receive, resulting in greater randomness in what eventually gets published. Another symptom is editors inviting far more reviewers than they need in order to secure the minimum number required (anecdotally, around five times as many).

Feedback from Oxford Software Carpentry

By Philip Fowler, Software Sustainability Institute Fellow and postdoctoral researcher at the Department of Biochemistry at the University of Oxford.

Republished from the original post on Phil's blog.

The Wellcome Trust Centre for Human Genetics at the University of Oxford hosted its first Software Carpentry workshop this January. So how did the workshop go? I’m a bit biased, so to get a better idea I sent the participants a questionnaire similar to the one I used for the Software Carpentry workshop I organised previously.

Open licences for people in a hurry (again)

Lindat license selector interface

By Mike Jackson, Software Architect.

Back in January, I blogged about tl;drLegal, an online resource to help us choose a suitable open-source licence. In the same spirit, the Institute of Formal and Applied Linguistics at Charles University in Prague provides the Lindat license selector to help select open licences for both software and data.

Through a short set of questions, the Lindat license selector can guide you to a licence that meets your software and data sharing requirements while satisfying any existing constraints on the software or data you have exploited.
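As a rough illustration of how a question-driven selector narrows the choice, here is a simplified sketch of my own; the questions and the licences it suggests are invented and are not the Lindat license selector's actual decision logic.

    # A toy question-driven licence selector. Questions, ordering and the
    # suggested licences are invented for illustration only; the Lindat
    # license selector uses its own, far more complete decision tree.
    def suggest_licence(is_software: bool, allow_commercial_use: bool,
                        require_share_alike: bool) -> str:
        if is_software:
            if require_share_alike:
                return "GPL-3.0"        # copyleft: derivatives stay open
            return "Apache-2.0"         # permissive, with a patent grant
        # Data and other non-software outputs
        if not allow_commercial_use:
            return "CC BY-NC 4.0"       # attribution, non-commercial
        if require_share_alike:
            return "CC BY-SA 4.0"       # attribution, share-alike
        return "CC BY 4.0"              # attribution only

    print(suggest_licence(is_software=True, allow_commercial_use=True,
                          require_share_alike=False))   # Apache-2.0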

The World beneath our feet

By Kristian Strutt, Experimental Officer at the University of Southampton, and Dean Goodman, Geophysicist at the Geophysical Archaeometry Laboratory, UC Santa Barbara.

This article is part of our series: a day in the software life, in which we ask researchers from all disciplines to discuss the tools that make their research possible.

Archaeological practice in the field seems so down to earth. The daily routine of excavation, recording of stratigraphy, finds and contexts, and understanding the different formation processes – it is what we are, and what we do. 

However, it is easy to overlook the scientific aspects of our work that underpin how archaeology develops its understanding of past human activity.

Supercomputer driving tests

Motorway at night

By Mike Jackson, Software Architect, Andrew Turner, ARCHER Computational Science and Engineering Support Team Leader, and Clair Barrass, ARCHER training administrator.

In 2013, the DiRAC consortium rolled out the DiRAC driving licence, a software skills aptitude test for researchers wanting to use DiRAC's high performance computing resources.

Now, ARCHER, the UK National Supercomputing Service, is to roll out an ARCHER driving test. Despite their similar names, these tests differ in nature, intent, scale and reward. In this post we compare and contrast these two supercomputer tests.

Software Carpentry for the NHS

By Aleksandra Pawlik, Training Leader.

Last month saw us run a special Software Carpentry course for students undertaking the MSc in Clinical Bioinformatics at the University of Manchester. The MSc combines an academic curriculum with a work-based programme.

The students are already qualified professionals based at various clinical units throughout the UK, with teaching taking place during short, intense training sessions.

The instructors at the Software Carpentry workshop were the Institute’s Aleksandra Nenadic, who taught for the first time, and myself. We were also supported by Mike Cornell and course leader Professor Andy Brass, who acted as helpers.

Observations from the first UK Data Carpentry workshop

By Ian Dunlop, Software Engineer, University of Manchester

Last week I had the good fortune to be a helper at the first Data Carpentry workshop run at the University of Manchester.

The workshop was supported by ELIXIR UK. The instructors were Alejandra Gonzalez-Beltran from the Oxford e-Research Centre (OeRC), as well as the Institute's Shoaib Sufi and Aleksandra Pawlik.

The workshop's helpers were Aleksandra Nenadic, Christian Brenninkmeijer and myself, all based at the University of Manchester. Here are some thoughts on it all.

It's impossible to conduct research without software, say 7 out of 10 UK researchers

By Simon Hettrick, Deputy Director.

No one knows how much software is used in research. Look around any lab and you’ll see software – both standard and bespoke – being used by researchers of all disciplines and seniorities. Software is clearly fundamental to research, but we can’t prove this without evidence. And this lack of evidence is why we ran a survey of researchers at 15 Russell Group universities to find out about their software use and background.

Headline figures

  • 92% of academics use research software
  • 69% say that their research would not be practical without it
  • 56% develop their own software (worryingly, 21% of those have no training in software development)
  • 70% of male researchers develop their own software, and only 30% of female researchers do so

Data

The data collected during this survey is available for download from Zenodo ("S.J. Hettrick et al, UK Research Software Survey 2014", DOI:10.5281/zenodo.14809). It is licensed under a Creative Commons Attribution licence (attribution to The University of Edinburgh on behalf of the Software Sustainability Institute).
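If you prefer to retrieve the record programmatically, Zenodo exposes a public REST API. The sketch below is my own addition, not part of the survey write-up; it uses the record number from the DOI above, and the exact response field names may differ between Zenodo API versions.

    # Fetch the survey record's metadata from Zenodo's REST API. The record
    # number 14809 comes from the DOI 10.5281/zenodo.14809; field names are
    # those of the current API and may vary in older versions.
    import requests

    response = requests.get("https://zenodo.org/api/records/14809", timeout=30)
    response.raise_for_status()
    record = response.json()

    print(record["metadata"]["title"])                 # dataset title
    for entry in record.get("files", []):              # downloadable files
        print(entry.get("key"), entry.get("links", {}).get("self"))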