
How the Measuring the Impact of Workshops (MIW) meeting unfolded



Author(s)
Shoaib Sufi

Community Team Lead

Posted on 7 October 2016

Estimated read time: 5 min


By Shoaib Sufi, Community Lead, Software Sustainability Institute

The Measuring the Impact of Workshops (MIW) meeting took place on 20 September in sunny Oxford at the dashing Oxford e-Research Centre. It was a day of eye-opening presentations, revealing case studies, short informative talks, nuanced discussions and friendly networking, all enveloped in a promise of something more enduring. Read on to find out what on earth I am talking about!

MIW brought together people interested in better ways to evaluate the impact of their workshops: to collect data for funders, to improve future events and to show value to potential attendees. Our working definition of workshop was broad; it included workshops that explore topics (e.g. discussion or consensus-forming meetings), those for learning new skills (e.g. training workshops) and those with a focus on making things (e.g. hackathons).

After the obligatory welcome and introduction to the Software Sustainability Institute there was an excellent context-setting talk: ‘The Practice of Measuring’ by Beth Duckles, Research Assistant Professor at Portland State University. She covered the art of commensuration—how we turn concepts into values, how being part of the environment that we are measuring introduces bias, qualitative vs. quantitative research, better question design, accepting the nature of bias and controlling for it, and a case study around Software Carpentry Instructor training. There were tips on using stacked bar graphs as a good way of visualising data from Likert scales. I recommend taking a look at the video of the talk; you are sure to gain something if you are interested in better ways to mindfully measure.
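
The talk itself did not include code, but a minimal sketch of that stacked-bar approach for Likert data, using Python with matplotlib and entirely made-up counts, might look like this:

    # Minimal sketch: stacked horizontal bars for Likert-scale responses.
    # The questions and counts below are hypothetical, for illustration only.
    import numpy as np
    import matplotlib.pyplot as plt

    questions = ["Q1: confidence", "Q2: usefulness", "Q3: pacing"]
    labels = ["1 (Low)", "2", "3", "4", "5 (High)"]
    # One row per question; columns are counts for responses 1..5.
    counts = np.array([
        [4, 5, 3, 3, 1],
        [1, 3, 5, 4, 3],
        [0, 2, 4, 6, 4],
    ])

    fig, ax = plt.subplots()
    left = np.zeros(len(questions))
    for i, label in enumerate(labels):
        ax.barh(questions, counts[:, i], left=left, label=label)
        left += counts[:, i]
    ax.set_xlabel("Number of responses")
    ax.legend(title="Response")
    plt.tight_layout()
    plt.show()

Laying each question out as one bar, segmented by response level, makes it easy to compare response distributions across questions at a glance, which is what makes this format a good fit for Likert scales.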

We then moved on to case studies covering how Software Carpentry, the Institute and hackathons involving the Bodleian Library in Oxford currently measure impact. It became quite clear that we all face similar problems: measuring longer-term impact, finding the best way to measure change even in the short term, and deciding whether certain measures, such as ‘confidence’ in a particular topic, are really an accurate way to word questions.

The short talks took us through Fortran modernisation, measuring diversity, following a student cohort of Software Carpentry trainees in the NHS, and the experiences and experiments of the excellent EQUATOR Network, which works to improve the practice of publishing research amongst clinicians. We then saw how you might impact your future practice by sending yourself a postcard, how to evaluate learning through gamification, and considerations when measuring difficult things. To see the slides and videos associated with this insightful section of the programme, please take a look at the annotated agenda.

After lunch, we moved into discussion sessions: the group of 16 split into three groups and discussed a range of topics, from long-term assessment to tools and processes for measuring and questionnaire design. Each group recorded notes and recommendations and then reported back.

After the report back, we discussed the next steps: the workshop was not just a day to come, learn and discuss; it was also meant to seed and explore the issues that stand in the way of better measurement of workshop impact amongst those who run workshops, so that we can help the wider community.

So how are we to do this? The workshop's participants agreed to come together to work on a report of best practice in measuring impact, to aid those who are planning and running workshops. We make no claim to be definitive. With representation from the Institute, ELIXIR, the EQUATOR Network, the EGI Foundation, Software Carpentry and the Numerical Algorithms Group, we are confident that the problems and concerns we share are likely shared by other groups too. We therefore believe that sharing our practices and lessons learned will benefit those running workshops.

The MIW group will be working on the report over the coming months, and we aim to publish a public draft for comment by January 2017 to give the wider community a chance to provide input.

Measuring MIW

Before the workshop we asked those attending:

“How confident are you in your ability to measure the impact of workshops (these include training workshops and Hackathons) you organise?”

We had 16 responses; the average was 2.6 on a scale of 1 (low) to 5 (high).

Even though there was much discussion at the workshop about how imprecise the word ‘confidence’ was in this context, we asked the same question after the workshop and received 13 responses. The average was now 3.7 on the same scale. We hope this is a sign that participants did increase their ability to measure the impact of workshops, and that this is reflected in the advice being put together by those who attended (the MIW working group) for the wider community.
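
For the curious, the comparison above is just a difference of sample means. A minimal Python sketch, using hypothetical response lists since the individual scores were not published:

    # Sketch of the pre/post comparison; these Likert scores (1-5) are
    # hypothetical placeholders, not the actual MIW survey responses.
    from statistics import mean

    pre = [3, 2, 2, 4, 3, 1, 3, 2]   # hypothetical pre-workshop responses
    post = [4, 3, 4, 5, 3, 4, 3]     # hypothetical post-workshop responses

    print(f"Before: n={len(pre)}, mean={mean(pre):.1f}")
    print(f"After:  n={len(post)}, mean={mean(post):.1f}")
    print(f"Shift:  {mean(post) - mean(pre):+.1f}")

Note that the before and after groups differed in size (16 vs. 13 respondents), so the shift in means should be read as indicative rather than a controlled before/after measurement.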
