
Carpentries: Beyond the basics



Author(s)
Colin Sauze, SSI fellow
Jannetta Steyn, SSI fellow
Andrew Brown

Posted on 29 June 2022



Image: arrows pointing up. Photo by Jungwoo Hong on Unsplash.

By Colin Sauze, Jannetta Steyn and Andrew Brown.

This blog post is part of our Collaborations Workshop 2022 speed blog series.

How do you teach beyond The Carpentries level while keeping everyone on the same page?

It is largely thanks to the Carpentries that today I am a Research Software Engineer. Attending a Software Carpentry course at the end of my PhD validated so much of the effort I had expended over the previous three years, and provided me with a new set of tools to bolster my post-doctoral research. Looking back, I can trace my journey back to that first, successfully initialised git repository.

Andrew Brown, RSE, Queen's University Belfast

Software Carpentry courses have been formative for many RSEs around the world, but they are just the start of the journey. The Carpentries are so useful precisely because they are so basic: giving as many researchers as possible the absolute fundamentals has a far bigger impact than more specialised training that would enhance the practices of a comparative minority. However, the need for intermediate training has been identified by so many Carpentries attendees and trainers that it received the attention of one of last year's speed blogs at CW21, as well as a new intermediate Software Carpentry course piloted in 2021, the beta release of which was recently published on Zenodo.

This article is not about the need for intermediate training. Instead, it shines a spotlight on an issue raised by these new courses: how can trainers, and attendees themselves, know whether attendees are ready for the more advanced material in an intermediate course?

Approach 1: Self-assessment

To a large extent, this is the de facto approach already adopted by most trainers and attendees. The idea is that learners know their own learning needs best, and should therefore be able to make sensible decisions about whether or not they are ready for the intermediate course. However, there are several problems with this approach.

It is not clear on what basis such a self-assessment would be made. Many people may assume (rightly or wrongly) that simply having completed the basic course is sufficient qualification for attending the intermediate course. However, in practice the need for the intermediate skills often only presents itself after years of applying the basic material. Much of the intermediate material requires that the basic procedures are second nature, which often only comes after an extended period of practice. Add to this that self-assessment is inherently unreliable (non-experts tend to overestimate their own competence), and it is possible to end up with a lot of unqualified people in your intermediate course.

On the flip side, there are a lot of people who would benefit from the intermediate course, but their range of experience is too narrow for them to make an informed decision about their readiness for it. For instance, a researcher who uses tool A may not choose to attend an intermediate course if that tool is not listed in the course contents, even though tool B may be a more appropriate choice for their application.

Approach 2: Pre-workshop questionnaire

This is the approach used by many instructors to help target the course at the needs of the attendees. It can be used to prepare or tweak the material, and has the additional benefit of acting as a reflective exercise for attendees to identify their own motivations and needs. In the context of an intermediate course, it could be used to present attendees with a simple checklist to ensure they have the prerequisites. However, this is prone to the same weaknesses as straightforward self-assessment.

Approach 3: Pre-workshop quiz

This differs from the questionnaire in that, in principle, there would be correct answers to the questions, and the results could be used to judge whether or not potential attendees are ready for the intermediate course. A simple web form could pose questions based on the prerequisite material, with a threshold score required to secure a place.
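As a rough sketch of how such a quiz might be automated, the following Python snippet scores free-text answers against a key and applies a pass mark. The questions, answers and threshold here are invented for illustration; they are not actual Carpentries prerequisites.

# A minimal sketch of an automated pre-workshop screening quiz.
# The questions, answer key and pass mark are hypothetical examples.

ANSWER_KEY = {
    "Which git command stages a file for the next commit?": "git add",
    "Which shell command lists the files in the current directory?": "ls",
    "In Python, what does len('abc') evaluate to?": "3",
}

PASS_MARK = 0.8  # fraction of correct answers needed to secure a place


def score_quiz(responses):
    """Return the fraction of questions answered correctly."""
    correct = sum(
        1
        for question, answer in ANSWER_KEY.items()
        if responses.get(question, "").strip().lower() == answer
    )
    return correct / len(ANSWER_KEY)


def is_ready(responses):
    """True if the candidate's score meets the pass mark."""
    return score_quiz(responses) >= PASS_MARK

The grading logic itself is trivial to automate; the hard part, as noted below, is writing questions that genuinely probe technical readiness.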

In principle, this is the ideal solution, as it demonstrates directly whether or not someone has mastered the basics before moving on. However, it is difficult to automate and, as anyone who has attempted to set a quiz in a virtual learning environment will know, such platforms are not optimised for technical questions.

Approach 4: Summative assessments for intro courses

Carpentries courses in their current form do not contain any summative assessment (i.e. credit-bearing examinations or pass/fail metrics). This is good, insofar as many researchers might be put off by the prospect of being examined on their efforts in a training course.

However, the knock-on effect is now being felt in the intermediate courses. If there were a simple pass/fail metric for the introductory courses, it would serve as a very quick, very clear indication of attendees' suitability for intermediate courses.

The ideal solution would combine the best of both extremes, i.e. maintain the open, non-threatening vibe of the introductory courses while still providing a clear metric of students' learning.

One solution which has been effective in our experience is to set assessments in the form of coding challenges. In our own teaching we use the online IDE replit.com. For each challenge we provide a set of instructions, and perhaps some incomplete or buggy code. In the backend, the students' code is then run through some simple unit tests, which serve both as a means of providing feedback and as a pass/fail metric.
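To make that concrete, here is a minimal sketch of what such a challenge could look like. The exercise and tests are our own invented example rather than material from a Carpentries lesson, and they assume a test runner such as pytest in the backend.

# The student receives instructions plus an incomplete or buggy
# version of this function; the version below is the completed
# reference solution.

def mean(values):
    """Return the arithmetic mean of a list of numbers (0.0 if empty)."""
    if not values:
        return 0.0
    return sum(values) / len(values)


# In the backend, simple unit tests run against the student's
# submission. Each failing test doubles as feedback (showing which
# case went wrong), and all tests passing is the pass/fail metric.

def test_mean_of_several_values():
    assert mean([1, 2, 3, 4]) == 2.5


def test_mean_of_single_value():
    assert mean([7]) == 7.0


def test_mean_of_empty_list():
    assert mean([]) == 0.0

Because each test names the case it exercises, a failure report tells the student what behaviour is missing without giving away the solution.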

At this point, it is not clear that this approach would be appropriate for all of the Carpentries material. The online IDE we have used does serve Python, bash and R, but only provides unit testing for Python, for instance. Moreover, for ‘meta-skills’ like version control, it is not clear that this is a good solution. However, with the basic model of an automated, web-based assessment, and the collected efforts of the large Carpentries community, it should be possible to design a comprehensive suite of assessments that could be used in a number of different ways.

For instance, students who work through material quickly at a Carpentries course might be encouraged to try their hand at the assessments. Or they could be provided to students to attempt in their own time, or some time after their course to encourage them to revisit and retain the various skills they have learned. And, most importantly in the context of this article, instructors could request that potential attendees complete some subset of these assessments in advance of attending an intermediate course.

Conclusion

Whatever form it takes, some kind of assessment would be useful at the least for the Carpentries introductory workshops, either as a summative exercise at the conclusion of the course or as a follow-up for attendees to complete some time later. Such assessments would not only help attendees to prepare for future courses and hone their skills, but also provide a means for instructors to ensure attendees are qualified for higher-level courses. Given the huge number of courses taking place globally every year, it is clear that such assessments should be scalable, automated and, preferably, self-administered. Online solutions like simple quizzes using Kahoot, or more involved exercises with IDEs like replit.com or GitHub Classroom, may provide a means of realising that vision.
