Notes from the recent GW4 meeting at Cardiff University

Last Friday, Han, Mike and I attended a GW4 event in Cardiff, where the main topics on the agenda were students as collaborators, and shared projects we could embark on together.

The day started with brief updates from each team:

Cardiff…

  • Have a new space for learning and teaching experimentation
  • Are working on a Curriculum Design Toolkit, as part of which they are looking at unbundling content to work in different ways for different markets
  • Have a Learning Hub Showcase (http://www.cardiff.ac.uk/learning-hub)
  • Have funding that students can bid for, for teaching projects
  • Ran summer intern projects – one of which focused on advice on how to use lecture capture
  • Had a blank course rollover with a new minimum standard

Bath…

  • Have a major curriculum design project upcoming
  • Are moving towards programme-level assessment rather than modular
  • Have new funding for staff and students to work together
  • Are championing a flipped learning approach
  • Have a placement student
  • Are working on the ‘Litebox’ project (http://blogs.bath.ac.uk/litebox/), in which students create an environment where the whole University can learn about new and existing technologies for learning and teaching, and share their experiences of them
  • Are expanding their distance learning postgraduate numbers


During the second part of the day, we talked about students as producers. Splitting into small groups, we shared our experiences, challenges and tips for working with students on both accredited and unaccredited courses. It was widely accepted by the group that collaboration with students is mutually beneficial. Students are able to move from being passive consumers of knowledge to genuine partners in their education, and we as professionals have a lot to gain from the expertise, connections with other students, and knowledge of life at the university that students can offer us.

The experience of working with students at Bristol, Bath and Cardiff has been positive but limited. All three universities have hired student interns in the past, but would like to do more to make ‘students as producers’ a key underpinning concept in accredited courses. Students’ expectations of ‘what university learning will be like’ put a dent in their willingness to engage with accredited collaborative projects. We discussed how students may see universities as institutions of teaching rather than of learning, particularly as tuition fees have risen and students expect more teacher-to-student time for their money. Our group talked about introducing innovative learning techniques earlier in students’ degree programmes, and even on pre-university open days, in order to shift students’ expectations from traditional lecture-based learning towards problem-based modules and more.

For the last part of the day, we talked about projects that the GW4 could collaborate on, and contributed to this padlet board. We all shared ideas, then each of us cast three votes for the projects we’d like to see most. A common theme was the sharing of knowledge and expertise in areas like FutureLearn, ePortfolios and case studies. We also talked about working together to put pressure on companies or to bid for shared funding in order to improve practice in ways that wouldn’t be possible for a single institution.

Bath have volunteered to host the next meeting in February or March, in which we’ll talk about ePortfolios and assessment.

Programme level assessment – notes from the reading group

Suzi read a handbook from UMass – Program-Based Review and Assessment: Tools and Techniques for Program Improvement

A really clear and useful guide to the process of setting up programme level assessment. The guide contains well-pitched explanations, along with activities, worksheets, and concrete examples for each stage of the process: understanding assessment, defining programme goals and objectives, designing the assessment, selecting assessment methods, analysing and reporting. Even the “how to use this guide” section struck me as helpful, which is unheard of.

The proviso is that your understanding of what assessment is for would need to align with theirs, or you would need to be mindful of where it doesn’t. Like others, they talk about assessment to improve, to inform and to prove, and they also nod to external requirements (QAA, TEF, etc. in our context). However, their focus is on assessment as part of a project of continual (action) research into, and improvement of, education in the context of the department’s broader mission. This is a more holistic approach that might bring in a wide range of measures, including student evaluations of units, attendance data, and input from employers. I like this focus, but it might not be what people are expecting.

During the group we discussed the idea of combining some of the ideas from this with the approach Suzanne read about (see below). A central team would collaborate with academic staff within the department in what is essentially a research project – supporting conversations between staff, bringing in the student voice, and leaving the department with the evidence base and tools to drive conversations about education in their own context, thereby empowering staff.

(Side note – on reflection I’m pretty sure this is the reason this particular reading appealed to me.)

Chrysanthi read Characterising programme‐level assessment environments that support learning by Graham Gibbs & Harriet Dunbar‐Goddet.

The authors propose a methodology for characterising programme-level assessment environments, so that they can later be studied along with the students’ learning.

In a nutshell, they selected nine characteristics considered important either for quality assessment or for learning (e.g. variety and volume of assessment); some of these were similar to the TESTA methodology Suzanne described. They selected three institutions that differed in terms of structure (e.g. more or less fixed, with more or less choice of modules, traditional or varied assessment methods, etc.), and three subject areas, the same in all institutions. They then collected data about the assessment in these, and coded each characteristic into one of three categories: low, medium or high. Finally, they classified each characteristic for each subject at each institution according to this coding. They found that the characteristics were generally consistent within an institution, suggesting a cultural approach to assessment rather than a subject-related one. They also identified patterns, e.g. that assessment aligned well with goals correlates with variety in methods. While the methodology is useful, the coding of characteristics as low/medium/high is arbitrary and the sample small, so the stated quantities in the three categories are not necessarily good guidelines.

Chrysanthi also watched a video from the same author Suzanne read about: Tansy Jessop: Improving student learning from assessment and feedback – a programme-level view (video, 30 mins).

The video compared two contrasting case studies: one that seemed like a “model” assessment environment, but where the students did not put in much effort, were unclear about the goals and were unhappy; and one that seemed problematic in terms of assessment, but where students knew the goals and were satisfied. The conclusion was that, rather than having a teacher plan a course perfectly and transmit large amounts of feedback to each student, it might be worth encouraging students to construct their own understanding in a “messy” context, extending constructivism to assessment as well.

Additionally, since students are more motivated by summative assessment, the speaker suggested staged assessment, in which students are required to complete some formative assessment that feeds into their summative assessment. Amy & Chris noted that this has already started happening in some courses.

Finally, the speaker noted that making formative assessment publicly available, such as in blog posts, motivates students; that it would be better if assessment encouraged working steadily throughout the term, rather than mainly at peak times around examinations; and that feedback is important for goal clarity and overall satisfaction.

Both the paper and the video emphasised the wide variety in assessment characteristics between different programmes. In the paper’s authors’ words, “one wonders what the variation might have been in the absence of a quality assurance system”.

The discussion then turned to the marking system and the importance students attach to the numbers, even though these are often irrelevant to the big picture and to their future jobs.

Amy shared a summary she had created after attending a Chris Rust assessment workshop at the University. The workshop focussed on the benefits of programme-level assessment, looking at the current problems with assessment in universities and offering practical solutions and advice on creating programme-level assessments. The workshop started by looking at curriculum sequencing – its benefits and drawbacks – and illustrated this with examples where it had been successful.

Chris then discussed ‘capstone and cornerstone’ modules as a model for programme-level assessment, and explained where this had been a success at other universities. He discussed the pseudo-currency of marks and looked at ways we can alter our marking systems to improve students’ attitudes to assessment and feedback. He ended the session by looking at ways to engage students with feedback effectively, and workshop attendees shared their advice with colleagues on how they engage their students with feedback. You can find the summary here.

Suzanne read Transforming assessment through the TESTA project by Tansy Jessop (who will be the next Education Excellence speaker) and Yaz El Hakim, which briefly describes the TESTA project, the methods it uses and the outcomes noted so far. There are also references within the text to more detailed publications on specific areas of the methods, or on specific outcomes, if you want more detail.

In brief, the TESTA project started in 2009, and has now expanded to 20 universities in the UK, Australia and the Netherlands, with 70 programmes having used TESTA to develop their assessment. The article begins by giving a pretty comprehensive overview of the reasons why programme assessment is so high on the agenda, including the recognition that assessment affects student study behaviours, and that assessment demonstrates what we value in learning, so we should make sure it really is focused on the right things. There is also a discussion of how the ‘modularisation’ of university study has left us with very separated assessments, which make it difficult to really see the impact of assessment practices across a programme, particularly for students who take a slower approach to learning. Ultimately the TESTA project is about getting people to talk about their practices at a ‘big picture’ level, identify areas which could be improved, and then work from a base of evidence to make those improvements. There is a detailed system of auditing current courses, including sampling, interviews with teaching staff and programme directors, student questionnaires, and focus groups. The information from this is then used as a catalyst for discussion and change, which will manifest differently in each programme and context.

The final paragraph of the report sums it up quite well: “The value of TESTA seems to lie in getting whole programmes to discuss evidence and work together at addressing assessment and feedback issues as a team, with their disciplinary knowledge, experience of students, and understanding of resource implications. The voice of students, corroborated by statistics and programme evidence has a powerful and particular effect on programme teams, especially as discussion usually raises awareness of how students learn best.”

Games and Simulation enhanced Learning (GSeL) Conference

Last Thursday I caught the 8.44am CrossCountry service to Plymouth to attend the first Games and Simulation enhanced Learning (GSeL) Conference. GSeL is a newly formed interdisciplinary research theme group, part of Plymouth University’s Pedagogic Research Institute and Observatory (PedRIO).

VR Hackathon

Plymouth University, 2nd November 2017

The main event was on Friday the 3rd, but a session billed for the previous day – ‘Hackathon: VR for Non-programmers’ – sounded promising. So, I ventured down a day early to channel my inner geek. I’ve got a basic (but rusty) understanding of coding, so I hoped the ‘non-programmers’ tagline was true. It turns out the session was well designed for those with little to no experience. Michael Straeubig expertly guided around 15 attendees of differing skill levels through the process of creating a simple VR equivalent of ‘Hello World’ over the course of two hours.

The Hackathon was a hands-on workshop: we downloaded A-Frame framework template project files from Michael’s GitHub and installed the open-source editor Atom (atom.io) for editing and coding.

Sounds complicated? Yeah, sort of – but Michael’s laid-back-whilst-enthusiastic delivery helped fill in the gaps, and he moved at a steady pace we could all keep up with. He guided us through creating our first scene, adding various three-dimensional objects and altering their size and colour. By setting up a local server on our laptops via Atom, we were able to move beyond viewing the 3D space we’d programmed in a browser, and instead view it through a pair of budget VR goggles (Google Cardboard) on our smartphones.
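For a flavour of what we ended up with, here’s a minimal sketch of an A-Frame scene along the lines of our ‘Hello World’ – the shapes, colours and positions below are illustrative, not Michael’s actual template. Saved as an HTML file, it can be opened in any modern browser:

    <!DOCTYPE html>
    <html>
      <head>
        <!-- Load the A-Frame library from its CDN (0.7.0 was the current release at the time) -->
        <script src="https://aframe.io/releases/0.7.0/aframe.min.js"></script>
      </head>
      <body>
        <!-- a-scene sets up the camera, renderer and enter-VR button automatically -->
        <a-scene>
          <!-- Primitive shapes: position is "x y z" in metres, colours are hex values -->
          <a-box position="-1 0.5 -3" rotation="0 45 0" color="#4CC3D9"></a-box>
          <a-sphere position="0 1.25 -5" radius="1.25" color="#EF2D5E"></a-sphere>
          <a-plane position="0 0 -4" rotation="-90 0 0" width="4" height="4" color="#7BC8A4"></a-plane>
          <!-- a-sky wraps the scene in a background colour -->
          <a-sky color="#ECECEC"></a-sky>
        </a-scene>
      </body>
    </html>

Serve the file from a local server (as we did from within Atom) rather than straight from disk, tap the goggles icon A-Frame adds to the corner of the page, and the view splits into the side-by-side stereo pair that Google Cardboard expects.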

It was a great primer for dipping toes/feet/legs into creating simple VR spaces from scratch using free tools. The A-Frame project files supplied had additional examples of how to extend and develop the scene. I don’t mind admitting I spent a large part of the rest of the day tinkering. As Michael drily observed during the session, we’d become ‘cool coders’.

Games and Simulation enhanced Learning (GSeL) Conference

Plymouth University, 3rd November 2017

A day of talks and workshops based around the use of games and simulations, both real-life and digital. Some personal highlights included:

Professor Nicola Whitton’s keynote, ‘Play matters: exploring the pedagogic value of games and simulation’, which tapped eloquently into themes like failing without consequence and motivation/engagement through playing games.

Matthew Barr (University of Glasgow) ‘Playing games at University: the role of video games in higher education and beyond’ – a great talk about his work with ‘gaming groups’ and the benefits cooperative video game playing brought students. “If I ruled the world, every student would play Portal 2”.

James Moss (Imperial College) ‘Gamification: leveraging elements of game design in medical education’ – some brilliant examples of using scenario-based games in medical education. ‘Stabbed to Stable’ involved scenario/persona-based learning, a horizontal whiteboard, post-it notes and pens, with students clustered around trying to map out the processes (checks/actions) they needed to go through, whilst James periodically guided them or threw related spanners into the works. An overhead time-lapse video showed a dynamic session in action. A second game involved teams becoming the ‘medical officer’ helping a team of characters climb Everest. This simulation included mountain noise recordings (incrementally getting louder), random wildcards presenting challenges, lighting changes, and James squirting participants in the face with water.

Michael Parsons (University of South Wales) ‘Keeping it Real: Integrating Practitioners in a Public Relations Crisis Simulation’ – shared his experience of running a real-time simulation for PR students. Students attempted to handle a recreation of the infamous Carnival Triumph ‘Poop Cruise’ in the University’s Hydra Minerva Suite. The simulation used news report recordings, archived social media posts and live interaction with actors via telephone, over several ‘acts’, to simulate a PR team’s attempts to handle a particularly disastrous voyage. It all went well until the passengers were close enough to land to get mobile phone reception (and access to social networks).

The conference presented a feast of examples of using games and simulations in teaching and learning. From creating crosswords, to using digital badges to recognise achievements, to data visualisation in virtual reality, the place was abuzz with ideas. The focus on the potential of play and gaming to engage students meant the event had something for everyone, whether die-hard techie or strictly analogue.