ABC mini conference – talk from Bristol

Notes from Suzanne Collins and Suzi Wells on using the ABC cards in Bristol. This talk was given at the ABC mini conference, UCL, London, 9 March 2018. See the ABC Learning Design web pages for further resources.

Suzi: Trialling ABC as a tool in workshops

I first came across the ABC curriculum design method while browsing UCL’s digital education pages looking for ideas. It immediately appealed. My background is in structuring and building websites, and I had used paper-based storyboarding in that context.

First trial: a single unit

Colleagues were enthusiastic and we started looking for contexts to trial it. An academic approached us with a view to involving us in significantly redesigning a unit, and we suggested the ABC approach.

As a tool for discussion, and for engaging a more diverse group of people – two academics, two learning technologists, one librarian, and someone else – it worked very well. They were very engaged and all could contribute, although they couldn't agree on a single tweet.

But we didn't complete all the activities in the time. We also didn't talk to them about how it should fit into the overall development cycle, and didn't have much opportunity to follow up on what came next. To me it felt like there was less value in talking about a single unit in isolation; there would have been more benefit if we'd been working on a programme.

It was a useful tool and an enjoyable session but it wasn’t right yet.

Second trial: developing online courses

Not long after that we were asked to get involved in developing three online courses which would be promoted to our own students, as well as to the public more widely. Each course would be developed by a group of academics from a variety of different disciplines, many of whom had not worked together before.

The timescales were extremely short (by university standards). The academics involved were extremely busy with their existing work. These courses had to be innovative, transformative, cross-disciplinary, interlinked, approachable by anyone, essentially self-sustaining … and should encourage the development of transferable skills. No small ask.

Having pitched their ideas and been selected to lead or participate, the teams were assembled for an initial one day event. As part of this we ran several short sessions. We asked them to do an elevator pitch (they resolutely failed to follow the instructions on this). We also did a pre-mortem (imagine it’s a year down the line and these projects have been an absolute disaster, tell us what went wrong – very popular and a great way of surfacing problems and clearing the air).

We then ran an ABC workshop with three tables; my colleagues Roger Gardner and Mike Cameron and I ran a table each.

We modified the cards slightly to make them more platform-focused. We also added a time wheel to each week. Students would be expected to spend three hours a week in total on these courses and from conversations we’d had with the academics we knew that they were veering towards providing three hours of video a week (plus readings and activities). We wanted to focus attention on how students would spend their time.

We attempted to fit all this within an hour, because that was all the time available in the schedule.

For stimulating discussion, getting everyone to contribute, and shifting focus towards the student experience it worked well. The teams understood it and could work with it quickly. We were definitely over-ambitious about how much we could get through in an hour. Added to this, it was too early in the process and teams still had divergent or vague ideas about content (even on a big-picture scale) which couldn't be resolved in the short time available.

One interesting finding was about the value of pushing people through the process. The other two facilitators used the framework and cards but took a more freeform approach, allowing discussions to run on. I was much stricter, pushing people through the activities. At the end of the day my group were the only one who asked to take the cards away and declared that they would use it themselves. Working through all the activities seemed to help people see the value of the process (though of course that may not mean that the discussion was more valuable).

Suzanne: Using ABC throughout online course design

My experiences of using the ABC method came later in the process of developing these online courses. My colleague Hannah O’Brien and I worked intensively with the three course teams, and we turned to ABC to help us do that. When we started, there were a lot of ideas, too many in fact(!), and we tried to find ways to get those ideas somehow on to paper, so that we could all evaluate them, and work them into a course design.

We ran a series of shorter, small group ABC sessions, using the modified cards from Suzi and Roger's previous session. The courses were going to end up on the FutureLearn platform, so the course design by nature needed to work as a linear sequence of weeks of learning. In each week, we needed a series of 'activities', which were made up of different 'steps'. Anyone familiar with FutureLearn can tell you that there isn't a great deal of choice for what these steps are: a text article, a video, a discussion, a quiz, or a limited selection of external 'exercises'.
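To make that structure concrete, here is a minimal sketch of the week–activity–step hierarchy we were designing within, with the three-hour weekly time budget from the time wheel bolted on. This is an illustration only: all class and field names are invented, and this is not FutureLearn's own data model.

```python
from dataclasses import dataclass, field
from enum import Enum

# The limited set of step types FutureLearn offers, as described above.
class StepType(Enum):
    ARTICLE = "article"
    VIDEO = "video"
    DISCUSSION = "discussion"
    QUIZ = "quiz"
    EXERCISE = "exercise"

@dataclass
class Step:
    title: str
    step_type: StepType
    minutes: int  # estimated learner time, used for the time-wheel check

@dataclass
class Activity:
    title: str
    steps: list[Step] = field(default_factory=list)

@dataclass
class Week:
    number: int
    activities: list[Activity] = field(default_factory=list)

    def total_minutes(self) -> int:
        return sum(s.minutes for a in self.activities for s in a.steps)

# A week is a linear sequence of activities, each a sequence of steps.
week1 = Week(1, [Activity("Getting started", [
    Step("Welcome", StepType.VIDEO, 10),
    Step("Why this matters", StepType.ARTICLE, 15),
    Step("Introduce yourself", StepType.DISCUSSION, 20),
])])

# Students were expected to spend three hours (180 minutes) per week.
print(f"Week {week1.number}: {week1.total_minutes()} of 180 minutes used")
```

Totalling estimated minutes per week in this way is exactly the conversation the time wheel was designed to provoke: three hours of video alone would blow the whole budget.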

What the ABC sessions highlighted early on for our teams was that having lots of video and articles explaining ideas might look jazzy, but is all very similar (and not very active) in terms of learning types. We all noticed there was far too much of the turquoise ‘acquisition’ happening in courses which were designed to develop skills such as communication and self-efficacy.

To help our academics come up with alternative ideas for how students could, within the limits of FutureLearn, have a more interactive and challenging learning experience, we also created a bank of good examples, which we called our 'Activity Bank'. As we worked to try and think of ideas for collaboration or inquiry, for example, we could direct them to explore these examples, and adapt the ideas for their own purposes.

Overall, the ABC ended up being a useful tool to get everyone talking about the pedagogical choices they were making in a similar way. We could map the learning experience quickly and visually, so that we could prototype, evaluate and  iterate course designs. It also kept us all clearly focused on what the learners were doing during the course, rather than how amazingly we were presenting the materials.

Since then, I’ve found myself returning to the ABC tools and ideas regularly. The learning type ‘colours’ got quite embedded in our way of thinking and documenting learning designs. They cropped up in a graphic course design map created to demonstrate the pedagogical choices for the online courses (see below), and are now doing so again in a different context.

This new context, and the next big project for me is the Bristol Futures Optional Units. These are blended, scalable, credit bearing, multidisciplinary, investigative units, open to all students, around the Bristol Futures themes of Global Citizenship, Innovation and Enterprise and Sustainable Futures. So, no small ask, once again.

For this, the ABC cards have been tweaked again, this time to generate ideas for both online and face-to-face course elements, to allow for a flexible, student-choice driven learning experience. How can we provide a similar learning experience for students who might end up taking the unit in very different ways? We're in the early days of course design, but I imagine that we'll end up using the ABC workshops in various forms during the coming year!

In all, the ABC has become a bit of an ace up our sleeves. When we need teams to work more collaboratively, when we need the focus shifted back to the student, when we need to make progress rapidly and efficiently, even when we come to evaluate learning design – the ABC tools seem to provide us with a way to talk, act, design, and iterate.

Reflections on the ABC mini-conference from Suzi

On Friday 9 March my colleague Suzanne Collins and I made our way to UCL's London Knowledge Lab, round the back of Lamb's Conduit Street, to attend a mini-conference on the ABC curriculum design methodology developed by Clive Young and Nataša Perović.

It’s something we’ve been using an adapted version of at Bristol for just over a year, so it was great to see Clive and Nataša in action at the masterclass, and to hear about the great work being done at Glasgow, Canterbury Christ Church and Reading.

Some useful points from the day:

  • Glasgow have been using an online tool to make an electronic version (and have templates available)
  • Canterbury Christ Church have used PowerPoint to create an electronic copy while the workshop runs
  • Other coloured stars have been added to make things visible: places where courses engage with the education strategy; development of employability skills; other priorities (identified by the course teams)
  • Who is in the workshop is critical. Do you have students? Library staff? A critical friend?
  • It’s not just us – everybody adapts the cards (sometimes they even change the colours).

During the morning session people talked about using the cards with students, to allow them to design the course. One speaker suggested using them with evidence of BAME / gender engagement (in different types of activity), to address the way the course works for different learners. It was great to see how quickly people picked up the idea and started taking it on as their own.

Lots of potential and positivity. I look forward to seeing how the network grows.

Digital and physical spaces – notes from the reading group

Amy read – Institutional to Individual: realising the postdigital VLE? by Lawrie Phipps.

Lawrie starts this article by quoting himself – 'Digital is about people'. He believes that learning is effective when we are connected in conversations and in groups – this has been proven many times over – but that these conversations should not be confined. The 'confinement' he talks of is the attempt by unnamed institutions to restrict their teaching staff by controlling the access to and provision of alternative tools, which, Lawrie argues, don't often align with their everyday activities. He mentions two projects taking place at universities – the Personalised User Learning & Social Environments (PULSE) project at Leeds Beckett (difficult to find anything about this online) and the Aula team, who have created a 'conversational layer' to run alongside a VLE and provide an 'ecosystem for a range of other tools'.

The article moves on to discuss the emerging trend of disaggregation as an indicator of 'post digital academic practice'… I'd be interested to know what he means by this, but the article does not shed any light on it. If I were to guess at its meaning, I would think that the digital is becoming so integrated into our lives that it can no longer be considered a practice – it is seamless, and therefore doesn't need to be recognised. He reminds us to be mindful of the other emerging themes of digital spaces: control, surveillance and 'weaponised' metrics used by corporate bodies, and the use of algorithms to control our feeds.

Lawrie finishes by letting us know that 'the report' (I presume the 'Next Generation Digital Learning Environments' report, mentioned earlier in the article) is coming together nicely, and urges the reader to get in touch if they have any relevant cases of disaggregation for practical purposes.


Chrysanthi read Digital sanctuary and anonymity on campus by Sian Bayne.

The article is trying to make a case for anonymity in online social exchange in the context of higher education.

The author points out that, since part of the point of higher education is to help students own and defend their knowledge, anonymity is not usual, barring exceptions like peer review. But this works better for those with privileges than those without, and it doesn't work for every topic that a student might be interested in. In their view, anonymity offers 1. social value and 2. a way to resist digital surveillance. By looking at the use of an anonymous social media app called Yik Yak – which was popular for two or three years, but then removed its anonymity and subsequently closed – they realised that it was often used to facilitate anonymous peer support, which was very helpful to students on topics like social difficulties or isolation, relationships, health (sexual and emotional) or teaching-related issues.

Anonymity also serves to resist the ubiquitous surveillance that occurs in large part through social media, which record everything individuals do and like for their own financial benefit. But there can be online social networks where students don't need to hand over their data to be able to use them.

They argue that the absence of an app like this reduces students' access to a support group, and that the counter-argument usually put forward – that anonymous spaces facilitate abuse – is weak, considering abuse can and does happen everywhere, including on non-anonymous social media like Facebook. They are concerned about where the supportive conversations that people would previously have had anonymously are happening now, for topics like mental health or relationships. Overall, they believe universities need such anonymous spaces and should figure out how to implement them, balancing data, trust and safety.

I think the author makes good points. Regarding where the conversations are happening now, I am assuming:

1. other anonymous but not higher education specific spaces, such as reddit, which means people will get support, although from a broader population that is not coming from the same context, with all the challenges this implies.

2. non anonymous spaces, like Facebook, which means people are essentially broadcasting their issues on platforms that a. may use this information for their benefit and the student’s detriment, b. store and display the data with the user’s name for a long time, with no guarantees for who can/ cannot see it. This makes abuse easier, as well as enabling people looking up the individual (e.g. future employers) to see information they should otherwise not have access to.

3. they are not getting support, which could lead to isolation.

Overall, I do see the point of universities implementing anonymous digital spaces for their students.


Naomi read: The SCALE-UP Project: A Student-Centred Active Learning Environment for Undergraduate Programmes by Robert J. Beichner.

The author starts by describing these SCALE-UP areas as places where 'student teams are given interesting things to investigate, while their instructor roams.' Although short, this is one of the few pieces where we hear about how the actual space is designed to improve learning and collaboration. The purpose of these teaching spaces is to encourage discussion between students and their peers. By working in small groups on separate tables within the classroom, students can work on separate activities and use a shared laptop or whiteboard to research or make notes of their findings. They can then discuss their findings with other groups.

The main point of the paper revolves around the idea of social interaction between students and their teachers being the 'active ingredient' in making this approach to teaching work. Beichner talks about how students in these classes gain a better conceptual understanding than students taking traditional lecture-based classes. Studies saw a marked rise in students' confidence and problem-solving skills, as well as their teamwork and communication. There is some concern about whether this approach means less content is being delivered to the students, but Beichner argues the content is being developed and created by the students themselves.

Discussion-led learning is always going to be popular, but we need to think about the physical space too, and whether it is needed or not. The size of these classes needs to be considered as well – what can be classed as too big? Beichner's study was interesting, but not surprising, and it would have been good to know how the design of the space and tables aided the learning too.


Suzi read The Educause NGDLE and an API of one's own by Michael Feldstein (which is a rebuttal to a rebuttal of the Educause Next Generation Digital Learning Environment report)

This is a clear and interesting article discussing where learning management systems (LMSs) could or should go – as digital spaces for learning. The perspective is relatively technical, discussing the underlying architecture of the system, but the key ideas are very approachable:

LMSs could move from being one application that tries to do everything, to being more like an operating system on a mobile phone – hosting apps and managing the ways they can communicate with each other.

Lego is also used as a metaphor for this more adaptable LMS, but Feldstein discusses the tension between having fairly generic blocks that don’t build anything in particular but allow you to be very creative (Lego from my childhood), and having sets which are intended to build a particular thing but which are then less adaptable (more typical of modern Lego). I found this a harder idea to apply, though I can appreciate that just because something comes in blocks and can be taken apart, doesn’t mean it is genuinely flexible and adaptable.

Personal ownership of data is discussed – the idea of students even hosting their own work and having a personal API via which they grant the institution’s LMS (and hence teachers) access to read and provide feedback on their work (“an API of one’s own”). This seems to me an attractive idea, in a purist origins-of-the-web way. People have suggested similar approaches in various domains, social media in particular, and I don’t know of any that have worked.
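As a thought experiment only, here is a toy sketch of what such a personal API might look like: a small student-hosted service where the student issues a token granting the institution's LMS scoped permission to read work and post feedback. Every route, token and name here is invented; the article proposes the idea, not any particular implementation.

```python
# Hypothetical sketch of "an API of one's own", using Flask.
# The student hosts their own work; the LMS gets only the access granted.
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

# Tokens the student has issued, with the scopes each bearer may use.
GRANTS = {
    "token-issued-to-university-lms": {"read", "feedback"},
}

# The student's own store of coursework and the feedback received on it.
WORK = {"essay-1": {"text": "My draft essay...", "feedback": []}}

def require_scope(scope: str) -> None:
    scopes = GRANTS.get(request.headers.get("Authorization", ""), set())
    if scope not in scopes:
        abort(403)  # the student has not granted this access

@app.route("/work/<item_id>")
def read_work(item_id):
    require_scope("read")  # the LMS may read because the student said so
    return jsonify(WORK[item_id]["text"])

@app.route("/work/<item_id>/feedback", methods=["POST"])
def leave_feedback(item_id):
    require_scope("feedback")  # teachers comment via the LMS's token
    WORK[item_id]["feedback"].append(request.get_json()["comment"])
    return "", 204

if __name__ == "__main__":
    app.run()
```

The point of the sketch is the direction of control: the data lives with the student, and the institution is just another client whose access can be revoked.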


Suzanne read Semiotic Social Spaces and Affinity Spaces: From The Age of Mythology to Today's Schools by James Paul Gee. The premise of this text is that we should reconsider the idea of a community of practice: to think about it as related to the space in which people interact (and in what way), rather than membership of the community (particularly membership given to people by others, or through arbitrary groupings). Gee argues that thinking about community in this way is more useful, as membership means so many different things to different people, so trying to decide who is 'in' or 'out' of a group is problematic.

He explains his 'alternative' to the 'community of practice' – the 'affinity space' – in quite a lot of detail, using the analogy of a real-time computer game as an example, which I won't try to explain fully here. However, some key ideas are that there needs to be some kind of content, generated by the community around a common endeavour. The people who interact with this content do so with an agreed set of 'signs' with their own particular 'grammar' or rules. This grammar can be internal (signs decided on within the group) or external (eg the way that people's beliefs and identities are formed around these signs, and their relationship with them), and the external grammar can influence the internal grammar.

Another interesting aspect is the idea of portals. An affinity space will have a number of ways that people can interact with it. To take the game example, the game itself could be a portal, but so could a website about game strategy, or a forum discussing the game. Importantly, the content, signs and grammar of the space can be changed by those interacting through those portals, so the content is not fixed.

The final points are that people interacting in the space are both 'expert' and 'novice', and both intensive and extensive knowledge is valued. Individuals with specific skills, or who have a great amount of knowledge about a specific thing, are as valued by the space as those who work to build a more distributed community of knowledge, and there are many different ways people can participate.

Gee's text presents quite an in-depth concept, which seems quite theoretical. However, thinking about something like the Bristol Futures themes (Global Citizenship, Innovation and Enterprise or Sustainable Futures), we discussed how it might be applied, and how it might help us to think about things like reward and recognition, or success measures, in a very different way.


Programme level assessment – notes from the reading group

Suzi read Handbook from UMass – PROGRAM-Based Review and Assessment: Tools and Techniques for Program Improvement

A really clear and useful guide to the process of setting up programme level assessment. The guide contains well-pitched explanations, along with activities, worksheets, and concrete examples for each stage of the process: understanding assessment, defining programme goals and objectives, designing the assessment, selecting assessment methods, analysing and reporting. Even the “how to use this guide” section struck me as helpful, which is unheard of.

The proviso is that your understanding of what assessment is for would need to align with theirs, or you would need to be mindful of where it doesn’t. As others do, they talk about assessment to improve, to inform, and to prove and they do also nod to external requirements (QAA, TEF, etc in our context). However, their focus is on assessment as part of the project of continual (action) research into, and improvement of, education in the context of the department’s broader mission. This is a more holistic approach that might bring in a wide range of measures including student evaluations of the units, data about attendance, and input from employers. I like this focus but it might not be what people are expecting.

During the group we discussed the idea of combining some of the ideas from this, and the approach Suzanne read about (see below). A central team would collaborate with academic staff within the department in what is essentially a research project, supporting conversations between staff on a project, bringing in the student voice and leaving them with the evidence-base and tools to drive conversations about education in their context – empowering staff.

(Side note – on reflection I’m pretty sure this is the reason this particular reading appealed to me.)

Chrysanthi read Characterising programme‐level assessment environments that support learning by Graham Gibbs & Harriet Dunbar‐Goddet.

The authors propose a methodology for characterising programme-level assessment environments, so that they can later be studied along with the students’ learning.

In a nutshell, they selected nine characteristics that are considered important either in quality assessment or for learning (e.g. variety and volume of assessment). Some of these were similar to the TESTA methodology Suzanne described. They selected three institutions that differed in terms of structure (e.g. more or less fixed, with less or more choice of modules, traditional or varied assessment methods, etc.). They selected three subject areas, the same in all institutions. They then collected data about the assessment in these and coded each characteristic into three categories: low, medium, high. Finally, they classified each characteristic for each subject in each institution according to this coding. They found that the characteristics were generally consistent within an institution, showing a cultural approach to assessment rather than a subject-related one. They also identified patterns, e.g. that assessment aligned well with goals correlates with variety in methods. While the methodology is useful, their coding of characteristics as low-medium-high is arbitrary and their sample small, so the stated quantities in the three categories are not necessarily good guidelines.
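As a toy illustration of that coding step (with invented figures and cut-offs, not the paper's), banding a characteristic such as assessment volume and comparing across institutions might look like this:

```python
# Toy illustration of coding an assessment characteristic into
# low / medium / high bands. Thresholds and data are invented, which is
# exactly the arbitrariness the notes above point out.

def band(value: float, low_cutoff: float, high_cutoff: float) -> str:
    if value <= low_cutoff:
        return "low"
    if value >= high_cutoff:
        return "high"
    return "medium"

# e.g. number of summative assessments per programme (invented numbers)
volumes = {
    ("Institution A", "History"): 35,
    ("Institution A", "Biology"): 38,
    ("Institution B", "History"): 85,
    ("Institution B", "Biology"): 92,
}

for (institution, subject), volume in volumes.items():
    print(institution, subject, band(volume, low_cutoff=40, high_cutoff=80))

# The paper's "cultural approach" finding corresponds to the bands agreeing
# across subjects within each institution, as they do in this toy data.
```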

Chrysanthi also watched a video from the same author Suzanne read about: Tansy Jessop: Improving student learning from assessment and feedback – a programme-level view (video, 30 mins).

There was a comparison of two contrasting case studies: one that seemed like a "model" assessment environment, but where the students did not put in much effort and were unclear about the goals and unhappy; and one that seemed problematic in terms of assessment, but where students knew the goals and were satisfied. The conclusion was that rather than having a teacher plan a course perfectly and transmit large amounts of feedback to each student, it might be worth encouraging students to construct their learning themselves in a "messy" context, expanding constructivism to assessment as well.

Additionally, as students are more motivated by summative assessment, one suggestion was to stage assessment so that students are required to complete some formative assessment that feeds into their summative assessment. Amy and Chris suggested that this has already started happening in some courses.

Finally, the speaker noted that making formative assessment publicly available, such as in blog posts, motivates students; that it would be better if assessment encouraged working steadily throughout the term, rather than mainly at peak times around examinations; and that feedback is important for goal clarity and overall satisfaction.

Both paper and video emphasised the wide variety in assessment characteristics between different programs. In the paper’s authors’ words, “one wonders what the variation might have been in the absence of a quality assurance system”.

The discussion went into the marking system and the importance students give to the numbers, even when they are often irrelevant to the big picture and their future job.

Amy summarised a write-up she had created after attending a Chris Rust Assessment Workshop at the University. The workshop focussed on the benefits of programme-level assessment, looking at the current problems with assessment in universities and offering practical solutions and advice on creating programme-level assessments. The workshop started by looking at curriculum sequencing – its benefits and drawbacks – and illustrated this with examples where it had been successful.

Chris then discussed 'capstone and cornerstone' modules as a model for programme-level assessment, and explained where it had been a success in other universities. He discussed the pseudo-currency of marks and looked at ways we can alter our marking systems to improve students' attitudes to assessments and feedback. He ended the session by looking at the ways you can engage students with feedback effectively, and workshop attendees shared their advice with colleagues on how they engage their students with feedback. You can find the summary here.

Suzanne read Transforming assessment through the TESTA project by Tansy Jessop (who will be the next Education Excellence speaker) and Yaz El Hakim, which briefly describes the TESTA project, the methods they use and the outcomes they have noted so far. There are also references within the text to more detailed publications on specific areas of the methods, or on specific outcomes, if you want to find out more detail.

In brief, the TESTA project started in 2009, and has now expanded to 20 universities in the UK, Australia and the Netherlands, with 70 programmes having used TESTA to develop their assessment. The article begins by giving a pretty comprehensive overview of the reasons why programme assessment is so high on the agenda, including the recognition that assessment affects student study behaviours, and that assessment demonstrates what we value in learning, so we should make sure it really is focused on the right things. There was also a discussion about how the 'modularisation' of university study has left us with a situation of very separated assessments, which makes it difficult to really see the impact of assessment practices across a programme, particularly for students who take a slower approach to learning. Ultimately the TESTA project is about getting people to talk about their practices on a 'big picture' level, identify areas which could be improved, and then work from a base of evidence to make those improvements. There is a detailed system of auditing current courses, including sampling, interviews with teaching staff and programme directors, student questionnaires, and focus groups. The information from this is then used as a catalyst for discussion and change, which will manifest differently in each programme and context.

The final paragraph of the report sums it up quite well: “The value of TESTA seems to lie in getting whole programmes to discuss evidence and work together at addressing assessment and feedback issues as a team, with their disciplinary knowledge, experience of students, and understanding of resource implications. The voice of students, corroborated by statistics and programme evidence has a powerful and particular effect on programme teams, especially as discussion usually raises awareness of how students learn best.”


Role and future of universities – notes from the reading group

Maggie read Artificial intelligence will transform universities. Here’s how – World Economic Forum
The article presents the idea that the upsurge in AI has created a need for universities to innovate and evolve. Already, the marking of student papers is becoming a thing of the past, as AI is able to assess and even "flag up" issues with ethics. Students are less able to distinguish between teacher marking and that of a "bot". Teaching is additionally being impacted, as students are able to undertake statistics courses using AI, massively reducing learning (and human teacher) time, with apparently equal learning and application outcomes. The author argues that universities will need to up their game regarding employability and indeed attractive employment (remuneration). The paper is an easy-to-read item and clearly outlines the range of benefits and subsequent issues in relation to AI. All pertinent.

Suzi read three short opinion pieces: What are universities for and how do they work? by Keith Devlin, Everything must be measured: how mimicking business taints universities by Jonathan Wolff, and Universities are broke. So let’s cut the pointless admin and get back to teaching by André Spicer.

Devlin focused largely on the role of research within maths departments. The most interesting part, for me, came at the end when he talked about universities as communities and learning occurring “primarily by way of interpersonal interaction in a community”. Even without thinking about research outputs, there is value then in having a rich and varied community with faculty who have deep love and enthusiasm for their subject.

Wolff provided a clear and compelling dissection of how current educational policy is creating adverse incentives to community-mindedness (both within and between universities) – something detrimental to the education sector, which is such a significant part of the UK economy.

Spicer provides an insight into how this feels as an academic. He talks about how "in the UK, two thirds of universities now have more administrators than they do faculty staff", and describes academics as "drowning in shit" (pointless admin).

For me, Spicer’s solutions for what universities could do to change this weren’t so compelling. If I could change one thing I would look at how we cost (or fail to cost) academic staff time. Academics can feel that they are expected to just do any amount of work they are given, or at least they often have no clear divide between work and not-work and have to constantly negotiate their time.

Amy didn't read, but watched Why mayors should rule the world – Benjamin Barber – and would highly recommend it. Our modern democracy revolves around ancient institutions – we elect leaders most of us never meet and feel like we have very little input into the democratic process. This isn't the case in cities – the leaders of cities, mayors, are seldom from anywhere other than the city they look after. They went to the local schools, they use the public transport and hospitals – they've watched their city grow. They have a vested interest in improving it. Positive changes on existential issues such as climate change and terrorism are happening in cities (he gives an example of the port of LA, which after a clean-up initiative reduced the city's overall emissions by 20%), and something can be learnt from the way that they operate. There are networks of mayors across the world, with a sense of competitiveness between them as to who can be the best city. Mayors from different cities meet up and share their practices, helping other cities implement changes using best practice, without the bureaucracy of central government slowing change down.

Suzanne watched  What are universities for? the RSA talk by Professor Stefan Collini and  Professor Paul O’Prey. The second half of the video was more focused on the way that higher tuition fees have changed the nature of the relationship between universities and students, but the introduction to the talk was much more on the topic we were discussing today. Stefan Collini began by saying that he believes universities are partially protected spaces which prioritise ‘deepening human understanding’, and that there are few if any other places where this happens. He compared them to other organisations which do research, such as R&D departments in industry, or teams working in politics, but said the difference was that universities were able to follow second order enquiries, and look at the boundaries of topics and knowledge, as they didn’t have a primary purpose of furthering one particular thing or ideal. So, although there are many benefits, such as increased GDP, from the kind of enquiry universities do (the ‘deepening of human understanding’ he started out with), that isn’t their aim or goal. He also went on to say that although we tend to see universities as primarily for the benefit of the individual students (furthering their careers, developing their own skills and knowledge) they should be seen as providing public good as well, for the reasons outlined above. In the group we discussed his basic premise, that universities are ‘protected spaces’, and decided that we aren’t sure that is really the case (especially with so much research being funded by grants from industry). However, it did lead to an interesting discussion about what we feel universities are actually for, if they aren’t what Collini outlined.


Video – notes from the reading group

Hannah read 'Motivation and Cognitive Strategies in the Choice to Attend Lectures or Watch them Online' by John N. Bassili. It was quite an in-depth study but the main points were:

  • The notion of watching lectures online has a positive reaction from those who enjoy the course and find it important, but also from those who don’t want to learn in interaction with peers and aren’t inclined to monitor their learning.
  • From the above groups, the first group is likely to watch lectures online in addition to attending them face-to-face, whereas the second group are likely to replace face-to-face interaction with online study.
  • The attitude towards watching lectures online is related to motivation (i.e. those who are motivated to do the course anyway are enthusiastic about extra learning opportunities), whereas the actual choice to watch them is related to cognitive strategies.
  • There is no demonstrable relation between online lecture capture and exam performance, but often the locus of control felt by students is marginally higher if they have the option to access lectures online.

Amy recommended Lifesaver (Flash required) as an amazing example of how interactive video can be used to teach.

Suzi read three short items which led me to think about what video is good for. Themes that came up repeatedly were:

  • People look to video to provide something more like personal interaction and (maybe for that reason) to motivate students.
  • Videos cannot be skimmed – an important (and overlooked) difference compared to text.

The first two items were case studies in the use of video to boost learning, both in the proceedings of ASCILITE 2016.

Learning through video production – an instructional strategy for promoting active learning in a biology course, Jinlu Wu, National University of Singapore. Aim: enhance intrinsic motivation by ensuring autonomy, competence, and relatedness. Student video project in (theoretically) cross-disciplinary teams. Select a cell / aspect of a cell, build a 3D model, make a short video using the model and other materials, write a report on the scientific content and rationale for the video production. Students did well, enjoyed it, felt they were learning and seem to have learnt more. Interesting points:

  • Students spent much longer on it than they were required to
  • Nearly 400 students on the module (I would like to have heard more about how they handled the marking)

Video-based feedback: path toward student-centred learning, Cedomir Gladovic, Holmesglen Institute. Aim: increase the student's own motivation and enhance the possibility for self-assessment and reflection. They want to promote the idea of feedback as a conversation. The tutor talks over the student's online submission (main image) with webcam (corner image). Students like it, but a drawback is that they can't skim feedback. Interesting points:

  • How would tutors feel about this?
  • Has anyone compared webcam / no webcam?
  • Suggested video length <5 mins if viewed on smartphone, <10 mins if viewed on monitor

Here's a simple way to boost your learning from videos: the "prequestion" looks at whether students remember more – both on the specific questions asked and more generally – when they are given prequestions before a short video. The answer seems to be yes on both counts. The authors thought that prequestions were particularly useful for short videos because students can't easily skim through to just those topics.

Roger read "Using video in pedagogy", an article from the Columbia University Center for Teaching and Learning.

The article primarily focuses on the use of video as a tool for teacher reflection. The lecturer in question teaches Russian and was being observed. As she teaches in the target language, which her observer didn't speak, her original motivation was to make the recording and then talk the observer through what was happening. In actual fact she discovered additional benefits she had not envisaged. For example, she was able to quantify how much time she was speaking compared to the students (an important objective being to get students speaking as much as possible in the target language, and the teacher less). Secondly, she could analyse and reflect on student use of the vocabulary and structures they had been taught. Thirdly, it helped her to reflect on her own "quirks and mannerisms" and how these affected students. Finally, the video provided evidence that actually contradicted her impressions of how an activity had gone: at the time she had felt it didn't go well, but on reviewing the video afterwards she saw that it had been effective.


Evidence – notes from the reading group

Suzi read Real geek: Measuring indirect beneficiaries – attempting to square the circle? from the Oxfam Policy & Practice blog. I was interested in the parallels with our work:

  • They seek to measure indirect beneficiaries of their work
  • Evaluation is used to improve programme quality (rather than organisational accountability)
  • In both cases there’s a pressure for “vanity metrics”
  • The approaches they talk about sound like an application of "agile" to fundamentally non-technological processes

The paper is written at an early point in the process of redesigning their measurement and evaluation of influencing. Their aim is to improve the measurement of indirect beneficiaries at different stages of the chain, adjust plans, and "test our theory of change and the assumptions we make". Evaluation is different when you are a direct service provider than when you are a "convenor, broker or catalyst". They are designing an evaluation approach that will be integrated into the day-to-day running of any initiative – there's a balance between rigour and the amount of work needed to make it happen.

The approach they are looking at – which is something that came up in a number of the papers other people read – is sampling: identifying groups of people who they expect their intervention to benefit and evaluating it for them.

Linked to from this paper was Adopt adapt expand respond – a framework for managing and measuring systemic change processes. This paper presents a set of reflection questions (and gives some suggested measures) which I can see being adapted for an educational perspective:

  • Adopt – If you left now, would partners return to their previous way of working?
  • Adapt – If you left now, would partners build upon the changes they’ve adopted without us?
  • Expand – If you left now, would pro-poor outcomes depend on too few people, firms, or organisations?
  • Respond – If you left now, would the system be supportive of the changes introduced (allowing them to be upheld, grow, and evolve)?

Roger read “Technology and the TEF” from the 2017 Higher Education Policy Institute (HEPI)  report “Rebooting learning for the digital age: What next for technology-enhanced higher education?”.

This looks at how TEL can support the three TEF components, which evidence teaching excellence.

For the first TEF component, teaching quality, the report highlights the potential of TEL in increasing active learning, employability especially digital capabilities development, formative assessment, different forms of feedback and EMA generally, and personalisation. In terms of evidence for knowing how TEL is making an impact in these areas HEPI emphasises the role of learning analytics.

For the second component, learning environment, the report focusses on access to online resources, the role of digital technologies in disciplinary research-informed teaching, and again learning analytics as a means to provide targeted and timely support for learning. In terms of how to gather reliable evidence it mentions the JISC student digital experience tracker, a survey which is currently being used by 45 HE institutions.

For the third component, student outcomes and learning gain, the report once again highlights student digital capabilities development whilst emphasising the need to support development of digitally skilled staff to enable this. It also mentions the potential of TEL in developing authentic learning experiences, linking and networking with employers and showcasing student skills.

The final part of this section of the report covers innovation in relation to the TEF. It warns that "it would be a disaster" if the TEF stifled innovation and increased risk-averse approaches in institutions, and describes the inclusion of 'impact and effectiveness of innovative approaches, new technology or educational research' in the list of possible examples of additional evidence as a "welcome step" (see Year 2 TEF specification, Table 8).

Mike read Sue Watling's TEL-ing tales, where is the evidence of impact? and Kerry Pinny's In defence of technology. These blog posts reflect on an email thread started by Sue Watling in which she asked for evidence of the effectiveness of TEL. The evidence is needed if we are to persuade academics of the need to change practice. In response, she received lots of discussion, including what she perceived to be some highly defensive posts. The responses contained very little by way of well-researched evidence. Watling, after Jenkins, ascribes 'Cinderella status' to TEL research, which I take to mean based on stories rather than fact. She acknowledges the challenges of reward, time and space for academics engaged with TEL. She nevertheless makes a plea that we are reflective in our practice and look to gather a body of evidence we can use in support of the impact of TEL. Watling describes some fairly defensive responses to her original post (including the blog post from James Clay that Hannah read for this reading group). By contrast, Kerry Pinny's post responds to some of the defensiveness, agreeing with Watling – if we can't defend what we do with evidence, then this in itself is evidence that something is wrong.

The problem is clear; how we get the evidence is less clear. One point from Watling that I think is pertinent is that it is not just TEL research, but HE pedagogic research as a whole, that lacks evidence and has 'Cinderella status'. Is it then surprising that TEL HE research, as a subset of HE pedagogic research, reflects the lack of proof and rigour? This may in part be down to the lack of research funding. As Pinny points out, the school or academic often has little time to evaluate their work with rigour. I think it also relates to the nature of TEL as a set of tools or enablers of pedagogy, rather than a singular approach or set of approaches. You can use TEL to support a range of pedagogies, both effective and non-effective, and a variety of factors will affect its impact. Additionally, I think it relates to the way higher education works – the practice and the evidence it produces tend to be very localised, for example to a course, teacher or school. Drawing broader conclusions is much, much harder. A lot of the evidence is at best anecdotal. That said, in my experience, anecdotes (particularly from peers) can be as persuasive as research evidence in persuading colleagues to change practice (though I have no rigorous research to prove that).

Suzanne read Mandernach, J. (2015), "Assessment of Student Engagement in Higher Education: A Synthesis of Literature and Assessment Tools", International Journal of Learning, Teaching and Educational Research, Vol. 12, No. 2, pp. 1-14, June 2015.

This text was slightly tangential, as it didn't discuss the ideas behind evidence in TEL specifically, but it was a good example of an area in which we often find it difficult to find or produce meaningful evidence to support practice. The paper begins by recognising the difficulties in gauging, monitoring and assessing engagement as part of the overall learning experience, despite the fact that engagement is often discussed within HE. Mandernach goes back to the idea of 'cognitive', 'behavioural' and 'affective' criteria for assessing engagement, particularly related to Bowen's ideas that engagement happens with the learning process, the object of study, the context of study, and the human condition (or service learning). Interestingly for our current context of building MOOC-based courses, a lot of the suggestions for how these engagement types can be assessed are mainly classroom-based – for example, the teacher noticing the preparedness of the student at the start of a lesson, or the investment they put into their learning. On a MOOC platform, where there is little meaningful interaction on an individual level between the 'educator' and the learner, this clearly becomes more difficult to monitor, and self-reporting becomes increasingly important. In terms of how to go about measuring and assessing engagement, student surveys are discussed – such as the Student Engagement Questionnaire and the Student Course Engagement Questionnaire. The idea of experience sampling – where a selection of students are asked at intervals to rate their engagement at that specific time – is also discussed as a way of measuring the overall flow of engagement across a course, which may also be an interesting idea to discuss for our context.
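For a sense of what experience sampling might look like in practice, here is a small simulated sketch: prompt a random subset of learners at intervals and track the flow of engagement over the weeks. The course length, sample size and ratings are all invented for illustration.

```python
# Simulated sketch of experience sampling. In a real course the prompt
# would be a survey question ("rate your engagement right now, 1-5");
# here the responses are randomly generated stand-ins.
import random

students = [f"student{i}" for i in range(1, 201)]
weeks = range(1, 7)  # a hypothetical six-week course

ratings = []  # (week, student, engagement rating 1-5)
for week in weeks:
    for student in random.sample(students, k=20):  # fresh sample each week
        ratings.append((week, student, random.randint(1, 5)))

# The "flow" of engagement across the course is the per-week average.
for week in weeks:
    scores = [r for w, _, r in ratings if w == week]
    print(f"week {week}: mean engagement {sum(scores) / len(scores):.2f}")
```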


Horizon Report 2017 – notes from the reading group

This time we looked at the NMC Horizon Report 2017 and related documents.

Amy read ‘Redesigning learning spaces’ from the 2017 Horizon report and ‘Makerspaces’ from the ELI ‘Things you should know about’ series. The key points were:

  • Educational institutions are increasingly adopting flexible and inclusive learning design and this is extending to physical environments.
  • Flexible workspaces, with access to peers from other disciplines, experts and equipment, reflect real-world work and create social environments that foster cross-discipline problem-solving.
  • For projects created in flexible environments to be successful, the facilitator must allow the learners to shape the experience – much of the value of a makerspace lies in its informal nature, with learning being shaped by the participants rather than the facilitator.
  • There are endless opportunities for collaboration with makerspaces, but investment – both financial and strategic – is essential for successful projects across faculties.

Roger read 'Blended learning designs'. This is listed as a short-term key trend, driving ed tech adoption in HE for the next one to two years. It claims that the potential of blended learning is now well understood, that blended approaches are widely used, and that the focus has moved to evaluating impact on learners. It suggests that the most effective uses of blended learning are for contexts where students can do something which they would not otherwise be able to do, for example via VR. In spite of highlighting this change in focus, it provides little detailed evidence of impact in the examples mentioned.

Suzi read the sections on Managing Knowledge Obsolescence (which seemed to be around how we in education can make the most of / cope with rapidly changing technology) and Rethinking the Role of Educators. Interesting points were:

  • Educators as guides / curators / facilitators of learning experiences
  • Educators need time, money & space to experiment with new technology (and gather evidence), as well as people with the skills and time to support them
  • HE leaders need to engage with the developing technology landscape and build infrastructure that supports technology transitions

Nothing very new, and I wasn’t sure about the rather business-led examples of how the role of university might change, but still a good provocation for discussion.

Hannah read ‘Achievement Gap’ from the 2017 Horizon Report. It aimed to talk about the disparity in enrolment and performance between student groups, as defined by socioeconomic status, race, ethnicity and gender, but only really tackled some of these issues. The main points were:

  • Overwhelming tuition costs and a ‘one size fits all’ approach of Higher Education is a problem, with more flexible degree plans being needed. The challenge here is catering to all learners’ needs, as well as aligning programmes with deeper learning outcomes and 21st century problems.
  • A degree is becoming increasingly vital for liveable wages across the world, with even manufacturing jobs increasingly requiring post secondary training and skills.
  • There has been a growth in non-traditional students, with online or blended offerings and personalised and adaptive learning strategies being implemented as a retention solution.
  • Some universities across the world have taken steps towards developing more inclusive offerings: Western Governors University offers competency-based education where students develop concrete skills relating to specific career goals; Norway, Germany and Slovenia offer free post-secondary education; under the Obama administration, students could secure financial aid three months earlier to help them make smarter enrolment decisions; in Scandinavian countries, there is a lot of flexibility in transferring to different subjects, something that isn't widely accepted in the UK but could help to limit the drop-out rate.
  • Some countries are offering different routes to enrolment in higher education. An example of this is Australia's Fast Forward programme, which provides early information to prospective students about alternative pathways to tertiary education, even if they have not performed well in high school. Some of these alternative pathways include online courses to bridge gaps in knowledge, as well as the submission of e-portfolios to demonstrate skills gained through non-formal learning.
  • One thing I thought the article didn’t touch on was the issue of home learning spaces for students. Some students will share rooms and IT equipment, or may not have access to the same facilities as others.

Flexible and inclusive learning – notes from reading group

Amy read: Why are we still using LMSs, which discusses the reasons LMS systems have not advanced dramatically since they came onto the market. The key points were:

  • There are five core features that all major LMS systems have: they’re convenient; they offer a one-stop-shop for all University materials, assessments and grades; they have many accessibility features built in; they’re well integrated into other institutional systems and there is a great deal of training available for them.
  • Until a new system with all these features comes onto the market, the status quo with regard to LMS systems will prevail.
  • Instructors should look to use their current LMS system in a more creative way.

Mike read: Flexible pedagogies: technology-enhanced learning HEA report

This paper provided a useful overview of flexible learning, including explanations of what it might mean, and dilemmas and challenges for HE. The paper is interesting to consider alongside Bristol's Flexible and Inclusive Learning paper. For the authors, flexible learning gives students choice in the pace, place and mode of their learning. This is achieved through pedagogical practice, with TEL positioned as an enabler or way of enhancing this practice. Pace is about schedules (faster or slower), or allowing students to work at their own pace. Place is about physical location and distance. Mode includes notions of distance and blended learning.

Pedagogies covered include personalised learning and flexible learning (suggesting it is similar to adaptive learning, in which materials adapt to individual progress), gamification, and fully online and blended approaches. The paper considers the implications of offering choice to students, for example over the kind of assessment. An idealised form would offer a very individualised choice of learning pathway, but with huge implications for stakeholders.

In the reading group, we had an interesting discussion as to whether students are always best equipped to understand and make such choices. We also wondered how we would resource the provision of numerous pathways. Other risks include the potential for information overload for students, and ensuring systems and approaches work with quality assurance processes. Barriers include interpretations of KIS data which favour contact time.

We would have a long way to go in achieving the idealised model set out here. Would a first step be to change the overall diet of learning approaches across a programme, rather than offering choice at each stage? Could we then introduce some elements of flexibility in certain areas of programmes, perhaps a bit like the Medical School’s Self Selected Components, giving students choice in a more manageable space within the curriculum?

Suzanne read:  Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. The main points were:

  • Self-regulated learning is something which happens naturally in HE, as students will assess their own work and give themselves feedback internally. This paper suggests this should be harnessed and built on in feedback strategies in HE.
  • Shift in focus to see students having a proactive rather than reactive role in feedback practices, particularly focused on deciphering, negotiating and acting on feedback.
  • The paper suggests 7 principles for good feedback practice, which encourage this self-regulation: 1. clarifying what good performance is; 2. facilitating self-assessment; 3. delivering high quality feedback information; 4. encouraging dialogue; 5. encouraging self-esteem and motivation; 6. giving opportunities to close the gap between where the student is now and where they need/want to be; 7. using feedback to improve teaching.
  • For our context, this gives some food for thought in terms of the limitations of a MOOC environment for establishing effective feedback practices (dialogue with every student is difficult if not impossible, for example), and emphasises the importance of scaffolding or training effective peer and self-assessment, to give students the confidence and ability to ‘close the gap’ for themselves.

Suzanne also read: Professional Development Through MOOCs in Higher Education Institutions: Challenges and Opportunities for PhD Students Working as Mentors

This paper reports on a small-scale (20 participants) qualitative study into the challenges and opportunities for PhD students acting as mentors in the FutureLearn MOOC environment. It follows on from the above reading: using mentors can be one way to support students’ peer and self-assessment practices, which is why I read the two papers in parallel. However, the paper also focuses on the learning experiences of the PhD students themselves as they perform the mentor role, the role giving these students a different (potentially more flexible and inclusive) platform to develop skills.

Overall, the paper is positive about the experiences of PhD MOOC mentors, claiming that they can develop skills in various areas, including:

  • confidence in sharing their knowledge and interacting with people outside their own field (especially for early career researchers, who may not yet have established themselves as ‘expert’ in their field);
  • teaching skills, particularly related to online communication, the need for empathy and patience, and tailoring the message to a diverse audience of learners. It’s noteworthy that many of these mentors had little or no teaching experience, so the role gave them teaching experience generally, not just experience of teaching in MOOCs;
  • subject knowledge, as having to discuss with the diverse learning community (of expert and not expert learners) helped them consolidate their understanding, and in some cases pushed them to find answers to questions they had not previously considered.

Roger read: Authentic and Differentiated Assessments

This is a guide aimed at school teachers. Differentiated assessment involves students being active in setting goals, including choosing the topic and how and when they want to be evaluated. It also involves teachers continuously assessing student readiness, in order to provide support and to judge when students are ready to move on in the curriculum.

The first part of the article describes authentic assessment, which it defines as asking students to apply knowledge and skills to real-world settings; this can be a powerful motivator for them. A four-stage process for designing authentic assessment is outlined.

The second part of the article focuses on differentiated assessment. We all have different strengths and weaknesses in how we best demonstrate our learning, and multiple and varied assessments can help accommodate these. The article stresses that choice is key, of learning activity as well as of assessment, and that project- and problem-based learning are particularly useful. Learning activities should always consider multiple intelligences and the range of students’ preferred ways of learning, and there should be opportunities for both individual and group tasks, as some students will perform better in one than the other.

Hannah read: Research into digital inclusion and learning helps empower people to make the best choices, a blog post by the Association for Learning Technology about bridging the gap between digital inclusion and learning technology. The main points were:

  • Britain is failing to give everyone fair and equal access to learning technology, because not enough research is being done into the best ways of tackling digital exclusion
  • Learning technology will become a much more inclusive way of learning once the digital divide is addressed
  • More must be done to ensure effective intervention; lack of human support and lack of access to digital technology are cited as the two main barriers to using learning technology in a meaningful way
  • We need to broaden understanding of the opportunities for inclusion, look at how to overcome obstacles, develop a better understanding of the experiences of those who are excluded, and understand why technological opportunities are often not taken up

Suzi read: Disabled Students in higher education: Experiences and outcomes, which discusses the experiences of disabled students, based on surveys, analysis of results, interviews, and case studies at four relatively varied UK universities. Key points for me were:

  • Disability covers a wide range of types and severities of issue, but adjustments tend to be formulaic, particularly for assessment (typically 25% extra time in exams)
  • Disability is a problematic label: not all students who could identify as disabled will choose to do so
  • Universal design is the approach the authors would advocate where possible

Suzi also read: Creating Better Tests for Everyone Through Universally Designed Assessments, a paper written in the context of large-scale tests for US school students, which nonetheless contains interesting background and useful (if not earth-shattering) advice. The key messages are:

  • Be clear about what you want to assess
  • Only assess that – be careful not to include barriers (cognitive, sensory, emotional, or physical) in the assessment that mean other things are being measured
  • Apply basic good design and writing approaches – clear instructions, legible fonts, plain language


Play – notes from a PM Studio lunchtime talk

I attended October’s lunchtime talk at the Pervasive Media Studio by Simon Johnson of Free Ice Cream and igfest, about working in real-world games. His big hit was the city-based zombie chase game 2.8 Hours Later (these games carried more social commentary than I had realised at the time; the second version was about becoming an asylum seeker).

I loved his thought that playing a game is like running on a different operating system. And that it can help you see features of the existing operating system – say of a city – that would not otherwise be apparent. Creating a game was also described not as storytelling, but as creating a context in which people build their own stories.

This seems very relevant to thinking about teaching in the digital era, where dissemination of information is no longer such a key concern. We should be designing experiences which shake people out of their set patterns of thinking and allow them to explore new ones, helping them to try out new operating systems, creating rich environments in which they build their own stories.

Misc details

  • Simon emphasised the idea of fun – not “serious gaming”. Similar to Nic Whitton’s emphasis on playfulness?
  • His Cargo game, a city escape game focussed on how to build/undermine trust in a group, was designed to create a chaotic environment to test disaster relief principles.
  • igfest – a festival of interesting games that ran for several years, with more frequent meet-ups too, I think. Through its regular events it gave game developers a play-testing community, and some regular participants even became game designers.
  • Hat game – a GPS-tracked bowler hat; whoever kept it longest would win (but there were unintended consequences… the hat-wearer ran away – the prize was too big)
  • theTweeture – such an advanced bot that people thought it was a puppet
  • A couple of the games were intended to help people conceptualise complex ideas: a hoop-rolling game set in a quantum computer, and Calibration, which puts the scale of the solar system in human terms.
  • He’s developing a conference-based game for the ODI to be played at the UN conference in March.