Video – notes from the reading group

Hannah read ‘Motivation and Cognitive Strategies in the Choice to Attend Lectures or Watch them Online’ by John N. Bassili. It was quite an in-depth study but the main points were:

  • Watching lectures online gets a positive reaction from those who enjoy the course and find it important, but also from those who don’t want to learn in interaction with peers and aren’t inclined to monitor their learning.
  • Of these two groups, the first is likely to watch lectures online in addition to attending them face-to-face, whereas the second is likely to replace face-to-face interaction with online study.
  • The attitude towards watching lectures online is related to motivation (i.e. those who are motivated to do the course anyway are enthusiastic about extra learning opportunities), whereas the actual choice to watch them is related to cognitive strategies.
  • There is no demonstrable relation between online lecture capture and exam performance, but often the locus of control felt by students is marginally higher if they have the option to access lectures online.

Amy recommended Lifesaver (Flash required) as an amazing example of how interactive video can be used to teach.

Suzi read three short items which led me to think about what video is good for. Themes that came up repeatedly were:

  • People look to video to provide something more like personal interaction and (maybe for that reason) to motivate students.
  • Videos cannot be skimmed – an important (and overlooked) difference compared to text.

The first two items were case studies in the use of video to boost learning, both in the proceedings of ASCILITE 2016.

Learning through video production – an instructional strategy for promoting active learning in a biology course, Jinlu Wu, National University of Singapore. Aim: enhance intrinsic motivation by ensuring autonomy, competence, and relatedness. Student video project in (theoretically) cross-disciplinary teams. Select a cell / aspect of a cell, build a 3D model, make a short video using the model and other materials, write a report on the scientific content and rationale for the video production. Students did well, enjoyed it, felt they were learning and seem to have learnt more. Interesting points:

  • Students spent much longer on it than they were required to
  • Nearly 400 students on the module (I would like to have heard more about how they handled the marking)

Video-based feedback: path toward student-centred learning, Cedomir Gladovic, Holmesglen Institute. Aim: increase the student’s own motivation and enhance the possibility for self-assessment and reflection. They want to promote the idea of feedback as a conversation. Tutor talking over the student’s online submission (main image) with webcam (corner image). Students like it but a drawback is that they can’t skim feedback. Interesting points:

  • How would tutors feel about this?
  • Has anyone compared webcam / no webcam?
  • Suggested video length <5 mins if viewed on smartphone, <10 mins if viewed on monitor

Here’s a simple way to boost your learning from videos: the “prequestion” looks at whether students remember more – both on the specific questions asked and more generally – when they are given prequestions before a short video. The answer seems to be yes on both counts. The authors thought that prequestions were particularly useful for short videos because students can’t easily skim through to just those topics.

Roger read “Using video in pedagogy”, an article from the Columbia University Center for Teaching and Learning.

The article primarily focuses on the use of video as a tool for teacher reflection. The lecturer in question teaches Russian and was being observed. As she teaches in the target language, which her observer didn’t speak, her original motivation was to make the recording and then talk the observer through what was happening. In fact she discovered additional benefits she had not envisaged. For example, she was able to quantify how much time she spent speaking compared to the students (an important objective being to get students speaking as much as possible in the target language, and the teacher less). Secondly, she could analyse and reflect on students’ use of the vocabulary and structures they had been taught. Thirdly, it helped her to reflect on her own “quirks and mannerisms” and how these affected students. Finally, the video provided evidence that contradicted her impressions of how an activity had gone: at the time she had felt it didn’t go well, but on reviewing the video afterwards she saw that it had been effective.

Suggested reading

Evidence – notes from the reading group

Suzi read Real geek: Measuring indirect beneficiaries – attempting to square the circle? from the Oxfam Policy & Practice blog. I was interested in the parallels with our work:

  • They seek to measure indirect beneficiaries of their work
  • Evaluation is used to improve programme quality (rather than organisational accountability)
  • In both cases there’s a pressure for “vanity metrics”
  • The approaches they talk about sound like an application of “agile” to fundamentally non-technological processes

The paper is written at an early point in the process of redesigning their measurement and evaluation of influencing. Their aim is to improve the measurement of indirect beneficiaries at different stages of the chain, adjust plans, and “test our theory of change and the assumptions we make”. Evaluation is different when you are a direct service provider than when you are a “convenor, broker or catalyst”. They are designing an evaluation approach that will be integrated into the day-to-day running of any initiative – there’s a balance between rigour and the amount of work needed to make it happen.

The approach they are looking at – which is something that came up in a number of the papers other people read – is sampling: identifying groups of people who they expect their intervention to benefit and evaluating it for them.

Linked to from this paper was Adopt, Adapt, Expand, Respond – a framework for managing and measuring systemic change processes. This paper presents a set of reflection questions (and gives some suggested measures) which I can see being adapted for an educational perspective:

  • Adopt – If you left now, would partners return to their previous way of working?
  • Adapt – If you left now, would partners build upon the changes they’ve adopted without you?
  • Expand – If you left now, would pro-poor outcomes depend on too few people, firms, or organisations?
  • Respond – If you left now, would the system be supportive of the changes introduced (allowing them to be upheld, grow, and evolve)?

Roger read “Technology and the TEF” from the 2017 Higher Education Policy Institute (HEPI) report “Rebooting learning for the digital age: What next for technology-enhanced higher education?”.

This looks at how TEL can support the three TEF components, which evidence teaching excellence.

For the first TEF component, teaching quality, the report highlights the potential of TEL in increasing active learning, employability especially digital capabilities development, formative assessment, different forms of feedback and EMA generally, and personalisation. In terms of evidence for knowing how TEL is making an impact in these areas HEPI emphasises the role of learning analytics.

For the second component, learning environment, the report focusses on access to online resources, the role of digital technologies in disciplinary research-informed teaching, and again learning analytics as a means to provide targeted and timely support for learning. In terms of how to gather reliable evidence it mentions the JISC student digital experience tracker, a survey which is currently being used by 45 HE institutions.

For the third component, student outcomes and learning gain, the report once again highlights student digital capabilities development whilst emphasising the need to support development of digitally skilled staff to enable this. It also mentions the potential of TEL in developing authentic learning experiences, linking and networking with employers and showcasing student skills.

The final part of this section of the report covers innovation in relation to the TEF. It warns that “It would be a disaster” if the TEF stifled innovation and increased risk-averse approaches in institutions. It describes the inclusion of ’impact and effectiveness of innovative approaches, new technology or educational research’ in the list of possible examples of additional evidence as a “welcome step” (see Year 2 TEF specification, Table 8).

Mike read Sue Watling – TEL-ing tales, where is the evidence of impact and In defence of technology by Kerry Pinny. These blog posts reflect on an email thread started by Sue Watling in which she asked for evidence of the effectiveness of TEL. The evidence is needed if we are to persuade academics of the need to change practice. In response, she received lots of discussion, including what she perceived to be some highly defensive posts. The responses contained very little by way of well-researched evidence. Watling, after Jenkins, ascribes ‘Cinderella status’ to TEL research, which I take to mean based on stories rather than fact. She acknowledges the challenges of reward, time and space for academics engaged with TEL. She nevertheless makes a plea that we are reflective in our practice and look to gather a body of evidence we can use in support of the impact of TEL. Watling describes some fairly defensive responses to her original post (including the blog post from James Clay that Hannah read for this reading group). By contrast, Kerry Pinny’s post responds to some of the defensiveness, agreeing with Watling – if we can’t defend what we do with evidence, then this in itself is evidence that something is wrong.

The problem is clear; how we get the evidence is less clear. One point from Watling that I think is pertinent is that it is not just TEL research, but HE pedagogic research as a whole, that lacks evidence and has ‘Cinderella status’. Is it then surprising that TEL HE research, as a subset of HE pedagogic research, reflects the lack of proof and rigour? This may in part be down to the lack of research funding. As Pinny points out, the school or academic often has little time to evaluate their work with rigour. I think it also relates to the nature of TEL as a set of tools or enablers of pedagogy, rather than a singular approach or set of approaches. You can use TEL to support a range of pedagogies, both effective and non-effective, and a variety of factors will affect its impact. Additionally, I think it relates to the way Higher Education works – the practice there is, and what evidence results, tends to be very localised, for example to a course, teacher or school. Drawing broader conclusions is much, much harder. A lot of the evidence is at best anecdotal. That said, in my experience, anecdotes (particularly from peers) can be as persuasive as research evidence in persuading colleagues to change practice (though I have no rigorous research to prove that).

Suzanne read Mandernach, J. (2015), “Assessment of Student Engagement in Higher Education: A Synthesis of Literature and Assessment Tools”, International Journal of Learning, Teaching and Educational Research, Vol. 12, No. 2, pp. 1-14, June 2015.

This text was slightly tangential, as it didn’t discuss the ideas behind evidence in TEL specifically, but it was a good example of an area in which we often find it difficult to find or produce meaningful evidence to support practice. The paper begins by recognising the difficulties in gauging, monitoring and assessing engagement as part of the overall learning experience, despite the fact that engagement is often discussed within HE. Mandernach goes back to the idea of ‘cognitive’, ‘behavioural’ and ‘affective’ criteria for assessing engagement, particularly related to Bowen’s ideas that engagement happens with the learning process, the object of study, the context of study, and the human condition (or service learning). Interestingly for our current context of building MOOC-based courses, a lot of the suggestions for how these engagement types can be assessed are mainly classroom-based – for example the teacher noticing the preparedness of the student at the start of a lesson, or the investment they put into their learning. On a MOOC platform, where there is little meaningful interaction on an individual level between the ‘educator’ and the learner, this clearly becomes more difficult to monitor, and self-reporting becomes increasingly important. In terms of how to go about measuring and assessing engagement, student surveys are discussed – such as the Student Engagement Questionnaire and the Student Course Engagement Questionnaire. The idea of experience sampling – where a selection of students are asked at intervals to rate their engagement at that specific time – is also discussed as a way of measuring the overall flow of engagement across a course, which may also be an interesting idea to discuss for our context.

Suggested reading

Horizon Report 2017 – notes from the reading group

This time we looked at the NMC Horizon Report 2017 and related documents.

Amy read ‘Redesigning learning spaces’ from the 2017 Horizon report and ‘Makerspaces’ from the ELI ‘Things you should know about’ series. The key points were:

  • Educational institutions are increasingly adopting flexible and inclusive learning design and this is extending to physical environments.
  • Flexible workspaces, with access to peers from other disciplines, experts and equipment, reflect real-world work and create social environments that foster cross-discipline problem-solving.
  • For projects created in flexible environments to be successful, the facilitator must allow the learners to shape the experience – much of the value of a makerspace lies in its informal nature, with learning being shaped by the participants rather than the facilitator.
  • There are endless opportunities for collaboration with makerspaces, but investment – both financial and strategic – is essential for successful projects across faculties.

Roger read the section on blended learning designs. This is listed as a short-term key trend, driving ed tech adoption in HE for the next one to two years. It claims that the potential of blended learning is now well understood, that blended approaches are widely used, and that the focus has moved to evaluating impact on learners. It suggests that the most effective uses of blended learning are in contexts where students can do something which they would not otherwise be able to, for example via VR. In spite of highlighting this change in focus, it provides little detailed evidence of impact in the examples mentioned.

Suzi read the sections on Managing Knowledge Obsolescence (which seemed to be around how we in education can make the most of / cope with rapidly changing technology) and Rethinking the Role of Educators. Interesting points were:

  • Educators as guides / curators / facilitators of learning experiences
  • Educators need time, money & space to experiment with new technology (and gather evidence), as well as people with the skills and time to support them
  • HE leaders need to engage with the developing technology landscape and build infrastructure that supports technology transitions

Nothing very new, and I wasn’t sure about the rather business-led examples of how the role of university might change, but still a good provocation for discussion.

Hannah read ‘Achievement Gap’ from the 2017 Horizon Report. It aimed to talk about the disparity in enrolment and performance between student groups, as defined by socioeconomic status, race, ethnicity and gender, but only really tackled some of these issues. The main points were:

  • Overwhelming tuition costs and a ‘one size fits all’ approach to Higher Education are a problem, with more flexible degree plans needed. The challenge here is catering to all learners’ needs, as well as aligning programmes with deeper learning outcomes and 21st-century problems.
  • A degree is becoming increasingly vital for liveable wages across the world, with even manufacturing jobs increasingly requiring post secondary training and skills.
  • There has been a growth in non-traditional students, with online or blended offerings and personalised and adaptive learning strategies being implemented as a retention solution.
  • Some universities across the world have taken steps towards developing more inclusive offerings: Western Governors University offers competency-based education where students develop concrete skills relating to specific career goals; Norway, Germany and Slovenia offer free post-secondary education; under the Obama administration, changes were made so that students could secure financial aid three months earlier to help them make smarter enrolment decisions; in Scandinavian countries there is a lot of flexibility in transferring to different subjects, something that isn’t widely accepted in the UK but could help to limit the drop-out rate.
  • Some countries are offering different routes to enrolment in higher education. An example of this is Australia’s Fast Forward programme, which provides early information to prospective students about alternative pathways to tertiary education, even if they have not performed well in high school. Some of these alternative pathways include online courses to bridge gaps in knowledge, as well as the submission of e-portfolios to demonstrate skills gained through non-formal learning.
  • One thing I thought the article didn’t touch on was the issue of home learning spaces for students. Some students will share rooms and IT equipment, or may not have access to the same facilities as others.

Flexible and inclusive learning – notes from reading group

Amy read: Why are we still using LMSs, which discusses the reasons LMS systems have not advanced dramatically since they came onto the market. The key points were:

  • There are five core features that all major LMS systems have: they’re convenient; they offer a one-stop-shop for all University materials, assessments and grades; they have many accessibility features built in; they’re well integrated into other institutional systems and there is a great deal of training available for them.
  • Until a new system with all these features comes onto the market, the status quo with regard to LMS systems will prevail.
  • Instructors should look to use their current LMS system in a more creative way.

Mike read: Flexible pedagogies: technology-enhanced learning HEA report

This paper provided a useful overview of flexible learning, including explanations of what it might mean and the dilemmas and challenges it poses for HE. The paper is interesting to consider alongside Bristol’s Flexible and Inclusive Learning paper. For the authors, flexible learning gives students choice in the pace, place and mode of their learning. This is achieved through the application of pedagogical practice, with TEL positioned as an enabler or way of enhancing this practice. Pace is about schedules (faster or slower), or allowing students to work at their own pace. Place is about physical location and distance. Mode includes notions of distance and blended learning.

Pedagogies covered include personalised learning, flexible learning (suggested to be similar to adaptive learning, in which materials adapt to individual progress), gamification, and fully online and blended approaches. The paper considers the implications of offering choice to students, for example over the kind of assessment. An idealised form would offer a very individualised choice of learning pathway, but with huge implications for stakeholders.

In the reading group, we had an interesting discussion as to whether students are always best equipped to understand and make such choices. We also wondered how we would resource the provision of numerous pathways. Other risks include the potential for information overload for students, and ensuring systems and approaches work with quality assurance processes. Barriers include interpretations of KIS data which favour contact time.

We would have a long way to go in achieving the idealised model set out here. Would a first step be to change the overall diet of learning approaches across a programme, rather than offering choice at each stage? Could we then introduce some elements of flexibility in certain areas of programmes, perhaps a bit like the Medical School’s Self Selected Components, giving students choice in a more manageable space within the curriculum?

Suzanne read:  Formative assessment and self-regulated learning: A model and seven principles of good feedback practice. The main points were:

  • Self-regulated learning is something which happens naturally in HE, as students will assess their own work and give themselves feedback internally. This paper suggests this should be harnessed and built on in feedback strategies in HE.
  • Shift in focus to see students having a proactive rather than reactive role in feedback practices, particularly focused on deciphering, negotiating and acting on feedback.
  • The paper suggests 7 principles for good feedback practice, which encourage this self-regulation: 1. clarifying what good performance is; 2. facilitating self-assessment; 3. delivering high quality feedback information; 4. encouraging dialogue; 5. encouraging self-esteem and motivation; 6. giving opportunities to close the gap between where the student is now and where they need/want to be; 7. using feedback to improve teaching.
  • For our context, this gives some food for thought in terms of the limitations of a MOOC environment for establishing effective feedback practices (dialogue with every student is difficult if not impossible, for example), and emphasises the importance of scaffolding or training effective peer and self-assessment, to give students the confidence and ability to ‘close the gap’ for themselves.

Suzanne also read: Professional Development Through MOOCs in Higher Education Institutions: Challenges and Opportunities for PhD Students Working as Mentors

This paper reports on a small-scale (20 participants), qualitative study into the challenges and opportunities for PhD students acting as mentors in the FutureLearn MOOC environment. As a follow-on from the above reading, using mentors can be a way to help students with the peer and self-assessment practices, which is why I decided to read it in parallel. However, it also focuses on the learning experiences of the PhD student themselves as they perform the mentor role, also giving these students a different (potentially more flexible and inclusive) platform to develop skills.

Overall, the paper is positive about the experiences of PhD MOOC mentors, claiming that they can develop skills in various areas, including:

  • confidence in sharing their knowledge and interacting with people outside their own field (especially for early career researchers, who may not yet have established themselves as ‘expert’ in their field);
  • teaching skills, particularly related to online communication, the need for empathy and patience, and tailoring the message to a diverse audience of learners. It’s noteworthy here that many of these mentors had little or no teaching experience, so this is also about giving them teaching experience generally, not teaching in MOOCs specifically;
  • subject knowledge, as having to discuss with the diverse learning community (of expert and not expert learners) helped them consolidate their understanding, and in some cases pushed them to find answers to questions they had not previously considered.

Roger read Authentic and Differentiated Assessments

This is a guide aimed at School teachers. Differentiated assessment involves students being active in setting goals, including the topic, how and when they want to be evaluated. It also involves teachers continuously assessing student readiness in order to provide support and evaluate when students are ready to move on in the curriculum.

The first part of the article describes authentic assessment, which it defines as asking students to apply knowledge and skills to real world settings, which can be a powerful motivator for them. A four stage process to design authentic assessment is outlined.

The second part of the article focuses on differentiated assessment. We all have different strengths and weaknesses in how we best demonstrate our learning, and multiple and varied assessments can help accommodate these. The article stresses that choice is key, including of learning activity as well as assessment. Project and problem based learning are particularly useful.  Learning activities should always consider multiple intelligences and the range of students’ preferred ways of learning, and there should be opportunities for individual and group tasks as some students will perform better in one or the other.

Hannah read: Research into digital inclusion and learning helps empower people to make the best choices, a blog post from the Association for Learning Technology about bridging the gap between digital inclusion and learning technology. The main points were:

  • Britain is failing to exploit opportunities to give everyone fair and equal access to learning technology because not enough research is being done to identify the best ways to tackle the problem of digital exclusion
  • Learning technology will become much more inclusive a way of learning once the digital divide is addressed
  • More must be done to ensure effective intervention; lack of human support and lack of access to digital technology are cited as two main barriers to using learning technology in a meaningful way
  • We need to broaden understanding of the opportunities for inclusion, look into how to overcome obstacles, develop a better understanding of the experiences felt by the excluded and understand why technological opportunities are often not taken up

Suzi read: Disabled Students in higher education: Experiences and outcomes, which discusses the experience of disabled students, based on surveys, analysis of results, interviews, and case studies at four relatively varied UK universities. Key points for me were:

  • Disability covers a wide range of types and severities of issue, but adjustments tend to be formulaic, particularly for assessment (e.g. 25% extra time in exams)
  • Disability is a problematic label, and not all students who could identify as disabled will choose to do so
  • Universal design is the approach they would advocate where possible

Suzi also read: Creating Better Tests for Everyone Through Universally Designed Assessments, a paper written for the context of large-scale tests for US school students, which nonetheless contains interesting background and useful (if not earth-shattering) advice. The key messages are:

  • Be clear about what you want to assess
  • Only assess that – be careful not to include barriers (cognitive, sensory, emotional, or physical) in the assessment that mean other things are being measured
  • Apply basic good design and writing approaches – clear instructions, legible fonts, plain language

 

Play – notes from a PM Studios lunchtime talk

I attended October’s lunchtime talk at Pervasive Media Studio by Simon Johnson of Free Ice Cream and igfest, about working on real-world games. His big hit was the city-based zombie chase game 2.8 Hours Later (these games were heavier with social commentary than I had realised at the time – the second version was about becoming an asylum seeker).

I loved his thought that playing a game is like running on a different operating system. And that it can help you see features of the existing operating system – say of a city – that would not otherwise be apparent. Creating a game was also described not as storytelling, but as creating a context in which people build their own stories.

This seems very relevant to thinking about teaching in the digital era, where dissemination of information is no longer such a key concern. We should be designing experiences which shake people out of their set patterns of thinking and allow them to explore new ones, helping them to try out new operating systems, creating rich environments in which they build their own stories.

Misc details

  • Simon emphasised the idea of fun – not “serious gaming”. Similar to Nic Whitton’s emphasis on playfulness?
  • His Cargo game, a city escape game focussed on how to build/undermine trust in a group, was designed to create a chaotic environment to test disaster relief principles.
  • igfest – a festival of interesting games that ran for several years. I think there were more frequent meet-ups too. This gave game developers a play-testing community through regular events, and some regular participants even became game designers.
  • Hat game – gps tracked bowler hat, whoever kept it longest would win (but there were unintended consequences… the hat-wearer ran away – the prize was too big)
  • theTweeture – such an advanced bot that people thought it was a puppet
  • A couple of the games were intended to help people conceptualise complex ideas: a hoop-rolling game set in a quantum computer; Calibration which puts the scale of the solar system in human terms.
  • He’s developing a conference-based game for the ODI to be played at the UN conference in March.

Teaching at scale: engagement, assessment and feedback – notes from reading group

Chris read #53ideas 27 – Making feedback work involves more than giving feedback – Part 1 the assessment context. A great little paper full of epithets that perfectly describe the situation I find myself in. ‘You can write perfect feedback and it still be an almost complete waste of time’. ‘University policies to ensure all feedback is provided within three weeks seem feeble’. ‘On many courses no thought has been given to the purpose of the learning other than that there is some subject matter that the teacher knows about’. ‘Part time teachers are seldom briefed properly about the course and its aims and rationale, and often ignore criteria’. The take-home message, for me, was that the OU is an exemplar in the area of giving good, useful, consistent feedback even when the marking load is spread over a number of people: ‘If a course is going to hire part-time markers then it had better adopt some of the Open University’s practices or suffer the consequences.’

Jane recommended: Sea monsters & whirlpools: Navigating between examination and reflection in medical education. Hodges, B. D. (2015). Medical Teacher 37: 3, 261-266. An interesting paper on how diverse forms of reflective practice employed by medical educators are compatible with assessment. She also mentioned “They liked it if you said you cried”: how medical students perceive the teaching of professionalism.

Suzi read E-portfolios enhancing students’ self-directed learning: a systematic review of influencing factors

This 2016 paper is based on a systematic literature review of the use of online portfolios, with most of the studies taking place in an HE context. They looked at what was required for portfolio use to foster self-directed learning. Their conclusions were that students need the time and motivation to use them, and also that portfolios must:

  • Be seamlessly integrated into teaching
  • Use appropriate technology
  • Be supported by coaching from staff (this is “important if not essential”)

Useful classification: purpose (selection vs learning) and volition (voluntary vs mandated) from Smith and Tillema (2003). Useful “Practical implications” section towards the end.

Suzi read How & Why to Use Social Media to Create Meaningful Learning Assignments

A nice example of a hypothetical (but well thought-through) Instagram assignment for a history of art course, using hashtags and light gamification. Included good instructions and motivation for students.

Has some provocative claims about the use of social media:
“It’s inevitable if we want to make learning relevant, practical and effective.”
“social media, by the behaviours it generates, lends itself to involving students in learning”
Also an interesting further reading section.

Suzi read #53ideas 40 – Self assessment is central to intrinsic motivation

Feeling a sense of control over learning leads to higher levels of engagement and persistence. Ideally this would cover the what, how, where and when. But “taking responsibility for judgements about their own learning” – so good self and peer assessment – may be enough. Goes through an example of self and peer assessment at Oxford Polytechnic. Challenging for our context, as this was highly scaffolded, with the students practising structured self-assessment for a year before engaging in peer assessment. Draws on Carl Rogers’ principles for significant learning. Interesting wrt the need to create a nurturing, emotionally supportive space for learning.

Suggested reading

Engagement and motivation

Social media and online communities

Assessment and feedback

More general, learning at scale

Resilience – notes from reading group

There seems to be a lot of interest in resilience in higher education at the moment. For myself, while I know we can all learn how to better cope with the stuff life throws at us, my initial reaction to the topic was along these lines:

My impression from these papers is that resilience is not well-defined and interventions, although often very plausible, are not evidence-based. Putting that concern aside, the techniques which seemed most suited to be incorporated in university education were:

  • building nurturing social networks,
  • fostering a sense of purpose, and
  • encouraging reflection.

I read Resilience: how to train a tougher mind (BBC Future) and Jackson, D., Firtko, A. and Edenborough, M. (2007) ‘Personal resilience as a strategy for surviving and thriving in the face of workplace adversity: a literature review’, Journal of Advanced Nursing, 60, 1: 1-9.

Resilience is, broadly, the ability to keep going in the face of adversity and to get back to normal functioning afterwards. It can mean different things in different situations and might not always be wholly positive. For example, one study looked at at-risk youths for whom self-reported resilience meant disconnection and the ability to go it alone – not necessarily something to foster.

Both papers talked predominantly about quite extreme situations: children whose schools were close to the twin towers on 9/11, and nurses who work in high-pressure and traumatic environments. In both a lot of the conclusions seem to be based on self-report, for example how people say that they coped under extreme stress.

There are lots of traits, attitudes, and techniques mentioned as helpful for resilience and most of these are thought to be things which can be learned or developed. They include:

  • social support, especially nurturing relationships (including mentoring)
  • faith, spirituality, sense of purpose
  • positive outlook, optimism, humour, seeking the positive
  • emotional insight, for example through reflective journaling
  • life balance

There are several programmes seeking to develop these traits in school children through mindfulness, sometimes mixed with other techniques. These programmes include: Mindfulness in Schools Project (UK), Inner Resilience Programme (US), Penn Resiliency Training (US). The nursing paper does not mention mindfulness, focusing more on hardiness, optimism, repressive coping, and journaling (more stereotypical activities for middle-aged women, perhaps?).

Both papers touch on the idea that you can’t help others to be calm and resilient if you are not resilient yourself, and so on the importance of promoting resilience in those with caring responsibilities (nurses, teachers).

There are no magic bullets, though, and nobody is claiming large or long-lasting effects for any intervention (once it’s finished). What we have is a bag of techniques and ideas.

Threshold concepts – notes from the reading group

Suzi read Before and after students “get it”: threshold concepts by James Rhem (2013)

This relatively short article is part general discussion but mostly practical advice. The points I found most interesting were:

  • “Learning thresholds” might have been a better name, according to Ray Land.
  • There’s been success using threshold concepts as a way to get academics talking about their subject from an education point of view. They are something that people “get” and often enjoy engaging with, though they might struggle to agree on a definitive list of concepts for their subject.
  • To get through the liminal space takes “recursive, deep learning” (which I take to mean an immersive experience). This can be difficult to achieve.
  • We need to help students become more resilient and more optimistic, to help them make it through (there was little idea of how to do this though).
  • Trying to simplify the concepts for students may be counter-productive as it may encourage mimicry.

It made me reflect on conversations I’ve had about students’ mathematical ability when they arrive at university: they might make it through A-level but not really understand or be able to apply the concepts. This seems very similar to the contrast between mimicry and crossing the threshold.

Mike read Demystifying threshold concepts by Darrell Rowbottom (2007), a critique of the concept from a philosophy professor.

Threshold concepts, as an idea, appeal to me, but I have found them to be a slippery/troublesome concept in themselves. It was interesting to read this critique of Meyer and Land’s ideas, and of those who state they have found examples of threshold concepts in particular subject areas. The paper:

  • takes issue with the interpretation of ‘concept’ and the application of the theory, which Rowbottom states is closer to ability
  • explores whether these things are bounded in the way the term ‘threshold’ implies – thresholds will be relative (different for different people)
  • criticises the woolly language used, e.g. that they are ‘significant’ in terms of the transformation that occurs
  • suggests they are not definable and not measurable – you cannot empirically isolate them or test for them (the latter is part of a wider issue for education, for me).

Whilst much of this is valid, and, as Suzi mentioned, Land would use a different name if starting from scratch, I still think the idea has some use. I suggest most theories of education are difficult to isolate or prove, and thinking about the most troublesome and transformative concepts can still help design curricula and focus teaching and learning.

Gem read What’s the matter with Threshold Concepts? by Lori Townsend, Amy Hofer and Korey Brunetti, a guest post on the ACRLog blog (blogging by and for academic and research librarians, posted January 2015). This short piece was a response to some of the arguments against threshold concepts. The authors attempted a reasonable rebuttal of seven main arguments against threshold concepts (listed below for interest) and made some good counter-arguments, some with respect to information literacy instruction (discipline-specific).

Arguments against Threshold Concepts

  1. Threshold concepts aren’t based on current research about teaching
  2. Everything is a threshold concept
  3. Threshold concepts are unproven
  4. Threshold concepts don’t address skill development
  5. Threshold concepts ignore the diversity of human experience
  6. Threshold concepts are hegemonic
  7. Threshold concepts require us to agree on all the things

The authors (I felt) successfully argued that there was theoretical value to using these concepts and helped me appreciate the usefulness of this theory as a pedagogic model (this was discussed further with the reading group). Jargon and woolly language are a real barrier to comprehension and to being able to critically appraise different educational theories (for me at least, coming from a science background). I have struggled with some theoretical approaches to pedagogy, but the threshold concept model, or at least my understanding of it, is one approach that I find useful and comprehensible from the point of view of both teacher and learner, having related my experiences of both to this model.

Their conclusion “it’s useful to think of threshold concepts as a model for looking at the content we teach in the context of how learning works” was very thought provoking.

For me, traversing the liminal space is like acquiring a new, albeit difficult, skill (ability, idea) and then consolidating this new acquisition. The application of this new skill occurs only once I have passed through the threshold and am on the other side (and am thus able to apply this new knowledge successfully to a task).

Roger read “Threshold concepts: implications for game design”. This paper describes a project to develop an educational game covering threshold concepts in information literacy. The authors give an account of the lessons learnt through the process of designing and testing the game. They conclude that their original idea of a single-player game did not reflect the team-based nature of research, that the individual competitive game structure did not match the collaborative educational approach they were trying to model, and that opportunities were needed for expert input in the game process. They suggest strategies for future improvements, including using more open game structures, incorporating debriefing, and offering social as well as individual learning contexts.

Other suggested reading

Tips and examples for large online courses

Lessons learnt at Bristol and elsewhere. Also available as a printable handout: tips and examples for large online courses (pdf).

Developing an idea

Start with the learners. Who are they? What is their motivation (intrinsic and/or extrinsic)? How does the course fit into their lives? What is their journey through the course?

Make sure your team has a shared understanding of what you and others involved are trying to achieve by providing the course. What would success look like? Would it look different to different people?

Look at what other people have done. It can be tempting to fall into familiar patterns of course design. Enrol on some MOOCs to look around. Engage if you can. We’ve selected some examples to get you started (see second page, “Ideas for large online courses” in the pdf).

Planning your course

Keep thinking from the learners’ point of view. What is their journey through the course? What are they doing at each stage?

Learners often feel a personal connection with the lead educators. Who will be the face of your course? Will it be one member of staff or a team? Do you need to plan for people leaving the university?

Don’t assume you have to use video for everything. Use video where it really does add something. Learners might well prefer text over a very straightforward lecture-style presentation (even a short one).

Video doesn’t necessarily need high production values. Low-cost DIY approaches to creating video, such as filming on a phone, can be very effective, so long as you have good audio quality.

Learners need support and encouragement to engage. How will students who are less confident (socially, academically, technologically) be supported? Prompt the kinds of activity you want to see, rather than assuming they will happen. Provide clear aims and instructions. Incorporate orienting activities naturally within the course: for example, you might make sure learners are encouraged to post, reply, and follow during the first week.

Set clear expectations from the start. As a student, how will I know if my engagement with the course has been a success? What should I hope to achieve? Don’t over-promise – it’s ok if the course isn’t life-changing for everyone.

Ideas for large online courses

Pedagogies that scale, alternative approaches, opportunities

Crowdsourcing

Large courses can provide a fantastic opportunity to hear from a wide range of learners, not just the course team. Allow students to contribute their ideas, and make mistakes safely. You could create videos where the course team reflect on this week’s comments, and augment your course materials based on learner feedback.

Finishing with presentations or a competition

An event, such as presenting projects to fellow students or even competing for a prize can be very motivating. Law Without Walls gets students to propose solutions to real-world problems, which are then presented to a panel of judges including venture capitalists.

Assess for learning

Assessment can be a good way to encourage active engaged learning. You might: ask students to reflect at the start of an activity, provide comparison statistics so students can see how their understanding fits within the wider cohort, allow peer review and feedback, or set quizzes for self-assessment.

Face-to-face study groups

Meeting with fellow students can be a great motivator. Learning Circles helps people set up regular public meetings to work through MOOCs with a small group of peers. Other people have used sites like Meetup.

Fast-track vs group working

Some students prefer to fast-track through the material, working as individuals. Others appreciate a longer more collaborative route. And some may want to “lurk”, reading but not engaging in more collaborative activities.

Contributing to something real

Students might contribute to a citizen science project or to a collaborative online space such as Wikipedia. If you plan to do this, make sure you look first for advice aimed at educators for that site, such as Wikipedia for Educators.

Digital and physical artefacts

Capturing data and making complex things on a small scale is becoming cheaper and easier: from image/video/audio capture on mobile phones, to cheap sensors like PocketLab, to Arduino and Raspberry Pi, to clubs like Bristol Hackspace and events like Bristol Mini Maker Faire.

Short intense courses

Making a course very short is one way to manage commitment and keep momentum. How to change the world is a two-week challenge for UCL engineers. 700 students from different engineering disciplines are given global challenges to work on.

Students as teachers

Teaching online and coordinating distributed teams are useful skills. Harvard Law School’s CopyrightX hires current students as teaching fellows, each working with a group of 25 students.

Bring in outside expertise

Students can gain a lot from connections with professionals outside of academia. #phonar is an internationally successful photography class (initially made available free online without the knowledge of its host university). One of its strengths is the active involvement of professional photographers.

Try before you buy

Some courses allow students to engage on a lighter level before committing. Innovating in Healthcare from Harvard ran as a MOOC but a couple of weeks in, students had the opportunity to form project teams and apply to be on a more intensive track.

Eyes on the prize

Could you offer something for exceptional contributions to the course? Students from Harvard’s Innovating in Healthcare created video pitches for their business ideas. These were voted on by fellow students, with the winners receiving video consultations on their ideas with the lead academic.

Introduction to digital storytelling – notes from talk at BBC Digital Bristol Week

In contrast to yesterday’s talk, this talk from Colin Savage (BBC) seemed more like a formula for producing digital stories. Central to this were four questions:

  1. What question does it answer?
  2. What character will drive the story?
  3. What structure/platform might fit your story?
  4. What are the emotional touchpoints of the story?

There were some really interesting examples mentioned.

CS talked about all stories needing to answer a question, and touched briefly on reincorporation (“show the gun in act 1, fire it in act 3”). Both seem to relate to the curiosity gap mentioned yesterday.