MOOCs: what have we learnt? – notes from the reading group

Steve read HEA: Liberating learning: experiences of MOOCs

MOOCs are increasing in popularity. Will this continue? Registrations, drop outs, completions. Will they disrupt HE?

10-person sample size, people who completed Southampton MOOC. Want to understand motivations, opportunities, problems. Discussed findings with five academics who taught/led it. Aware of small scale, so no recommendations – but reflections and suggestions.

Themes from findings:
1 Flexible, fascinating and free – can fit into lives, customise pace, no financial commitment.
2 Feeling part of something – social & international aspect, even for passive ‘lurkers’
3 Ways of learning – prefer sequential over dipping in/out.
4 A bit of proof? – cost sensitivity around purchasing accreditation; only one participant wanted this.

Four-quadrant model for MOOC engagement, suggests stuff to include. Two axes:
personal enjoyment vs work/education
studying alone vs social learning

Steve also read What are MOOCs Good For?

MOOC boom and bust? High-profile implementation at San Jose failed, including a backlash from academics. Completion/dropout rates remain a general concern (SB: do we care about drop outs? Most are window shoppers). Experiments and options/opportunities are still expanding. In summary, more data is needed and expectations should be moderated – there is still a place for innovation, and for integrating with traditional teaching – take the best bits of both?

Roger read: Practical Guidance from MOOC Research: Students Learn by Doing

This is one of a series of blog posts by Justin Reich, Executive Director of the Teaching Systems Lab at MIT, which “investigates the complex, technology-mediated classrooms of the future and the systems we need to develop to prepare teachers for those classrooms.”
In this post from July 2015, Justin’s main point is that when developing MOOCs it is better for student learning to focus on development of interactive activities as opposed to high production videos.  He mentions particularly the value of formative peer assessment, synchronous online discussion and simulations “that create learning experiences that students may not have in other contexts”.
If making videos, focus them on the early parts of the course, as watching tends to drop off later in courses. There is some evidence that students prefer Khan Academy-style screencasts with pen animations over talking over slides.

Suzi read Why there are so many video lectures in online learning, and why there probably shouldn’t be

The article argues that video is expensive, particularly if you aim for higher production values (which many people do). Their methodology was a literature review, interviews with experts, and studying the use of video in over 20 MOOCs. There’s no evidence that video does (or doesn’t) work as a learning tool, and little or none that high production values add much. Learners wrongly self-report that they learn well from video (cf the study of physics videos – Saying the wrong thing: improving learning with multimedia by including misconceptions).

They argue that people should:

  • think twice before using video
  • use video where it really does add value (virtual field trips, creating rapport, manipulating time and space, telling stories, motivating learners, showcasing historical footage, conducting demonstrations, visual juxtaposition)
  • focus on media-literacy for the content experts and DIY approaches (eg filming on mobile phones)

Suzi also read 10 ways MOOCs have forced universities into a rethink

Broadly an argument that MOOCs are changing HE. MOOCs have given universities the impetus to experiment with pedagogy (notably, fewer lectures), assessment, accreditation, and course structure. They have made it more common to think in terms of a digital education strategy. They are also disrupting universities: HEIs are no longer the only providers of HE and cheaper degrees are becoming available. They’ve highlighted an unmet demand (for something like evening classes?), particularly in vocational and practical subjects. Clark likens global networks of universities to airline consortia – the passenger buys one ticket but makes their journey over several airlines.

Mike read ‘7 ways to make MOOCs Sticky’, a blog post by Donald Clark, and also ‘Bringing the Social back to MOOCs’ by Todd Bryant in an EduCause review.

The former looked at design to keep a MOOC audience coming back. The latter looked at how MOOCs can encompass social learning (rather than just provide content). A point of contention between the two is the importance of social learning – overemphasised if you believe Clark, and missing from many MOOCs if you believe Bryant.

Clark, drawing on data from Derby’s Dementia MOOC, listed 7 ways to retain learners. For me, his seven points divide into three related areas: audience, structure and the value of social. He framed the discussion in the recognition that we cannot apply metrics from campus courses to things that are free, open and massive. Clark is often a provocative commentator, though, and his downplaying of the social is interesting.

An overarching theme of Clark’s post is audience sensitivity, though at times the audience he is most sensitive to seems to be himself. In my experience, this is a tough challenge for MOOCs. To Clark this is about not treating MOOC learners like undergraduates who are ‘physically and psychologically at University’. He rightly states they have different needs and interests. As someone who has helped design MOOCs, I know it is hard to make something that is all things to all people; often it is about providing a range of activities, levels and opportunities for learners to engage.

Related to audience sensitivity, Clark sees value in keeping MOOCs shorter (definitely wise) and modular (allowing people to dip into bits), with less reliance on a weekly structure and a coherent whole. This is maybe less about keeping learners, and more about allowing them to get what they want from parts of a course. It would be great to come up with ways to evaluate MOOCs for learners who want to take bits of courses. Post-course surveys are self-selecting and largely made up of completers. It is also a tough design challenge to appeal to such learners whilst also trying to deliver depth and growth through a course. Clark is involved in some companies that develop adaptive learning systems, perhaps reflecting a similar philosophy. Adaptive approaches may provide some answers in the future.

Clark is also not a fan of the weekly structure, at least in terms of following through with a cohort. I think many learners like both the structure and the social, and these are the main differentiating factors that mean MOOCs are not just a set of online materials. Many learners find the event-driven, weekly structure motivating, and it is evident that many enjoy and learn from the social element of MOOCs more than the content. I was always keen to draw out the social elements, to give learners the chance to contribute to the course and learn from each other. Clark is somewhat scathing of social constructivism and the kind of learning emphasised in c-MOOCs.

This is in contrast to Bryant’s article. For Bryant, too many MOOCs are ‘x-MOOCs’ – largely about content and neglecting the social. Interestingly, he does cite features of EdX and Coursera that have the potential to change this by allowing learners to work in groups and buddy up during courses. We would have really valued such features when I was working on a MOOC about Enterprise; FutureLearn is not currently well equipped in this area. He goes on to explore other ways of helping people collaborate off platform through user groups and crowd-sourcing/knowledge-building tools. This would work well for some, but doubtless exclude others. He considers simulations, virtual worlds and ‘alternate reality games’ – simulations played in the real world. These could all play a role, but for me, alongside a core MOOC structure. Bryant sees MOOCs as a potential ‘bridge between open content and collaborative learning’. I suspect Bryant and Clark would value very different kinds of MOOC. Should we try to appeal to both extremes (and all in between) or pitch the MOOC at a particular audience? Probably the latter, but it still isn’t easy.

Psychology and education – notes from the reading group

Chris read Is it time to rethink the way university lectures are delivered?, a short article about a Science paper from 2011. A class of Canadian physics-major freshmen was split into two and one week of material was delivered differently to the two halves of the class. The first half stuck to the tried and tested lecture-using-powerpoint format, whilst the other half used a more ‘interactive’ approach termed ‘deliberate practice’: discussion groups, pre-class reading assignments, in-class clicker questions, online quizzes. Lo and behold, in a test the following week the second cohort scored 74% whilst the first half only got 41%, thus illustrating that three days later they could remember the material better. The study has come in for a lot of criticism about methodology – only 211 of 271 students actually took the test (how would the others have altered the results?), and the people who designed it were also the ones who delivered the intervention, so may well have been ‘teaching to the test’. However, the general feeling seems to be that though the study is flawed, the conclusions are broadly correct. It also illustrates that having a Nobel Prize allows you to publish anything you like anywhere you want.

Chris also read A better way to practice, 2012. Written by Noa Kageyama, a Juilliard School of Music violinist turned performance psychologist. His argument is that it is better to practise smart than practise hard – take-home aphorisms from this article are Practice makes permanent and Perfect practice makes perfect, the implication being that unless you practise correctly you can reinforce bad habits. That seems logical enough. He also argues that more thoughtful study can reduce the time needed for practice and increase the likelihood of successful performance, but I (and many of the commenters below the fold) disagree with him about this. Whilst this might be true at the highest levels, at lower levels when it’s all about training muscle memory there’s simply no substitute for doing it over and over again.

Steve watched The key to success? Grit and read True Grit, Angela Lee Duckworth & Lauren Eskreis-Winkler, 2013. I’d phrase ‘grit’ as perseverance – effort and stamina to achieve something difficult over an extended period of time. In the Tortoise and the Hare, the hare has talent, but the tortoise has grit and achieves more in the end. This summary indicates that talent and grit are often orthogonal, or negatively correlated. In the past persistence was assessed against physical challenges, but this may not relate to long-term mental grit. Modern assessment is by questioning against traits e.g. ‘I finish whatever I begin’. ((to complete)).

Suzi read Stereotype threat and women’s math performance and Mindsets and Math/Science Achievement

Both papers discuss how mindset might affect learning.

Stereotype threat is a stress-induced threat of self-fulfilling a negative and well-known stereotype. For example, an elderly man looking for his keys may worry about looking senile, become stressed, and so find it harder to find his keys. The paper puts forward evidence that women’s performance in difficult maths tests can be affected by the threat of fulfilling a negative stereotype: that maths is not a girls’ subject. Other studies have looked at stereotype threat in relation to racial stereotypes.

Growth mindset is the belief that intelligence can be improved; not everyone has it, and others have a “fixed mindset”. Many people will tell you that they are just not a maths person. The paper states that mindsets can predict maths/science performance over time, and can mitigate negative effects such as stereotype threat.

Both are interesting and seem plausible. Some of the suggested strategies for reducing stereotype threat and/or increasing growth mindset are:

  • feedback should emphasise the high standards of the test, and that the student has the potential to meet them
  • frame high-stakes tests as “assessing current skills and not long-term potential to learn”
  • praise effort and process, not intelligence
  • describe great mathematicians and scientists as people who loved and devoted themselves to the subject (not born geniuses)

Evidence in teaching – notes from the reading group

Suzi read Why “what works” won’t work: evidence-based practice and the democratic deficit in educational research, Biesta, G 2007 and a chapter by Alberto Masala from the forthcoming book From Personality to Virtue: Essays in the Philosophy of Character, ed Alberto Masala and Jonathan Webber, OUP, 2015

Biesta gives what is broadly an argument against deprofessionalisation in the context of government literacy and numeracy initiatives at primary school level. I found the main argument somewhat unclear. It was most convincing when discussing the difficulty of defining what education is for, which makes it difficult to test whether an intervention has worked. It talks at length about John Dewey and his description of education as a moral practice and of learning as reflective, experimental problem solving.

“A democratic society is precisely one in which the purpose of education is not given but is a constant topic for discussion and deliberation.”

Masala’s paper is on virtue/character education but is of wider interest as it talks very clearly about educational theory. I found particularly useful in this context the distinction between skill as a competence (defined by performance, so easily testable) and skill as mastery (defined by a search for superior understanding and less easily tested), and the danger of emphasising competence.

Hilary read Version Two: Revising a MOOC on Undergraduate STEM Teaching, which briefly outlined some key approaches and intended developments in a Coursera MOOC aimed at STEM graduates and post docs interested in developing their teaching.

The author of the blog post is Derek Bruff, director of the Vanderbilt University Center for Teaching and senior lecturer in the Vanderbilt Department of Mathematics, with interests in agile learning, social media and SRS, amongst other things.

Two key points:

  1. MOOC-centred learning communities – the MOOC adopted a facilitated blended approach, building on the physical groupings of graduate student participants by facilitating 42 learning communities across the US, UK and Australia, using face-to-face activities to augment the course materials and improve completion rates.
  2. Red Pill: Blue Pill – adopting the metaphor used by George Siemens in the Data, Learning and Analytics MOOC to give two ways to complete the course – either an instructor-led approach which was more didactic and focussed on the ability to understand and apply a broad spectrum of knowledge OR a student-directed approach which used peer graded assignments and gave the students the opportunity to pick the materials which most interested them, and so focus on gaining a deeper but less comprehensive understanding of the topic.

Final take away – networked learning is hard, as would be the logistics of offering staff / student development opportunities as online and face-to-face modules, with different pathways through the materials, but interesting …

Steve read Building evidence into education, 2013 report by Ben Goldacre for the UK government

Very accessible summary of the case for evidence-based pedagogy in the form of large-scale randomised controlled trials. Compares current ‘anecdote/authority’ edu research with past medical work – lots of interesting analogies. Focused on primary/secondary education but some ideas can transfer to higher – although would be more challenging.

Presents counterarguments to a number of common arguments against the RCT approach – it IS ethical if comparing methods where you don’t know which is best (and if you do know, why bother trialling?!). Difficulty in measuring is not a reason to discount, RCTs are a way to remove noise. Talks about importance of being aware of context and applicability. Uses some good medical examples to illustrate points.

Sketches out an initial framework – teachers don’t need to be research experts (doctors aren’t), should be research-focused team leading and guiding with stats/trials experts etc.

Got me thinking – definitely worth a read.

Roger read “Using technology for teaching and learning in higher education: a critical review of the role of evidence in informing practice, (2014) by Price and Kirkwood

This study explores the extent to which evidence informs teachers’ use of TEL in Higher Education. It involved a literature review, online questionnaire and focus groups. The authors found that there are differing views on what constitutes evidence which reflect differing views on learning and may be characteristic of particular disciplines. As an example they suggest a preference for large-scale quantitative studies in medical education.
In general evidence is under-used by teachers in HE, with staff influenced more by their colleagues and more concerned about what works rather than why. Educational development teams have an important role as mediators of evidence.

This was a very readable and engaging piece, although the conclusions didn’t come as much of a surprise! The evidence framework they used (page 6) was interesting, with impact categorised as micro (e.g. individual teacher), meso (e.g. within a department) or macro (across multiple institutions).

Mike read Evidence-based education: is it really that straightforward?, 2013, Marc Smith, Guardian Education response to Ben Goldacre

This is a thoughtful and well argued response to Goldacre’s call for educational research to learn from medical research, particularly in the form of randomised controlled trials. Smith is not against RCTs, but suggests they are not a silver bullet.

Smith applauds the idea that we need teachers to drive the research agenda and that we do need more evidence. His argument that it will be challenging to change the culture of teaching to achieve this seems valid, but is not necessarily a reason not to try. The thrust of his argument is that RCTs, whilst effective in medicine, are harder to apply to education due to the complexity of teaching and learning. He believes (and I tend to agree) that cause and effect are harder to determine in the educational context. Smith argues that in medicine there is a specific problem (an illness or condition) and a predefined intended outcome (change to that condition). This can be problematic in the medical context, but is even harder to measure in education. I would add that the environment as a whole is harder to control and interventions more difficult to replicate. Different teachers could attempt to deliver the same set of interventions, but actually deliver radically different sessions to learners who will interact with the learning in a variety of ways. Can education be thought of as a change of state caused by an intervention in the same way we would prescribe a drug for a specific ailment?

All this is not to say that RCTs cannot play a role, but that you have to think about what you are trying to research before choosing your methodology (some of the interventions Goldacre addressed related to specific, quantitatively measurable things like teenage pregnancy rates or criminal activity). Perhaps it is my social scientist bias, but I would still want to triangulate using a range of methods (quantitative and qualitative).

From a personal perspective, I sometimes think that ideas translated from science to a more social-scientific context can lose some scientific validity in the process (though this is maybe more true at the level of theory than of scientific practice). For example, Dawkins translated selfish genes into the concept of cultural memes, suggesting cultural traits are transmitted in the same way as genetic code. Malcolm Gladwell’s tipping point is a metaphor from epidemiology which he applies to the spreading of ideas, bringing much metaphorical baggage in the process. Perhaps randomised controlled trials could provide better evidence for the validity of these theories too?

53 powerful ideas (well, 4 of them at least) – notes from the reading group

This month we picked articles from SEDA’s 53 powerful ideas all teachers should know about blog.

Mike read Students’ marks are often determined as much by the way assessment is configured as by how much students have learnt

Many of the points made in this article are hard to dispute. Institutions and subject areas vary so widely that how marks are determined differs not only between, say, Fine Art and Medicine, but also between similar subjects at the same institution, and between the same subject at different institutions. This may reflect policy or process (eg dropping the lowest mark before calculating the final grade). In particular, Gibbs argues that coursework tends to encourage students to focus on certain areas of the curriculum, rather than testing knowledge of the whole curriculum. Gibbs also feels these things are not always clear to external examiners. He does not feel that the QAA emphasis on learning outcomes addresses these shortcomings.

The article (perhaps not surprisingly) does not come up with a perfect answer to what is a complex problem. Would we expect Fine Artists to be assessed in the same way as doctors? How can we ensure qualifications from different institutions are comparable? Some ideas are explored, such as asking students to write more coursework essays to cover the curriculum, and then marking a sample. This is, however, rejected as something students would not tolerate. The main thing I take from this is that it is important to think carefully about what you really need to assess when designing the assessment (nothing new really). For example, is it important that students take away a breadth of knowledge of the curriculum, or develop a sophistication of argument? Design the assessment to reflect the need.

Suzi read Standards applied to teaching are lower than standards applied to research and You can measure and judge teaching

The first article looks at the difference between the way academics receive training for teaching and for research, and the way teaching and research are evaluated and accredited. Teaching, as you might imagine, comes off worse in all cases. There aren’t any solutions proposed, though the author muses on what would happen if the two were treated in the same way:

“Imagine a situation in which the bottom 75% of academics, in terms of teaching quality, were labelled ‘inactive’ as teachers and so didn’t do it (and so were not paid for it).”

The second argues that students can evaluate courses well if you ask them the right things: to comment on behaviours which are known to affect learning. There didn’t seem to be enough evidence in the article to really evaluate his conclusions.

The argument put at the end seemed sensible: that evaluating for student engagement works well (while evaluating for satisfaction, as we do in the UK, doesn’t).

The SEEQ, a standardised (if long) list of questions for evaluating teaching by engagement, looks like a useful resource.

Roger read Students do not necessarily know what is good for them.

This describes three examples where students and/or the NUS have demanded or expressed a preference for certain things which may not actually be to their benefit in the longer term. He believes that these cases can be due to a lack of sophistication in learners (“unsophisticated learners want unsophisticated teaching”) or a lack of awareness of what the consequences of their demands might be (in policy or practice). The first example is class contact hours. Gibbs asserts that there is a strong link between total study hours (including independent study) and learning gain, but no such link between class contact hours and learning gain; increasing contact hours often means increasing class sizes, which generally means a dip in student performance levels. Secondly he looks at assessment criteria, saying that students are demanding “ever more detailed specification of criteria for marking”, which he states is ineffective in itself for helping students get good marks, as people interpret criteria differently. A more effective mechanism would be discussion of a range of examples where students have approached a task in different ways, and how these meet the criteria. Thirdly he says that students want marks for everything, but evidence suggests that they learn more when receiving formative feedback with no marks, as otherwise they can focus more on the mark than on the feedback itself.

The solution, he suggests, is to make evidence-based judgements which take into account student views but are not entirely driven by them, to try to help students develop their sophistication as learners, and to explain why you are taking a certain approach. This article resonated with me in a number of ways, especially with regard to assessment criteria and feedback. There is an excellent example of practice in the Graduate School of Education where the lecturer provides a screencast in which she goes through an example of a top-level assignment, explaining what makes it so good. She has found that this has greatly reduced the number of student queries along the lines of “What do I need to do to get a first / meet the criteria?”. I also strongly agree with his point about explaining to students the rationale for taking a particular pedagogic approach. Sometimes we can assume that students know why a certain teaching method is educationally beneficial in a particular context, but in reality they don’t. And sometimes students resist particular approaches (peer review, anyone?) without necessarily having insight into how they may be helpful for their learning.

Active learning – notes from reading group

Active learning might be an unhelpfully broad topic but there are some very helpful ideas in these papers.

  • Bonwell, C. (1991), Active learning: creating excitement in the classroom, Eric Digest – The article starts by defining what AL is, the key factor being that students must do more than just listen, e.g. read, write, discuss, problem-solve. It identifies the main barrier to the use of AL as risk, for example that students will not participate, or that the teacher loses control. It suggests ways to address this, for example by trying low-risk strategies such as short, structured, well-planned activities.
  • Prince, M. (2004), Does Active Learning Work? A Review of the Research, Journal of Engineering Education, 93(3), 223-232. Splits active learning into constituent parts and looks at the evidence for (often relatively minor) interventions covering each of these parts, in an attempt to identify what really works. A useful reference for anyone looking for quantitative evidence for active learning type interventions and a useful discussion of what leads to successful (or unsuccessful) problem-based-learning.
  • Jenkins, M. (2010), Active Learning Typology: a case study of the University of Gloucestershire. The paper describes how an ‘active learning’ strategy has been implemented at the University of Gloucestershire. In the first paragraph Jenkins provides some references on active learning to unpack its meaning, which helped us to better understand the term and put it into context – for example, “…the role of the teacher is not to transmit knowledge to a passive recipient, but to structure the learner’s engagement with the knowledge, practising the high-level cognitive skills that enable them to make that knowledge their own” (Laurillard, 2008: 527), page 2. This is then compared with the understanding of ‘active learning’ among staff at the university, who were asked through a survey to identify their conceptions of active learning. The results identified three categories or ‘families’: 1) external (students are active when they learn by doing), 2) internal (students are active when they are engaged in cognitive processes) and 3) holistic (a composite of the two, where active learning is generally investigative, developmental and creative). An interesting perspective is the distinction in interpretation depending on whether the emphasis is placed on the student or the teacher: is active learning what the teacher gets the students to do, or what learning is done by students? The data showed a split between some staff practising ‘active teaching’ and others practising ‘active learning’. The outcome of the project is a framework for staff to work with, which is very useful and identifies common elements of active learning in five categories: co-learning opportunities, authenticity, reflection, skills development, and student support.

Reading group notes: MOOCs

Characteristics of MOOCs (from Wikipedia) – (Roger) – participants distributed, course materials available on the web, built on a connectivist approach, typically free but may charge for accreditation; typical components might be a weekly presentation, discussion questions, suggested further resources, personal reflection and sharing of resources. I also tried registering for Stephen Downes’ Change MOOC – and was interested that the 4 types of activity suggested for the course reflect quite well important aspects of the way I work: these are 1. aggregate, 2. remix, 3. re-purpose and 4. feed forward.

Disrupting College, Clayton M. Christensen, Michael B. Horn, 2011 (Suzi) – Policy paper arguing that we are in for a massive change (a disruption) in the way HE works and that (amongst other things) the only way for existing institutions to take advantage of this is to create autonomous business units to work in this area.

What Can We Learn From Stanford University’s Free Online Computer Science Courses?, Seb Schmoller, 2011 (Suzi) – Seb’s experiences on the Stanford AI course and his thoughts about what this means for the sector. Stanford will be learning a lot and getting well ahead of the game by running these courses. Other institutions will not be able to match their numbers – collaboration may be the only way to compete.

Suggested reading

Reading group notes: Innovation in education

Challenge-based learning (Roger) – the aim of CBL is to build 21st-century skills and make learning relevant by basing it on real challenges to which students find a solution. It is multidisciplinary, collaborative, student-directed and works best in a technology-rich environment. The New Media Consortium carried out a trial with a number of US schools and universities, which found that CBL does build 21st-century skills, it engages students, and teachers felt that it helped students master the material. The CBL website gives an overview of the key elements.

“… who might gain access to hitherto ‘unlearnable’ ideas” – nice phrase from part 1 of the blog post.

What are Learning Analytics (Suzi) – Puts forward the radical/futuristic idea that with the amount of personal data being recorded (videos watched, events attended, blog posts written, etc) and advanced data processing – formal qualifications might no longer be explicitly pursued. Instead some system would look at everything you’ve done and tell you that you are “64% to achieving a phd in psychology, 92% to achieving a masters in science…”
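As a purely illustrative sketch of that idea (the article proposes no algorithm; every activity type, weight and qualification requirement below is invented for the example), such a system might score a person’s logged activities against the requirements of each qualification:

```python
# Toy sketch only -- the activity types and "requirements" are made up,
# purely to illustrate the "percentage towards a qualification" idea.

def progress_towards(activities, requirements):
    """Estimate percentage progress towards a qualification from logged activity."""
    # Count each activity only up to the amount the qualification requires
    earned = sum(min(activities.get(kind, 0), needed)
                 for kind, needed in requirements.items())
    total = sum(requirements.values())
    return round(100 * earned / total)

# Hypothetical personal activity log and made-up qualification requirements
log = {"videos_watched": 40, "blog_posts": 12, "events_attended": 3}
masters_in_science = {"videos_watched": 50, "blog_posts": 10, "events_attended": 5}

print(f"{progress_towards(log, masters_in_science)}% to achieving a masters in science")
# prints "82% to achieving a masters in science"
```

A real learning-analytics system would of course weight activities by evidence of learning rather than simple counts, but the sketch captures the shape of the proposal.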

2011 Horizon Report – Four to Five Years: Learning Analytics (Suzi) – Overview with sensible coherent examples which are basically web stats/web personalisation applied to education. Seems an idea in its infancy.

Ambient learning cities (Suzi) – Learning doesn’t need to be a specialist activity – we can embed it back in our lives through the environments we live in. The only projects I could find around this were museum-focused, crowd-sourcing / crowd-curating collections. The RunCoCo project produces software and a process to manage this.

Suggested reading

  • All three of these short articles, plus one related article of your own choosing: Innovation in Education and Learning (Department of BIS blog): part 1, part 2, part 3

Reading group notes: Models for evaluating e-learning

Evaluating E-learning, A Guide to the Evaluation of E-learning, Graham Attwell, 2006 (Suzi) – Framework of factors to take into consideration, then lists 5-6 clusters of types of evaluation. Not much detail of how these types of evaluation actually work, but may be useful as a starting point. Section 6: SPEAK tool – used in community education – may be useful for “students as agents of change”. Section 9 – evaluating e-learning policies – gives guidelines and prompts for developing a set of questions with which to evaluate a policy. Section 10 – management-oriented evaluation – possibly worth a look during project planning.

Evaluation of e-learning courses – Institute of Education, 2008 (Roger) – Covers evaluating courses that are wholly or partly online. Aims to provide an overview of practical evaluation resources; its target audience is academics at the IOE. Includes a literature review. Recommendations:

  • Plan evaluation before the course starts
  • Collect feedback from all stakeholders including students, tutors, admins and tech support staff. For staff this could be done on an ongoing basis through frequent team meetings, plus an end-of-course survey
  • Collect student feedback during and at the end of the course
  • Consider all relevant aspects of the use of technology for teaching and learning in the course, e.g. usefulness of the content, how well online activities run (timing, sequencing – could be blended – and instructions), and the user experience (levels of engagement, tutor participation, workload)
  • Make use of the specific tools available in typical online platforms, e.g. course statistics in Blackboard, to get an idea of levels of activity (though not quality)

Suggested reading

There was discussion about this on the ALT list in May 2011. References given include:


Reading group notes: Digital literacy

Information and digital competencies in HE (Roger) – read one article in this collection, “HE and the knowledge society. Information and digital competencies” (by Juan de Pablos Pons, 2010). The knowledge society requires changes to teaching models – lecturers are no longer the only source of knowledge, and there are opportunities for technology to help the social dimension of teaching (“socialization of knowledge”). Universities are too compartmentalised; their structures and offer need to reflect more the interdisciplinary nature of modern knowledge and research. Distinguishes between IT competencies (how to use ICT) and “information competencies” (which relate more to Doug Belshaw’s 8 elements, e.g. criticality, creativity, cognition etc.). Lastly, universities need to recognise that they are educating each student for multiple jobs (some of which may not yet exist).

The essential elements of digital literacies, Slideshare (Nic) – This presentation offers a useful description of digital literacy (DL), i.e. “DL is often understood as the ability to participate in a range of activities and creative practices that involve understanding, sharing and creating meaning with different kinds of media and technology”. This emphasises ICT and media. The presentation also introduces a ‘5-step process model’ and a matrix that can be used to map steps to different technologies. It also describes 8 essential elements of DL. Ones I found particularly interesting were: as online texts are not exactly written, cyber literacy needs a different form of critical thinking; society increasingly looks for people who can confidently solve their own problems and manage their own lifelong learning, qualities ICT is believed to promote; and DL must involve systemic awareness of how digital media are constructed. From our discussion, I’ve realised that although the term DL is complex and contested, the topic has an important relationship to TEL. Also, DL goes beyond reading (and writing) texts to actively and collaboratively creating meaning. This is where we, as learning technologists, can engage with it at a practical level, i.e. in the design of TEL activities. See also his 38-page paper on ‘What is digital literacy?’

The SCONUL seven pillars of information literacy – core model (Roger) – aimed at library staff engaged in information skills work. 7 pillars: 1. identify a personal need for info, 2. scope – assess current knowledge and identify gaps, 3. plan – construct strategies for finding info, 4. gather info, 5. evaluate – including reading critically and evaluating info, 6. manage – organise information professionally and ethically (academic integrity), 7. present – including active creation of knowledge.

The SCONUL seven pillars of information literacy – research lens (Suzi) – a rather complicated extended metaphor for digital literacy

The Medium is not the Literacy (Suzi) – article critical of the term from 2002 – the example he gives about news is a bit dated (digital news is more than just reading news stories on a screen now) – but makes a good point about the shortcomings of the term

Digital Natives: Fact or Fiction? (Suzi) – Useful, brief critique of the term “digital natives” stating that it was based on opinion (not science)

Futurity Now: Bruce Sterling on Atemporality (Suzi) – Nice illustration of the changes we face – “old” Feynman on how science works (write down problem, think hard, write down answer) vs what we do now (start by Googling to see if someone already solved it, etc)

Education Technology Standards for Students – Nice clear breakdown of the components of digital literacy (what it should mean for students to be digitally literate) including: creativity, citizenship, critical thinking, research, collaboration, and an understanding of the technology infrastructure.

European e-Competence Framework – “A common European framework for ICT professionals in all industry sectors” (including education)

Suggested reading

Reading group notes: eBooks

How to Make an eBook (Suzi) – How-to guide from Smashing Magazine. Clear and seems to do a good job of covering the most popular formats. Good for getting a grasp on the tech.
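The guide’s point about formats can be made concrete: an EPUB is essentially a ZIP archive of XHTML plus a couple of metadata files. A minimal sketch in Python (illustrative only – a real EPUB also needs a content.opf package document and a navigation file before most readers will open it):

```python
import zipfile

def make_minimal_epub(path):
    """Write a skeletal EPUB container to `path` (illustrative, not reader-ready)."""
    with zipfile.ZipFile(path, "w") as z:
        # Per the EPUB container spec, the first entry must be an
        # UNCOMPRESSED file named "mimetype" with exactly this content.
        z.writestr("mimetype", "application/epub+zip",
                   compress_type=zipfile.ZIP_STORED)
        # container.xml tells the reading system where the package document lives.
        z.writestr("META-INF/container.xml", """<?xml version="1.0"?>
<container version="1.0" xmlns="urn:oasis:names:tc:opendocument:xmlns:container">
  <rootfiles>
    <rootfile full-path="OEBPS/content.opf" media-type="application/oebps-package+xml"/>
  </rootfiles>
</container>""")
        # The actual content is ordinary XHTML - "a website, bundled up into a thing".
        z.writestr("OEBPS/chapter1.xhtml",
                   "<html><body><h1>Chapter 1</h1></body></html>")
```

The quirk worth knowing is the mimetype entry: it must come first and be stored uncompressed, so software can identify the file type by inspecting the first bytes of the archive.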

JISC e-book observatory project  (Suzi) – Survey by JISC written up in 2009, looking at use of e-books particularly by libraries and with reading done on computer screen. Good stats on quite how dissatisfied students are with short-loan collections, and on the expectation for e-books to be provided through the library.

How do e‐book readers enhance learning opportunities for distance work‐based learners? (Roger) – recent article from “Research in LT” (March 2011). Small-scale study. Context: PG work-based DL programmes. e-books/readers pre-loaded with materials were piloted in response to 3 challenges, all of which they were found to effectively address: 1. need for flexibility about when and where to study for highly mobile work-based learners 2. limits of access to key readings e.g. in real libraries or accessible via campus computers 3. maximising benefits of learners’ limited and often fragmented study time. e-book readers pre-loaded with content in EPUB format were given to 28 learners in 2009/10. Methodology used included cognitive mapping. All of the predicted benefits above were realised, in addition to cost-saving as students printed less. Biggest issue was copyright restrictions, which limited the ability to make other essential readings available.

Nielsen Alertbox on information design for e-books (Suzi) – Essentially: write the content as though for print, create indexes and navigation as though for the (mobile) web. Kindle (at least) is still a bit rubbish for texts that involve jumping around rather than linear reading.

The line between book and internet will disappear, Hugh McGuire, September 2010 (Suzi) – The e-book / print book battle is a false one – e-books will become webpages by around 2015. e-books are currently defined by what you can’t do: link or deep-link to them, copy and paste, search them (especially, search a bunch of them at once). In short: e-books do not live on the internet. epub is just a website, bundled up into a thing. Books need APIs (see also: Google Books API and Open Library API)
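McGuire’s “books need APIs” point can be illustrated with the Open Library Books API. A small sketch that just builds the metadata lookup URL for an ISBN – the parameter names are as Open Library documents them at the time of writing and may change:

```python
from urllib.parse import urlencode

def open_library_url(isbn):
    """Return an Open Library Books API URL for looking up a book by ISBN."""
    params = urlencode({
        "bibkeys": f"ISBN:{isbn}",
        "format": "json",   # ask for plain JSON rather than a JavaScript snippet
        "jscmd": "data",    # request full metadata, not just identifiers
    })
    return f"https://openlibrary.org/api/books?{params}"

# e.g. open_library_url("0451526538") gives a URL that returns title,
# authors, cover images and links as machine-readable JSON.
```

Fetching that URL (with `urllib.request` or similar) returns structured data that other sites and tools can link to and remix – exactly the “living on the internet” quality the post argues e-books currently lack.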

Post-Artifact Books and Publishing, Craig Mod (Zak)

Suggested reading for the session