Martin Weller is Professor of Educational Technology at the UK Open University. He chaired the OU’s first major elearning course in 1999, with 15,000 students, and has been Director of the VLE and SocialLearn projects. His research interests are in open education, the impact of new technologies, and digital scholarship. His recent book, The Digital Scholar, was published by Bloomsbury Academic under a Creative Commons licence. He blogs at edtechie.net.
I’m tempted to suggest that above all MOOCs should hang a sign that declares “abandon all quality measures”, because most of the ones we have developed for formal education don’t apply in MOOCs. We have developed a set of quality measures based on a specific relationship between the education provider and the student. That relationship is fundamentally altered in a MOOC, and so those existing measures are no longer applicable.
Let us consider why we measure quality. Largely it is to verify that aims and intentions have been met. The aims of the institution may be to have a sufficient number of students, for them to stay with and pass the course, and for the institution’s reputation to be upheld. The educator in charge of the course may have similar aims, along with those of personal interest. The student will have the aim of learning what they set out to learn, passing the course, enjoying the experience and gaining useful skills.
We therefore develop quality measures and procedures that monitor these intentions. These could be student completion rates, student satisfaction scores, external assessment of course content, checks against external benchmarks, etc. In a MOOC many of these intentions are altered, either radically or subtly. At the moment it’s not entirely clear what the intentions of institutions are – is it to attract more formal students, to provide a public good, to make money? In the early experimental stage it might be a confused mixture of all of these, combined with a feeling of needing to do something about those bloody MOOCs. For educators the intention might be to experiment with curriculum or pedagogy, to gain a personal reputation, or to pursue personal development.
The really interesting difference, though, is in the intentions of the learner. While some of the original aims may remain – it may help in career development, for instance – others are exaggerated or absent. The need to pass the course is drastically reduced, because progression to subsequent courses does not depend on it and, most importantly, because there is no financial commitment to passing. The personal interest in learning, I would suggest, is heightened. In conventional courses there will be a wide range of different types of learner, but in MOOCs the presence of what we often term ‘leisure learners’ is much higher. They’re nearly all leisure learners – they don’t have to do this, after all; it’s something that is competing with watching the TV or walking the dog. And a whole new class of learner exists in MOOCs that you rarely see in formal education. These are what we might term drive-by learners (after Jim Groom’s drive-by assignments): learners who sign up because they can. It costs nothing to sign up, so they can take a look, see if they like anything and move on. They may dip in and out over the course, taking the bits they find engaging, or they may not turn up at all. The financial and emotional commitment to formal education is much higher, making drive-by learners very rare there. There is probably another, very small group also absent from formal education: the antagonistic learner. These are learners who know they won’t like the course and take it for precisely that reason, so they can highlight its numerous faults.
If we consider these new types of learner and their intentions, then the existing quality measures don’t map across well. For instance, very few of these learners have course completion as a major goal. And progression on to other courses is not yet a metric in a pick-and-choose world, although we will undoubtedly see increasing pressure to make MOOC learners stick with a particular brand of MOOC provider, just as we see with computer or phone providers.

With such a broad range of learners, MOOCs find themselves up against a tough comparison with formal education. In many ways higher education filters learners before they arrive; even at the Open University, the commitment to study performs a significant filtering function. To use Weinberger’s phrase, higher education filters on the way in, whereas MOOCs filter on the way out. The quality measures are therefore very different. Student satisfaction rates, when you are open to enrolment and filter on the way out, are unlikely to compare favourably with a system where filtering has already taken place.

Which is not to say we should follow my advice and abandon all quality measures, but the existing measures need to be recognised for what they are – tools designed for an entirely different purpose. Some measures may still apply: for example, I ran a MOOC as part of an existing Masters programme, and it was thus subject to the same quality procedures in production as any OU course, including two rounds of critical reading by an external audience, full editing and surveying. But when we filter on the way out, and operate in the open, we are opened up to new types of quality measure. These could be altmetrics-style measures (what kind of ‘buzz’ does a course create, what is the public reaction of participants) or analytics (how many people come back, what is the dwell time, the bounce rate, etc.). But the comparisons here need to be with other MOOCs, not with formal education.
One last plea – MOOCs are still the new kid on the block. Let them make mistakes, let them be experimental, let people play and explore in this space without tying it down with the kinds of quality overhead we already have in formal education.