Week 5: A new classification for MOOCs by Gráinne Conole


Gráinne Conole is Professor of learning innovation at the University of Leicester. Her research interests include the use, integration and evaluation of Information and Communication Technologies and e-learning and the impact of technologies on organisational change. She regularly blogs on www.e4innovation.com and is @gconole on Twitter.

She has successfully secured funding from the EU, HEFCE, ESRC, JISC and commercial sponsors. She was awarded an HEA National Teaching Fellowship in 2012. She has published and presented over 1000 conference papers, workshops and articles on topics including the use and evaluation of learning technologies. She has recently published a Springer book entitled ‘Designing for learning in an open world.’ She was involved in the OLDS MOOC on Learning Design and is leading a team for a new Masters in Learning Innovation http://www2.le.ac.uk/study/postgrad/distance.

This post argues that the current discourse around the concept of xMOOCs (primarily based around interaction with content and essentially adopting a behaviourist learning approach) and cMOOCs (which focus on harnessing the power of social media and interaction with peers, adopting a connectivist learning approach) is an inadequate way of describing the variety of MOOCs and the ways in which learners engage with them. It will introduce an alternative means of categorising MOOCs, based on their key characteristics. It will describe how this can be used as part of the 7Cs of Learning Design framework to design more pedagogically informed MOOCs, enhancing the learner experience and ensuring quality.

A fundamental aspect of ensuring a good learner experience is the quality of the course. It is important to distinguish between three main aspects of quality: quality audit, quality assurance and quality enhancement.

In general, quality can be defined as ‘the standard of something as measured against other things of a similar kind; the degree of excellence of something: quality of life’. Therefore, arguably, quality in e-learning is the degree to which it measures up to ‘good learning’ (although that might be construed as a somewhat contentious statement). It certainly points to the notion of excellence and worth.

Quality assurance mechanisms are now requirements in most formal educational institutions and indeed many countries have a requirement for institutions to undergo externally reviewed quality audits on a regular basis. Institutional quality audit aims ‘to contribute, in conjunction with other mechanisms, to the promotion and enhancement of high-quality in teaching and learning’.

The Quality Assurance Agency in the UK describes quality assurance as  ‘the means through which an institution ensures and confirms that the conditions are in place for students to achieve the standards set by it or by another awarding body’, and quality enhancement as ‘the process of taking deliberate steps at institutional level to improve the quality of learning opportunities …. Quality enhancement is therefore seen as an aspect of institutional quality management that is designed to secure, in the context of the constraints within which individual institutions operate, steady, reliable and demonstrable improvements in the quality of learning opportunities’.

Ehlers et al. argue that quality is very much the condition that determines how effectively and successfully learning can take place. They go on to pose the following questions in relation to the quality of MOOCs:

  • What are MOOCs actually aiming at?
  • Can the quality of MOOCs be assessed in the same way as any defined university course with traditional degree awarding processes? Or do we have to take into account a different type of objective with MOOC learners?
  • Are learners mostly interested in only small sequences of learning, tailored to their own individual purposes, signing off and moving on to other MOOCs once their learning objective has been fulfilled?

Discussing MOOCs and quality, Downes argues that:

When we are evaluating a tool, we evaluate it against its design specifications; mathematics and deduction tell us from there that it will produce its intended outcome. It is only when we evaluate the use of a tool that we evaluate against the actual outcome. So measuring drop-out rates, counting test scores, and adding up student satisfaction scores will not tell us whether a MOOC was successful, only whether this particular application of this particular MOOC was successful in this particular instance.

Therefore quality is a fundamental facet that needs to be considered in relation to both the design and delivery of MOOCs. We need to develop better metrics to understand the way in which learners are interacting with MOOCs and hence their experience of them.

Whilst mechanisms to ensure quality are well established in formal education institutions, such mechanisms are not in place, certainly not in any formal sense, for MOOCs. Arguably this is a key issue that needs to be addressed if MOOCs are going to be valuable and viable learning experiences and be sustainable in the longer term.

As mentioned earlier, to date, MOOCs have been classified as either xMOOCs or cMOOCs. I want to argue that such a classification is too simplistic and in this section put forward an alternative mechanism for describing the nature of MOOCs.

I want to suggest that a better classification of MOOCs is in terms of a set of twelve dimensions:

  • the degree of openness
  • the scale of participation (massification)
  • the amount of use of multimedia
  • the amount of communication
  • the extent to which collaboration is included
  • the type of learner pathway (from learner-centred to teacher-centred and highly structured)
  • the level of quality assurance
  • the extent to which reflection is encouraged
  • the level of assessment
  • how informal or formal it is
  • autonomy
  • diversity

The last two dimensions are taken from Stephen Downes. MOOCs can then be measured against these twelve dimensions (Table 1). The following five MOOCs illustrate how different MOOCs map to the dimensions:

  1. Connectivism and Connective Knowledge 2011 (CCK). The course ran over twelve weeks and used a variety of technologies, for example blogs, Second Life, RSS readers and UStream. Course resources were provided using gRSShopper, and online seminars were delivered using Elluminate. Participants were encouraged to use a variety of social media and to connect with peer learners, creating their own Personal Learning Environment and network of co-learners.
  2. Introduction to Artificial Intelligence (AI) 2011 (CS221). The course ran over three months and included feedback and a statement of accomplishment. A small percentage of the enrolled participants also registered for the campus-based Stanford course. The course was primarily based around interactive multimedia resources and is now hosted on the Udacity platform.
  3. OLDS MOOC (Learning Design) 2013 (OLDS). The course ran over eight weeks, with a ninth reflection week. It was delivered using Google Apps, with the main course site built in Google Drive; Google forums and Hangouts were also used. Cloudworks was used as a space for participants to share and discuss their course artefacts and to claim credit for badges against course achievements.
  4. Openness and innovation in elearning (H817). The course is part of the Masters in Open and Distance Education offered by the Open University UK. H817 runs from February to October 2013; however, the MOOC component of the course consists of 100 learning hours spread over seven weeks from March 2013 and is open to a wider audience than those registered on the OU course. The course adopts an ‘activity-based’ pedagogy, with an emphasis on communication through blog postings and the forum. Participants have the opportunity to acquire badges for accomplishments.
  5. Introduction to Openness in Education (OE). The course tutor advocates that “learning occurs through construction, annotation and maintenance of learning artifacts,” which is the philosophy that underpins the design of the course. Participants could acquire badges for various accomplishments.


Table 1: Mapping five courses to the twelve dimensions of MOOCs

Dimension: low / medium / high
Open: medium (H817, OE, AI); high (CCK, OLDS)
Massive: low (OLDS, H817, OE); medium (CCK); high (AI)
Use of multimedia: medium (CCK, OLDS, H817, OE); high (AI)
Degree of communication: low (AI); medium (OLDS, H817, OE); high (CCK)
Degree of collaboration: low (AI); medium (CCK, OLDS, OE); high (H817)
Learning pathway: low (CCK); medium (OLDS, H817, OE); high (AI)
Quality assurance: low (CCK); medium (AI, OLDS, OE); high (H817)
Amount of reflection: low (AI); medium (OLDS, OE); high (CCK)
Certification: low (CCK); medium (OLDS, AI); high (OE)
Formal learning: low (AI, CCK); medium (OLDS); high (H817, OE)
Autonomy: medium (H817, OE); high (CCK, OLDS, AI)
Diversity: medium (H817, AI, OLDS); high (CCK, OE)


The table demonstrates that, in terms of the twelve dimensions, the five MOOCs illustrate examples of low, medium and high degrees of each. I would argue that at a glance this classification framework gives a far better indication of the nature of each MOOC than the simple classification as xMOOCs and cMOOCs.
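
As a rough illustration (not from the original post), a MOOC's twelve-dimension profile can be captured as a simple mapping, which makes it easy to contrast two courses programmatically. The dimension names follow the framework above; the example ratings for CCK and AI are illustrative assumptions rather than definitive readings of Table 1.

```python
# Sketch: a MOOC profile as a mapping from dimension to a low/medium/high
# rating, plus a helper to list where two profiles diverge.

DIMENSIONS = [
    "open", "massive", "multimedia", "communication", "collaboration",
    "learning pathway", "quality assurance", "reflection", "certification",
    "formal learning", "autonomy", "diversity",
]

# Illustrative ratings (assumed for this example).
cck = {
    "open": "high", "massive": "medium", "multimedia": "medium",
    "communication": "high", "collaboration": "medium",
    "learning pathway": "low", "quality assurance": "low",
    "reflection": "high", "certification": "low",
    "formal learning": "low", "autonomy": "high", "diversity": "high",
}

ai = {
    "open": "medium", "massive": "high", "multimedia": "high",
    "communication": "low", "collaboration": "low",
    "learning pathway": "high", "quality assurance": "medium",
    "reflection": "low", "certification": "medium",
    "formal learning": "low", "autonomy": "high", "diversity": "medium",
}

def contrast(a, b):
    """Return the dimensions on which two MOOC profiles differ."""
    return [d for d in DIMENSIONS if a[d] != b[d]]

# Lists the dimensions on which a cMOOC (CCK) and an xMOOC (AI) differ,
# showing at a glance why a single x/c label is too coarse.
print(contrast(cck, ai))
```

Even with assumed ratings, the point of the sketch holds: the two courses differ on most dimensions but agree on others, which a binary xMOOC/cMOOC label cannot express.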

The MOOC criteria described in this post fit under the Conceptualise C of the 7Cs of Learning Design framework; they can be used to plan the design of a MOOC against these twelve criteria. Table 2 shows how the criteria can be used to characterise a Continuing Professional Development course for medics. The course is informal and is aimed at medics in a local authority in the UK.


Table 2: Example of using the MOOC criteria in the design of a course

Dimension Degree of evidence
Open High – The course is built using open-source tools and participants are encouraged to share their learning outputs under a Creative Commons licence.
Massive Low – The course is designed for Continuing Professional Development for Medics in a local authority.
Use of multimedia High – The course uses a range of multimedia and interactive media, along with an extensive range of medical OER.
Degree of communication Medium – The participants are encouraged to contribute to a number of key debates on the discussion forum, as well as keeping a reflective blog on how the course relates to their professional practice.
Degree of collaboration Low – The course is designed for busy working professionals, so collaboration is kept to a minimum.
Learning pathway Medium – There are two structured routes through the course – an advanced and a lite version.
Quality Assurance Medium – The course is peer-reviewed prior to delivery.
Amount of reflection High – Participants are asked to reflect continually during the course; their personal blogs are particularly important in this respect.
Certification Medium – Participants can obtain a number of badges on completion of different aspects of the course and receive a certificate of attendance.
Formal learning Low – The course is informal and optional.
Autonomy High – Participants are expected to work individually and take control of their learning; there is little in the way of tutor support.
Diversity Low – The course is specialised for UK medics in one local authority.


The 7Cs framework can be used both to design and evaluate MOOCs. The tools and resources associated with each of the Cs enable the designer to make more informed design decisions. The evaluation rubric under the Consolidate C enables them to ensure that the design is fit for purpose, hence ensuring the quality of the MOOCs and the ultimate learner experience.
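
To illustrate how the criteria might support a design workflow, here is a hypothetical sketch of a pre-delivery checklist: each of the twelve criteria gets a rating plus a line of supporting evidence, mirroring the layout of Table 2. The function and data layout are my own invention for illustration, not part of the 7Cs toolkit; the example profile paraphrases Table 2.

```python
# Hypothetical sketch: checking a course design against the twelve MOOC
# criteria before delivery. Each criterion maps to (rating, evidence).

CRITERIA = [
    "open", "massive", "multimedia", "communication", "collaboration",
    "learning pathway", "quality assurance", "reflection", "certification",
    "formal learning", "autonomy", "diversity",
]

# Paraphrase of the CPD-for-medics design in Table 2.
cpd_design = {
    "open": ("high", "open-source tools; outputs shared under CC"),
    "massive": ("low", "aimed at medics in one local authority"),
    "multimedia": ("high", "interactive media plus medical OER"),
    "communication": ("medium", "forum debates and reflective blogs"),
    "collaboration": ("low", "kept minimal for busy professionals"),
    "learning pathway": ("medium", "advanced and lite structured routes"),
    "quality assurance": ("medium", "peer-reviewed before delivery"),
    "reflection": ("high", "continual reflection via personal blogs"),
    "certification": ("medium", "badges plus certificate of attendance"),
    "formal learning": ("low", "informal and optional"),
    "autonomy": ("high", "individual work, little tutor support"),
    "diversity": ("low", "specialised for UK medics"),
}

def review(design):
    """List criteria that are missing a rating or supporting evidence."""
    gaps = []
    for c in CRITERIA:
        rating, evidence = design.get(c, (None, ""))
        if rating not in ("low", "medium", "high") or not evidence:
            gaps.append(c)
    return gaps

print(review(cpd_design))  # -> [] (every criterion is rated and evidenced)
```

A designer could run such a check as part of the Consolidate C, flagging any dimension that has been left unconsidered before the course goes live.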

It is evident that there are a number of drivers impacting on education. Firstly, universities are increasingly looking to expand their online offerings and make more effective use of technologies. Secondly, there is increasing demand from higher student numbers and greater diversity. Thirdly, there is a need to shift from knowledge recall to the development of skills to find and use information effectively. In this respect, there is a need to enable learners to develop 21st-century digital literacy skills to equip them for an increasingly complex and changing societal context. Finally, given the proliferation of new competitors, there is a need for traditional institutions to tackle new competitive niches and business models. MOOCs represent a sign of the times; they instantiate an example of how technologies can disrupt the status quo of education and are a forewarning of further changes to come. Whether or not MOOCs will live up to the current hype is a moot point; what is clear is that we need to take them seriously. More importantly, for both MOOCs and traditional educational offerings, we need to make more informed design decisions that are pedagogically effective, leading to an enhanced learner experience and ensuring quality.

Finally, the key value of MOOCs for me is that they are challenging traditional educational institutions and making them think about what they are offering, how it is distinctive and what the unique learner experience will be at their institution. As Cormier states:

When we use the MOOC as a lens to examine Higher Education, some interesting things come to light. The question of the ‘reason’ for education comes into focus.

Furthermore, UNESCO estimates that more than 100 million children cannot afford formal education; for such learners, MOOCs could provide a real lifeline, and more broadly they open up access to learning for millions. As Creelman notes:

Whatever you think of them they are opening up new learning opportunities for millions of people and that is really the main point of it all.

So for me the value of MOOCs in promoting social inclusion, coupled with the way they are making traditional institutions look harder at what they are providing their students, signifies their importance as a disruptive technology. Therefore, whether or not they survive, if MOOCs result in an opening up of education and a better quality of learner experience, that has to be for the good.

A more detailed paper, along with the references and links, on this post is available on Google Drive.

12 thoughts on “Week 5: A new classification for MOOCs by Gráinne Conole”

  1. The article is great and will help us to judge and analyze MOOCs much more carefully.

    One of the key issues seems to be the aspect of support, which also covers pre-assessment – “is the course the right one for me?” The question is whether people can judge whether a course is suitable for an individual, and what a learner should do before joining the course. This might help to reduce drop-outs, which are usually up to 90% in the beginning.

    Another aspect is contextualization – for me, this is the key to successful MOOCs: how can generic MOOC contents be transferred to the environment/context of a learner? This of course needs support and is not solved by most MOOCs at the moment.

  2. Hi, yes, good points. I think the learning design visualisations we have developed can help inform learners of the nature of a course and whether or not it is appropriate for them. For example, our course features enable teachers to clarify what the core principles of the course are, whereas the Activity Profile helps give learners an indication of how much time they will spend on different types of activities.

  3. On reflection, I think the dimension ‘Certification’ needs to be broadened to ‘Assessment and Certification’. Low on this dimension would be little or no feedback; medium, some degree of formative and/or summative feedback; and high, elements of both assessment and certification.


  9. I know this post is about MOOCs, but the criteria could be used for any online course (presumably adding the open/massive dimensions could, in theory, turn any online course into a MOOC).

