CSEDU, Day 2, Invited Talk, “How are MOOCs Disrupting the Educational Landscape?”, (#CSEDU14 #AdelEd)

I’ve already spent some time with Professor Hugh Davis, from Southampton, and we’ve had a number of discussions around some of the matters covered today, including the issue that arises when you make your slides available before a talk and people react to the content of the slides without having the context of the talk! (This is a much longer post for another time.) Hugh’s slides are available at http://www.slideshare.net/hcd99.

As Hugh noted, this is a very timely topic, but he’s planning to go through the slides at speed so I may not be able to capture all of it. He tweeted his slides earlier, as I noted, and his comment that he was going to be debunking things earned him a minor firestorm. But, to summarise, his answer to the title question is “not really, probably” – we’ll come back to this. For those who don’t know, Southampton has about 25,000 students, is in the Russell Group and the UK’s Top 20, with a focus on engineering and oceanography.

Back in 2012, the VC came back infused with the desire to put together a MOOC (apparently, Australians talked them into it – sorry, Hugh) and in December 2012 Hugh was called in and asked to do MOOCs. Those who are keeping track will know that there was a lot of uncertainty about MOOCs in 2012 (and there still is), so the meeting called for staff to talk about this was packed – in a very big room. This reflected real excitement on the part of people – which waving around “giant wodges” of money to do blended learning had failed to engender, interestingly enough. Suddenly, people wanted to do blended learning, as long as you called it a MOOC. FutureLearn was formed and things went from there. (FutureLearn now has a lot of courses in it but I’ve mentioned this before. Interestingly, Monash is in this group so it’s not just a UK thing. Nice one, Monash!)

In this talk, Hugh’s planning to introduce MOOCs, discuss the criticisms, look at Higher Ed, ask why we are investing in MOOCs and what we can get out of it, and then review the criticisms again. Hugh then defined what the term MOOC means: a free, open-registration, on-line course with 10,000+ participants, which runs at a given time with a given cohort, without any guarantee of accreditation. (We may argue about this last bit later on.) MOOCs are getting shorter – 4-6 weeks is now the average – mostly due to fears of audience attrition over time.

The dreaded cMOOC/xMOOC timeline popped up from Florida Institute of Technology’s History of MOOCs:

[Timeline image: the development of cMOOCs and xMOOCs]

and then we went into the discussion of the stepped xMOOC, which is instructor-led with a well-defined and assessable journey, and the connectivist cMOOC, where the network holds the knowledge and the learning comes from connections. Can we really separate MOOCs into such distinct categories? A lot of xMOOC forums show cMOOC characteristics, and you have to wonder how much structure you can add to a cMOOC without it getting “x”-y. So what can we say about the definition of courses? How do we separate courses you can do any time from the cohort structure of the MOOC? The synchronicity of human collision is a very connectivist idea, and it is embedded implicitly in every xMOOC because of the cohort.

What do you share? Content or the whole course? In MOOCs, the whole experience is available to you rather than just bits and pieces. And students tend to dip in and out when they can, rather than just eating what is doled out, which suggests that they are engaging. There are a lot of providers, who I won’t list here, but many of them are doing pretty much the same thing.

What makes a MOOC? Short videos, on-line papers, on-line activities, links to external resources, discussions and off-platform activity – but we can no longer depend upon students being physical campus students, and thus we can’t guarantee that they share our (often privileged) access to resources such as published journals. So Southampton often offers a précis of things that aren’t publicly available. Off-platform activity is an issue for people who are purely on-line.

If you have 13,000 people you can’t really offer to mark all their essays, so assessment has to depend upon self-motivated students who want to understand what is going on – self-evaluation and peer review have to be used. This is great, according to Hugh, because we have a great opportunity to find out more about peer review than we ever have before.

What are the criticisms? Well, they’re demographically pants – most of the students are UK-based (77%), with the US a long way down (2%) and minor representation from everywhere else. This isn’t isolated to this MOOC: 70% of MOOC users come from the home country, regardless of where the MOOC is run. Of course, we also know that the people who do MOOCs tend to have degrees already – roughly 70%, from the MOOCs@Edinburgh2013 Report #1. These are serial learners (philomaths) who just love to learn things but don’t necessarily have the time or inclination (or resources) to go back to Uni. But of those who register, many don’t do anything, and those who do drop out at about 20% a week – more weeks, more drop-out. Why didn’t people continue? We’ll talk about this later. (See http://moocmoocher.wordpress.com) But is drop-out a bad thing? We’ll come back to this.

Then we have the pedagogy, where we attempt to put learning design into our structure in order to achieve learning outcomes – but this isn’t leading-edge pedagogy and there is no real interaction between educators and learners. There are many discussions, and they happen in volume, but this discussion involves only about 10% of the community, with 1% making the leading and original contributions. Then again, 1% of 10-100,000 can be a big number compared to a standard classroom.

What about the current Higher Ed context – let’s look at “The Avalanche Report”. Basically, the education business is doomed!!! DOOOMED, I tell you! Which is hardly surprising for a report that mostly originates from a publishing house that wants to be a financially successful disruptor. Our business model is going to collapse! We are going to have our Napster moment! Cats lying down with dogs! In the HE context, fees are going up faster than the value of a degree (across most of the developed world, apparently). There is an increased demand for flexibility of study, especially from professionals who want to study in the time that they have. The alternative educational providers are also cashing up and growing. With all of this in mind, on-line education should be a huge growing market, and this is what the Avalanche report uses to argue that the old model is doomed. To survive, Unis will have to either globalise or specialise – no room in the middle. MOOCs appear to be the vanguard of the on-line program revolution, which explains why there is so much focus on them.

Is this the end of the campus? It’s not the end of the pithy slogan, that’s for sure. So let’s look at business models. How do we make money on MOOCs? There’s freemium, where there are free bits and value-added bits; the value-adds can be statements of achievement or tutoring. There are also sponsored MOOCs, where someone pays us to make a MOOC (for their purposes) or pays us to make a MOOC they want (that we can then use elsewhere). Of course, there’s also just the old “having access to student data”, which is a very tasty dish for some providers.

What does this mean for Southampton? Well, it’s a kind of branding and advertising for Southampton to extend their reputation. It might also generate new markets: bring people in via Informal Learning, move them to Non-Formal Learning, then up to modules of Formal Learning and, eventually, whole programmes of formal learning. Hugh thinks this is optimistic, not least because not many people have commodified their product into individual modules for starters. Hugh thinks it’s about £60,000 to make a MOOC, which is a lot of money, so you need a good business model to justify dropping this wad of cash. But you can get that 60K back from enough people paying a small fee. Maybe on-line learning is another way to get students beyond the traditional UK “boarding school” degrees. But the biggest thing is when people accept on-line certification, as this is when the product becomes valuable to the people who want the credentials. Dear to my heart, of course, is that this also assists in the democratisation of education – which is a fantastic thing.

What can we gain from MOOCs? Well, we can run a chunk of a face-to-face course as a MOOC: the paying students benefit from interacting with the “free attendees” on the MOOC and we have managed to derive value from it. It also allows us to test things quickly and at scale, for rapid assessment of material quality and revision – it’s hard not to see the win-win here. This automatically drives the quality up, because the material is for all of your customers, not just the scraps that you can feed to people who can’t afford to pay. Again, hooray for democratisation.

Is this the End of the Lecture? Possibly, especially as we can use the MOOC for content and flip the classroom, using the face-to-face time for much more valuable things.

There are on-line degrees, and there is a lot of money floating around looking for brands that will go on-line (and by brand, we mean the University of X). Venture capitalists, publishers and start-ups are sniffing around on-line, so there’s a lot of temptation out there and a good brand will mean a lot to the right market. What about fusing this and articulating the degree programme, combining F2F modules, on-line modules, MOOCs, and other aspects?

Ah, the Georgia Tech On-line Masters in Computer Science has been mentioned. This was going to be a full MOOC with free and paying streams, but it’s not fully open, for reasons that I need to put into another post. So it’s called a MOOC but it’s really an on-line course. You may or may not care about this – I do, but I’m in agreement with Hugh.

The other thing about MOOCs is that we are looking at big, big data sets, where these massive cohorts can be used to study educational approaches and what happens when we change learning and assessment at scale.

So let’s address the criticisms:

  1. Pedagogically Simplistic! Really, as simple as a lecture? Is it worse? No, not really – and we have space to innovate!
  2. No support and feedback! There could be – we’d just have to pay for it.
  3. Poor completion rates! Retention is not the aim, satisfaction is. We are not dealing with paying students.
  4. No accreditation! There could be but, again, you’d have to pay for someone to mark and accredit.
  5. This is going to kill Universities! Hugh doesn’t think so, but we’ll have to get a bit nimble. Only those who are not agile and responsive to new business models may have problems – and we may have to do some unbundling.

Who is actually doing MOOCs? The life-long learner crowd (25-65, 50/50 M/F, and nearly always holding a degree). People who are after a skill (PD and CPD). Those with poor access to Higher Education, unsurprisingly. There’s also a tiny fourth cohort of those who are dipping a toe into Uni, who are so small a group as to be insignificant. (The statistics source was questioned, somewhat abruptly, in the middle of Hugh’s flow, so you should refer to the Edinburgh report.)

The patterns of engagement were identified as auditing, completing and sampling, from the Coursera “Emerging Student Patterns in Open-Enrollment MOOCs” work.

To finish up, MOOCs can give us more choice and more flexibility. Hugh’s happy because people want to do online learning and this helps to develop the capacity to produce high-quality on-line courses. This does lead to challenges for institutional strategy: changing beliefs, changing curriculum design, working with the right academic staff (and who pays them), growing teams of learning designers and multimedia producers, legal matters, speed and agility, budget and marketing. These are commercial operations, so you have a lot of commercial issues to worry about! (For our approach, going Creative Commons was one of the best things we ever did.)

Is it the end of the campus? … No, not really, Hugh thinks that the campus will keep going and there’ll just be more on-line learning. You don’t stop going to see good music because you’ve got a recording, for example.

And now for the conclusions! MOOCs are a great marketing device and have good reach to people who were out of reach before. But we can also take high-quality content and re-embed it back into blended learning, use it to drive change in teaching practice, get some big data and build capacity for online learning.

This may be the vanguard of on-line disruption but, if we’re ready for it, we can live with it!

Well, that was a great talk but goodness, does Hugh speak quickly! Have a look at his slides in the context of this post, because I think he’s balanced an optimistic view of the benefits with a sufficiently cynical eye on the weasels who would have us do this for their own purposes.


CSEDU Day 1, Session 1, “Information Technologies Supporting Learning”, Paper 3. (#csedu14 #AdelED)

The final talk, “The Time Factor in MOOCs” was not presented because the speaker didn’t show up. So we talked about other things.

I can only hope that the problem with the speaker was not timezone related!


CSEDU Day 1, Session 1, “Information Technologies Supporting Learning”, Paper 2. (#csedu14 #AdelED)

(I seem to be writing a lot so I’ll break these posts into smaller pieces. If I can fit these two talks into one post, I will. Apologies, dear reader, for the eye strain.)

The second talk was “MOOCs for Universities and Learners”, presented by the irrepressible Su White (@suukii, material available here), from Southampton, who I have had the pleasure to meet before. Manuel Leon, the third author, is one of the PhD students who will be helping out and is also from Barcelona. Southampton has done MOOCs in the FutureLearn context. There’s quite a lot on offer in FutureLearn and Southampton wanted something multi-disciplinary, so they chose Web Science, which is also what MOOCs actually are, so it was all somewhat self-referential in the good sense of reinforcement rather than the bad sense of Narcissus. The overwhelming lesson, not in the paper, is that getting academics to do stuff for this kind of environment is like herding cats once you start dealing with a team of experts. Goodness, they have a MOOC manager. (I don’t even know the poor person but I want to send them a nice calming box of chocolates.) There is a furious level of activity in an engaged on-line community and keeping up with this is very tricky – FAQs really help!

So where is FutureLearn today? There are nearly 30 institutions involved to date. Su sees a strong link with what went before, with OER and student desire for different learning approaches. The team wanted to know what motivated students, wanted to be prepared, and wanted to collect data from real live students. This covers both the institution’s motivations and the student motivations. For the HEI, motivation was assessed by a literature meta-review and qualitative content analysis, while student motivations were examined with a survey on mostly qualitative grounds. The literature meta-review covered more than 60 articles, including journal articles and “grey” literature, with a content analysis of the journals using Herring’s (2004) adaptation of Krippendorff’s (1980) approach to categorising sources. (Grey literature includes magazine articles, but these are curated sources where authorship and provenance are both valued.)

I can’t draw the diagram, but the perspectives were split between journals and grey literature: open movements, evolution in distance education and disruptive innovation on the journal side, and sustainability (are MOOCs just a trend?), quality (will they offer the same quality we get now?) and impact (will they shake and change education?) from the non-academic side, which includes both true believers and sceptics. A lot came out of this, but Su noted the growing cynicism one develops reading the grey literature, which falls more into the zone of dinner-party argument – raising issues that merit deeper exploration while often not getting to that point. Are MOOCs the next stage in the slow progression from correspondence courses through flexible learning and Web 2.0?

So what did they do? An online survey to find out the learners’ motivation to study: who are you, what is your education, what is your MOOC experience and (very importantly IMHO) what is your motivation? (I’m a big fan of Husman 2003 so this is very interesting to me.) For the motivations, the target communities were Spanish, Arabic and English speakers, with wide dissemination – and then the slide flipped so I lost it. 🙂 From the survey, they had 285 participants, mostly male, but that’s probably a cultural artefact given that the Arabic-speaking respondents were 77% male. Overwhelmingly, the platform was Coursera, followed by EdX, Udacity and Khan. They do it because it’s free, because they have a general interest in learning, and because they are interested in the topic. Very few do it to get a taste of the University before enrolling – note to University administrators! Learners value free and open, convenience of time scheduling and mobility, and over 60% are personalising the experience as part of planning their future.

So many issues and questions! Are there really pedagogic possibilities or is this an illusion? How do we deal with assessment? It’s notionally free, but at what cost: reputational damage, cost of production and production values? Are we perpetuating old inequalities by giving people an illusion of value? Some things can’t be done online, and you end up with unequal educators, with star performers creeping into local markets and undermining the value of a local and personalised experience. Finally, we see the old issues of cultural imperialism and a creeping homogeneity that destroys diversity and alternative cultural perspectives. The last thing we want to produce is 99% the same and 1% other as the final step in a growing online community – the other never fares well inside that context. There are, of course, issues with the digital competency of the student, dropout issues and the spectre of plagiarism. Su’s take was that some people will just cheat anyway, so focusing on this is wasted effort to some extent, but certification changes when you make it something that the learner seeks rather than something the MOOC imposes.

MOOCs can be a great catalyst and engine for change – you can try things in there and fold them back into your curriculum. They’ve had 20,000 students and now have a lot of data on students. Hugh Davis will be talking on Wednesday about what they’ve discovered (taster for tomorrow). Institutions are going to have to become more agile and feed back into the curriculum at a speed that we have not necessarily seen before. (In one generation, we’ve gone from courses that last for decades, to courses that last for years, to courses that last for one instance and then mutate? No wonder we’re tired!)

Learners are doing the learning: if they want to be a tourist, then that’s what is going to happen. You can’t force someone to finish a book – we wouldn’t call a book a failure if some people didn’t finish it! Are we being too harsh on the MOOC in how we assess the things that we can measure?

Ok, I can’t keep this short, sorry, readers! The next talk will be in another page.


CSEDU Day 1, Session 1, “Information Technologies Supporting Learning”, Paper 1. (#csedu14 #AdelED)

The first talk, “Overcoming Cultural Distance in Social OER Environments”, was presented by Henri Pirkkalainen, who apparently liked the panel and is a big fan of open stuff. He started with a “Finland in 30 Seconds” slide, but it had cars and ice hockey rather than sauna, with a tilt of the hat to PISA educational rankings, metal music and Marimekko. Oh, and sauna. More seriously, the social environment of Finland comes with an expectation of how social resources will be used. While most people think of Finland as snow, this year it’s very rainy due to … well, you know. Today’s topic is Open Educational Resources and the complications and opportunities of open and on-line environments. Henri is going to look at the barriers that teachers face in adopting these resources. We don’t just share resources, we share practices, and these are as important as plain assets. (We see this problem in the technological development of classrooms, where we confuse putting assets into a room with actually delivering a technology.) There are a large number of open resources, including those with social and collaborative aspects. The work presented today is based on social OER environments such as the Open Discovery Space (ODS). Consider the basic user and usage experience, which is often very teacher-centric and may allow some student customisation. What can teachers do with this? Explore new ideas and practices, and look at and share resources, including lesson plans – but there are many different contexts for this. The example given for context was the difference between the Finnish and German instructions for “How to Sauna”. (I note that the Finnish instructions are very authentic: take branch and beer, sauna.) The point is that didacticism varies by culture and examples may not be relevant when transferred from one culture to another, including manipulations of the pedagogy involved that can lead, unexpectedly, to success or failure.

For the objectives and methodology, the investigation took place over 92 workshops in 19 countries, with 2,300 participants. The team used a questionnaire with open questions on overcoming challenges in organisational, quality, social and culturally-related OER barriers. What enablers and interventions could be used to deal with these barriers? In the end, there were 1,175 respondents (a 49% response rate) and this was analysed using factor analysis to construct a summated scale for the cultural distance barrier (for follow-up work over three years), with a generalised linear model to predict the cultural distance barrier.

The results? Some barriers group together, combining problems with culturally distant beliefs, lack of trust towards other authors, lack of information on the context of digital resources, and a desire to contribute primarily to discussions in your native language.

Using this knowledge that cultural differences exist, can we predict the perceived cultural distance in social OER environments – using material created by contextual others, collaborating in foreign languages, and dealing with foreign methods and issues? What did the GLM predict? The age and nationality of the participant can predict this barrier – the cultural distance barrier does NOT depend upon whether the respondent is a teacher or a learner. (Unsurprisingly, younger people perceive less of a barrier.) So teachers are no more likely to perceive a cultural barrier than students are. Even where the barriers exist, they aren’t incredibly significant, but Bulgaria, Croatia and Latvia have more of a problem than most – no idea why. Something that triggered my Spidey sense was that Finland was one of the lowest, which (always) makes me wonder about the bias in the questionnaire. (The other low pegger was the Netherlands.)
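To make that modelling step concrete, here’s a minimal sketch of the kind of analysis described – a summated cultural-distance score regressed on age, nationality and role with a generalised linear model. This is my own illustration using statsmodels; the data, column names and country codes are invented and it is not the authors’ actual analysis.

```python
# A rough sketch (not the authors' analysis) of predicting a summated
# cultural-distance score from age and nationality with a GLM.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "cultural_distance": [2.1, 3.4, 1.8, 4.0, 2.7, 3.1, 2.4, 3.8],  # summated scale
    "age":               [24, 51, 30, 58, 41, 35, 28, 60],
    "nationality":       ["FI", "BG", "NL", "BG", "NL", "FI", "FI", "BG"],
    "role":              ["teacher", "teacher", "learner", "teacher",
                          "learner", "learner", "teacher", "learner"],
})

# Gaussian GLM: age and nationality as predictors, with role included to check
# that it adds nothing, as the talk reported.
model = smf.glm("cultural_distance ~ age + C(nationality) + C(role)", data=df).fit()
print(model.summary())
```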

How can we address this? Let’s use technology to support multi-linguality properly – including the metadata! Let’s make the functionality support sharing and collaborating with the people that you actually want to meet. Localise your interface! Make your metadata rich, versatile and full! Have some good quality mechanisms in place.

There are issues that technology can’t solve – broadening resources to fit context which requires knowledge of the community among other things. There are two more years of these workshops where they can work on finding the reasons behind all of this.

 


SIGCSE Day 3, “MOOCs”, Saturday, 10:45-12:00pm, (#SIGCSE2014)

This session is (unsurprisingly) of very great interest to the Adelaide Computer Science Education Research group and, as the expeditionary force of CSER, I’ve been looking forward to attending. (I’d call myself an ambassador except I’m not carrying my tuxedo.) The opening talk was “Facilitating Human Interaction in an Online Programming Course”, presented by Joe Warren from Rice University. They’ve been teaching a MOOC for a while and they had some observations to share on how to make things work better. The MOOC is an introduction to interactive programming in Python, based on one that Joe had taught for years, which was built around building games. The first on-line session was in Fall 2012, after a face-to-face test run. 19,000 students completed three offerings over Fall ’12, Spring ’13 and Fall ’13.

The goal was to see how well they could put together a high-quality on-line course. They used recorded videos and machine-graded quizzes, with discussion forums and peer-assessed mini-projects, and provided a help desk staffed by course staff. CodeSkulptor was the key tool to enable human interaction: a browser-based IDE for Python which was easy to set up, with cloud-saved URLs for code that were easy to share. (It’s difficult to have novices install tools without causing problems, and code visibility is crucial for sharing.) Because they needed a locally run version of Python for interactivity (games focus), they used Skulpt, which translates Python into JavaScript, combined it with CodeMirror, an editor, and ran it all in the browser. CodeSkulptor was built on top.

Students could write code and compile it in the browser, but when they save it a hash is generated for unique storage in a cloud-based account with an access URL – anyone can run your code if you share the URL. (The URL includes a link to CodeSkulptor.org.) CodeSkulptor has had about 12 million visits with 4 million files saved, which is pretty good. The demo shown had keyboard input, graphic images and sound output – and for those of you who know about these things, this is a great result without having to install a local compiler. The browser-based solution works pretty well.
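To make the save-and-share mechanism concrete, here’s a minimal sketch of the idea as I understand it – hash the code to get a stable name, then turn that into a shareable URL. This is my own illustration, not Rice’s implementation; the URL scheme and helper names are invented, and the in-memory dictionary stands in for the cloud-based account.

```python
import hashlib

BASE_URL = "http://www.codeskulptor.org/#"  # hypothetical URL scheme for illustration

def save_snippet(source: str, store: dict) -> str:
    """Hash the source, store it under that name and return a shareable URL."""
    digest = hashlib.sha1(source.encode("utf-8")).hexdigest()[:12]
    name = f"user_{digest}.py"
    store[name] = source  # stand-in for the cloud-saved file
    return BASE_URL + name

cloud = {}
url = save_snippet("print('Hello, CodeSkulptor')", cloud)
print(url)  # anyone with this URL loads exactly the same code
```

Because the name is derived from the content, whoever opens the URL sees exactly the code that was saved – which is the property that matters for sharing and grading.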

Peer assessment was based on weekly mini-projects, where the Coursera course provided URLs for CodeSkulptor and a grading rubric which was sent to students in a web form. The system isn’t anonymised, but students knew it was shared and were encouraged to leave any personal details out of their comments if they wanted to be anonymous (the file handles were anonymised). (Apparently, the bigger problem was inappropriate content, rather than people worrying about anonymity.) The students run the code and assess it in about 10 minutes, so it takes about an hour to assess 6 peers. The big advantage is that the code from your URL is guaranteed to run on the grader’s machine, because it’s the same browser-based environment. A very detailed rubric was required to ensure good grading: lots of small score items with little ambiguity. The rubric didn’t leave much room for judgement – the students were effectively human grading machines. Why? Having humans grade it was an educational experience, and they learned from reading and looking at each other’s programs. Also, machine graders have difficulty with animated games, so this is a generalisable approach.
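As an illustration of “lots of small score items with little ambiguity”, here is a hypothetical rubric of the kind described, expressed as data plus a trivial scoring function. The items and point values are mine, not the actual course rubric.

```python
# Hypothetical fine-grained rubric: many small yes/no items, little room for judgement.
RUBRIC = [
    ("Program runs without errors", 1),
    ("Canvas is the required size", 1),
    ("Ball bounces off the top and bottom walls", 1),
    ("Score increments when the paddle hits the ball", 1),
    ("Both paddles respond to the arrow keys", 2),
]

def peer_score(ticked: list[bool]) -> int:
    """Sum the points for every rubric item the peer grader ticked."""
    return sum(points for (_, points), ok in zip(RUBRIC, ticked) if ok)

print(peer_score([True, True, True, False, True]))  # 5 out of a possible 6
```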

The Help Desk addressed the problem of getting timely expert help to students – a big issue at this scale. The Code Clinic had a custom e-mail channel that focused on coding problems (because posting code was not allowed under the class Honour Code). Solutions for common problems were then shared with the rest of the class via the forum. (It looked like the code hash changed every time the code got saved? That is a little odd from a naming perspective, if true.)

How did the Code Clinic work? In Spring 2013 they had about 2,500 help requests. On due days, response time was about 15 minutes (it usually averaged 40+), and the overall handling time averaged 6 minutes (open e-mail, solve problem, respond). Over 70 days, 3-4 staff sent about 4,000 e-mails. That handling time for a student coding request is very short, and it’s a good approach to handling problems at scale. That whole issue about response time going DOWN on due-date days is important – that’s normally where I get slammed and slow down! It’s the most popular class at the Uni, which is great!

They chose substantial human-human interaction, using traditional methods online with peer assessment and help desks. MOOCs have some advantages over in-person classes – the forums are active because of their size, and the help desk scaling works really effectively because it’s always in use and hence it makes sense to always staff it. The takeaway is that you have to choose your tools well and then you’ll be able to do some good things.

The second talk was “An Environment for Learning Interactive Programming”, also from Rice, and presented by Terry Tang. There was a bit of adverblurb at the start but Terry was radiating energy so I can’t really blame him. He was looking at the same course as mentioned in the previous talk (which saves me some typing, thank you session organisers!). In this talk, Terry was going to focus on SimpleGUI, a browser-based Python GUI library, and Viz mode, a program visualisation tool. (A GUI is a Graphical User Interface. When you use shapes and windows to interact with a computer, that’s the GUI.)

Writing games requires a fully-functional GUI library so, given the course is about games, this had to be addressed! One could use an existing Python library, but these are rarely designed to support Python in the browser and many of them have APIs that are too complicated for novice programmers (good to see this acknowledged!). The desired features of the new library were: event-driven support, drawing support, and enabling students to create simple but interesting programs. So they wrote SimpleGUI. Terry presented a number of examples of this and you can read about it in the talk. (Nick code for “I’m not writing that bit.”) The program shown was only 227 lines long because a lot of the tricky stuff was being done in the GUI library.
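For readers who haven’t seen it, a SimpleGUI program looks roughly like this – a frame, a draw handler and an input handler. This is a minimal sketch in the style of the CodeSkulptor simplegui API as I know it, not one of Terry’s examples, and it only runs inside the CodeSkulptor browser environment.

```python
import simplegui  # available inside the CodeSkulptor browser environment

counter = 0

def button_handler():
    # event-driven input: called whenever the button is clicked
    global counter
    counter += 1

def draw(canvas):
    # drawing support: called repeatedly to repaint the canvas
    canvas.draw_text("Clicks: " + str(counter), (60, 110), 36, "White")

frame = simplegui.create_frame("SimpleGUI demo", 300, 200)
frame.add_button("Click me", button_handler)
frame.set_draw_handler(draw)
frame.start()
```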

Terry showed some examples of student code, built from scratch, on the SimpleGUI, and showed us a FlappyBirds clone – scoring 3, which got a laugh from the crowd.

Terry then moved on to Viz mode, which meets the requirement of letting students visualise the execution of their own code. One existing solution is the Online Python Tutor, which runs code on a server, generates a log file and then ships the trace to some visualisation code in the browser (in JavaScript) to process the trace and produce a state diagram. The code, with annotations, is presented back to the user, and they can step through it, with the visualisations showing the evolution of state and control over time. The resulting visualisation is pretty good and very easy to follow. Now, this is great, but it runs on a backend server, which could melt on due dates, and OPT can’t visualise event-driven programs (for those who don’t know, game programming is MOSTLY event-driven). So they wrote Viz mode.
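The core idea behind this style of visualiser – record the program’s state line by line, then replay it – can be sketched in a few lines of standard Python using sys.settrace. This is just my illustration of the general trace-then-visualise approach, not how OPT or Viz mode are actually implemented.

```python
import sys

trace_log = []  # snapshots of (line number, local variables) as the program runs

def tracer(frame, event, arg):
    if event == "line":  # record the local state before each executed line
        trace_log.append((frame.f_lineno, dict(frame.f_locals)))
    return tracer

def demo():
    total = 0
    for i in range(3):
        total += i
    return total

sys.settrace(tracer)
demo()
sys.settrace(None)

# a visualiser would step through these snapshots and draw state diagrams
for lineno, snapshot in trace_log:
    print(lineno, snapshot)
```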

From CodeSkulptor, you can run your program in regular mode or Viz mode. In Viz mode, a new panel with state diagrams shows up, along with a console that shows end tags. This all happens in the browser, which scales well (although there are limits to computation in this environment) and is integrated with the existing CodeSkulptor environment. Terry then showed some more examples.

An important note is that event handlers don’t automatically fire in Viz mode, so the GUI elements have additional buttons to explicitly fire events (like Draw for graphical panes, or Timers for timer events). It’s a pretty good tool, from what we were shown. Overall, the Rice experience looks very positive, but their tool set and approach to support appear to be the keys to their success. Only some of the code is open source, which is a pity.

Barb Ericson asked a great question: could you set up something where the students are forced to stop and then guess what is going to happen next? They haven’t done it yet but, as Joe said, they might do it now!

The final talk was not from Rice but from Australia, woohoo (Melbourne and NICTA)! “Teaching Creative Problem Solving in a MOOC” was presented by Carleton Coffrin from NICTA. Carleton was at Learning@Scale earlier, and what has been seen over the past year is MOOCs 1.0 – scaling content delivery, with linear delivery, multiple-choice questions and socialisation only in the forums. What is MOOC 2.0? Flexible delivery, specific assessments, gamification, pedagogy, personalised and adaptive approaches. Well, it turns out that they’ve done it, so let’s talk about it: a discrete optimisation MOOC offered on Coursera by the University of Melbourne. Carleton then explained what discrete optimisation is – left to you to research in detail, dear reader, but it’s hard and the problems are very complex (NP-hard for those who care about such things). Discrete optimisation in practice is about applying known techniques to complicated real-world problems. Adaptation and modification of existing skills is a challenge.

How do we prepare students for new optimisation problems that we can’t anticipate? By teaching general problem-solving skills.

What was the class design? The scope of the course covered six areas in the domain, which you can find in the paper, and five assignments of NP-hard complexity. In the lectures, a weatherman format was used, with the lecturer projected over the slides with a great deal of enthusiasm – and a hat. (The research question of the global optimum for hats was not addressed.) The lecturer was very engaging and highly animated, which added to the appeal of the recorded lectures. The instructor constructs problems; students write code, generate a solution and encode it in a standard format; this is passed back and graded, and feedback is returned. Students get the feedback and can then resubmit until they are happy with their grade. (Yay! I love to see this kind of thing.) I will note that the feedback told them what quality of solution they had to present, rather than suggesting how to achieve it. Where constraint violations occurred, there was some targeted feedback. Overall, the feedback was pretty reasonable, but about what you’d expect from good automated feedback. The students did demonstrate persistence in response to this feedback.
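Here is a minimal sketch of that kind of feedback loop, assuming a minimisation problem and a set of target objective values per mark band. This is my own illustration of quality-threshold grading, not the course’s actual grader; the thresholds and messages are invented.

```python
# Hypothetical threshold-based grader for a minimisation assignment: it reports
# the quality reached and the quality required, but never how to get there.

def grade(objective: float, feasible: bool, bands: dict[int, float]) -> tuple[int, str]:
    """Return (marks, feedback); bands maps marks -> the objective you must beat."""
    if not feasible:
        return 0, "Your solution violates at least one constraint."
    for marks in sorted(bands, reverse=True):
        if objective <= bands[marks]:
            return marks, (f"Objective {objective:.0f} earns {marks}/10; "
                           f"full marks need {bands[max(bands)]:.0f} or better.")
    return 1, "Feasible, but well short of the target - try a different technique."

# usage: a tour of length 432 against three mark bands
print(grade(432.0, True, {10: 400.0, 7: 450.0, 5: 500.0}))
```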

From a pedagogical perspective, discovery-based learning was seen to be very important as part of the course. Rather than teach mass, volume and density by using a naked formula, exemplars were presented using water and floating (or sinking) objects to allow the students to explore the solutions and the factors. The material is all in the lectures but it’s left to the students to find the right approach to find solutions to new problems – they can try different lecture ideas on different problems.

The instructor can see all of the student results, rank them, strip out the results and then present a leader board to show quality. This does allow students to see that higher numbers are achieved, but I’m not sure that there’s any benefit beyond what’s given in the hints. They did add a distribution graph for really large courses, as the leader board got too long. (I’m not a big fan of leader boards, but you know that.)

The structure of the course was suggested, with introductory materials, but then students could bounce around – on-line doesn’t require a linear structure! The open framework was effectively required for iterative improvement of the assignments.

How well did it work? 17,000 people showed up; 795 stayed to the end, which is close to what we’d expect from previous MOOC data but still a bit depressing. However, only 4,000 even tried the assignments, and a lot of people dropped out after the warm-up assignment. Looking at this, 1,884 completed the warm-up and stayed (got qualified), which makes the stay rate about 42%. (Hmm, not sure I agree with this numerical handling, but I don’t have a better solution.)

Did students use the open framework for structure? It looks like there was revision behaviour, using the freedom of the openness to improve previous solutions with new knowledge. The actual participation pattern was interesting, because some students completed in 20 and some in 60.

Was it a success or a failure? Well, the students loved it (yeah, you know how I feel about that kind of thing). They surveyed the students at the end, and the students had realised that optimisation takes time (which is very, very true). The overall experience was positive despite the amount of work involved, and the course was rated as being hard. The students were asked what their favourite part of the course was, and this was presented as a word cloud. Programming dominated (?!), followed by assignments (?!?!?!?!).

Their assignment choice was interesting because they deliberately chose examples that would work for one solution approach but not another. (For example, the Travelling Salesman Problem was provided at a scale where the Dynamic Programming solution wouldn’t fit into memory.)

There’s still a lot of dependency on this notion that “leaderboards are motivating”. From looking at the word cloud, which is a very high-level approach, the students enjoyed the assignments and were happy to do programming in a safe, retry-friendly (and hence failure-tolerant) environment. In my opinion, the reminder of the work they’ve done is more likely to be the reason they liked leader boards than any motivating factor. (Time to set up a really good research study!)

Anyway, the final real session was a corker and I greatly enjoyed it! On to lunch and FRIED CHICKEN.


Matt Damon: Computer Science Superstar?

There was a recent article in Salon regarding the possible use of celebrity presenters, professional actors and the more photogenic to present course material in on-line courses. While Coursera believes that, in the words of Daphne Koller, “education is not a performance”, Udacity, as voiced by Sebastian Thrun, believes that we can model on-line education more in the style of a newscast. In the Udacity model, there is a knowledgeable team and the content producer (primary instructor) is not necessarily going to be the presenter. Daphne Koller’s belief is that the connection between student and teacher would diminish if actors were reading scripts that had content they didn’t deeply understand.

My take on this is fairly simple. I never want to give students the idea that the appearance of knowledge is an achievement in the same league as actually developing and being able to apply that knowledge. I regularly give talks about some of the learning and teaching techniques we use and  I have to be very careful to explain that everything good we do is based on solid learning design and knowledge of the subject, which can be enhanced by good graphic design and presentation but cannot be replaced by these. While I have no doubt that Matt Damon could become a good lecturer in Computer Science, should he wish to, having him stand around and pretend to be one sends the wrong message.


Matt Damon demonstrating an extrinsic motivational technique called “fear of noisy death”.

(And, from the collaborative perspective, if we start to value pleasant appearance over knowledge, do we start to sort our students into groups by appearance and voice timbre? This is probably not the path we want to go down. For now, anyway.)

 


Let’s not turn “Chalk and Talk” into “Watch and Scratch”

We are now starting to get some real data on what happens when people “take” a MOOC (via Mark’s blog). You’ll note the scare quotes around the word “take”, because I’m not sure that we have really managed to work out what it means to get involved in a course that is offered through the MOOC mechanism. Or, to be more precise, some people think they have but not everyone necessarily agrees with them. I’m going to list some of my major concerns, even in the face of the new clickstream data, and explain why we don’t have a clear view of the true value/approaches for MOOCs yet.

  1. On-line resources are not on-line courses and people aren’t clear on the importance of an overall educational design and facilitation mechanism. Many people have mused on this in the past. If all the average human needed was a set of resources and no framing or assistive pedagogy then our educational resources would be libraries and there would be no teachers. While there are a number of offerings that are actually courses, applying the results of the MIT 6.002x to what are, for the most part, unstructured on-line libraries of lecture recordings is not appropriate. (I’m not even going to get into the cMOOC/xMOOC distinction at this point.) I suspect that this is just part of the general undervaluing of good educational design that rears its head periodically.
  2. Replacing lectures with on-line lectures doesn’t magically improve things. The problem with “chalk and talk”, where it is purely one-way with no class interaction, is that we know that it is not an effective way to transfer knowledge. Reading the textbook at someone and forcing them to slowly transcribe it turns your classroom into an inefficient, flesh-based photocopier. Recording yourself standing in front of a class doesn’t automatically change things. Yes, your students can time-shift you, both to a more convenient time and at a more convenient speed, but what are you adding to the content? How are you involving the student? How can the student benefit from having you there? When we just record lectures and put them up there, then unless they are part of a greater learning design, the student is now sitting in an isolated space, away from other people, watching you talk, and potentially scratching their head while being unable to ask you or anyone else a question. Turning “chalk and talk” into “watch and scratch” is not an improvement. Yes, it scales so that millions of people can now scratch their heads in unison, but scaling isn’t everything and, in particular, if we waste time on an activity under the illusion that it will improve things, we’ve gone backwards in terms of quality for effort.
  3. We have yet to establish the baselines for our measurement. This is really important. An on-line system is capable of being very heavily tracked, and it’s not just links. The clickstream measurements in the original report record what people clicked on as they worked with the material. But we can only measure that which is set up for measurement – so it’s quite hard to compare the activity in this course to other activities that don’t use technology. But there are some subordinate problems to this (and I apologise to physicists for the looseness of the following):
    1. Heisenberg’s MOOC: At the quantum scale, you can either tell where something is or what it is doing – the act of observation has limits of precision. Borrowing that for the macro scale: measure someone enough and you’ll see how they behave under measurement, but the measurements we pick tend to fall into either the stage they’ve reached or the actions they’ve taken. It’s very complex to combine quantitative and qualitative measures to be able to map someone’s stage and their comprehension/intentions/trajectory. You don’t have to accept arguments based on the Hawthorne Effect to understand why this does not necessarily tell you much about unobserved people. There are a large number of people taking these courses out of curiosity, some of whom already have appropriate qualifications, with only 27% being the type of student that you would expect to see at this level of University. Combine that with a large number of researchers and curious academics who are inspecting each other’s courses – I know of at least 12 people in my own University taking MOOCs of various kinds to see what they’re like – and we have the problem that we are measuring people who are merely coming in to have a look around and are probably not as interested in the actual course. Until we can actually shift MOOC demography to match that of our real students, we are always going to have our measurements affected by these observers. The observers might not mind being heavily monitored and observed, but real students might. Either way, numbers are not the real answer here – they show us what, but there is still too much uncertainty in the why and the how.
    2. Schrödinger’s MOOC: Oh, that poor reductio ad absurdum cat. Does the nature of the observer change the behaviour of the MOOC and force it to resolve one way or another (successful/unsuccessful)? If so, how and when? Does the fact of observation change the course even more than just in enrolments and uncertainty of validity of figures? The clickstream data tells us that the forums are overwhelmingly important to students, with 90% of people viewing threads without commenting and only 3% of total enrolled students ever actually posting anything in a thread. What was the make-up of that 3%? Was it actual students, or the over-qualified observers who then provided an environment that 90% of their peers found useful?
    3. Numbers need context and unasked questions give us no data: As one example, the authors of the study were puzzled that so few people had logged in from China. Anyone who has anything to do with network measurement is going to be aware that China is almost always an outlier in network terms. My blog, for example, has readers from around the world – but not China. It’s also important to remember that any number of Chinese network users will VPN/SSH to hosts outside China to enjoy unrestricted search and network access. There may have been many Chinese people (who didn’t self-identify, for obvious reasons) who were using proxies from outside China. The numbers on this particular part of the study do not make sense unless they are correctly contextualised. We also see a lack of context in the reporting on why people were doing the course – the numbers for why people were doing it had to be augmented from comments in the forum that people ‘wanted to see if they could make it through an MIT course’. Why wasn’t that available from the initial questions?
  4. We don’t know what pass/fail is going to look like in this environment. I can’t base any MOOC plans of my own on how people respond to an MIT-branded course, but it is important to note that MIT’s approach was far more than “watch and scratch”, as is reflected by their educational design in providing various forms of materials, discussion forums, homework and labs. But still, 155,000 people signed up for this and only 7,000 received certificates. 2/3 of people who registered then went on to do nothing. I don’t think that we can treat a completion rate of less than 5% as a success. Even if we exclude the 2/3 who did nothing, this still equates to a pass rate under 14%. Is that good? Is that bad? Taking everything into account from above, my answer is “We don’t know.” If we get 17% next time, is that good or bad? How do we make this better?
  5. The drivers are often wrong. Several US universities have gone on the record to complain that MOOCs undermine their colleagues, and have refused to take part in MOOC-related activities. The reasons for this vary, but the greatest fear is that MOOCs will be used to reduce costs by replacing existing lecturing staff with a far smaller group and using MOOCs to handle the delivery. From a financial argument, MOOCs are astounding – 155,000 people contacted for the cost of a few lecturers. Contrast that with me teaching a course to 100 students. If we look at it from a quality perspective, dealing with all of the points so far, we have no argument to say that MOOCs are as good as our good teaching – but we do know that they are easily as good as our bad teaching. But from a financial perspective? MOOC is king. That is, however, not how we guarantee educational quality. Of course, when we scale, we can maintain quality by increasing resources, but this runs counter to a cost-saving argument, so we’re almost automatically being prevented from doing what is required to make the large-scale course work by the same cost driver that led to its production in the first place!
  6. There are a lot of statements but perhaps not enough discussion. These are trying times for higher education and everyone wants an edge, more students, higher rankings, to keep their colleagues and friends in work and, overall, to do the right thing for their students. Senior management, large companies, people worried about money – they’re all talking about MOOCs as if they are an accepted substitute for traditional approaches, at the same time as we are in deep discussion about which of the actual traditional approaches are worthwhile and which new approaches are going to work better. It’s a confusing time as we try to handle large-scale adoption of blended learning techniques at the same time as people are trying to push this to the massive scale.

I’m worried that I seem to be spending most of my time explaining what MOOCs are to people who are asking me why I’m not using a MOOC. I’m even more worried when I am still yet to see any strong evidence that MOOCs are going to provide anything approaching the educational design and integrity that has been building for the past 30 years. I’m positively terrified when I see corporate providers taking over University delivery before we have established actual measurable quality and performance guidelines for this incredibly important activity. I’m also bothered by a statement found at the end of the study, which was given prominence as a pull quote:

[The students] do not follow the norms and rules that have governed university courses for centuries nor do they need to.

I really worry about this because I haven’t yet seen any solid evidence that this is true, yet this is exactly the kind of catchy quote that is going to be used on any number of documents that will come across my desk asking me when I’m going to MOOCify my course, rather than discussing if and why and how we will make a transition to on-line blended learning on the massive scale. The measure of MOOC success is not the number of enrolees, nor is it the number of certificates awarded, nor is it the breadth of people who sign up. MOOCs will be successful once we have worked out how to use this incredibly high potential approach to teaching to deliver education at a suitably high level of quality to as many people as possible, at a reduced or even near-zero cost. The potential is enormous but, right now, so is the risk!


SIGCSE 2013: The Revolution Will Be Televised, Perspectives on MOOC Education

Long time between posts, I realise, but I got really, really unwell in Colorado and am still recovering from it. I attended a lot of interesting sessions at SIGCSE 2013, and hopefully gave at least one of them, but the first I wanted to comment on was a panel with Mehran Sahami, Nick Parlante, Fred Martin and Mark Guzdial, entitled “The Revolution Will Be Televised, Perspectives on MOOC Education”. This is, obviously, a very open area for debate and the panellists provided a range of views and a lot of information.

Mehran started by reminding the audience that we’ve had on-line and correspondence courses for some time, with MIT’s OpenCourseWare (OCW) streaming video from the 1990s and Stanford Engineering Everywhere (SEE) starting in 2008. The SEE lectures were interesting because viewership follows a power-law relationship: the final lecture has only 5-10% of the views of the first lecture. These video lectures were being used well beyond Stanford, augmenting AP courses in the US and providing entire lecture series in other countries. The videos also increased engagement, and the requests that came in weren’t just about the course but were more general – having a face and a name on the screen gave people someone to interact with. From Mehran’s perspective, the challenges were: certification and credit, increasing the richness of automated evaluation, validated peer evaluation, and personalisation (or, as he put it, in reality mass customisation).

Nick Parlante spoke next, as an unashamed optimist for MOOCs, who has the opinion that all the best world-changing inventions are cheap, like the printing press, Arabic numerals and high-quality digital music. These great ideas spread and change the world. However, he did state that he considered artisanal and MOOC education to be very different: artisanal education is bespoke, high quality and high cost, where MOOCs are interesting for their massive scale and, while they could never replace artisanal education, they could provide education to those who could not get access to it.

It was at this point that I started to twitch, because I have heard and seen this argument before – the notion that a MOOC is better than nothing if you can’t get the artisanal version. The subtext that I, fairly or not, hear at this point is the implicit statement that we will never be able to give high-quality education to everybody. By having a MOOC, we no longer have to say “you will not be educated”; we can say “you will receive some form of education”. What I rarely hear at this point is a well-structured and quantified argument on exactly how much quality slippage we’re tolerating here – how educational is the alternative education?

Nick also raised the well-known problems of cheating (which is rampant in MOOCs already before large-scale fee paying has been introduced) and credentialling. His section of the talk was long on optimism and positivity but rather light on statistics, completion rates, and the kind of evidence that we’re all waiting to see. Nick was quite optimistic about our future employment prospects but I suspect he was speaking on behalf of those of us in “high-end” old-school schools.

I had a lot of issues with what Nick said but a fair bit of it stemmed from his examples: the printing press and digital music. The printing press is an amazing piece of technology for replicating a written text and, as replication and distribution goes, there’s no doubt that it changed the world – but does it guarantee quality? No. The top 10 books sold in 2012 were either Twilight-derived sadomasochism (Fifty Shades of Unnecessary) or related to The Hunger Games. The most work the printing presses were doing in 2012 was not for Thoreau, Atwood, Byatt, Dickens, Borges or even Cormac McCarthy. No, the amazing distribution mechanism was turning out copy after copy of what could be, generously, called popular fiction. But even that’s not my point. Even if the printing presses turned out only “the great writers”, it would be no guarantee of an increase in the ability to write quality works in the reading populace, because reading and writing are different things. You don’t have to read much into constructivism to realise how much difference it makes when someone puts things together for themselves, actively, rather than passively sitting through a non-interactive presentation. Some of us can learn purely from books but, obviously, not all of us and, more importantly, most of us don’t find it trivial. So, not only does the printing press not guarantee that everything that gets printed is good; even where something good does get printed, it does not intrinsically demonstrate how you can take the goodness and then apply it to your own works. (Why else would there be books on how to write?) If we could do that, reliably and spontaneously, then a library of great writers would be all you needed to replace every English writing course and editor in the world. A similar argument exists for the digital reproduction of music. Yes, it’s cheap and, yes, it’s easy. However, listening to music does not teach you how to write music or perform on a given instrument, unless you happen to be one of the few people who can pick up music and instrumentation with little guidance. There are so few of the latter that we call them prodigies – it’s not a stable model for even the majority of our gifted students, let alone the main body.

Fred Martin spoke next and reminded us all that weaker learners just don’t do well in the less-scaffolded MOOC environment. He had used MOOC material in a flipped classroom, with small class sizes, supervision and lots of individual discussion, and as part of this blended experience it worked. Fred really wanted some honest figures on who was starting and completing MOOCs, and was really keen that, if we were to do this, we strive for the same quality, rather than accepting that MOOCs weren’t as good and it was OK to offer this second-tier solution to certain groups.

Mark Guzdial then rounded out the panel and stressed the role of MOOCs as part of a diverse set of resources – but, if we were going to do that, then we had to measure and report on how things had gone. MOOC results, right now, are interesting but fundamentally anecdotal and unverified. Therefore, it is too soon to jump into MOOCs because we don’t yet know if they will work. Mark also noted that MOOCs are not supporting diversity yet and, from any number of sources, we know that many-to-one (the MOOC model) is just not as good as one-to-one. We’re really not clear if and how MOOCs are working, given how many of the people who do complete them are already degree holders and, even then, actual participation in on-line discussion is so low that these experienced learners aren’t even talking to each other very much.

It was an interesting discussion, conducted with a great deal of mutual respect and humour, but I couldn’t agree more with Fred and Mark – we haven’t measured things enough and, despite Nick’s optimism, there are too many unanswered questions to leap in, especially if we’re going to make hard-to-reverse changes to staffing and infrastructure. It takes 20 years to train a Professor and, if you have one who can teach, they can be expensive and hard to maintain (with tongue firmly lodged in cheek, here). Getting rid of one because we have a promising but untested new technology may save us money in the short term but, if we haven’t validated the educational value or confirmed that we have set the right level of quality, a few years from now we might discover that we got rid of the wrong people at the wrong time. What happens then? I can turn off a MOOC with a few keystrokes, but I can’t bring back all of my seasoned teachers in anything less than years, if not decades.

I’m with Mark – the resource promise of MOOCs is enormous and they are part of our future. Are they actually full educational resources or courses yet? Will they be able to bring people a first-tier, high-quality educational experience, or are we trapped in the same old educational class divisions with a new name for an old separation? I think it’s too soon to tell, but I’m watching all of the new studies with a great deal of interest. I, too, am an optimist – but let’s call me a cautious one!


“We are not providing an MIT education on the web…”

I’ve been re-watching some older announcements that describe open courseware initiatives, starting with one of the biggest: the MIT announcement of their OpenCourseWare (OCW) initiative in April, 2001. The title of this post actually comes from the video, around the 5:20 mark. (Video quoted under a CC-BY-NC-SA licence; more information available at http://ocw.mit.edu/terms)

“Let me be very clear, we are not providing an MIT education on the Web. We are, however, providing core materials that are the infrastructure that undergirds that information. Real education, in our view, involves interaction between people. It’s the interaction between faculty and students, in our classrooms and our living group, in our laboratories that are the heart, the real essence, of an MIT education.”

While OCW was going to be produced and used on campus, its development was seen as something that would make more time available for student interaction, not less. President Vest then goes on to confidently predict that OCW will not make any difference to enrolment, which is hardly surprising given that he has categorically excluded anyone from achieving an MIT education unless they enrol. We see here exactly the same discussion that keeps coming up: these materials can be used to augment teaching in these conventional universities but can never, in the view of the President or Vice Chancellor, replace the actual experience of obtaining a degree from that institution.

Now, don’t get me wrong. I still think that the OCW initiative was excellent, generous and visionary, but we are still looking at two fundamentally different use cases: the use of OCW to augment an existing experience, and the use of OCW to bootstrap a completely new experience, which is not of the same order. It’s a discussion that we keep having – what happens to my Uni if I use EdX courses from another institution? Well, ok, let’s ask that question differently. I will look at this from two sides, using the introduction of a new skill and knowledge area that becomes ubiquitous – in my sphere, Computer Science and programming. Let’s look at this in terms of growth and success.

What happens if schools start teaching programming to first-year level?

Let’s say that we get programming into every single national curriculum for secondary school and we can guarantee that students come in knowing how to program to freshman level. There are two ways of looking at this. The first, which we have probably all seen to some degree, is to regard the school teaching as inferior and re-teach it; the net result will be bored students, low engagement and wasted time. The second, far more productive, approach is to say “Great! You can program. Now let’s do some Computer Science.” and use that extra year or so to increase our discipline knowledge, or to put breadth courses back in so that our students come out a little more well-rounded. What’s the difference between students learning it at school before they come to us, or through an EdX course on fundamental programming after they come to us?

Not much, really, as long as we make sure that the course meets our requirements – and, in fact, it gives us bricks-and-mortar-bound entities more time to do all that face-to-face, interactive University stuff that we know students love and from which they derive great benefit. University stops being semi-vocational in some aspects and we leap into knowledge construction, idea generation, big projects and the grand dreams that we always talk about, yet often don’t get to because we have to train people in basic programming, drafting, and so on. Do we give them course credit? No, because these things become assumed knowledge, or barrier-tested, and they’re not necessarily part of our structure anymore.

What happens if no-one wants to take my course anymore?

Now, we know that we can change our courses, because we’ve done it so many times before over the history of the Academy – Latin, which was, along with Greek, the language of scholarship, was used in only half of the University publications of 1800. Let me wander through a classical garden for a moment to discuss the nature of change from a different angle: that of decline. Languages had a special place in the degrees of my University, with Latin and Greek dominating, and then, from 1938, the daring possibility of substituting French or German for Latin or Greek. It was as recently as 1958 that Latin stopped being compulsory for high school graduation in Adelaide, although it was still required for the study of Law – student demand for Latin at school therefore plummeted and Latin courses started being dropped from the school curriculum. The Law Latin requirement was removed around 1969-1970, which dropped demand for Latin even further. The reduction in the number of school teachers who could teach Latin required the introduction of courses at the University for students who had studied no Latin at all – Latin IA entered the syllabus. However, given that in 2007 only one student across all of the schools in the state of South Australia (roughly 1.2-1.4 million people) studied Latin in the final year of school, it is apparent that if this University wishes to teach Latin, it has to start by teaching all of Latin. This is a course, and a discipline, that is currently in decline. My fear is that, one day, someone will make the mistake of thinking that we no longer need scholars of this language. And that worries me, because I don’t know what people 30 years from now will actually want, or what they could add to the knowledge that we already have of one of our most influential civilisations.

This decline is not unique to Latin (or Greek, or classics in general), but a truly on-line course experience would allow us to pool those scholars we have left and offer scaled resources for much longer than isolated pockets in real offices can manage – although, as President Vest notes, a storehouse of Latin texts does not a course make. What reduced the demand for Latin? Possibly the ubiquity of the Latin-derived language that we already use, combined with a change of focus away from a classical education towards a more job- and achievement-oriented (semi-vocational) style of education. If you ask me, programming could as easily go this way in about 20 years, once we have ways to let machines solve problems for us. A move towards a less go-go-go culture, smarter machines and a resurgence of the long leisure cycles associated with Science Fiction visions of the future, and suddenly it is the engineers and the computer scientists who are looking at shrinking departments and no support in the schools. Let me be blunt: course popularity and desirability rises, stabilises and falls, and it’s very hard to tell if we are looking at a parabola or a pendulum. With that in mind, we should be very careful about how we define our traditions and our conventions, especially as our cunning tools for supporting on-line learning and teaching get better and better. Yes, interaction is an essential part of a good education, no argument at all, but there is an implicit assumption of critical mass – the critical mass that we have seen, time and again, supporting this interaction in a face-to-face environment – and that is as much a function of popularity and traditionally-associated prestige as it is of excellence.

What are MIT doing now?

I look at the original OCW release and I agree that, at the time of production, you could not reproduce the interaction between people that would give you an MIT education. But our tools are better now. They are, quite probably, not yet close enough to give you an “MIT of the Internet” – but should this be our goal? Not the production of a facsimile of the core materials that might, with MIT instructors, turn into a course, but a commitment to developing the tools that actually reproduce the successful components of the learning experience, with group and personal interaction, allowing the formation of what we used to call a physical interactive experience in a virtual space? That’s where I think the new MIT initiatives are showing us how these things can work now, starting from their original idealistic roots and adding the technology of the 21st Century. I hope that other, equally prestigious, institutions are watching this carefully.


Two Tier: Already Here

Hah! I look down on you, you apples!


I was reading a Chronicle of Higher Ed article, “For Whom is College Being Reinvented”, and it made for sobering reading. While I was writing yesterday about Oxford and Cambridge wanting to maintain their conventional University stance, Robert Archibald, an Economics Professor from the College of William and Mary, points out that the two-tier system is already here in terms of good conventional and bad conventional – so we would see an even larger disparity between luxury and economy courses. Getting into the “good” colleges will be a matter of money and prior preparation, much as it is in many areas where the choice of school available to parents is increasingly driving residential moves in the early years of a child’s life. But it doesn’t end there, because the ‘quality’ measure may be as much about the employability of students after they’ve completed their studies – and, as the article says, now we have to start thinking about whether a “low-level” degree is preferable to an “industry recognised” apprenticeship or trade training program. Now our two tiers are as separate as radiographer and radiologist but, as Robert Reich also observes in the same article, this is completely against what we should be doing: how can we do all this and maintain real equality between degrees and programs?

Of course, if you didn’t go to a great elementary and senior school, then you are probably on the path to the ‘second-tier’ school – which might be one that naturally migrates to fully electronic delivery for a number of perfectly reasonable economic reasons – and you are probably also someone who needs a more customised experience than a ‘boilerplate’ MOOC could offer: you actually need face-to-face. When we talk about disruption of the existing college system, we always assume that this is a positive thing, something that will lead to a better result for our students, so these potential issues with where these new technologies may end up being focused start to become very important.

For whom will these new systems work? Everyone or just the people that we’re happy to expose them to?

It’s perhaps the best question we have to frame the discussion – it’s not about whether the technology works; we know that it works well for certain things, and it’s now a matter of making sure that our pedagogical systems are correctly married to our computer systems to make the educational experience work. But, obviously, and as many much better writers than I have been saying, it has to work and be at least as good as the systems that it’s replacing – only now we realise that existing systems are not the same for everyone, and that one person’s working system is someone else’s diabolically bad teaching experience. So the entire discussion about whether MOOCs work now has to be framed in the context of ‘compared to what?’

It’s an interesting article that poses more questions than it answers, but it’s certainly part of the overall area we have to think about.