SIGCSE 2013: Special Session on Designing and Supporting Collaborative Learning Activities

Katrina and I delivered a special session on collaborative learning activities, focused on undergraduates because that’s our area of expertise. You can read the outline document here. We worked together on the underlying classroom activities and have both implemented these techniques but, in this session, Katrina did most of the presenting and I presented the collaborative assessment task examples, with some facilitation.

The trick here is, of course, to find examples that are effective both as teaching tools and as examples. The approach I chose was to remind everyone in the room of the most important aspects of making this work with students, and I did this by deliberately starting with a bad example. This can be a difficult road to walk because, when presenting a bad example, you need to convince everyone that your choice was deliberate and that you actually didn’t just stuff things up.

My approach was fairly simple: break people into groups based on where they were currently sitting, then go straight into the question, which had been tailored for the crowd and for my purposes:

“I want you to talk about the 10 things that you’re going to do in the next 5 years to make progress in your career and improve your job performance.”

And why not? Everyone in the room was interested in education and, most likely, had a job at a time when it’s highly competitive and hard to find or retain work – so everyone has probably thought about this. It’s a fair question for this crowd.

Well, it would be, if it wasn’t so anxiety inducing. Katrina and I both observed a sea of frozen faces as we asked a question that put a large number of participants on the spot. And the reason I did this was to remind everyone that anxiety impairs genuine participation and willingness to engage. There were a large number of frozen grins with darting eyes, some nervous mumbles and a whole lot of purposeless noise, with the few people who were actually primed to answer that question starting to lead off.

I then stopped the discussion immediately. “What was wrong with that?” I asked the group.

Well, where do we start? Firstly, it’s an individual activity, not a collaborative activity – there’s no incentive or requirement for discussion, groupwork or anything like that. Secondly, while we might expect people to be able to answer this, it is a highly charged and personal area, and you may not feel comfortable discussing your five-year plan with people that you don’t know. Thirdly, some people know that they should be able to answer this (or at least some supervisors will expect that they can) but they have no real answer and their anxiety will not only limit their participation but it will probably stop them from listening at all while they sweat their turn. Finally, there is no point to this activity – why are we doing this? What are we producing? What is the end point?

My approach to collaborative activity is pretty simple and you can read any amount of Perry, Dickinson, Hamer et al. (and now us as well) on the relevant areas, including Contributing Student Pedagogy, where students have a reason to collaborate and we manage their developmental maturity and their roles in the activity to get them really engaged. Everyone can have difficulties with authority and with recognising whether someone is making enough of a contribution to a discussion to be worth their time – this is not limited to students. People, therefore, have to believe that the group they are in is of some benefit to them.

So we stepped back. I asked everyone to introduce themselves, say where they came from and give a fact about their current home that people might not know. Simple task, everyone can do it and the purpose was to tell your group something interesting about your home – clear purpose, as well. This activity launched immediately and was going so well that, when I tried to move it on because the sound levels were dropping (generally a good sign that we’re reaching a transition), some groups asked if they could keep going as they weren’t quite finished. (Monitoring groups spread over a large space can be tricky but, where the activity is working, people will happily let you know when they need more time.) The first activity I had been able to stop completely, and nobody wanted it to continue; the second one, where people felt that they could participate and wanted to say something, needed to keep going.

Having now put some faces to names, we then moved to a simple exercise of sharing an interesting teaching approach that you’d tried recently or seen at the conference, and it’s important to note the different comfort levels we can accommodate with this – we are sharing knowledge but we give participants the opportunity to share something of themselves or something that interests them, without the burden of ownership. Everyone had already discovered that each person in the group had some area of knowledge, however small, that taught them something new. We had started to build a group where participants valued each other’s contribution.

I carried out some roaming facilitation where I said very little, unless it was needed. I sat down with some groups, said ‘hi’ and then just sat back while they talked. I occasionally gave a nod or some attentive feedback to people who looked like they wanted to speak and this often cued them into the discussion. Facilitation doesn’t have to be intrusive and I’m a much bigger fan of inclusiveness, where everyone gets a turn but we do it through non-verbal encouragement (where that’s possible; different techniques are required in a mixed-ability group) to stay out of the main corridor of communication and reduce confrontation. However, by setting up the requirement that everyone share and by providing a task that everyone could participate in, my need to prod was greatly reduced and the groups mostly ran themselves, with the roles shifting around as different people made different points.

We covered a lot of the underlying theory in the talk itself, to discuss why people have difficulty accepting other views, to clarify why role management is a critical part of giving people a reason to get involved and something to do in the conversation. The notion that a valid discursive role is that of the supporter, to reinforce ideas from the proposer, allows someone to develop their confidence and critically assess the idea, without the burden of having to provide a complex criticism straight away.

At the end, I asked for a show of hands. Who had met someone new? Everyone. Who had found out something they didn’t know about other places? Everyone. Who had learned about a new teaching technique that they hadn’t known before? Everyone.

My one regret is that we didn’t do this sooner because the conversation was obviously continuing for some groups and our session was, sadly, on the last day. I don’t pretend to be the best at this but I can assure you that any capability I have in this kind of activity comes from understanding the theory, putting it into practice, trying it, trying it again, and reflecting on what did and didn’t work.

I sometimes come out of a lecture or a collaborative activity and I’m really not happy. It didn’t gel or I didn’t quite get the group going as I wanted it to – but this is where you have to be gentle on yourself because, if you’re planning to succeed and reflecting on the problems, then steady improvement is completely possible and you can get more comfortable with passing your room control over to the groups, while you move to the facilitation role. The more you do it, the more you realise that training your students in role fluidity also assists them in understanding when you have to be in control of the room. I regularly pass control back and forward and it took me a long time to really feel that I wasn’t losing my grip. It’s a practice thing.

It was a lot of fun to give the session and we spent some time crafting the ‘bad example’, but let me summarise what the good activities should really look like. They must be collaborative, inclusive, achievable and obviously beneficial. Like all good guidelines there are times and places where you would change this set of characteristics, but you have to know your group well to know what challenges they can tolerate. If your students are more mature, then you push out into open-ended tasks which are far harder to make progress in – but this would be completely inappropriate for first years. Even in later years, being able to make some progress is more likely to keep the group going than a brick wall that stops you at step 1. But, let’s face it, your students need to know that working in that group is not only not to their detriment, but it’s beneficial. And the more you do this, the better their groupwork and collaboration will get – and that’s a big overall positive for the graduates of the future.

To everyone who attended the session, thank you for the generosity and enthusiasm of your participation; I’m catching up on my business cards over the next few weeks. If I promised you an e-mail, it will be coming shortly.


SIGCSE 2013: The Revolution Will Be Televised, Perspectives on MOOC Education

Long time between posts, I realise, but I got really, really unwell in Colorado and am still recovering from it. I attended a lot of interesting sessions at SIGCSE 2013, and hopefully gave at least one of them, but the first I wanted to comment on was a panel with Mehran Sahami, Nick Parlante, Fred Martin and Mark Guzdial, entitled “The Revolution Will Be Televised, Perspectives on MOOC Education”. This is, obviously, a very open area for debate and the panelists provided a range of views and a lot of information.

Mehran started by reminding the audience that we’ve had on-line and correspondence courses for some time, with MIT’s OpenCourseWare (OCW), streaming video from the 1990s, and Stanford Engineering Everywhere (SEE) starting in 2008. The SEE lectures were interesting because viewership follows a power law relationship: the final lecture has only 5-10% of the views of the first lecture. These video lectures were being used well beyond Stanford, augmenting AP courses in the US and providing entire lecture series in other countries. The videos also increased engagement and the requests that came in weren’t just about the course but were more general – having a face and a name on the screen gave people someone to interact with. From Mehran’s perspective, the challenges were: certification and credit, increasing the richness of automated evaluation, validated peer evaluation, and personalisation (or, as he put it, in reality mass customisation).

Nick Parlante spoke next, as an unashamed optimist for MOOCs, who holds the opinion that all the best world-changing inventions are cheap, like the printing press, Arabic numerals and high-quality digital music. These great ideas spread and change the world. However, he did state that he considered artisanal and MOOC education to be very different: artisanal education is bespoke, high quality and high cost, where MOOCs are interesting for their massive scale and, while they could never replace artisanal education, they could provide education to those who could not get access to it.

It was at this point that I started to twitch, because I have heard and seen this argument before – the notion that a MOOC is better than nothing, if you can’t get artisanal education. The subtext that I, fairly or not, hear at this point is the implicit statement that we will never be able to give high-quality education to everybody. By having a MOOC, we no longer have to say “you will not be educated”, we can say “you will receive some form of education”. What I rarely hear at this point is a well-structured and quantified argument on exactly how much quality slippage we’re tolerating here – how educational is the alternative education?

Nick also raised the well-known problems of cheating (which is already rampant in MOOCs, even before large-scale fee-paying has been introduced) and credentialling. His section of the talk was long on optimism and positivity but rather light on statistics, completion rates, and the kind of evidence that we’re all waiting to see. Nick was quite optimistic about our future employment prospects but I suspect he was speaking on behalf of those of us in “high-end” old-school schools.

I had a lot of issues with what Nick said but a fair bit of it stemmed from his examples: the printing press and digital music. The printing press is an amazing piece of technology for replicating a written text and, as replication and distribution goes, there’s no doubt that it changed the world – but does it guarantee quality? No. The top 10 books sold in 2012 were either Twilight-derived sadomasochism (Fifty Shades of Unnecessary) or related to The Hunger Games. Most of the work the printing presses were doing in 2012 was not for Thoreau, Atwood, Byatt, Dickens, Borges or even Cormac McCarthy. No, the amazing distribution mechanism was turning out copy after copy of what could be, generously, called popular fiction. But even that’s not my point. Even if the printing presses turned out only “the great writers”, it would be no guarantee of an increase in the ability to write quality works in the reading populace, because reading and writing are different things. You don’t have to read much into constructivism to realise how much difference it makes when someone puts things together for themselves, actively, rather than passively sitting through a non-interactive presentation. Some of us can learn purely from books but, obviously, not all of us and, more importantly, most of us don’t find it trivial. So, not only does the printing press not guarantee that everything that gets printed is good, even where something good does get printed, it does not intrinsically demonstrate how you can take the goodness and then apply it to your own works. (Why else would there be books on how to write?) If we could do that, reliably and spontaneously, then a library of great writers would be all you needed to replace every English writing course and editor in the world. A similar argument exists for the digital reproduction of music. Yes, it’s cheap and, yes, it’s easy. However, listening to music does not teach you how to write music or perform on a given instrument, unless you happen to be one of the few people who can pick up music and instrumentation with little guidance. There are so few of the latter that we call them prodigies – it’s not a stable model for even the majority of our gifted students, let alone the main body.

Fred Martin spoke next and reminded us all that weaker learners just don’t do well in the less-scaffolded MOOC environment. He had used MOOC materials in a flipped classroom, with small class sizes, supervision and lots of individual discussion. As part of this blended experience, it worked. Fred really wanted some honest figures on who was starting and completing MOOCs and was really keen that, if we were to do this, we strive for the same quality, rather than accepting that MOOCs weren’t as good and that it was OK to offer this second-tier solution to certain groups.

Mark Guzdial then rounded out the panel and stressed the role of MOOCs as part of a diverse set of resources, but if we were going to do that then we had to measure and report on how things had gone. MOOC results, right now, are interesting but fundamentally anecdotal and unverified. Therefore, it is too soon to jump into MOOCs because we don’t yet know if they will work. Mark also noted that MOOCs are not supporting diversity yet and, from any number of sources, we know that many-to-one (the MOOC model) is just not as good as one-to-one. We’re really not clear if and how MOOCs are working, given how many of the people who do complete are actually already degree holders and, even then, actual participation in on-line discussion is so low that these experienced learners aren’t even talking to each other very much.

It was an interesting discussion and conducted with a great deal of mutual respect and humour, but I couldn’t agree more with Fred and Mark – we haven’t measured things enough and, despite Nick’s optimism, there are too many unanswered questions to leap in, especially if we’re going to make hard-to-reverse changes to staffing and infrastructure. It takes 20 years to train a Professor and, if you have one that can teach, they can be expensive and hard to maintain (with tongue firmly lodged in cheek, here). Getting rid of one because we have a promising new technology that is untested may save us money in the short term but, if we haven’t validated the educational value or confirmed that we have set up the right level of quality, a few years from now we might discover that we got rid of the wrong people at the wrong time. What happens then? I can turn off a MOOC with a few keystrokes but I can’t bring back all of my seasoned teachers in a timeframe less than years, if not decades.

I’m with Mark – the resource promise of MOOCs is enormous and they are part of our future. Are they actually full educational resources or courses yet? Will they be able to bring education to people that is a first-tier, high quality experience or are we trapped in the same old educational class divisions with a new name for an old separation? I think it’s too soon to tell but I’m watching all of the new studies with a great deal of interest. I, too, am an optimist but let’s call me a cautious one!


Expressiveness and Ambiguity: Learning to Program Can Be Unnecessarily Hard

One of the most important things to be able to do in any profession is to think as a professional. This is certainly true of Computer Science, because we have to spend so much time thinking, as a Computer Scientist does, about how the machine will interpret our instructions. For those who don’t program, a brief quiz. What is the value of the next statement?

What is 3/4?

No doubt, you answered something like 0.75 or maybe 75% or possibly even “three quarters”? (And some of you would have said “but this statement has no intrinsic value” and my heartiest congratulations to you. Now go off and contemplate the Universe while the rest of us toil along on the material plane.) And, not being programmers, you would give me the same answer if I wrote:

What is 3.0/4.0?

Depending on the programming language we use, you can actually get two completely different answers to this apparently simple question. 3/4 is often interpreted by the computer to mean “What is the result if I carry out integer division, where I will only tell you how many times the denominator will go into the numerator as a whole number, for 3 and 4?” The answer will not be the expected 0.75, it will be 0, because 4 does not go into 3 – it’s too big. So, again depending on programming language, it is completely possible to ask the computer “is 3/4 equivalent to 3.0/4.0?” and get the answer ‘No’.

This is something that we have to highlight to students when we are teaching programming, because very few people use integer division when they divide one thing by another – they automatically start using decimal points. Now, in this case, the different behaviour of the ‘/’ is actually exceedingly well-defined and is not at all ambiguous to the computer or to the seasoned programmer. It is, however, nowhere near as clear to the novice or casual observer.
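If you want to see this for yourself, here is a minimal sketch in Python 3, which happens to be one of the languages that gives integer division its own operator; in C, Java or Python 2, the plain 3/4 between two integers would quietly give you 0:

    # The two meanings of "/" described above, shown in Python 3.
    # C, Java and Python 2 treat 3/4 as integer division and return 0;
    # Python 3 makes "/" always mean true division and uses "//" for
    # integer (floor) division instead.
    print(3 / 4)                 # 0.75 - true division
    print(3 // 4)                # 0    - integer division: 4 does not "go into" 3
    print(3.0 / 4.0)             # 0.75
    print(3 // 4 == 3.0 / 4.0)   # False - 0 is not equal to 0.75

The separate operator removes the invisible overloading, at the cost of an extra ‘word’ to learn – a trade-off I come back to below.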

I am currently reading Stephen Ramsay’s excellent “Reading Machines: Toward an Algorithmic Criticism” and it is taking me a very long time to read an 80-page book. Why? Because, to avoid ambiguity and to be as expressive and precise as possible, he has used a number of words and concepts with which I am unfamiliar or that I have not seen before. I am currently reading his book with a web browser and a dictionary because I do not have a background in literary criticism but, once I have the building blocks, I can understand his argument. In other words, I am having to learn a new language in order to read a book written for that language community. However, rather than being irked that “/” changes meaning depending on the company it keeps, I am happy to learn the new terms and concepts in the space that Ramsay describes, because it is adding to my ability to express key concepts, without introducing ambiguous shadings of language over things that I already know. Ramsay is not, for example, telling me that “book” no longer means “book” when you place it inside parentheses. (It is worth noting that Ramsay discusses the use of constraint as a creative enhancer, à la Oulipo, early on in the book and this is a theme for another post.)

The usual insult at this point is to trot out the accusation of jargon, which is as often a statement that “I can’t be bothered learning this” as it is a genuine complaint about impenetrable prose. In this case, the offender in my opinion is the person who decided to provide an invisible overloading of the “/” operator to mean both “division” and “integer division”, as they have required us to be aware of a change in meaning that is not accompanied by a change in syntax. While this isn’t usually a problem – spoken and written languages are full of these things, after all – in the computing world it forces the programmer to remember that “/” doesn’t always mean “/” and then to get it the right way around. (A number of languages solve this problem by providing a distinct operator – this, however, then adds to linguistic complexity and, rather than learning two meanings, you have to learn two ‘words’. Ah, no free lunch.) We have no tone or colour in mainstream programming languages, for a whole range of good computer grammar reasons, but the absence of the rising tone or rising eyebrow is sorely felt when we encounter something that means two different things. The net result is that we tend to use the same constructs to do the same thing because we have severe limitations upon our expressivity. That’s why there are boilerplate programmers, who can stitch together a solution from things they have already seen, and people who have learned how to be as expressive as possible, despite most of these restrictions. Regrettably, expressive and innovative code can often be unreadable by other people because of the gymnastics required to reach these heights of expressiveness, which is often at odds with what the language designers assumed someone might do.

We have spent a great deal of effort making computers better at handling abstract representations, things that stand in for other (real) things. I can use a name instead of a number and the computer will keep track of it for me. It’s important to note that writing int i=0; is infinitely preferable to typing “0000000000000000000000000000000000000000000000000000000000000000” into the correct memory location and then keeping that (rather large number) address written on a scrap of paper. Abstraction is one of the fundamental tools of modern programming, yet we greatly limit expressiveness in sometimes artificial ways to reduce ambiguity when, really, the ambiguity does seem a little artificial.

One of the nastiest potential ambiguities that shows up a lot is “what do we mean by ‘equals'”. As above, we already know that many languages would not tell you that “3/4 equals 3.0/4.0” because both mathematical operations would be executed and 0 is not the same as 0.75. However, the equivalence operator is often used to ask so many different questions: “Do these two things contain the same thing?”, “Are these two things considered to be the same according to the programmer?” and “Are these two things actually the same thing and stored in the same place in memory?”

Generally, however, to all of these questions, we return a simple “True” or “False”, which in reality reflects neither the truth nor the falsity of the situation. What we are asking, respectively, is “Are the contents of these the same?” to which the answer is “Same” or “Different”. To the second, we are asking if the programmer considers them to be the same, in which case the answer is really “Yes” or “No” because they could actually be different, yet not so different that the programmer needs to make a big deal about it. Finally, when we are asking if two references to an object actually point to the same thing, we are asking if they are in the same location or not.
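To make those three questions concrete, here is a small Python sketch – the Point class and its fields are invented purely for illustration, but the three comparisons line up with the three questions above:

    # Three different questions hiding behind "equals", sketched in Python.
    class Point:
        def __init__(self, x, y, label):
            self.x, self.y, self.label = x, y, label

        def __eq__(self, other):
            # "The same according to the programmer": we decide that two points
            # are equal if their coordinates match, and we ignore the label.
            return self.x == other.x and self.y == other.y

    a = Point(1, 2, "home")
    b = Point(1, 2, "work")

    print(a == b)              # True  - equal, as far as the programmer cares
    print(a is b)              # False - not the same object in memory
    print(a.label == b.label)  # False - the contents are not all the same

Even here, every answer comes back as a bare True or False, which is exactly the flattening of “Same”, “Yes” and “same location” that the next paragraph complains about.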

There are many languages that use truth values, and some of them do it far better than others, but unless we are speaking and writing in logical terms, the apparent precision of the True/False dichotomy is inherently deceptive and, once again, it is only as precise as it has been programmed to be and then interpreted, based on the knowledge of programmer and reader. (The programming language Haskell lets you leave a value “undefined” and, thanks to lazy evaluation, keep working on the rest of the problem until that value is actually needed, which is an obvious, and welcome, exception here, yet this is not a widespread approach.) It is an inherent limitation on our ability to express what is really happening in the system when we artificially constrain ourselves in order to (apparently) reduce ambiguity. It seems to me that we have reduced programmatic ambiguity, but we have not necessarily actually addressed the real or philosophical ambiguity inherent in many of these programs.

More holiday musings on the “Python way” and why this is actually an unreasonable demand, rather than a positive feature, shortly.


Thanks for the exam – now I can’t help you.

I have just finished marking a pile of examinations from a course that I co-taught recently. I haven’t finalised the marks but, overall, I’m not unhappy with the majority of the results. Interestingly, and not overly surprisingly, one of the best answered sections of the exam was based on a challenging essay question I set as an assignment. The question spans many aspects of the course and requires the student to think about their answer and link the knowledge – which most did very well. As I said, not a surprise but a good reinforcement that you don’t have to drill students in what to say in the exam, but covering the requisite knowledge and practising the right skills is often helpful.

However, I don’t much like marking exams and it doesn’t come down to the time involved, the generally dull nature of the task or the repetitive strain injury from wielding a red pen in anger, it comes down to the fact that, most of the time, I am marking the student’s work at a time when I can no longer help him or her. Like most exams at my Uni, this was the terminal examination for the course, worth a substantial amount of the final marks, and was taken some weeks after teaching finished. So what this means is that any areas I identify for a given student cannot now be corrected, unless the student chooses to read my notes in the exam paper or come to see me. (Given that this campus is international, that’s trickier but not impossible thanks to the Wonders of Skypenology.) It took me a long time to work out exactly why I didn’t like marking, but when I did, the answer was obvious.

I was frustrated that I couldn’t actually do my job at one of the most important points: when lack of comprehension is clearly identified. If I ask someone a question in the classroom, on-line or wherever, and they give me an answer that’s not quite right, or right off base, then we can talk about it and I can correct the misunderstanding. My job, after all, is not actually passing or failing students – it’s about knowledge, the conveyance, construction and quality management thereof. My frustration during exam marking increases with every incomplete or incorrect answer I read, which illustrates that there is a section of the course that someone didn’t get. I get up in the morning with the clear intention of being helpful towards students and, when it really matters, all I can do is mark up bits of paper in red ink.

Quickly, Jones! Construct a valid knowledge framework! You’re in a group environment! Vygotsky, man, Vygotsky!

A student who, despite my sweeping, and seeping, liquid red ink of doom, manages to get a 50 Passing grade will not do the course again – yet this mark pretty clearly indicates that roughly half of the comprehension or participation required was not carried out to the required standard. Miraculously, it doesn’t matter which half of the course the student ‘gets’, they are still deemed to have attained the knowledge. (An interesting point to ponder, especially when you consider that my colleagues in Medicine define a Pass at a much higher level and in far more complicated ways than a numerical 50%, to my eternal peace of mind when I visit a doctor!) Yet their exam will still probably have caused me at least some gnashing of teeth because of points missed, pointless misstatement of the question text, obscure song lyrics, apologies for lack of preparation and the occasional actual fact that has peregrinated from the place where it could have attained marks to a place where it will be left out in the desert to die, bereft of the life-giving context that would save it from such an awful fate.

Should we move the exams earlier and then use this to guide the focus areas for assessment in order to determine the most improvement and develop knowledge in the areas in most need? Should we abandon exams entirely and move to a continuous-assessment competency based system, where there are skills and knowledge that must be demonstrated correctly and are practised until this is achieved? We are suffering, as so many people have observed before, from overloading the requirement to grade and classify our students into neatly discretised performance boxes onto a system that ultimately seeks to identify whether these students have achieved the knowledge levels necessary to be deemed to have achieved the course objectives. Should we separate competency and performance completely? I have sketchy ideas as to how this might work but none that survive under the blow-torches of GPA requirements and resource constraints.

Obviously, continuous assessment (practicals, reports, quizzes and so on) throughout the semester provides a very valuable way to identify problems but this requires good, and thorough, course design and an awareness that this is your intent. Are we premature in treating the exam as a closing-off line on the course? Do we work on that the same way that we do any assignment? You get feedback, a mark and then more work to follow up? If we threw resourcing to the wind, could we have a 1-2 week intensive pre-semester program that specifically addressed those issues that students failed to grasp on their first pass? Congratulations, you got 80%, but that means that there’s 20% of the course that we need to clarify? (Those who got 100% I’ll pay to come back and tutor, because I like to keep cohorts together and I doubt I’ll need to do that very often.)

There are no easy answers here and shooting down these situations is very much in the fish/barrel plane, I realise, but it is a very deeply felt form of frustration that I am seeing the most work that any student is likely to put in but I cannot now fix the problems that I see. All I can do is mark it in red ink with an annotation that the vast majority will never see (unless they receive the grade of 44, 49, 64, 74 or 84, which are all threshold-1 markers for us).

Ah well, I hope to have more time in 2013 so maybe I can mull on this some more and come up with something that is better but still workable.


Thinking about teaching spaces: if you’re a lecturer, shouldn’t you be lecturing?

I was reading a comment on a philosophical post the other day and someone wrote this rather snarky line:

He’s a philosopher in the same way that (celebrity historian) is a historian – he’s somehow got the job description and uses it to repeat the prejudices of his paymasters, flattering them into thinking that what they believe isn’t, somehow, ludicrous. (Grangousier, Metafilter article 123174)

Rather harsh words in many respects and it’s my alteration of the (celebrity historian)’s name, not his, as I feel that his comments are mildly unfair. However, the point is interesting, as a reflection upon the importance of job title in our society, especially when it comes to the weighted authority of your words. From January the 1st, I will be a senior lecturer at an Australian University and that is perceived differently where I am. If I am in the US, I reinterpret this title into their system, namely as a tenured Associate Professor, because that’s the equivalent of what I am – the term ‘lecturer’ doesn’t clearly translate without causing problems, not even dealing with the fact that more lecturers in Australia have PhDs, where many lecturers in the US do not. But this post isn’t about how people necessarily see our job descriptions, it’s very much about how we use them.

In many respects, the title ‘lecturer’ is rather confusing because it appears, like builder, nurse or pilot, to contain the verb of one’s practice. One of the big changes in education has been the steady acceptance of constructivism, where the learners have an active role in the construction of knowledge and we are facilitating learning, in many ways, to a greater extent than we are teaching. This does not mean that teachers shouldn’t teach, because this is far more generic than the binding of lecturers to lecturing, but it does challenge the mental image that pops up when we think about teaching.

If I asked you to visualise a classroom situation, what would you think of? What facilities are there? Where are the students? Where is the teacher? What resources are around the room, on the desks, on the walls? How big is it?

Take a minute to do just this and make some brief notes as to what was in there. Then come back here.

It’s okay, I’ll still be here!



Adelaide Computing Education Conventicle 2012: “It’s all about the people”

acec 2012 was designed to be a cross-University event (that’s the whole point of the conventicles, they bring together people from a region) and we had a paper from the University of South Australia:  ‘”It’s all about the people”; building cultural competence in IT graduates’ by Andrew Duff, Kathy Darzanos and Mark Osborne. Andrew and Kathy came along to present and the paper was very well received, because it dealt with an important need and a solid solution to address that need, which was inclusive, insightful and respectful.

For those who are not Australians, it is very important to remember that the original inhabitants of Australia have not fared very well since white settlement and that the apology for what happened under many white governments, up until very recently, was only given in the past decade. There is still a distance between the communities and the overall process of bringing our communities together is referred to as reconciliation. Our University has a reconciliation statement and certain goals in terms of representation in our staff and student bodies that reflect percentages in the community, to reduce the underrepresentation of indigenous Australians and to offer them the same opportunities. There are many challenges facing Australia, and the health and social issues in our indigenous communities are often exacerbated by years of poverty and a range of other issues, but some of the communities have a highly vested interest in some large-scale technical, ICT and engineering solutions, areas where indigenous Australians are generally not students. Professor Lester Irabinna Rigney, the Dean of Aboriginal Education, identified the problem succinctly at a recent meeting: when your people live on land that is 0.7m above sea level, a 0.9m sea-level rise starts to become of concern and he would really like students from his community to be involved in building the sea walls that address this, while we look for other solutions!

Andrew, Kathy and Mark’s aim was to share out the commitment to reconciliation across the student body, making this a whole-of-community participation rather than a heavy burden for a few, under the guiding statement that they wanted to be doing things with the indigenous community, rather than doing things to them. There’s always a risk of premature claiming of expertise, where instead of working with a group to find out what they want, you walk in and tell them what they need. For a whole range of very good and often heartbreaking reasons, the Australian indigenous communities are exceedingly wary when people start ordering them about. This was the first thing I liked about this approach: let’s not make the same mistakes again. The authors were looking for a way to embed cultural awareness and the process of reconciliation into the curriculum as part of an IT program, sharing it so that other people could do it and making it practical.

Their key tenets were:

  1. It’s all about the diverse people. They developed a program to introduce students to culture, to give them more than just the one world view of the dominant culture and to introduce knowledge of the original Australians. It’s important to note that many Australians have no idea how to use certain terms or cultural items from indigenous culture, which of course hampers communication and interaction.

    For the students, they were required to put together an IT proposal, working with the indigenous community, that they would implement in the later years of their degree. Thus, it became part of the backbone of their entire program.

  2. Doing with [people], not to [people]. As discussed, there are many good reasons for this. Reduce the urge to be the expert and, instead, look at existing statements of rights and how to work with other peoples, such as the UN rights of indigenous peoples and the UniSA graduate attributes. This all comes together in the ICUP – Indigenous Content in Undergraduate Program.

How do we deal with information management in another culture? I’ve discussed before the (to many) quite alien idea that knowledge can reside with one person and, until that person chooses or needs to hand on that knowledge, that is the person that you need. Now, instead of demanding knowledge and conformity to some documentary standard, you have to work with people. Talking rather than imposing, getting the client’s genuine understanding of the project and their need – how does the client feel about this?

Not only were students working with indigenous people in developing their IT projects, they were learning how to work with other peoples, not just other people, and were required to come up with technologically appropriate solutions that met the client need. Not everyone has infinite power and 4G LTE to run their systems, nor can everyone stump up the cash to buy an iPhone or download apps. Much as programming in embedded systems shakes students out of the ‘infinite memory, disk and power’ illusion, working with other communities in Australia shakes them out of the single worldview and from the, often disrespectful, way that we deal with each other. The core here is thinking about different communities and the fact that different people have different requirements. Sometimes you have to wait to speak to the right person, rather than the available person.

The online forum, overseen by an indigenous tutor, poses four questions that students have to work through. The four questions are:

  1. What does culture mean to you?
  2. Post a cultural artefact that describes your culture?
  3. I came here to study Computer Science – not Aboriginal Australians?
  4. What are some of the differences between Aboriginal and non-Aboriginal Australians?

The first two are amazing questions – what is your answer to question number 2? The second pair of questions are more challenging and illustrate the bold, head-on nature of this participative approach to reconciliation. Reconciliation between all of the Australian communities requires everyone to be involved and, being honest, questions 3 and 4 are going to open up some wounds and drag some silly thinking out into the open but, most importantly, they allow us to talk through issues of concern and confusion.

I suspect that many people can’t really answer question 4 without referring back to mid-50s archetypal depictions of Australian Aborigines standing on one leg, looking out over cliffs, and there’s an excellent ACMI (Australian Centre for the Moving Image) exhibit in Melbourne that discusses this cultural misappropriation and stereotyping. One of the things that resonated with me is that asking these questions forces people to think about these things, rather than repeating old mind grooves and received nonsense overheard in pubs, seen on TV and heard in racist jokes.

I was delighted that this paper was able to be presented, not least because the goal of the team is to share this approach in the hope of achieving even greater strides in the reconciliation process. I hope to be able to bring some of it to my Uni over the next couple of years.

 


John Henry Died

Every culture has its myths and legends, especially surrounding those incredible individuals who stand out or tower over the rest of the society. The Ancient Greeks and Romans had their gods, demigods, heroes and, many times, cautionary tales of the mortals who got caught in the middle. Australia has the stories of pre- and post-federation mateship, often anti-authoritarian or highlighting the role of the larrikin. We have a lot of bushrangers (with suspiciously good hearts or reacting against terrible police oppression), Simpson and his donkey (a first world war hero who transported men to an aid station using his donkey, ultimately dying on the battlefield) and a Prime Minister who goes on YouTube to announce that she’s now convinced that the Mayans were right and we’re all doomed – tongue firmly in cheek. Is this the totality of the real Australia? No, but the stylised notion of ‘mateship’, the gentle knock and the “come off the grass, you officious … person” attitude are as much a part of how many Australians see themselves as shrimp on a barbie is to many US folk looking at us. In any Australian war story, you are probably more likely to hear about the terrible hangover the Gunner Suggs had and how he dragged his friend a kilometre over rough stones to keep him safe, than you are to hear about how many people he killed. (I note that this mateship is often strongly delineated over gender and racial lines, but it’s still a big part of the Australian story.)

The stories that we tell and those that we pass on as part of our culture strongly shape our culture. Look at Greek mythology and you see stern warnings against hubris – don’t rate yourself too highly or the gods will cut you down. Set yourself up too high in Australian culture and you’re going to get knocked down as well: a ‘tall poppies’ syndrome that is part cultural cringe, inherited from colonial attitudes to the Antipodes, part hubris and part cultural confusion as Anglo, Euro, Asian, African and… well, everyone, come to terms with a country that took quite a while to adapt to the original inhabitants, the Australian Aboriginal and Torres Strait Islander peoples. As someone who wasn’t born in Australia, like so many others who live here and now call themselves Australian, I’ve spent a long time looking at my adopted homeland’s stories to see how to fit. Along the way, because of travel, I’ve had the opportunity to look at other cultures as well: the UK, obviously as it’s drummed into you at school, and the US, because it interests me.

The stories of Horatio Alger, from the US, fascinate me, because of their repeated statement of the rags to riches story. While most of Alger’s protagonists never become amazingly wealthy, they rise, through their own merits, to take the opportunities presented to them and, because of this, a good man will always rise. This is, fundamentally, the American Dream – that any person can become President, effectively, through the skills that they have and through rolling up their sleeves. We see this Dream become ugly when any of the three principles no longer hold, in a framing I first read from Professor Harlon Dalton:

  1. The notion that we are judged solely on our merits: For this to be true, we must not have any bias – racist, gendered, religious, ageist or other. Given the recent ruling that an attractive person can be sacked, purely for being attractive and for providing an irresistible attraction for their boss, we have evidence that not only is this point not holding in many places, it’s not holding in ways that beggar belief.
  2. We will each have a fair opportunity to develop these merits: This assumes equal opportunity in terms of education and in terms of jobs, which promptly ignores things like school districts, differing property tax levels and teacher training approaches; because of the way that school districts work, just living in a given state or country because your parents live there (and can’t move) can make the difference between a great education and a sub-standard child-minding service. So this doesn’t hold either.
  3. Merit will out: Look around. Is the best, smartest, most talented person running your organisation or making up all of the key positions? Can you locate anyone in the “important people above me” who is holding that job for reasons other than true, relevant merit?

Australia’s myths are beneficial in some ways and destructive in others. For my students, the notion that we help each other, we question but we try to get things done is a positive interpretation of the mild anti-authoritarian mateship focus. The downside is drinking buddies going on a rampage and covering up for each other, fighting the police when the police are actually acting reasonably and public vandalism because of a desire to act up. The mateship myth hides a lot of racism, especially towards our indigenous community, and we can probably salvage a notion of community and collaboration from mateship, while losing some of the ugly and dumb things.

The tunnel went through.

Horatio Alger myths would give hope, except for the bleak reality that many people face: three giant pieces of baloney that people get hit about the head with. If you’re not succeeding, then Horatio Alger reasoning lets us call you lazy or stupid or just not taking the opportunities. You’re not trying to pull yourself up by your bootstraps hard enough. Worse still, trying to live up to this, sometimes impossible, guideline leads us into John Henryism. John Henry was a steel driver, who hammered and chiseled the rock through the mountains to build tunnels for the railroad. One day the boss brought in a steam-driven hammer and John Henry bet that he could beat it, to show that he and his crew should not be replaced. After a mammoth battle between man and machine, John Henry won, only to die with the hammer in his hand.

Let me recap: John Henry died – and the boss still got a full day’s work that was equal to two steam-hammers. (One of my objections to “It’s a Wonderful Life” is that the rich man gets away with stealing the money – that’s not a fairy tale, it’s a nightmare!) John Henryism occurs when people work so hard to lift themselves up by their bootstraps that they nearly (or do) kill themselves. Men in their 50s with incredibly high blood pressure, ulcers and arthritis know what I’m talking about here. The mantra of the John Henryist is:

“When things don’t go the way that I want them to, that just makes me work even harder.”

There’s nothing intrinsically wrong with this when your goal is actually achievable and you apply this maxim in moderation. At its extreme, and for those people who have people standing on their boot caps, this is a recipe to achieve a great deal for whoever is benefiting from your labour.

And then dying.

As John Henry observes in the ballad (Springsteen version), “I’ll hammer my fool self to death”, and the ballad of John Henry is actually a cautionary tale to set your pace carefully because if you’re going to swing a hammer all day, every day, then you have to do it at a pace that won’t kill you. This is the natural constraint on Horatio Alger and balances all of the issues with merit and access to opportunity: don’t kill your “fool self” striving for something that you can’t achieve. It’s a shame, however, that the stories line up like this because there’s a lot of hopelessness sitting in that junction.

Dealing with students always makes me think very carefully about the stories I tell and the stories I live. Over the next few days, I hope to put together some thoughts on a 21st century myth form that inspires without demanding this level of sacrifice, and that encourages without forcing people into despair if existing obstacles block them – and it’s beyond their current control to shift. However, on that last point, what I’d really like to come up with is a story that encourages people to talk about obstacles and then work together to lift them out of the way. I do like a challenge, after all. 🙂


Vitamin Ed: Can It Be Extracted?

Mmm. Taste the learnination.

There are a couple of ways to enjoy a healthy, balanced diet. The first is to actually eat a healthy, balanced diet made up from fresh produce across the range of sources, which requires you to prepare and cook foods, often changing how you eat depending on the season to maximise the benefit. The second is to eat whatever you dang well like and then use an array of supplements, vitamins, treatments and snake oil to try and beat your diet of monster burgers and gorilla dogs into something that will not kill you in 20 years. If you’ve ever bothered to look on the side of those supplements, vitamins, minerals or whatever, that most people have in their ‘medicine’ cabinets, you might see statements like “does not substitute for a balanced diet” or nice disclaimers like that. There is, of course, a reason for that. While we can be fairly certain about a range of deficiency disorders in humans, and we can prevent these problems with selective replacement, many other conditions are not as clear cut – if you eat a range of produce which contains the things that we know we need, you’re probably getting a slew of things that we also need but don’t make themselves as prominent.

In terms of our diet, while the debate rages about precisely which diet humans should be eating, we can have a fairly good stab at a sound basis from a dietician’s perspective built out of actual food. Recreating that from raw sugars, protein, vitamin and mineral supplements is technically possible but (a) much harder to manage and (b) nowhere near as satisfying as eating the real food, in most cases. Let’s not forget that very few of us in the western world are so distant from our food that we regard it purely as fuel, with no regard for its presentation, flavour or appeal. In fact, most of us could muster a grimace at the thought of someone telling us to eat something because it was good for us or for some real or imagined medical benefit. In terms of human nutrition, we have the known components that we have to eat (sugars, proteins, fats…) and we can identify specific vitamins and minerals that we need to balance to enjoy good health, yet there is no shortage of additional supplements, taken out of concern for our health, that may have little or no demonstrated benefit – and still we take them.

There’s been a lot of work done in trying to establish an evidence base for medical supplements and far more of the supplements fail than pass this test. Willow bark, an old remedy for pain relief, has been found to have a reliable effect because it has a chemical basis for working – evidence demonstrated that and now we have aspirin. Homeopathic memory water? There’s no reliable evidence for this working. Does this mean it won’t work? Well, here we get into the placebo effect and this is where things get really complicated because we now have the notion that we have a set of replacements that will work for our diet or health because they contain useful chemicals, and a set of solutions that work because we believe in them.

When we look at education, where it’s successful, we see a lot of techniques being mixed in together in a ‘natural’ diet of knowledge construction and learning. Face-to-face and teamwork, sitting side-by-side with formative and summative assessment, as part of discussions or ongoing dialogues, whether physical or on-line. Exactly which parts of these constitute the “balanced” educational diet? We already know that a lecture, by itself, is not a complete educational experience, in the same way that a stand-alone multiple-choice question test will not make you a scholar. There is a great deal of work being done to establish an evidence basis for exactly which bits work but, as MIT said in the OCW release, these components do not make up a course. In dietary terms, it might be raw fuel but is it a desirable meal? Not yet, most likely.

Now let’s get into the placebo side of the equation, where students may react positively to something just because it’s a change, not because it’s necessarily a good change. We can control for these effects, if we’re cautious, and we can do it with full knowledge of the students but I’m very wary of any dependency upon the placebo effect, especially when it’s prefaced with “and the students loved it”. Sorry, students, but I don’t only (or even predominantly) care if you loved it, I care if you performed significantly better, attended more, engaged more, retained the information for longer and achieved more, and all of these things can only be measured when we take the trouble to establish base lines, construct experiments, measure things, analyse with care and then think about the outcomes.

My major concern about the whole MOOC discussion is not whether MOOCs are good or bad, it’s more to do with:

  • What does everyone mean when they say MOOC? (Because there’s variation in what people identify as the components)
  • Are we building a balanced diet or are we constructing a sustenance program with carefully balanced supplements that might miss something we don’t yet value?
  • Have we extracted the essential Vitamin Ed from the ‘real’ experience?
  • Can we synthesise Vitamin Ed outside of the ‘real’ educational experience?

I’ve been searching for a terminological separation that allows me to separate ‘real’/’conventional’ learning experiences from ‘virtual’/’new generation’/’MOOC’ experiences and none of those distinctions are satisfying – one says “Restaurant meal” and the other says “Army ration pack” to me, emphasising the separation. Worse, my fear is that a lot of people don’t regard MOOC as ever really having Vitamin Ed inside, as the MIT President clearly believed back in 2001.

I suspect that my search for Vitamin Ed starts from a flawed basis, because it assumes a single silver bullet if we take a literal meaning of the term, so let me spread the concept out a bit to label Vitamin Ed as the essential educational components that define a good learning and teaching experience. Calling it Vitamin Ed gives me a flag to wave and an analogue to use, to explain why we should be seeking a balanced diet for all of our students, rather than a banquet for one and dog food for the other.


“We are not providing an MIT education on the web…”

I’ve been re-watching some older announcements that describe open courseware initiatives, starting from one of the biggest, the MIT announcement of their OpenCourseWare (OCW) initiative in April 2001. The title of this post actually comes from the video, around the 5:20 mark. (Video quoted under a CC-BY-NC-SA licence; more information available at: http://ocw.mit.edu/terms)

“Let me be very clear, we are not providing an MIT education on the Web. We are, however, providing core materials that are the infrastructure that undergirds that information. Real education, in our view, involves interaction between people. It’s the interaction between faculty and students, in our classrooms and our living group, in our laboratories that are the heart, the real essence, of an MIT education. “

While the OCW was going to be produced and used on campus, the development of OCW was seen as something that would make more time available for student interaction, not less. President Vest then goes on to confidently predict that OCW will not make any difference to enrolment, which is hardly surprising given that he has categorically excluded anyone from achieving an MIT education unless they enrol. We see here exactly the same discussion that keeps coming up: these materials can be used as augmenting materials in these conventional universities but can never, in the view of the President or Vice Chancellor, replace the actual experience of obtaining a degree from that institution.

Now, don’t get me wrong. I still think that the OCW initiative was excellent, generous and visionary, but we are still looking at two fundamentally different use cases: the use of OCW to augment an existing experience and the use of OCW to bootstrap a completely new experience, which is not of the same order. It’s a discussion that we keep having: what happens to my Uni if I use EdX courses from another institution? Well, ok, let’s ask that question differently. I will look at this from two sides, using the introduction of a new skill and knowledge area that becomes ubiquitous: in my sphere, Computer Science and programming. Let’s look at this in terms of growth and success.

What happens if schools start teaching programming to first year level? 

Let’s say that we get programming into every single national curriculum for secondary school and we can guarantee that students come in knowing how to program to freshman level. There are two ways of looking at this. The first, which we have probably all seen to some degree, is to regard the school teaching as inferior and re-teach it; the net result will be bored students, low engagement and wasted time. The second, far more productive, approach is to say “Great! You can program. Now let’s do some Computer Science.” and use that extra year or so to increase our discipline knowledge, or to put breadth courses back in so that our students come out a little more well-rounded. What’s the difference between students learning it at school before they come to us, or through an EdX course on fundamental programming after they come to us?

Not much, really, as long as we make sure that the course meets our requirements – and, in fact, it gives us bricks-and-mortar-bound entities more time to do all that face-to-face interactive University stuff that we know students love and from which they derive great benefit. University stops being semi-vocational in some aspects and we leap into knowledge construction, idea generation, big projects and the grand dreams that we always talk about, yet often don’t get to because we have to train people in basic programming, drafting, and so on. Do we give them course credit? No, because they’re assumed knowledge, or barrier tested, and they’re not necessarily part of our structure anymore.

What happens if no-one wants to take my course anymore?

Now, we know that we can change our courses because we’ve done it so many times before over the history of the Academy – Latin, along with Greek, the language of scholarship, was used in only half of the University publications of 1800. Let me wander through a classical garden for a moment to discuss the nature of change from a different angle, that of decline. Languages had a special place in the degrees of my University, with Latin and Greek dominating, and then with the daring possibility, from 1938, of allowing the substitution of French or German for Latin or Greek. It was as recently as 1958 that Latin stopped being compulsory for high school graduation in Adelaide, although it was still required for the study of Law – student demand for Latin at school therefore plummeted and Latin courses started being dropped from the school curriculum. The Law Latin requirement was removed around 1969-1970, which dropped demand for Latin even further. The reduction in the number of school teachers who could teach Latin required the introduction of University courses for students who had studied no Latin at all – Latin IA entered the syllabus. However, given that in 2007 only one student across all of the schools in the state of South Australia (roughly 1.2-1.4 million people) studied Latin in the final year of school, it is apparent that if this University wishes to teach Latin, it has to start by teaching all of Latin. This is a course, and a discipline, that is currently in decline. My fear is that, one day, someone will make the mistake of thinking that we no longer need scholars of this language. And that worries me, because I don’t know what people 30 years from now will actually want, or what they could add to the knowledge that we already have of one of our most influential civilisations.

This decline is not unique to Latin (or Greek, or classics in general), but a truly on-line course experience would allow us to pool the scholars we have left and offer scaled resources for much longer than isolated pockets in real offices could manage, although, as President Vest notes, a storehouse of Latin texts does not a course make. What reduced the demand for Latin? Possibly the ubiquity of the Latin-derived language we already use, combined with a change of focus away from a classical education towards a more job- and achievement-oriented (semi-vocational) style of education. If you ask me, programming could as easily go this way in about 20 years, once we have ways to let machines solve problems for us. A move towards a less go-go-go culture, smarter machines and a resurgence of the long leisure cycles associated with Science Fiction visions of the future, and suddenly it is the engineers and the computer scientists who are looking at shrinking departments and no support in the schools. Let me be blunt: course popularity and desirability rises, stabilises and falls, and it’s very hard to tell if we are looking at a parabola or a pendulum. With that in mind, we should be very careful about how we define our traditions and our conventions, especially as our cunning tools for supporting on-line learning and teaching get better and better. Yes, interaction is an essential part of a good education, no argument at all, but there is an implicit assumption that the critical mass needed to support this interaction in a face-to-face environment will always be there, when that critical mass is as much a function of popularity and traditionally-associated prestige as it is of excellence.

What are MIT doing now?

I look at the original OCW release and I agree that, at the time of production, you could not reproduce the interaction between people that would give you an MIT education. But our tools are better now. They are, quite probably, not yet close enough to give you an “MIT of the Internet”, but should this be our goal? Not the production of a facsimile of the core materials that might, with MIT instructors, turn into a course, but a commitment to developing the tools that actually reproduce the successful components of the learning experience, with group and personal interaction, allowing us to form what we used to call a physical interactive experience in a virtual setting? That’s where I think the new MIT initiatives are showing us how these things can work now, starting from their original idealistic roots and adding the technology of the 21st Century. I hope that other, equally prestigious, institutions are watching this carefully.


Legitimisation and Agency: I Believe That’s My Ox on Your Bridge

There’s an infamous newspaper advertisement that never ran, which reflected the entry of IBM into the minicomputer market. A number of companies, Data General principal among them, but including such (historically) powerful players as Digital Equipment Corporation, Prime and Hewlett Packard, were quite successful in the minicomputer market, growing rapidly and stealing market share from IBM’s mainframe market. (For an excellent account of these times, I recommend “The Soul of a New Machine” by Tracy Kidder.) IBM finally decided to enter the minicomputer market and, as analysts remarked at the time, IBM’s move into minicomputers legitimised the market.

Ed DeCastro, CEO of Data General, had a full-page newspaper advertisement prepared, which I reproduce (mildly bowdlerised to keep my all-ages posting status):

“They Say IBM’s Entry Into the Minicomputer Market Will Legitimize the Industry. The B***ards Say, Welcome.”

The ad never actually ran but was framed and put on Ed’s wall. The point, however, was well and precisely made: IBM’s approval was neither required nor desired, and nobody had set a goal of being legitimised.

The Nova, the first Data General minicomputer, with Ed DeCastro in the background.

Over on Mark’s blog, we see that a large number of UK universities are banding together to launch an on-line project, including the highly successful existing player in this space, the Open University, as well as some high-powered players such as Southampton and the disturbingly successful St Andrews. As Mark notes in the title, this is a serious change: an allied UK effort that will produce a competitor (or competitors) to the existing US dominance. As Mark also notes:

Hmm — OxBridge isn’t throwing hats into the rings yet.

And this is a very thoughtful Hmm, because the Universities of Oxford and Cambridge are the impossible-to-ignore legitimising agencies, given their sheer weight on the rubber sheet of UK Academy Spacetime. When it comes to groups of Universities in the UK, and believe me there are quite a few, the Russell Group’s 24 member Universities award the lion’s share of PhDs and employ 78% of the most highly graded research staff. One of its stated goals is to lead the research efforts of the UK; another is to attract the best staff and students to its member institutions. However, the participants in the new on-line project include both Russell Group Universities and those outside it, which makes the non-participation of Oxford and Cambridge even more interesting. How can a trans-group on-line proposal bring the best students in – or is this why we aren’t seeing involvement from Oxbridge, because of the two-tier perception between traditional and on-line offerings? One can easily argue that Oxford and Cambridge have no need to participate because they are so entrenched in their roles and their success that, as I’ve noted in a different post, any ranking system that rates them outside, say, the top 5 in the UK has made itself suspect as a ranking, rather than reflecting any drop in quality. Oxbridge is at the heart of the UK’s tertiary system, and competition to gain entry will continue to be fierce for the foreseeable future. They have no need to band together with the others in their group or beyond, and it’s not about protecting themselves from competitors, as they are not really in competition with most of the other Russell Group members – because they are Oxford and Cambridge.

It’s worth noting that Cambridge’s vice-chancellor Leszek Borysiewicz did think that this consortium was exciting, and I quote from the THE article:

“Online education is becoming an important approach which may open substantial opportunities to those without access to conventional universities,” he said.

And that pretty much confirms why Cambridge is happy to stand back: because they are almost the definition of a conventional university, catering to a well-established market for whom attending a bricks-and-mortar University is as important as (if not more important than) the course content or delivery mechanisms. The “Gentleman’s Third”, the lowest possible passing grade for your degree examinations, indicates a dedication to many things at the University that are, most likely, of a less-than-scholarly nature, but it is precisely for these activities that some people go to Oxford and Cambridge, and it is also precisely these non-scholarly activities that we will have great difficulty transferring into a MOOC. There will be no Oxford-Cambridge boat race carried out in a browser-based Flash game, with distributed participants hooked up to rowing machines across the globe, nor will the Footlights be conducted as a Google Hangout (except, of course, highly ironically).

Over time, we’ll find out more about the role of tradition and convention in who joins these consortia and how they participate, but let me return to my opening anecdote. We are already dealing with issues of legitimacy in the on-line learning space, whether from pedagogical fatigue, academic cultural inertia, xenophobia, or the fact that some highly vaunted previous efforts have not been very good. The absence of two of the top three Universities in the UK from this fascinating and potentially quite fruitful collaboration makes me think a lot about IBM. I think of someone sitting back, watching things happen, certain in the knowledge that what they do is what the market needs and that it is, oh happy day, what they are currently doing. When Oxford and Cambridge come in and anoint the MOOC, if they ever do or if we ever can, then we have the same antique, avuncular approach of patting an entire sector on the head and saying “oh, well done, but the grownups are here now”, and this is unlikely to result in anything good in terms of fellow feeling, or in terms of the transferability and accreditation of students – key challenges if MOOCs are to be taken more seriously. Right now, Oxford and Cambridge are choosing not to step in, and there is no doubt that they will continue to be excellent Universities for their traditional attendees – but is this a sensible long-term survival strategy? Could they contribute to the exploration of the space in a productive manner by putting their legitimising weight in sooner rather than later, at a time when they can say “Let’s all look at this to see if it’s any good”, rather than “Oh, hell. Now we have to do something”? Would there be much greater benefit in bringing in their considerable expertise, teaching and research excellence, and resources now, when there is so much room for ground-level innovation?

This is certainly something I’m fearful of in my own system, where the Group of 8 Universities has most of the research funding, grants most of the higher degrees and, as a goal at least, targets the best staff and students. Our size and tradition can be barriers to agility and innovation, although our recent strategy is obviously trying to set our University on a more innovative and more agile course. A number of recent local projects are embracing the legitimacy of new learning and teaching approaches. It is, however, very important to remember the example of IBM and how the holders of tradition may not be welcomed as a legitimising influence when others have been highly successful at innovating in a new space that the tradition holder deemed beneath them until reality finally intruded.

It’s easy to stand back and say “Well, that’s fine for people who can’t afford mainframes”, but such a stance must be balanced against asking whether people still need, or want, to afford mainframes. I think the future of education is heavily blended – MOOC plus face-to-face is a space where we can do great things – but for now it’s very interesting to see how we develop as we take more and more steps down this path.