Two Tier: Already Here

Hah! I look down on you, you apples!

I was reading a Chronicle of Higher Ed article, “For Whom Is College Being Reinvented?”, and it was sobering reading. While I was writing yesterday about Oxford and Cambridge wanting to maintain their conventional University stance, Robert Archibald, an Economics Professor from the College of William and Mary, points out that the two-tier system is already here in terms of good conventional and bad conventional – so we would see an even larger disparity between luxury and economy courses. Getting into the “good” colleges will be a matter of money and prior preparation, much as it is in many areas where the choice of school available to parents increasingly drives residential moves in the early years of a child’s life. But it doesn’t end there, because the ‘quality’ measure may be as much about the employability of students after they’ve completed their studies – and, as the article says, now we have to start thinking about whether a “low-level” degree is preferable to an “industry recognised” apprenticeship or trade training program. Now our two tiers are as separate as radiographer and radiologist but, as Robert Reich also observes in the same article, this is completely against what we should be doing: how can we do all this and maintain real equality between degrees and programs?

Of course, if you didn’t go to a great elementary and senior school then, despite being on the path to the ‘second-tier’ school – which might be one that naturally migrates to fully electronic delivery for a number of perfectly reasonable economic reasons – you are probably someone who needs a more customised experience than a ‘boilerplate’ MOOC could offer: you actually need face-to-face teaching. When we talk about disruption of the existing college system, we always assume that it is a positive thing, something that will lead to a better result for our students, so these questions about where the new technologies will actually get focused start to become very important.

For whom will these new systems work? Everyone or just the people that we’re happy to expose them to?

It’s perhaps the best question we have to frame the discussion – it’s not about whether the technology works; we know that it works well for certain things, and it’s now a matter of making sure that our pedagogical systems are correctly married to our computer systems to make the educational experience work. But, obviously, and as many much better writers than I have been saying, it has to work and be at least as good as the systems it’s replacing – only now we realise that existing systems are not the same for everyone, and that one person’s working system is someone else’s diabolically bad teaching experience. So the entire discussion about whether MOOCs work now has to be framed in the context of ‘compared to what?’

It’s an interesting article that poses more questions than it answers, but it’s certainly part of the overall area we have to think about.


Legitimisation and Agency: I Believe That’s My Ox on Your Bridge

There’s an infamous newspaper advertisement that never ran, which reflected the entry of IBM into the minicomputer market. A number of companies, Data General principal among them, but including such (historically) powerful players as Digital Equipment Corporation, Prime and Hewlett Packard, were quite successful in the minicomputer market, growing rapidly and stealing market share from IBM’s mainframe market. (For an excellent account of these times, I recommend “The Soul of a New Machine” by Tracy Kidder.) IBM finally decided to enter the minicomputer market and, as analysts remarked at the time, IBM’s move into minicomputers legitimised the market.

Ed DeCastro, CEO of Data General, had a full-page newspaper advertisement prepared, which I reproduce (mildly bowdlerised to keep my all-ages posting status):

“They Say IBM’s Entry Into the Minicomputer Market Will Legitimize the Industry. The B***ards Say, Welcome.”

The ad never actually ran but was framed and put on Ed’s wall. The point, however, was well and precisely made: IBM’s approval was neither required nor desired, and nobody had set a goal of being legitimised.

The Nova, the first Data General minicomputer, with Ed DeCastro in the background.

Over on Mark’s blog, we see that a large number of UK universities are banding together to launch an on-line project, including the highly successful existing player in the analogous space, the Open University, but also some high-powered players such as Southampton and the disturbingly successful St Andrews. As Mark notes in the title, this is a serious change: an allied UK effort that will produce a competitor (or competitors) to the existing US dominance. As Mark also notes:

Hmm — OxBridge isn’t throwing hats into the rings yet.

And this is a very thoughtful Hmm, because the Universities of Oxford and Cambridge are the impossible-to-ignore legitimising agencies, thanks to their sheer weight on the rubber sheet of UK Academy Spacetime. When it comes to talking about groups of Universities in the UK, and believe me there are quite a few, the Russell Group awards the lion’s share of PhDs and employs 78% of the most highly graded research staff across its 24 Universities. One of its stated goals is to lead the research efforts of the UK; another is to attract the best staff and students to its member institutions. However, the participants in the new on-line project include both Russell Group Universities and those outside it, which makes the non-participation of Oxford and Cambridge even more interesting. How can a trans-group on-line proposal bring the best students in – or is this why we aren’t seeing involvement from Oxbridge, because of the two-tier perception between traditional and on-line? One can easily argue that Oxford and Cambridge have no need to participate because they are so entrenched in their roles and their success that, as I’ve noted in a different post, any ranking system that rates them out of, say, the top 5 in the UK has made itself suspect as a ranking, rather than serving as a reflection of dropping quality. Oxbridge is at the heart of the UK’s tertiary system, and competition to gain entry will continue to be fierce for the foreseeable future. They have no need to band together with the others in their group or beyond – and it’s not a matter of protecting themselves from competitors, as they are not really in competition with most of the other Russell Group members – because they are Oxford and Cambridge.

It’s worth noting that Cambridge’s vice-chancellor Leszek Borysiewicz did think that this consortium was exciting, and I quote from the THE article:

“Online education is becoming an important approach which may open substantial opportunities to those without access to conventional universities,” he said.

And that pretty much confirms why Cambridge is happy to stand back – because they are almost the definition of a conventional university, catering to a well-established market for whom attending a bricks-and-mortar University is as important as (if not more important than) the course content or delivery mechanisms. The “Gentleman’s Third”, receiving the lowest possible passing grade in your degree examinations, indicates a dedication to many things at the University that are, most likely, of a less-than-scholarly nature; but it is precisely for these activities that some people go to Oxford and Cambridge, and it is also precisely these non-scholarly activities that we will have great difficulty transferring into a MOOC. There will be no Oxford-Cambridge boat race carried out in a browser-based Flash game, with distributed participants hooked up to rowing machines across the globe, nor will the Footlights be conducted as a Google Hangout (except, of course, highly ironically).

Over time, we’ll find out more about the role of tradition and convention in the consortium’s composition and participation, but let me return to my opening anecdote. We are already dealing with issues of legitimacy in the on-line learning space, whether from pedagogical fatigue, academic cultural inertia, xenophobia, or the fact that some highly vaunted previous efforts have not been very good. The absence of two of the top three Universities in the UK from this fascinating and potentially quite fruitful collaboration makes me think a lot about IBM. I think of someone sitting back, watching things happen, certain in the knowledge that what the market needs is, oh happy day, exactly what they are currently doing. When Oxford and Cambridge come in and anoint the MOOC, if they ever do or if we ever can, then we have the same antique, avuncular approach of patting an entire sector on the head and saying “oh, well done, but the grownups are here now” – and this is unlikely to result in anything good in terms of fellow feeling, or in terms of the transferability and accreditation of students, key challenges if MOOCs are to be taken more seriously. Right now, Oxford and Cambridge are choosing not to step in, and there is no doubt that they will continue to be excellent Universities for their traditional attendees – but is this a sensible long-term survival strategy? Could they contribute to the exploration of the space in a productive manner by putting their legitimising weight in sooner rather than later, at a time when they could be saying “Let’s all look at this to see if it’s any good”, rather than “Oh, hell. Now we have to do something”? Would there be much greater benefit in bringing in their considerable expertise, teaching and research excellence, and resources now, when there is so much room for ground-level innovation?

This is certainly something I’m fearful of in my own system, where the Group of Eight Universities has most of the research funding, most of the higher degree granting and, as a goal at least, targets the best staff and students. Our size and tradition can be barriers to agility and innovation, although our recent strategy is obviously trying to set our University on a more innovative and more agile course, and a number of recent local projects are embracing the legitimacy of new learning and teaching approaches. It is, however, very important to remember the example of IBM, and how the holders of tradition may not necessarily be welcomed as a legitimising influence when others have been highly successful innovating in a space that the tradition holders deemed beneath them until reality finally intruded.

It’s easy to stand back and say “Well, that’s fine for people who can’t afford mainframes”, but such a stance must be balanced by looking to see whether people still need or want to afford mainframes. I think the future of education is heavily blended – MOOC plus face-to-face is an area where I think we can do great things – but for now it’s very interesting to see how we develop as we take more and more steps down this path.


Education is not Music: A Long Winded Agreement with Aaron Bady

Mark Guzdial has been posting a great deal on MOOCs, as have we all, although Mark is much easier to read than I am, and his recent comment on Aaron Bady’s response to Clay Shirky’s “Udacity is Napster” argument drew me to Bady’s great article and this key quote within it:

“I think teaching is very different from music”

and I couldn’t agree more. Let me briefly list why I feel that the comparison to Napster has no real validity, and why I agree with Aaron that Clay Shirky’s argument is not well grounded for a discussion of education. What’s interesting is that Shirky identifies the key point in his own essay, but doesn’t quite realise the full implications of what he’s saying:

Starting with Edison’s wax cylinders, and continuing through to Pandora and the iPod, the biggest change in musical consumption has come not from production but playback.

Those earlier systems started out markedly inferior to the high-cost alternative: records were scratchy, PCs were crashy. But first they got better, then they got better than that, and finally, they got so good, for so cheap, that they changed people’s sense of what was possible.

The first thing we need to remember about music is that music is inherently fungible: when viewed as a piece of work, you can replace it with another effectively identical item. Of course, here we need to be careful and define what we mean by identical, because music, as it turns out, is almost never identical but it gets treated that way. If you doubt this, go and review how much it costs to insert the song “Happy Birthday to You” into a movie or TV show. It doesn’t matter if it’s Homer Simpson yelling it drunkenly, or the Three Tenors singing it sotto voce as part of an Ally McBeal shower hallucination flashback; you will still be liable to fork out dollars to the company that claims to hold the copyright. To understand how we even made music small enough to send across the (much, much slower back then) Internet, we have to start with the MP3 format, which threw away enough ‘unneeded’ data from the original CD files to shrink them to a little less than 10% of their original size. This is the technology we needed before we could even get around to the idea of Napster, because it meant enough people had enough music on their hard drives to make file sharing useful. However, as Shirky also notes in his article, this lossy compression changes the way the music sounds, and you can tell the difference if you listen carefully and know what to listen for. Yet this is the same song, and Napster got into trouble for sharing compressed artefacts of lower quality and perceptible difference from the CD originals, because music, as this kind of artefact, is fungible despite very different levels of quality.
Identical, to an audiophile, means sounding precisely the same (or true to the source, really), but identical to the copyright owner means a representation that clearly indicates unauthorised use of copyright material – which is why George Harrison’s “My Sweet Lord” ended up being described as sufficiently similar to “He’s So Fine”, despite being a brand new recording and not just a compressed copy.
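As an aside, the “a little less than 10%” figure is easy to sanity-check with some back-of-the-envelope arithmetic. The CD figures below are the standard audio parameters; the 128 kbps MP3 bitrate is my own assumption of a typical early encoding, not a number from Shirky or Bady:

```python
# CD audio: 44,100 samples per second, 16 bits per sample, two channels
cd_bitrate_kbps = 44_100 * 16 * 2 / 1000    # 1411.2 kbps

# A typical early MP3 encoding (assumed bitrate, for illustration only)
mp3_bitrate_kbps = 128

ratio = mp3_bitrate_kbps / cd_bitrate_kbps
print(f"An MP3 at 128 kbps is about {ratio:.1%} of the CD original")
```

which comes out at roughly 9% – consistent with the “a little less than 10%” claim above.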

So, yes, Shirky’s original quotes are both true – we have improved playback and while MP3 is still very common, lossless and much higher quality reproductions are now available. However, the point that has been missed is that the vast majority of people do not care in the slightest. The average person will only notice a shift from MP3 to lossless if they suddenly discover that their iPod has dropped in capacity, when measured in number of songs, by a significant margin. If I listen to “Viva La Vida” by Coldplay, and yes, Joe Satriani fans, I picked that deliberately, then the effective difference in my enjoyment of the song, my ability to sing along tunelessly in the shower and the ability to recite the words if asked, has nothing to do with the quality. This is not true of certain pieces of classical music, where the compression artefacts start to have more of an effect, but these are not the core business of file sharers and those who trade in compressed artefacts. However, MP3 artefacts rarely sound like long scratches, dust on the record or a bad needle – yes, they can be irritating, but the electronic form, pre and post-compression, is generally protected from such things unless you get some serious cosmic ray action in your storage media and even then, you have to be very unlucky.

The Napster music argument, for me, falls down because the increase in quality has no direct connection to what the majority of the user base would have considered an acceptable product. Yes, it’s better now but, for most people, so what? Music sharing services are considered useful and valuable because they share songs that people want; most people don’t think about the quality, they accept the name and the recognisable nature of the song as enough.

This is not at all true for education, because educational experiences vary wildly between lecturers, courses, institutions and eras to an extent that it is impossible to consider them in any way to be interchangeable – quality, here, is everything. If you have an international articulation program, you know that the first thing you have to do is to work out what has been taught, and how it has been taught, inside a course of the same name as one of yours. Even ‘name equivalence’ doesn’t mean anything here and we do not, or we should not, grant standing based on a coincidence of name for a course. There is no parallel guarantee that my low quality version of a course will give me the same ability to “sing in the shower” as the high quality course will – and this is, for me, an unassailable difference.

There is no doubt that the opportunities offered by blended learning, full electronic offerings, and, yes, MOOCs (however they end up being defined) are something that we have to consider because, if they work, they allow us to educate the world. But claiming that this must occur because Udacity is like Napster completely ignores the core difference between education and music, in terms of the consumer base and their focus on what it means for a service to meet their requirements. If students didn’t care about perceived quality, then we wouldn’t have the notion of ‘top schools’ or ‘low-end schools’, so we know that this thinking exists. A student will happily put an MP3 on at a party, but it remains to be seen whether they will, by design and not out of desperation, put a MOOC course on a job application and expect a good result from it.


When Does Failing Turn You Into a Failure?

The threat of failure is very different from the threat of being a failure. At the Creative Innovations conference I was just at, one of the strongest messages was that we learn more from failure than we do from success, and that failure is inevitable if you are actually trying to be innovative. The message from CI was that if you learn from your failures, and your failure is the genuine result of something that didn’t work rather than of sitting around and watching it burn, then this is just something that happens – and that any other culture makes us overly cautious and risk averse. As most of us know, however, we are more strongly encouraged to cover up our failures than to celebrate them – and we are frequently better off not trying, in certain circumstances, than failing.

At the recent Adelaide Conventicle, which I promise to write up very, very soon, Dr Raymond Lister presented an excellent talk on applying Neo-Piagetian concepts and framing to the challenges students face in learning programming. This is a great talk (which I’ve had the good fortune to see twice, and it’s a mark of the work that I enjoyed it as much the second time) because it allows us to talk about failure to comprehend, or failure to put into practice, in terms of a lack of the underlying mechanism required to comprehend at this point in the student’s development. As part of the stages of development, we would expect students to have these head-scratching moments where they are currently incapable of making any progress, but framing this within developmental stages allows us to talk about moving students to the next stage, getting them out of the current failure mode and into something where they will achieve more. Once again, failure in this case is inevitable for most people until we, and they, manage to achieve the level of conceptual understanding on which we can build and develop. More importantly, if we track how they fail, then we start to get an insight into which developmental stage they’re at.

One thing that struck me in Raymond’s talk was that he starts off talking about “what ruined Raymond”, discussing the dire outcomes promised to him if he watched too much television, as they were to me for playing too many games, and as they are to our children for whatever high-tech diversion is the current ‘finger-wagging’ harbinger of doom. In this case, ruination is quite clearly the threat of becoming a failure. However, this puts us in a strange position: if failure is almost inevitable, but highly valuable if managed properly and understood, what is it about being a failure that is so terrible? It’s like threatening someone that they’ll become too enthusiastic and unrestrained in their innovation!

I am, quelle surprise, playing with words here, because to be a failure is to be classed as someone for whom success is no longer an option. If we were being precise, we would class someone as a perpetual failure or, more simply, as unsuccessful. This is, usually, the point at which it becomes acceptable to give up on someone – after all, goes the reasoning, we’re just throwing good money after bad, wasting our time, possibly even rearranging the deck chairs on the Titanic, and all those other expressions that allow us to draw that good old categorical line between us and others, putting our failures into the “Hey, I was trying something new” basket and their failures into the “Well, he’s just so dumb he’d try something like that” basket. The only problem with this is that I’m really not sure that a lifetime of failure is a guaranteed predictor of future failure. Likely? Yeah, probably. So likely that we can gamble someone’s life on it? No, I don’t believe so.

When I was failing courses in my first degree, it took me a surprisingly long time to work out how to fix it, most of which was down to the fact that (a) I had no idea how to study and (b) no-one around me was vaguely interested in the fact that I was failing. I was well on my way to becoming a perpetual failure, someone who had no chance of holding down a job let alone having a career, and it was a kind and fortuitous intervention that helped me. Now, with a degree of experience and knowledge, I can look back at my own patterns and see pretty much what was wrong with me – although, boy, would I have been a difficult cuss to work with. However, failing, which I have done since then and will (no doubt) do again, does not appear to have turned me into a failure. I have more failings than I care to count, but my wife still loves me, my friends are happy to be seen with me and no-one sticks threats on my door at work, so these are obviously in the manageable range. Managing failure has been a challenging thing for me, though, and I was pondering this recently: how people deal with being told that they’re wrong is very important to how they deal with failing to achieve something.

I’m reading a rather interesting, challenging and confronting article on – and I cannot believe there’s a phrase for this – rage murders in American schools and workplaces, which argues, following Mark Ames, the author of “Going Postal” (2005), that these horrifying acts are, effectively, failed revolts. Ames seems to believe that everything stems from Ronald Reagan (and I offer no opinion either way, I hasten to add), but he identifies repeated humiliation, bullying and inhumane conditions as taking ordinary people, who would not usually have committed such actions, and turning them into monstrous killing machines. Ames’ thesis is that this is not the rise of psychopathy but a rebellion against the breaking of spirit and the metaphorical enslavement of much of the working and middle class. If the dominant fable of life is that success is all, that failure is bad, and that you are entitled to success, then it should be, as Ames says in the article, exactly those people who are most invested in these cultural fables who are the most likely to break when the lies become untenable. In the language I used earlier, this is the most awful way to handle the failure of the fabric of your world – a cold and rational journey that looks like madness but is far worse for being a premeditated attempt to destroy the things that lied to you. However, this is only one type of person who commits these acts. The Monash University gunman, for example, was obviously delusional and, while he carried out a rational set of steps to eliminate his main rival, his thinking as to why this needed to happen makes very little sense. The truth is, as always, difficult and muddy, and my first impression is that Ames may be oversimplifying in order to advance a relatively narrow and politicised view.
But his language strikes me: the notion of the “repeated humiliation, bullying and inhumane conditions”, which appears to be a common language among the older, workplace-focused, and otherwise apparently sane humans who carry out such terrible acts.

One of the complaints made against the radio network at the heart of the recent Royal Hoax, 2DayFM, is that they are serial humiliators of human beings who show no regard for the general well-being of the people involved in their pranks – humiliation, inhumanity and bullying. Sound familiar? Here I am, as an educator, knowing that failure is going to happen for my students and working out how to bring them up into success and achievement when, on one hand, I have a possible set of triggers where beating people down leads to apparent madness and, on the other, at least part of our entertainment culture appears to delight in finding the lowest bar and crawling through the filth underneath it. Is telling someone that they’re a failure, and rubbing it in for public enjoyment, of any vague benefit to anyone, or is it really, as I firmly believe, the best way to start someone down a genuinely dark path to ruination and resentment?

Returning to my point at the start of this (rather long) piece, I have met Raymond several times and he doesn’t appear even vaguely ruined to me, despite all of the radio, television and Neo-Piagetian contextual framing he employs. The message from Raymond and CI paints failure as something to be monitored and something that is often just a part of life – a stepping stone to future success – but this is most definitely not the message that generally comes down from our society and, for some people, it’s becoming increasingly obvious that their inability to handle the crushing burden of permanent classification as a failure is something that can have catastrophic results. I think we need to get better at genuinely accepting failure as part of trying, and to really, seriously, try to lose the classification of people as failures just because they haven’t yet succeeded at some arbitrary thing that we’ve defined to be important.


Please, Not Again

A terrible thing happened yesterday and many people are now dead because of it, including a horrifically large group of children. This is heartbreakingly awful and my thoughts are with the parents, the siblings, the families, the teachers and the survivors, because the stain of this dark day will be on the Newtown community for years to come.

I’m not going to get into any specific advocacy or politics here; it’s not why most people read me, and it’s also not as if I’m an American. But, as an outsider, I am so saddened by the frequency of these events. I commented on my Facebook that the challenge here was not about which specific law or cultural aspect to manipulate; the key challenge was “how do we stop this from happening again?”

Some of you may find the following upsetting, because I’m going to talk about some things that upset me in an attempt to find a solid and lasting call to action. Please feel free to stop reading now.

There are twenty cupboards across Newtown that hold gifts, for one celebration or another, that are going to gather dust until the parents can steel themselves to bring down those cheery, carefully wrapped, thoughtfully selected gifts and sit there, staring at them, until they work out what the hell to do with them. Some of those gifts will sit there forever, foil wrapped markers of a life cut too short, too soon.

How do we stop this from happening again?

We will bury the dead and salute the heroes, admiring their bravery and, as an educational community, we will look at those teachers who stood before their classes and probably cry as we think that it could have been us. This is the final act of in loco parentis, because the parent isn’t there to shield – and yet, to have bravery, we must have an event that is awful or unpleasant, so every act of bravery tells us that something bad has happened. Why are we so good at praising and ennobling our brave dead, and so bad at taking away the need for bravery?

How do we stop this from happening again?

The arguments have already started about what, specifically, could have been done, but we have seen these arguments before, and fact quickly surrenders to factional interest and grandstanding, where time is wasted but little is achieved. Our children, your children, my students, deserve more than this. They deserve a school experience that is educational, exciting, challenging and safe. Safe. Safe. Safe. Safe. The expectation of an elementary school kid is that school is out soon, and most families will get together, and then next year I might be moving up and, hey, did Gracie just take my Oreo? The expectation is that tomorrow will come, and that is as it should be.

How do we stop this from happening again?

I can drive safely my entire life and be killed by a truck driver running a red light because that’s how physics works. That’s why my compact, fuel efficient, city driving car has a highly rated safety shell and six airbags. I am preparing for the day that some accident occurs because I want my family to be safe. I am preparing for the day when someone, through thoughtlessness, accident or random malignity, tries to put their car through mine, because nothing I can do at that point will make much of a difference when that much metal and energy are involved. I don’t know how to do this with schools. I don’t know how to do this with Universities.

How do we stop this from happening again?

I am hollow, right now, and this is going to drain me for some time to come. I am trying to change a lot of things, and I joke about taking on impossible projects because any progress is glorious defiance. Despite what many people think, I still sincerely believe that there has to be some common ground between the arguing groups, because nobody wants to see this happen again. I am, however, not an American, so I have no business getting involved in US politics – but, of course, such nightmares are not restricted to the US. As a global community, we appear to be entering a time where Amok is becoming a semi-legitimised, certainly well-publicised and semi-glorified, response to frustration and pressure. Amok is the frenzied killing of the people one encounters, in an attack that belies an otherwise calm demeanour and no history of violence. As originally discussed, this was a Malay phenomenon, a cultural tradition in reaction to loss of face, wealth or family – people reaching a point where this kind of insanity, often an indirect form of suicide, becomes honourable.

This phenomenon, recorded as early as the 18th Century, shows us how complex this all is and how important our societal and cultural structures can be. We have much better tools for enhancing the carnage of amok now, and I am not a fan of those tools, but we cannot focus just on these; we have to accept that we need to change the will as well as the way. I don’t know how to start, but I’m hoping that one of you may. So I leave you with the question that transcends politics, gun rights, educational systems and country boundaries.

How do we stop this from happening again?


Is It Called Ranking Because It Smells Funny?

Years ago, I was a professional winemaker, which is an awesome job but one with very long hours (that seems to be a trend for me). One of the things we did a lot of in winemaking was assess the quality of wine, both to work out whether we’d made what we wanted to and to allow us to blend this parcel with that parcel and come up with a better wine. Wine judging, at wine shows, is an important part of getting feedback on the quality of your wine as it’s perceived by other professionals. Wine is judged on a 20-point scale most of the time, although some 100-point schemes are in operation. The problem is that this scale is not actually as wide as it might look. Wines below 12/20 are usually regarded as faulty or below commercial level – so, in reality, most wine shows are working in the range 12–19.5 (20 was relatively rare in my day, and I don’t know what it’s like now). This gets worse for the “100-point” ranges: Wine Spectator claims to go from 50 to 100, but James Halliday (a prominent wine critic) rates from 75 to 100, where ‘Good’ starts at 80. This is quite unnecessarily confusing, because it means that James Halliday is effectively using the 16 available ranks of the 20-point scale (12–19.5 at 0.5 intervals), mapped into a higher range.
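To make that mapping concrete, here is a throwaway sketch of the arithmetic. The anchor points – the usable 12–19.5 band of the 20-point scale stretched onto a 75–100 band – are my own reading of the ranges above, not anything the critics have published:

```python
def to_hundred(score_20, low_20=12.0, high_20=19.5, low_100=75.0, high_100=100.0):
    """Linearly map the usable band of the 20-point show scale onto a 100-point band."""
    fraction = (score_20 - low_20) / (high_20 - low_20)
    return low_100 + fraction * (high_100 - low_100)

# The 16 usable half-point ranks of the 20-point scale: 12.0, 12.5, ..., 19.5
ranks = [12.0 + 0.5 * i for i in range(16)]
mapped = [round(to_hundred(r), 1) for r in ranks]
print(mapped)   # 16 values spread between 75.0 and 100.0
```

A silver-medal 17/20, for instance, lands in the low 90s on the stretched scale, which is exactly why a “92-point wine” sounds far more impressive than it necessarily is.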

Of course, the numbers are highly subjective, even to a very well trained palate, because the difference between an 87 and an 88 could be colour, or bouquet, or flavour – so saying that the wine at 88 is better doesn’t mean anything unless you know what the rater actually means by that kind of ranking. I used to really enjoy the wine selections of a wine writer called Mark Shields, because he used a straightforward rating system and our palates were fairly well aligned. If Mark liked it, I’d probably like it. This is the dirty secret of any ranking mechanism that has any aspect of subjectivity or weighting built into it – it needs to agree with your interpretation of reasonable or you will always be at odds with it.

In terms of wine, the medal system that is in use really does give us four basic categories: commercially sound (no medal), better than usual (bronze), much better than usual (silver) and “please pass me another bottle” (gold). On top of that, you effectively have the ‘best in show’, which says that, in this place and from these tasters, this was the best overall in this category. To be frank, the gold wines normally blow away the bronzes and the no-awards, but the line between silver and gold is a little more blurred. However, the show medals have one advantage in that a given class has been inspected by the same people and the wines have actually been compared (in one sense) and ranked. However, if nothing is outstanding then no medals will be awarded because it is based on the marks on the 20 point scale, so if all the wines come in at 13, there will be no gongs – there doesn’t have to be a gold or a silver, or even a bronze, although that would be highly unusual. More subtly, gold at one show may not even get bronze at another – another dirty little secret of subjective ranking: what you are comparing things to sometimes makes a very big difference.
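The medal categories read straight off the 20 point marks. The thresholds below (bronze at 15.5, silver at 17.0, gold at 18.5) follow a commonly used Australian show convention, and are assumed here purely for illustration:

```python
def medal(score20):
    """Assign a show medal from a 20 point score.

    Thresholds follow a common Australian wine-show convention
    (bronze 15.5, silver 17.0, gold 18.5); they are assumptions
    for illustration, and individual shows vary.
    """
    if score20 >= 18.5:
        return "gold"
    if score20 >= 17.0:
        return "silver"
    if score20 >= 15.5:
        return "bronze"
    return "no award"

# A class where everything comes in at 13 earns no gongs at all.
print([medal(s) for s in (13.0, 13.0, 13.0)])
```

Notice that the function has no notion of “must award a gold”: the bands are absolute, which is exactly why a flat class produces no medals.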

Which brings me to my point, which is the ranking of Universities. You’re probably aware that there are national and international rankings of Universities across a range of metrics, often related to funding and research, but that the different rankings have broad agreement rather than exact agreement as to who is ‘top’, top 5 and so on. The Times Higher Education supplement provides a stunning array of snazzy-looking graphics, with their weightings as to what makes a great University. But, when we look at this, and we appear to have accuracy to one significant figure (ahem), is it significant that Caltech is 1.8 points higher than Stanford? Is this actually useful information in terms of which university a student might wish to attend? Well, teaching (learning environment) makes up 30% of the score, international outlook makes up 7.5%, industry income makes up 2.5%, research (volume, income and reputation) makes up 30% and citations (research influence) make up the last 30%. If we sort by learning environment (because I am a curious undergraduate, say) then the order starts shifting, not at the very top of the list, but certainly further down – Yale would leap to 4th in the US instead of 9th. Once we get out of the top 200, suddenly we have very broad bands and, honestly, you have to wonder why we are still putting together the numbers if the thing that people appear to be worrying about is the top 200. (When you deem worthiness on a rating scale as only being a subset of the available scale, you rapidly turn something that could be considered continuous into something with an increasingly constrained categorical basis.) But let’s go to the Shanghai rankings, where Caltech drops from number 1 to number 6. Or the QS World Rankings, which rate Caltech as #10.
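The re-sorting effect is easy to sketch. The weightings below are the ones described above, but the pillar scores for the three universities are invented for illustration (they are not real THE data); the point is only that re-sorting on a single pillar reshuffles an overall ranking:

```python
# Weightings as described for the THE ranking; the pillar scores are invented.
WEIGHTS = {"teaching": 0.30, "international": 0.075,
           "industry": 0.025, "research": 0.30, "citations": 0.30}

unis = {
    "Uni A": {"teaching": 92, "international": 60, "industry": 90,
              "research": 95, "citations": 99},
    "Uni B": {"teaching": 88, "international": 85, "industry": 70,
              "research": 96, "citations": 97},
    "Uni C": {"teaching": 95, "international": 70, "industry": 60,
              "research": 90, "citations": 92},
}

def overall(scores):
    """Weighted sum of the five pillars."""
    return sum(WEIGHTS[pillar] * scores[pillar] for pillar in WEIGHTS)

by_overall = sorted(unis, key=lambda u: overall(unis[u]), reverse=True)
by_teaching = sorted(unis, key=lambda u: unis[u]["teaching"], reverse=True)

print(by_overall)   # the overall leader need not lead on teaching alone
print(by_teaching)
```

With these invented numbers, Uni A tops the weighted overall score but Uni C tops the teaching-only sort – which is precisely the “curious undergraduate” problem: the single published rank answers a question that a given student may not be asking.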

Obviously, there is no doubt about the general class of these universities, but it does appear that the judges are having some difficulty in consistently awarding best in class medals. This would be of minor interest, were it not for the fact that these ratings do actually matter in terms of industry confidence in partnership, in terms of attracting students from outside of your home educational system and in terms of who gets to be the voices who decide what constitutes a good University. It strikes me that broad classes are something that could apply quite well here. Who really cares whether Caltech is 1, 6 or 10 – it’s obviously rating well across the board and, barring catastrophe, always will.

So why keep ranking it? What we’re currently doing is polishing the door knob on the ranking system, devoting effort to ranking Universities like Caltech, Stanford, Harvard, Cambridge and Oxford, who could not, with any credibility, be ranked low – or we’d immediately think that the ranking mechanism was suspect. So let’s stop ranking them, because it’s compressing the ranking at a point where the ranking is not even vaguely informational. What would be interesting would be more devotion to the bands further down, where a University can assess its global progress against its peers to find out if it’s cutting the mustard.

If I put a bottle of Grange (one of Australia’s best red wines and it is pretty much worthy of its reputation, if not its price) into a wine show and it came back with less than 17/20, I’d immediately suspect the rating system and the professionalism of the judges. The question is, of course, why would I put it in other than to win gold medals – what am I actually achieving? If it’s a commercial decision to sell more wine then I get it, but wine is, after all, just wine and you and I drink it the same way. Universities, especially when ranked across complex weighted metrics and by different people, are very different products to different people. The single figure ranking may carry prestige and probably attracts both students and money, but should it? Does it make any sense to be so detailed (one significant figure, indeed) about how one university stacks up against another, when in reality you have almost exponentially separated groups – my University will never ‘challenge’ Caltech, and if Caltech ‘drops’ to the graded level of the University of Melbourne (one of our most highly ranked Unis), I’m not sure that the experience will tell Caltech anything other than “Ruh roh!”

The Scooby Gang, stunned that Caltech was now in the range 50-100.

If I could summarise all of this, it would be to say that our leader board and ranking obsession would be fine, were it not for the amount of time spent on these things, the weight placed upon what is ultimately highly subjective (even the weightings are subjective), and the lack of clarity as to how these rankings can be used to make sensible decisions. Perhaps there is something more useful we could be doing with our time?


Leading the Innovation Charge: Research and Teachers (NESTA Report on Digital Education)

I’m currently reading the NESTA report “Decoding Learning: The Proof, Promise and Potential of Digital Education” and the report talks about ways of learning with technology and sources of innovation. At the start, in scene setting, the two sources of innovation are identified as being either research efforts based on large amounts of gathered evidence (research-led) or informal literature such as blogs and teacher networks (teacher-led) – which means, woohoo, if anyone does anything based on what I’ve written in here, it’s a teacher-led innovation. (I realise that there is an argument for overlap here but it appears that formal research publication denotes the division, and there seems to be no reason why a teacher-led initiative couldn’t be high quality if it was still evidence-based, even without strict formal publication.)

Looking across the world, the report started with 210 cases that were either research- or teacher-led and narrowed this down to a representative sample of 150. What’s interesting, to me, is the split by country between research- and teacher-led projects. The US has 65 ‘innovations’: 28 teacher-led, 37 research-led. The UK has 64: 45 teacher-led, 19 research-led. Australia has 9, all of which are teacher-led. Outside of the UK and Australia, the most likely approach to educational innovation is through a research-based approach. It appears that our relationship to the UK educational system may be even closer than we thought in this respect. However, to look in more detail at these innovations, we have to look at the breakdown of the ways that we see students learning with technology. The learning themes in this document are:

  • Learning from Experts
  • Learning with Others
  • Learning through Making
  • Learning through Exploring
  • Learning through Inquiry
  • Learning through Practising
  • Learning from Assessment
  • Learning in and from Settings

Most of these are pretty self-explanatory (and highly constructivist, unsurprisingly) but they are based on the learners’ actions and include factors such as the resources employed and the structure – which gives a greater potential depth to the classification as you can’t just say you’re doing X, you have to support it with technological resources and learning design.

A very important point raised early on in the teacher-driven, research-driven dichotomy is that the requirement for large volumes of evidence, in the case of research publication, can make the research-led initiatives more risk averse, in that much more information has to be gathered before recommendations can be adopted or conclusions can be drawn. The teacher-led initiatives can highlight serious innovations that are worth trying, but may not yet have the evidence behind them to actually provide a convincing argument. What a dilemma! I can either have evidence for something that I probably already thought of, or take a chance on something for which I have no evidence – and in the world of technology, where innovation often costs money, good luck getting a solid amount of cash on the strength of a good feeling about an innovation direction. I need to look further into the case of Australia, because I know a great number of excellent educational researchers here who are, as far as I know, proposing solid research-led innovations, but they aren’t showing up on this particular radar. And, being cynical, if it’s not showing up on NESTA’s radar, it’s probably not showing up at the government level and, hearts and minds, we want the government to be aware that the research approaches (often University-driven) are visible, viable and valuable. (Another thing for the to-do list, apart from finding alliterative phrases starting with ‘x’.)

In looking at the themes, I find it interesting to think about how these themes are both guidelines of good practice and cautionary tales. When we set up technology that enables us to Learn from Experts, which is one of the potential underlying principles of the MOOC, we have to make sure that we’re actually providing experts. There’s an interesting example of the statistics expert who tore apart an on-line stats course and, while it was rapidly corrected, we have that slight worry that the power to set up a course in no way correlates with the ability to actually provide the course content. Of course, I’m not a trained teacher, but my qualification in my academic discipline and prior industry experience does provide me with a level of expected expertise in an area. I’m not allowed to get out in front of students unless I reach a certain bar of qualification – but that is most certainly not always the case. Suddenly the technology innovation theme “Learning from Experts” becomes the source of a philosophical reflection on how we are doing this at all – do we even refer to experts in innovation, education or the discipline? If we want a combination of these, how does it work? As noted in the report, it’s not just access to the expert that learners need, it’s the supporting dialogue between them that assists in knowledge construction and learning. How can innovation in technology support this new dialogue in a way that works?

The future is not just about the provision of information; we solved that problem in the first instance with the book, refined it with the library and then did … something … with it when we developed Wikipedia (all joking aside, on-line resources have added immediacy and ubiquity to the information provision solution). The future is about successful learning, which involves the development of knowledge, and thus involves the arrangement, storage, organisation, retrieval, and development of information in order to support that newly constructed knowledge. There’s a lot of scope for the development of innovative technological tools in this space but, as the report clearly indicates through its themes, this involves thinking about how we learn, how we’re going to learn and how the tech can help us to achieve it.

There’s still a lot of research- and teacher-led innovation to come, which is great because we all love a challenge, but I’d like to finish by noting what is not one of the key themes from the NESTA report. There is no “Learning from watching dull videos of uninteresting material presented with the least effort possible, because that’s how it’s always been done” because this is, quite simply, not innovative. We already know how well that works and that’s why we have to innovate now. Viva the glorious fusion of cutting edge innovation and sufficient evidence to allow us to leap off the metaphorical cliff!

Oh good, it’s Monday.
(Photo by John Moore/Getty Images)


Taught for a Result or Developing a Passion

According to a story on the website of the Australian national broadcaster, the ABC, Australian school children are now ranked 27th out of 48 countries in reading, according to the Progress in International Reading Literacy Study (PIRLS), and a quarter of Australia’s Year 4 students failed to meet the minimum standard defined for reading at their age. As expected, the Australian government has said “something must be done” and the Australian Federal Opposition has said “you did the wrong thing”. Ho hum. Reading the document itself is fascinating because our fourth graders apparently struggle once we move into the area of interpretation and integration of ideas and information, but do quite well on simple inference. There is a lot of scope for thought about how we are teaching, given that we appear to have a reasonably Bloom-like breakdown in the data, but I’ll leave that to the (other) professionals. Another international test, the Programme for International Student Assessment (PISA), which is applied to 15 year olds and measures reading, mathematics and science, is something that we rank relatively highly in. (And, for the record, we’re top 10 on the PISA rankings after a Year 4 ranking of 27th. Either something has gone dramatically wrong in the last 7 years of Australian education, or Year 4 results on PIRLS don’t have as much influence as we might have expected on the PISA.) We don’t yet have the latest results for this but we expect them out soon.

The PISA report front cover (C) OECD.

What is of greatest interest to me from the linked article on the ABC is the Oslo University professor, Svein Sjoberg, who points out that comparing educational systems around the globe is potentially too difficult to be meaningful – which is a refreshingly honest assessment in these performance-ridden and leaderboard-focused days. As he says:

“I think that is a trap. The PISA test does not address the curricular test or the syllabus that is set in each country.”

Like all of these tests, PIRLS and PISA measure a student’s ability to perform on a particular test and, regrettably, we’re all pretty much aware, or should be by now, that using a test like this will give you the results that you built the test to give you. But one thing that really struck me from his analysis of the PISA was that the countries who perform better on the PISA Science ranking generally had a lower measure of interest in science. Professor Sjoberg noted that this might be because the students had been encouraged to become result-focused rather than encouraging them to develop a passion.

If Professor Sjoberg is right, then this is not just a tragedy, it’s an educational catastrophe – we have now started optimising our students to do well in tests but be less likely to go and pursue the subjects in which they can get these ‘good’ marks. If this nasty little correlation holds, then we will have an educational system that dominates in the performance of science in the classroom, but turns out fewer actual scientists – our assessment is no longer aligned to our desired outcomes. Of course, what it is important to remember is that the vast majority of these rankings are relative rather than absolute. We are not saying that one group is competent or incompetent, we are saying that one group can perform better or worse on a given test.

Like anything, to excel at a particular task, you need to focus on it, practise it, and (most likely) prioritise it above something else. What Professor Sjoberg’s analysis might indicate, and I realise that I am making some pretty wild conjecture on shaky evidence, is that certain schools have focused the effort on test taking, rather than actual science. (I know, I know, shock, horror) Science is not always going to fit into neat multiple choice questions or simple automatically marked answers to questions. Science is one of the areas where the viva comes into its own because we wish to explore someone’s answer to determine exactly how much they understand. The questions in PISA theoretically fall into roughly the same categories (MCQ, short answer) as the PIRLS so we would expect to see similar problems in dealing with these questions, if students were actually having a fundamental problem with the questions. But, despite this, the questions in PISA are never going to be capable of gauging the depth of scientific knowledge, the passion for science or the degree to which a student already thinks within the discipline. A bigger problem is the one which always dogs standardised testing of any sort, and that is the risk that answering the question correctly and getting the question right may actually be two different things.

Years ago, I looked at the examination for a large company’s offering in a certain area (I have no wish to get sued, so I’m being deliberately vague), and it became rapidly apparent that on occasion there was a company answer that was not the same as the technically correct answer. The best way to prepare for the test was not to study the established base of the discipline but to read the corporate tracts and practise the skills on the approved training platforms, which often involved a non-trivial fee for training attendance. This was something that was tangential to my role and I was neither of a sufficiently impressionable age nor strongly bothered enough by it for it to affect me. Time was a factor and incorrect answers cost you marks – so I sat down and learned the ‘right way’ so that I could achieve the correct results in the right time and then go on to do the work using the actual knowledge in my head.

However, let us imagine someone who is 14 or 15 and, on doing the practice tests for ‘test X’, discovers that what is important is hitting precisely the right answer in the shortest time – thinking about the problem in depth is not really on the table for a two-hour exam, unless it’s highly constrained and students are very well prepared. How does this hypothetical student retain respect for teachers who talk about what science is, the purity of mathematics, or the importance of scholarship, when the correct optimising behaviour is to rote-learn the right answers, or the safe and acceptable answers, and reproduce those on demand? (Looking at some of the tables in the PISA document, we see that the best performing nations in the top band of mathematical thinking are those with amazing educational systems – the desired range – and those who reputedly place great value in high power-distance classrooms with large volumes of memorisation and received wisdom – which is probably not the desired range.)

Professor Sjoberg makes an excellent point, which is that trying to work out what is in need of fixing, and what is good, about the Australian education system is not going to be solved by looking at single figure representations of our international rankings, especially when the rankings contradict each other on occasion! Not all countries are the same, pedagogically, in terms of their educational processes or their power distances, and adjacency of rank is no guarantee that the two educational systems are the same (Finland, next to Shanghai-China, for instance). What is needed is reflection upon what we think constitutes a good education, so that we can then provide meaningful local measures that allow us to work out how we are doing with our educational system. If we get the educational system right and keep a bleary eye on the tests we use, we should then test well. Optimising for the tests takes the effort off the education and puts it all onto the implementation of the test – if that is the case, then no wonder people are less interested in a career of learning the right phrase for a short answer or the correct multiple-choice answer.


Successful Organisms Use Their Environment Well

I saw a fascinating talk at Creative Innovations 2012 from Wade Davis, who has the coolest title in the world, National Geographic’s explorer-in-residence. Wade made the point that successful human civilisations use their environment optimally, rather than fighting it. He gave several examples but the one that stuck in my head was about the Inuit, who use the cold of their environment as an additional advantage. Instead of using metal rails, their sleds ran on frozen fish – because a frozen fish is low-friction and hard enough to sled on. Get trapped somewhere and unable to get home? You can always eat your runners.

I am writing this in Australia at the start of Summer, where we regularly hit 100+ F (38+ C) and, because it’s going to be hot today and I don’t have any serious meetings, I’m wearing a short-sleeved shirt and shorts because I’ll be out of the sun for most of the day and fewer clothes means less heat. This is sensible adaptation. However, what is not sensible is that, if I had serious meetings, not only would I be wearing at least a long-sleeved shirt and trousers, I might even be in a suit and tie. This is, quite frankly, dumb and not showing any adaptation of work processes to the local environment.

This is, however, not new as any studies of the British Empire in the tropics will show you. Once convention, and conformity to that convention, dominate over adaptation to the environment, you end up making bad decisions. Worse still, if you don’t take the environment into account, you might see perfectly reasonable adaptations as being rebellious and unconventional.

In Australia, the trade-off is always how much clothing I have to wear to balance fashion, sun screening, temperature requirements and business requirements, while avoiding prosecution for public nudity. If we don’t wear the right clothing, we put additional demand on our environmental masking technologies, such as air conditioning or public transport. If I dress in a way that means I can’t walk to work then I’m now fighting my real environment because of my overlaid work environment.

How is this an educational issue? I think that this ties in with our overlaid assessment environments for our students. If we create an environment that doesn’t actually encourage the behaviours that we want, as natural extensions and relations, then we will start to get adaptive behaviour to the real triggers in the environment, which will appear to us as aberrant and rebellious behaviours.

We want our students to do all of the work because it contributes to their development of knowledge and their ability to apply their knowledge. However, by providing certain commonly used assessment types, for example, mass-produced assignments that don’t vary from year to year, I believe that we are risking the formation of an environment where the assignments are seen as barriers rather than achievements. Humans optimise to get around barriers and we are very good at finding the easiest way to do this. We have a cultural convention that students shouldn’t cheat or plagiarise and this is a perfectly reasonable convention. If we build an environment where we weaken the perceived sincerity of this convention, or we set up an environment that implicitly rewards this activity without a high probability of detecting it, then we have set up a conflict. We are asking students to be less optimal in the way that they work in an environment, with unnatural constraints to keep them in place. With better design, we can create environments that are a better fit to our conventions and are more consistent and integrated – but this takes design. “Because I say so” is never as strong as “because it’s actually necessary”.

Humans adapt to their environment in order to succeed. This is why we dominate and it’s part of who we are. By thinking about this behaviour, I think that we can get a clearer view of why our students sometimes do what they do, even when they are acting at odds with what we’ve explicitly told them to do. I’m most certainly not saying that we can accept students not doing enough work to get the knowledge, or passing off other people’s work as their own, as that’s completely at odds with helping them to build their knowledge. But perhaps it’s worth looking at every assignment we set up to see if the optimised behaviour, in terms of effort, innovation, autonomy, mastery, purpose and enjoyment, in the assignment environment will actually be along the lines that we are after.

I realise that some people will think that I’m putting the blame for cheating on our shoulders and, no, I accept the active role students have and that some students will cheat no matter what we do. But some assignment environments and types are better than others at encouraging our students to work as we want them to, and I think it’s worth thinking of this as an environmental optimisation.


Core Values of Education and Why We Have To Oppose “Pranking”

I’ve had a lot of time to think about education this year (roughly 400 hours at current reckoning) so it’s not surprising that I have some opinions on what constitutes the key values of education. Of course, as was noted at the Creative Innovations conference I went to, a corporate values statement is a wish list that doesn’t necessarily mean much so I’m going to talk about what I see when education is being performed well. After I’ve discussed these, I’m then going to briefly argue for why these values mean that stupid stunts (such as the Royal prank where some thoughtless DJs called up a hospital) should be actions that we identify as cruel and unnecessary interpretations of the term ‘entertainment’.

  • Truth. 

    We start from the assumption that we only educate or train our students in what we, reasonably, assume to be the truth. We give the right answers when we know them and we admit it when we don’t. Where we have facts, we use them. When we are standing on opinion, we identify it. When we are telling a story, where the narrative matters more than the contents, we are careful to identify what we are doing and why we are doing it. We try not to deceive, even accidentally, and we do not make a practice of lying, even to spare someone’s feelings. In order to know the truth, we have to know our subject and we try to avoid blustering, derision and appealing to authority when we feel that we are being challenged.

    There is no doubt that this can be hard in contentious and emerging areas but, as a primary value, it’s at the core of our educational system. Training someone to recite something that is not true, while still popular in many parts of the world, is indoctrination, not education.

  • Respect.

We respect the students that we teach and, in doing this, we prepare them to respect us. We don’t assume that they are all the same, that they all learn at the same rate, that they have had all the preparation that they need for our courses or experiences, nor do we assume that they can take anything that we feel inclined to fling at them. We respect them by treating them as people, as individuals, as vulnerable, emotional and potentially flawed humans. We evaluate their abilities before we test their mettle. We give them space to try again. We do all this because it then allows them, without hypocrisy or obligation, to treat us the same way. Respect of effort and of application does not demand perfection or obsession from either party.

  • Fairness.

We are objective in our assignment and assessment of work and generous in our interpretations when such generosity does not compromise truth or respect. We do not give false praise but we do give all praise that is due, at the same time giving all of the notes for improvement. We strive to ensure that every student has the same high-quality and fair experience, regardless of who they are and what they do. When we define the rules, we stick to them, unless we have erred in their construction when, having fixed the rules, we then offer the best interpretation to every student. Our students acting in error or unfairly does not allow us to reciprocate in kind. The fairness of our system is not conditional upon a student being a perfect person and its strength lies in the fact that it is fair for all, regardless. What we say, we mean and what we mean, we say. A student’s results are ultimately the reflection of their own application to the course, relative to their opportunities to excel. Students are not unfairly punished because we have not bothered to work out if they are prepared for the course (which is very different from their own application of effort inside the course, which is ultimately their responsibility moderated by the unforeseen and vagaries of life), nor does the action of one student unduly influence the results of another, except where this is clearly identified and students have sufficient autonomy to control the outcome of this situation.

These stupid pranking stunts on the radio are usually considered acceptable because the person being pranked is contacted after the fact to ask if it can be broadcast. Frankly, I think this is bordering on coercive (because you risk being a bad sport if you don’t participate and I suspect that the radio stations don’t accept a simple first ‘no’) but some may disagree. (It’s worth noting that while the radio station tried to contact the nurses, they failed to get approval to broadcast.)

These pranks are, at heart, valueless lies, usually calculated to embarrass someone or expose them undertaking a given behaviour. They are neither truthful nor respectful. While this objection is often painted as the high horse of pomposity (“haven’t you got a sense of humour?”), it is important to realise that truly funny things can usually be enjoyed by everyone and that there is a world of difference between a joke that involves old friends and one that exploits strangers. The second situation just isn’t fair. The radio station is setting up a situation that is designed to elicit a response that everyone other than the victim will find amusing, because the victim is somehow funny or vulnerable. Basically, it’s unfair. You don’t get to laugh at or humiliate someone in a public forum just because you think it’s funny – didn’t we get over this in primary school? A lack of fairness often leads to situations that are coercive because we impose cultural norms, or peer pressure, to force people to ‘go along with the joke’.

I had a student in my office recently, while another academic who happened to be my wife was helping me clear a backlog of paper, and before I discussed his final mark, I asked my wife if she would mind leaving the room. This was because there was no way I could ask the student if he minded discussing his mark with my wife in the room and not risk the situation being coercive. It’s a really simple thing to fix if you think about it. In order to respect the student’s privacy, I needed to be fair in the way that I controlled his ability to make decisions. Now I’m not worried that this student is easily coerced but that’s not my call to make – it’s not up to me to tell a student if they are going to be comfortable or not.

The Royal prank has clearly identified that we can easily go down very dark and unexpected roads when we start to treat people as props, without sticking to the truth or respecting them enough to think about how they might feel about our actions, and that’s patently unfair. If these are our core values, and again many would disagree, then we have to stand up and object when we see them being mucked around with by our society. As educators, we have to draw a line and say that “just because you think it’s funny, doesn’t mean that you were right to do it” and we can do that and not be humourless or party-poopers. We do it because we want to allow people to still be funny, and have fun, muck around and have a joke with people that they know – because we’ve successfully trained them to know when they should stop, because we’ve correctly instilled the values of truth, respect and fairness.