Vitamin Ed: Can It Be Extracted?
Posted: December 22, 2012 Filed under: Education | Tags: advocacy, blogging, community, curriculum, design, education, educational problem, educational research, ethics, Generation Why, higher education, in the student's head, learning, measurement, principles of design, reflection, resources, student perspective, teaching, teaching approaches, thinking, tools, vygotsky, workload

There are a couple of ways to enjoy a healthy, balanced diet. The first is to actually eat a healthy, balanced diet, made up of fresh produce across the range of sources, which requires you to prepare and cook foods, often changing how you eat depending on the season to maximise the benefit. The second is to eat whatever you dang well like and then use an array of supplements, vitamins, treatments and snake oil to try to beat your diet of monster burgers and gorilla dogs into something that will not kill you in 20 years. If you've ever bothered to look at the side of those supplements, vitamins, minerals or whatever that most people have in their 'medicine' cabinets, you might see statements like "does not substitute for a balanced diet" or similar disclaimers. There is, of course, a reason for that. While we can be fairly certain about a range of deficiency disorders in humans, and we can prevent these problems with selective replacement, many other conditions are not as clear cut – if you eat a range of produce which contains the things that we know we need, you're probably also getting a slew of things that we need but which don't make themselves as prominent.
In terms of our diet, while the debate rages about precisely which diet humans should be eating, we can have a fairly good stab at a sound basis, from a dietician's perspective, built out of actual food. Recreating that from raw sugars, protein, vitamin and mineral supplements is technically possible but (a) much harder to manage and (b) nowhere near as satisfying as eating the real food, in most cases. Let's not forget that very few of us in the western world are so distant from our food that we regard it purely as fuel, with no regard for its presentation, flavour or appeal. In fact, most of us could muster a grimace at the thought of someone telling us to eat something because it was good for us, or for some real or imagined medical benefit. In terms of human nutrition, we have the known components that we have to eat (sugars, proteins, fats…) and we can identify specific vitamins and minerals that we need to balance to enjoy good health, yet there is no shortage of additional supplements with little or no demonstrated benefit that we still take out of concern for our health.
There’s been a lot of work done in trying to establish an evidence base for medical supplements and far more of the supplements fail than pass this test. Willow bark, an old remedy for pain relief, has been found to have a reliable effect because it has a chemical basis for working – evidence demonstrated that and now we have aspirin. Homeopathic memory water? There’s no reliable evidence for this working. Does this mean it won’t work? Well, here we get into the placebo effect and this is where things get really complicated because we now have the notion that we have a set of replacements that will work for our diet or health because they contain useful chemicals, and a set of solutions that work because we believe in them.
When we look at education, where it's successful, we see a lot of techniques being mixed together in a 'natural' diet of knowledge construction and learning. Face-to-face teaching and teamwork sit side-by-side with formative and summative assessment, as part of discussions or ongoing dialogues, whether physical or on-line. Exactly which parts of these constitute the "balanced" educational diet? We already know that a lecture, by itself, is not a complete educational experience, in the same way that a stand-alone multiple-choice test will not make you a scholar. There is a great deal of work being done to establish an evidence basis for exactly which bits work but, as MIT said in the OCW release, these components do not make up a course. In dietary terms, it might be raw fuel but is it a desirable meal? Not yet, most likely.
Now let's get into the placebo side of the equation, where students may react positively to something just because it's a change, not because it's necessarily a good change. We can control for these effects, if we're cautious, and we can do it with full knowledge of the students, but I'm very wary of any dependency upon the placebo effect, especially when it's prefaced with "and the students loved it". Sorry, students, but I don't only (or even predominantly) care if you loved it; I care if you performed significantly better, attended more, engaged more, retained the information for longer and could achieve more, and all of these things can only be measured when we take the trouble to establish baselines, construct experiments, measure things, analyse with care and then think about the outcomes.
My major concern about the whole MOOC discussion is not whether MOOCs are good or bad, it’s more to do with:
- What does everyone mean when they say MOOC? (Because there’s variation in what people identify as the components)
- Are we building a balanced diet or are we constructing a sustenance program with carefully balanced supplements that might miss something we don’t yet value?
- Have we extracted the essential Vitamin Ed from the ‘real’ experience?
- Can we synthesise Vitamin Ed outside of the ‘real’ educational experience?
I've been searching for a terminological separation that allows me to distinguish 'real'/'conventional' learning experiences from 'virtual'/'new generation'/'MOOC' experiences, and none of those distinctions is satisfying – one says "Restaurant meal" and the other says "Army ration pack" to me, emphasising the separation. Worse, my fear is that a lot of people don't regard MOOCs as ever really having Vitamin Ed inside, as the MIT President clearly believed back in 2001.
I suspect that my search for Vitamin Ed starts from a flawed basis, because it assumes a single silver bullet if we take a literal meaning of the term, so let me spread the concept out a bit and label Vitamin Ed as the essential educational components that define a good learning and teaching experience. Calling it Vitamin Ed gives me a flag to wave and an analogue to use, to explain why we should be seeking a balanced diet for all of our students, rather than a banquet for one and dog food for the other.
“We are not providing an MIT education on the web…”
Posted: December 21, 2012 Filed under: Education | Tags: advocacy, blogging, collaboration, community, curriculum, design, education, educational research, ethics, Generation Why, higher education, in the student's head, learning, measurement, moocs, principles of design, reflection, resources, student perspective, teaching, teaching approaches, thinking, vygotsky

I've been re-watching some older announcements that describe open courseware initiatives, starting from one of the biggest: the MIT announcement of their OpenCourseWare (OCW) initiative in April, 2001. The title of this post actually comes from the video, around the 5:20 mark. (Video quoted under a CC-BY-NC-SA licence; more information available at: http://ocw.mit.edu/terms)
"Let me be very clear, we are not providing an MIT education on the Web. We are, however, providing core materials that are the infrastructure that undergirds that information. Real education, in our view, involves interaction between people. It's the interaction between faculty and students, in our classrooms and our living group, in our laboratories that are the heart, the real essence, of an MIT education."
While the OCW was going to be produced and used on campus, the development of OCW was seen as something that would make more time available for student interaction, not less. President Vest then goes on to confidently predict that OCW will not make any difference to enrolment, which is hardly surprising given that he has categorically excluded anyone from achieving an MIT education unless they enrol. We see here exactly the same discussion that keeps coming up: these materials can be used as augmenting materials in these conventional universities but can never, in the view of the President or Vice Chancellor, replace the actual experience of obtaining a degree from that institution.
Now, don't get me wrong. I still think that the OCW initiative was excellent, generous and visionary, but we are still looking at two fundamentally different use cases: the use of OCW to augment an existing experience and the use of OCW to bootstrap a completely new experience, which is not of the same order. It's a discussion that we keep having – what happens to my Uni if I use EdX courses from another institution? Well, ok, let's ask that question differently. I will look at this from two sides, using the introduction of a new skill and knowledge area that is becoming ubiquitous in my sphere: Computer Science and programming. Let's look at this in terms of growth and success.
What happens if schools start teaching programming to first year level?
Let's say that we get programming into every single national curriculum for secondary school and we can guarantee that students come in knowing how to program to freshman level. There are two ways of looking at this and the first, which we have probably all seen to some degree, is to regard the school teaching as inferior and re-teach it. The net result of this will be bored students and low engagement, and we will be wasting our time. The second, far more productive, approach is to say "Great! You can program. Now let's do some Computer Science." and use that extra year or so to increase our discipline knowledge or put breadth courses back in so our students come out a little more well-rounded. What's the difference between students learning this at school before they come to us and learning it through an EdX course on fundamental programming after they come to us?
Not much, really, as long as we make sure that the course meets our requirements – and, in fact, it gives us bricks-and-mortar-bound entities more time to do all that face-to-face interactive University stuff that we know students love and from which they derive great benefit. University stops being semi-vocational in some aspects and we leap into knowledge construction, idea generation, big projects and the grand dreams that we always talk about, yet often don’t get to because we have to train people in basic programming, drafting, and so on. Do we give them course credit? No, because they’re assumed knowledge, or barrier tested, and they’re not necessarily part of our structure anymore.
What happens if no-one wants to take my course anymore?
Now, we know that we can change our courses because we've done it so many times before over the history of the Academy – Latin (along with Greek, the language of scholarship) was used in only half of University publications by 1800. Let me wander through a classical garden for a moment to discuss the nature of change from a different angle, that of decline. Languages had a special place in the degrees of my University, with Latin and Greek dominating, and then with the daring possibility, from 1938, of substituting French or German for Latin or Greek. It was as recently as 1958 that Latin stopped being compulsory for high school graduation in Adelaide, although it was still required for the study of Law – student demand for Latin at school therefore plummeted and Latin courses started being dropped from the school curriculum. The Law Latin requirement was removed around 1969-1970, which dropped demand for Latin even further. The reduction in the number of school teachers who could teach Latin required the introduction of courses at the University for students who had studied no Latin at all – Latin IA entered the syllabus. However, given that in 2007 only one student across all of the schools in the state of South Australia (roughly 1.2-1.4 million people) studied Latin in the final year of school, it is apparent that if this University wishes to teach Latin, it has to start by teaching all of Latin. This is a course, and a discipline, that is currently in decline. My fear is that, one day, someone will make the mistake of thinking that we no longer need scholars of this language. And that worries me, because I don't know what people 30 years from now will actually want, or what they could add to the knowledge that we already have of one of our most influential civilisations.
This decline is not unique to Latin (or Greek, or classics in general), but a truly on-line course experience would allow us to pool those scholars we have left and offer scaled resources for much longer than isolated pockets in real offices could manage. Yet, as President Vest notes, a storehouse of Latin texts does not a course make. What reduced the demand for Latin? Possibly the ubiquity of the Latin-derived language that we use, combined with a change of focus away from a classical education towards a more job- and achievement-oriented (semi-vocational) style of education. If you ask me, programming could as easily go this way in about 20 years, once we have ways to let machines solve problems for us. Add a move towards a less go-go-go culture, smarter machines and a resurgence of the long leisure cycles associated with Science Fiction visions of the future, and suddenly it is the engineers and the computer scientists who are looking at shrinking departments and no support in the schools. Let me be blunt: course popularity and desirability rises, stabilises and falls, and it's very hard to tell if we are looking at a parabola or a pendulum. With that in mind, we should be very careful about how we define our traditions and our conventions, especially as our cunning tools for supporting on-line learning and teaching get better and better. Yes, interaction is an essential part of a good education, no argument at all, but there is an implicit assumption of critical mass that we have seen, time and again, to support this interaction in a face-to-face environment, and that critical mass is as much a function of popularity and traditionally-associated prestige as it is of excellence.
What are MIT doing now?
I look at the original OCW release and I agree that, at the time of production, you could not reproduce the interaction between people that would give you an MIT education. But our tools are better now. They are, quite probably, not close enough yet to give you an "MIT of the Internet", but should this be our goal? Not the production of a facsimile of the core materials that might, with MIT instructors, turn into a course, but a commitment to developing the tools that actually reproduce the successful components of the learning experience, with group and personal interaction, allowing the formation, in a virtual space, of what we used to call a physical interactive experience? That's where I think the new MIT initiatives are showing us how these things can work now, starting from their original idealistic roots and adding the technology of the 21st Century. I hope that other, equally prestigious, institutions are watching this, carefully.
Two Tier: Already Here
Posted: December 20, 2012 Filed under: Education | Tags: advocacy, blogging, community, curriculum, design, education, grand challenge, higher education, moocs, principles of design, resources, teaching, teaching approaches, thinking

I was reading a Chronicle of Higher Ed article, "For Whom is College Being Reinvented", and it was sobering reading. While I was writing yesterday about Oxford and Cambridge wanting to maintain their conventional University stance, Robert Archibald, an Economics Professor from the College of William and Mary, points out that the two-tier system is already here in terms of good conventional and bad conventional – so we would see an even larger disparity between luxury and economy courses. Getting into the "good" colleges will be a matter of money and prior preparation, much as it is in many areas where the choice of school available to parents is increasingly driving residential moves in the early years of a child's life. But it doesn't end there, because the 'quality' measure may be as much about the employability of the students after they've completed their studies – and, as the article says, now we have to start thinking about whether a "low-level" degree is preferable to an "industry recognised" apprenticeship or trade training program. Now, our two tiers are as separate as radiographer and radiologist but, as Robert Reich also observes in the same article, this is completely against what we should be doing: how can we do all this and maintain real equality between degrees and programs?
Of course, if you didn't go to a great elementary and senior school then you are on the path to the 'second-tier' school, which might be one that naturally migrates to fully electronic delivery for a number of perfectly reasonable economic reasons – despite the fact that you are probably someone who needs a more customised experience than a 'boilerplate' MOOC could offer: you actually need face-to-face. When we talk about disruption of the existing college system, we always assume that this is a positive thing, something that will lead to a better result for our students, so these potential issues with where these new technologies may get focused start to become very important.
For whom will these new systems work? Everyone or just the people that we’re happy to expose them to?
It's perhaps the best question we have to frame the discussion – it's not about whether the technology works; we know that it works well for certain things and it's now a matter of making sure that our pedagogical systems are correctly married to our computer systems to make the educational experience work. But, obviously, and as many much better writers than I have been saying, it has to work and be at least as good as the systems that it's replacing – only now we realise that existing systems are not the same for everyone and that one person's working system is someone else's diabolically bad teaching experience. So the entire discussion about whether MOOCs work now has to be framed in the context of 'compared to what?'
It’s an interesting article that poses more questions than it answers, but it’s certainly part of the overall area we have to think about.
Legitimisation and Agency: I Believe That’s My Ox on Your Bridge
Posted: December 19, 2012 Filed under: Education | Tags: advocacy, blogging, collaboration, community, curriculum, design, education, educational problem, Generation Why, grand challenge, higher education, learning, principles of design, reflection, resources, student perspective, teaching, teaching approaches, thinking, tools, workload

There's an infamous newspaper advertisement that never ran, which reflected the entry of IBM into the minicomputer market. A number of companies, Data General principal among them, but including such (historically) powerful players as Digital Equipment Corporation, Prime and Hewlett-Packard, were quite successful in the minicomputer market, growing rapidly and stealing market share from IBM's mainframe market. (For an excellent account of these times, I recommend "The Soul of a New Machine" by Tracy Kidder.) IBM finally decided to enter the minicomputer market and, as analysts remarked at the time, IBM's move into minicomputers legitimised the market.
Ed DeCastro, CEO of Data General, had a full-page newspaper advertisement prepared, which I reproduce (mildly bowdlerised to keep my all-ages posting status):
“They Say IBM’s Entry Into the Minicomputer Market Will Legitimize the Industry. The B***ards Say, Welcome.”
The ad never actually ran but was framed and put on Ed’s wall. The point, however, was well and precisely made: IBM’s approval was neither required nor desired, and nobody had set a goal of being legitimised.
Over on Mark’s blog, we see that a large number of UK universities are banding together to launch an on-line project, including the highly successful existing player in the analogous space, the Open University, but also some high power players such as Southampton and the disturbingly successful St Andrews. As Mark notes in the title, this is a serious change in terms of allying a UK effort that will produce a competitor (or competitors) to the existing US dominance. As Mark also notes:
Hmm — OxBridge isn’t throwing hats into the rings yet.
And this is a very thoughtful Hmm, because the Universities of Oxford and Cambridge are the impossible-to-ignore legitimising agencies, because of their sheer weight on the rubber sheet of UK Academy Spacetime. When it comes to talking about groups of Universities in the UK, and believe me there are quite a few, the Russell Group awards the lion's share of PhDs and holds 78% of the most highly graded research staff across its 24 Universities. One of its stated goals is to lead the research efforts of the UK, with another being to attract the best staff and students to its member institutions. However, the group of participants in the new on-line project involves Russell Group Universities and those outside, which makes the non-participation of Oxford and Cambridge even more interesting. How can a trans-group on-line proposal bring the best students in – or is this why we aren't seeing involvement from Oxbridge, because of the two-tier perception between traditional and on-line? One can easily argue that Oxford and Cambridge have no need to participate because they are so entrenched in their roles and their success that, as I've noted in a different post, any ranking system that rates them out of, say, the top 5 in the UK has made itself suspect as a ranking, rather than reflecting dropping quality. Oxbridge is at the heart of the UK's tertiary system and competition to gain entry will continue to be fierce for the foreseeable future. They have no need to get together with the others in their group or beyond, although this is not about protecting themselves from competitors, as they are not really in competition with most of the other Russell Group members – because they are Oxford and Cambridge.
It’s worth noting that Cambridge’s vice-chancellor Leszek Borysiewicz did think that this consortium was exciting, and I quote from the THE article:
“Online education is becoming an important approach which may open substantial opportunities to those without access to conventional universities,” he said.
And that pretty much confirms why Cambridge is happy to stand back – because they are almost the definition of a conventional university, catering to a well-established market for whom attending a bricks-and-mortar University is as important as (if not more important than) the course content or delivery mechanisms. The "Gentleman's Third", receiving the lowest possible passing grade for your degree examinations, indicates a dedication to many things at the University that are, most likely, of a less-than-scholarly nature, but it is precisely for these activities that some people go to Oxford and Cambridge, and it is also precisely these non-scholarly activities that we will have great difficulty transferring into a MOOC. There will be no Oxford-Cambridge boat race carried out in a browser-based Flash game, with distributed participants hooked up to rowing machines across the globe, nor will the Footlights be conducted as a Google Hangout (except, of course, highly ironically).
Over time, we'll find out more about the role of tradition and convention in the composition and participation, but let me return to my opening anecdote. We are already dealing with issues of legitimacy in the on-line learning space, whether from pedagogical fatigue, academic cultural inertia, xenophobia, or the fact that some highly vaunted previous efforts have not been very good. The absence of two of the top three Universities in the UK from this fascinating and potentially quite fruitful collaboration makes me think a lot about IBM. I think of someone sitting back, watching things happen, certain in the knowledge that what they do is what the market needs and it is, oh happy day, what they are currently doing. When Oxford and Cambridge come in and anoint the MOOC, if they ever do or if we ever can, then we have the same antique avuncular approach of patting an entire sector on the head and saying "oh, well done, but the grownups are here now", and this is unlikely to result in anything good in terms of fellow feeling, or the transferability and accreditation of students – key challenges if MOOCs are to be taken more seriously. Right now, Oxford and Cambridge are choosing not to step in, and there is no doubt that they will continue to be excellent Universities for their traditional attendees – but is this a sensible long-term survival strategy? Could they contribute to the exploration of the space in a productive manner by putting their legitimising weight in sooner rather than later, at a time when they could be saying "Let's all look at this to see if it's any good", rather than "Oh, hell. Now we have to do something"? Would there be much greater benefit in bringing in their considerable expertise, teaching and research excellence, and resources now, when there is so much room for ground-level innovation?
This is certainly something I'm fearful of in my own system, where the Group of 8 Universities has most of the research funding, most of the higher degree granting and, as a goal at least, targets the best staff and students. Our size and tradition can be barriers to agility and innovation, although our recent strategy is obviously trying to set our University on a more innovative and more agile course. A number of recent local projects are embracing the legitimacy of new learning and teaching approaches. It is, however, very important to remember the example of IBM and how the holders of tradition may not necessarily be welcomed as a legitimising influence when others have been highly successful in innovating in a new space that the tradition holder had deemed beneath them until reality finally intruded.
It’s easy to stand back and say “Well, that’s fine for people who can’t afford mainframes” but such a stance must be balanced with looking to see whether people still need or want to afford mainframes. I think the future of education is heavily blended – MOOC + face-to-face is somewhere where I think we can do great things – but for now it’s very interesting to see how we develop as we start to take more and more steps down this path.
Education is not Music: A Long Winded Agreement with Aaron Bady
Posted: December 18, 2012 Filed under: Education | Tags: advocacy, blogging, community, curriculum, design, education, educational research, ethics, Generation Why, higher education, moocs, teaching, teaching approaches, thinking, universal principles of design

Mark Guzdial has been posting a great deal on MOOCs, as have we all, although Mark is much easier to read than I am, and his recent comment on Aaron Bady's response to Clay Shirky's "Udacity is Napster" drew me to Bady's great article and this key quote inside it:
“I think teaching is very different from music”
and I couldn’t agree more. Let me briefly list why I feel that a comparison to Napster has no real validity, to agree with Aaron that Clay Shirky’s argument is not well grounded for the discussion of education. What’s interesting is that I believe that Shirky identifies this point in his own essay, but doesn’t quite realise the full implications of what he’s saying:
Starting with Edison’s wax cylinders, and continuing through to Pandora and the iPod, the biggest change in musical consumption has come not from production but playback.
…
Those earlier systems started out markedly inferior to the high-cost alternative: records were scratchy, PCs were crashy. But first they got better, then they got better than that, and finally, they got so good, for so cheap, that they changed people's sense of what was possible.
The first thing we need to remember about music is that music is inherently fungible because, when viewed as a piece of work, you can replace it with another effectively identical item. Of course, here we need to be careful and define what we mean by identical, because music, as it turns out, is almost never identical but it gets treated that way. If you doubt this, then go and review how much it costs to insert the song "Happy Birthday to You" into a movie or TV show. It doesn't matter if it's Homer Simpson yelling it drunkenly, or the Three Tenors singing it sotto voce as part of an Ally McBeal shower hallucination flashback, you will still be liable to fork out dollars to the company who claims to hold the copyright. To understand the history of how we even made music small enough to send across the (much, much slower back then) Internet, remember that we had to start with the MP3 format, which threw away enough 'unneeded' data from the original CD files to shrink them to a little less than 10% of their original size. This is the technology that we needed before we could even get around to the idea of Napster, because enough people had enough music on their hard drives (because we'd already dropped the size) to make file sharing useful. However, as Shirky also notes in his article, this lossy compression technique changes the way that music sounds, and you can tell the difference if you listen carefully and know what to listen for. Yet this is the same song, and Napster got into trouble for sharing compressed artefacts of lower quality and perceptible difference from the CD originals, because music, as this kind of artefact, is fungible despite very different levels of quality.
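As an aside, that "a little less than 10%" figure is easy to sanity-check with some back-of-the-envelope arithmetic. This is a sketch, not a claim about any particular encoder: the 128 kbit/s rate below is my illustrative choice of a typical early MP3 setting, not a figure from this post.

```python
# CD audio: 44,100 samples per second, 16 bits per sample, 2 channels.
cd_bitrate = 44_100 * 16 * 2      # 1,411,200 bits per second, uncompressed
mp3_bitrate = 128_000             # bits per second, a common early MP3 setting

# File size is bitrate times duration, so duration cancels out of the ratio.
ratio = mp3_bitrate / cd_bitrate
print(f"MP3 is roughly {ratio:.1%} of the CD file size")  # about 9.1%
```

That 9.1% is exactly the "little less than 10%" neighbourhood, before you even account for the MP3 container's own overhead.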
Identical, to an audiophile, means sounding precisely the same (or true to the source, really), but identical to the copyright owner is a representation that clearly indicates unauthorised use of copyright material – which is why George Harrison's "My Sweet Lord" ended up being described as sufficiently similar to "He's So Fine", despite it being a brand new recording and not just a compressed copy.
So, yes, Shirky's original quotes are both true – we have improved playback and, while MP3 is still very common, lossless and much higher quality reproductions are now available. However, the point that has been missed is that the vast majority of people do not care in the slightest. The average person will only notice a shift from MP3 to lossless if they suddenly discover that their iPod has dropped in capacity, when measured in number of songs, by a significant margin. If I listen to "Viva La Vida" by Coldplay, and yes, Joe Satriani fans, I picked that deliberately, then the effective difference in my enjoyment of the song, my ability to sing along tunelessly in the shower and my ability to recite the words if asked, has nothing to do with the quality. This is not true of certain pieces of classical music, where the compression artefacts start to have more of an effect, but these are not the core business of file sharers and those who trade in compressed artefacts. However, MP3 artefacts rarely sound like long scratches, dust on the record or a bad needle – yes, they can be irritating, but the electronic form, pre- and post-compression, is generally protected from such things unless you get some serious cosmic ray action in your storage media, and even then you have to be very unlucky.
The Napster music argument, for me, falls down because the increase in quality does not have a direct connection to what the majority of the user base would have considered an acceptable product. Yes, it's better now but, for most people, so what? Music sharing services are considered useful and valuable because they share songs that people want; most people don't think about the quality, they accept the name and the recognisable nature of the song as enough.
This is not at all true for education, because educational experiences vary wildly between lecturers, courses, institutions and eras to an extent that it is impossible to consider them in any way to be interchangeable – quality, here, is everything. If you have an international articulation program, you know that the first thing you have to do is to work out what has been taught, and how it has been taught, inside a course of the same name as one of yours. Even ‘name equivalence’ doesn’t mean anything here and we do not, or we should not, grant standing based on a coincidence of name for a course. There is no parallel guarantee that my low quality version of a course will give me the same ability to “sing in the shower” as the high quality course will – and this is, for me, an unassailable difference.
There is no doubt that the opportunities that might be offered by blended learning, full electronic offerings, and, yes, MOOCs (however they end up being defined) are something that we have to consider because, if they work, they allow us to educate the world, but claiming that this must occur because Udacity is like Napster completely ignores the core difference between education and music in terms of the consumer base and their focus on what it means for a service to meet their requirements. If students didn’t care about the perceived quality, then we wouldn’t have the notion of the ‘top schools’ or ‘low end schools’, so we know that this thinking exists. A student will happily put an MP3 on at a party, but it remains to be seen if they will constantly, and by design rather than desperation, put a MOOC course on a job application, and expect a good result from it.
When Does Failing Turn You Into a Failure?
Posted: December 17, 2012 Filed under: Education, Opinion | Tags: advocacy, authenticity, blogging, ci2012, community, conventicle, curriculum, design, education, educational research, ethics, feedback, Generation Why, grand challenge, higher education, icer2012, in the student's head, learning, principles of design, reflection, research, resources, student perspective, teaching, teaching approaches, thinking, tools, workload Leave a comment
The threat of failure is very different from the threat of being a failure. At the Creative Innovations conference I was just at, one of the strongest messages was that we learn more from failure than we do from success, and that failure is inevitable if you are actually trying to be innovative. The message from CI was that if you learn from your failures, and your failure is the genuine result of something that didn’t work rather than of sitting around and watching it burn, then this is just something that happens, and any other culture makes us overly cautious and risk averse. As most of us know, however, we are more strongly encouraged to cover up our failures than to celebrate them – and we are frequently better off not trying in certain circumstances than failing.
At the recent Adelaide Conventicle, which I promise to write up very, very soon, Dr Raymond Lister presented an excellent talk on applying Neo-Piagetian concepts and framing to the challenges students face in learning programming. This is a great talk (which I’ve had the good fortune to see twice, and it’s a mark of the work that I enjoyed it as much the second time) because it allows us to talk about failure to comprehend, or failure to put into practice, in terms of a lack of the underlying mechanism required to comprehend – at this point in the student’s development. As part of the steps of development, we would expect students to have these head-scratching moments where they are currently incapable of making any progress, but framing it within developmental stages allows us to talk about moving students to the next stage, getting them out of this current failure mode and into something where they will achieve more. Once again, failure in this case is inevitable for most people until we and they manage to achieve the level of conceptual understanding where we can build and develop. More importantly, if we track how they fail, then we start to get an insight into which developmental stage they’re at.
One thing that struck me with Raymond’s talk, was that he starts off talking about “what ruined Raymond” and discussing the dire outcomes promised to him if he watched too much television, as it was to me for playing too many games, and it is to our children for whatever high tech diversion is the current ‘finger wagging’ harbinger of doom. In this case, ruination is quite clearly the threat of becoming a failure. However, this puts us in a strange position, because if failure is almost inevitable but highly valuable if managed properly and understood, what is it about being a failure that is so terrible? It’s like threatening someone that they’ll become too enthusiastic and unrestrained in their innovation!
I am, quelle surprise, playing with words here because to be a failure is to be classed as someone for whom success is no longer an option. If we were being precise, then we would class someone as a perpetual failure or, more simply, unsuccessful. This is, all too often, the point at which it is acceptable to give up on someone – after all, goes the reasoning, we’re just throwing good money after bad, wasting our time, possibly even rearranging the deck chairs on the Titanic, and all those other expressions that allow us to draw that good old categorical line between us and others and put our failures into the “Hey, I was trying something new” basket and their failures into the “Well, he’s just so dumb he’d try something like that.” The only problem with this is that I’m really not sure that a lifetime of failure is a guaranteed predictor of future failure. Likely? Yeah, probably. So likely we can gamble someone’s life on it? No, I don’t believe so.
When I was failing courses in my first degree, it took me a surprisingly long time to work out how to fix it, most of which was down to the fact that (a) I had no idea how to study and (b) no-one around me was vaguely interested in the fact that I was failing. I was well on my way to becoming a perpetual failure, someone who had no chance of holding down a job let alone having a career, and it was a kind and fortuitous intervention that helped me. Now, with a degree of experience and knowledge, I can look back into my own patterns and see pretty much what was wrong with me – although, boy, would I have been a difficult cuss to work with. However, failing, which I have done since then and will (no doubt) do again, does not appear to have turned me into a failure. I have more failings than I care to count but my wife still loves me, my friends are happy to be seen with me and no-one sticks threats on my door at work, so these are obviously in the manageable range. However, managing failure has been a challenging thing for me and I was pondering this recently – how people deal with being told that they’re wrong is very important to how they deal with failing to achieve something.
I’m reading a rather interesting, challenging and confronting article on, and I cannot believe there’s a phrase for this, rage murders in American schools and workplaces. It is an interview with Mark Ames, the author of “Going Postal” (2005), and it claims that these horrifying acts are, effectively, failed revolts. Ames seems to believe that everything stems from Ronald Reagan (and I offer no opinion either way, I hasten to add) but he identifies repeated humiliation, bullying and inhumane conditions as taking ordinary people, who would not usually have committed such actions, and turning them into monstrous killing machines. Ames’ thesis is that this is not the rise of psychopathy but a rebellion against the breaking of spirit and the metaphorical enslavement of many of the working and middle class that leads to such a dire outcome. If the dominant fable of life is that success is all, failure is bad, and that you are entitled to success, then it should be, as Ames says in the article, exactly those people who are most invested in these cultural fables who would be the most likely to break when the lies become untenable. In the language that I used earlier, this is the most awful way to handle the failure of the fabric of your world – a cold and rational journey that looks like madness but is far worse for being a pre-meditated attempt to destroy the things that lied to you. However, this is only one type of person who commits these acts. The Monash University gunman, for example, was obviously delusional and, while he carried out a rational set of steps to eliminate his main rival, his thinking as to why this needed to happen makes very little sense. The truth is, as always, difficult and muddy and my first impression is that Ames may be oversimplifying in order to advance a relatively narrow and politicised view.
But his language strikes me: the notion of the “repeated humiliation, bullying and inhumane conditions”, which appears to be a common language among the older, workplace-focused, and otherwise apparently sane humans who carry out such terrible acts.
One of the complaints made against the radio network at the heart of the recent Royal Hoax, 2DayFM, is that they are serial humiliators of human beings and show no regard for the general well-being of the people involved in their pranks – humiliation, inhumanity and bullying. Sound familiar? Here I am, as an educator, knowing that failure is going to happen for my students and working out how to bring them up into success and achievement when, on one hand, I have a possible set of triggers where beating down people leads to apparent madness, and at least part of our entertainment culture appears to delight in finding the lowest bar and crawling through the filth underneath it. Is telling someone that they’re a failure, and rubbing it in for public enjoyment, of any vague benefit to anyone, or is it really, as I firmly believe, the best way to start someone down a genuinely dark path to ruination and resentment?
Returning to my point at the start of this (rather long) piece, I have met Raymond several times and he doesn’t appear even vaguely ruined to me, despite all of the radio, television and Neo-Piagetian contextual framing he employs. The message from Raymond and CI paints failure as something to be monitored and something that is often just a part of life – a stepping stone to future success – but this is most definitely not the message that generally comes down from our society and, for some people, it’s becoming increasingly obvious that their inability to handle the crushing burden of permanent classification as a failure is something that can have catastrophic results. I think we need to get better at genuinely accepting failure as part of trying, and to really, seriously, try to lose the classification of people as failures just because they haven’t yet succeeded at some arbitrary thing that we’ve defined to be important.
Brief Admin Notice: If You Looked Like Spam I Just Bulk-Deleted You
Posted: December 16, 2012 Filed under: Education | Tags: blogging, community, education, spam 3 Comments
My spam folder had hit over 200 messages, as I’ve been doing a lot of updates via the iPad and it’s more work to clean up the spam from there, hence an executive decision was made to flush the lot without reading. The spam filter is pretty good and I’m pretty confident I didn’t throw away anything real – but if you did send me something and you haven’t had a reply (or my silence doesn’t make sense) then please drop me a line.
Thank you!
Is It Called Ranking Because It Smells Funny?
Posted: December 15, 2012 Filed under: Education | Tags: advocacy, community, education, educational problem, higher education, measurement, MIKE, ranking, reflection, resources, thinking, times higher education supplement 3 Comments
Years ago, I was a professional winemaker, which is an awesome job but with very long hours (that seems to be a trend for me). One of the things that we did a lot in winemaking was to assess the quality of wine, to work out if we’d made what we wanted to but also to allow us to blend this parcel with that parcel and come up with a better wine. Wine judging, for wine shows, is an important part of getting feedback on the quality of your wine as it’s perceived by other professionals. Wine is judged on a 20 point scale most of the time, although some 100 point schemes are in operation. The problem is that this scale is not actually as wide as it might look. Wines below 12/20 are usually regarded as faulty or not at commercial level – so, in reality, most wine shows are working in the range 12-19.5 (a 20 was relatively rare but I don’t know what it’s like now). This gets worse for the “100 point” ranges, where Wine Spectator claims to go from 50-100, but James Halliday (a prominent wine critic) rates from 75-100, where ‘Good’ starts at 80. This is really quite unnecessarily confusing, because it means that James Halliday is effectively using a version of the 16 available ranks (12-19.5 at 0.5 intervals) of the 20 point scale, mapped into a higher range.
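To make the scale arithmetic concrete, the squeeze between the two scales can be sketched as a simple linear mapping – this is my own illustration of the equivalence described above, not anyone’s official conversion:

```python
def rescale(score, src_lo, src_hi, dst_lo, dst_hi):
    """Linearly map a score from one rating interval onto another."""
    fraction = (score - src_lo) / (src_hi - src_lo)
    return dst_lo + fraction * (dst_hi - dst_lo)

# The usable 20-point show range (12-19.5) mapped onto a 75-100 band:
print(rescale(12.0, 12, 19.5, 75, 100))   # bottom of one range meets the other
print(rescale(19.5, 12, 19.5, 75, 100))   # and the tops line up too
```

Whatever the advertised width of the scale, the number of genuinely usable rungs stays the same – the mapping just stretches them over a grander-looking interval.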
Of course, the numbers are highly subjective, even to a very well trained palate, because the difference between an 87 and an 88 could be colour, or bouquet, or flavour – so saying that the wine at 88 is better doesn’t mean anything unless you know what the rater actually means by that kind of ranking. I used to really enjoy the wine selections of a wine writer called Mark Shields, because he used a straightforward rating system and our palates were fairly well aligned. If Mark liked it, I’d probably like it. This is the dirty secret of any ranking mechanism that has any aspect of subjectivity or weighting built into it – it needs to agree with your interpretation of reasonable or you will always be at odds with it.
In terms of wine, the medal system that is in use really does give us four basic categories: commercially sound (no medal), better than usual (bronze), much better than usual (silver) and “please pass me another bottle” (gold). On top of that, you have ‘best in show’, which effectively says that, in this place and from these tasters, this was the best overall in this category. To be frank, the gold wines normally blow away the bronzes and the no-awards, but the line between silver and gold is a little more blurred. However, the show medals have one advantage in that a given class has been inspected by the same people and the wines have actually been compared (in one sense) and ranked. On the other hand, if nothing is outstanding then no medals will be awarded, because medals are based on the marks on the 20 point scale, so if all the wines come in at 13, there will be no gongs – there doesn’t have to be a gold or a silver, or even a bronze, although that would be highly unusual. More subtly, gold at one show may not even get bronze at another – another dirty little secret of subjective ranking, sometimes what you are comparing things to makes a very big difference.
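As a sketch of how score-based medals work, here is the threshold logic in a few lines – the cut-offs I’ve used are typical of Australian wine shows but they vary between shows, so treat them as assumptions rather than gospel:

```python
def medal(score):
    """Award a medal from a mark on the 20 point scale.

    The cut-offs (15.5 / 17.0 / 18.5) are typical of Australian wine
    shows but differ between shows -- they are illustrative only.
    """
    if score >= 18.5:
        return "gold"
    if score >= 17.0:
        return "silver"
    if score >= 15.5:
        return "bronze"
    return "no medal"

# A class where every wine scores 13 yields no gongs at all:
print({medal(s) for s in [13.0, 13.0, 13.0]})  # {'no medal'}
```

The point is that the medals are absolute, not relative: nothing forces a gold to exist, which is exactly why a whole class can walk away empty-handed.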
Which brings me to my point, which is the ranking of Universities. You’re probably aware that there are national and international rankings of Universities across a range of metrics, often related to funding and research, but that the different rankings have broad agreement rather than exact agreement as to who is ‘top’, top 5 and so on. The Times Higher Education supplement provides a stunning array of snazzy-looking graphics, with their weightings as to what makes a great University. But, when we look at this, and we appear to have accuracy to one decimal place (ahem), is it significant that Caltech is 1.8 points higher than Stanford? Is this actually useful information in terms of which university a student might wish to attend? Well, teaching (learning environment) makes up 30% of the score, international outlook makes up 7.5%, industry income makes up 2.5%, research is 30% (volume, income and reputation) and citations (research influence) make up the last 30%. If we sort by learning environment (because I am a curious undergraduate, say) then the order starts shifting, not at the very top of the list, but certainly further down – Yale would leap to 4th in the US instead of 9th. Once we get out of the top 200, suddenly we have very broad bands and, honestly, you have to wonder why we are still putting together the numbers if the thing that people appear to be worrying about is the top 200. (When you deem worthiness on a rating scale as only being a subset of the available scale, you rapidly turn something that could be considered continuous into something with an increasingly constrained categorical basis.) But let’s go to the Shanghai rankings, where Caltech drops from number 1 to number 6. Or the QS World Rankings, who rate Caltech as #10.
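The weighted overall score, and the way re-sorting on a single pillar reshuffles the order, can be sketched like this – the weights are the Times Higher Education ones quoted above, but the two universities and their per-pillar scores are entirely made up for illustration:

```python
# Pillar weights as quoted from the Times Higher Education methodology;
# the universities below and their pillar scores are hypothetical.
WEIGHTS = {
    "teaching": 0.30,
    "international_outlook": 0.075,
    "industry_income": 0.025,
    "research": 0.30,
    "citations": 0.30,
}

def overall(pillars):
    """Combine per-pillar scores (0-100) into the weighted overall score."""
    return sum(WEIGHTS[name] * score for name, score in pillars.items())

uni_a = {"teaching": 90, "international_outlook": 60, "industry_income": 50,
         "research": 95, "citations": 99}
uni_b = {"teaching": 95, "international_outlook": 90, "industry_income": 40,
         "research": 85, "citations": 92}

# The overall ranking and a ranking on 'teaching' alone need not agree:
print(overall(uni_a) > overall(uni_b))          # True  (A wins overall)
print(uni_a["teaching"] > uni_b["teaching"])    # False (B wins on teaching)
```

A curious undergraduate sorting on the teaching pillar and a research funder sorting on citations are, in effect, looking at two different leader boards that happen to share a headline.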
Obviously, there is no doubt about the general class of these universities, but it does appear that the judges are having some difficulty in consistently awarding best in class medals. This would be of minor interest, were it not for the fact that these ratings do actually matter in terms of industry confidence in partnership, in terms of attracting students from outside of your home educational system and in terms of who gets to be the voices who decide what constitutes a good University. It strikes me that broad classes are something that could apply quite well here. Who really cares whether Caltech is 1, 6 or 10 – it’s obviously rating well across the board and, barring catastrophe, always will.
So why keep ranking it? What we’re currently doing is polishing the door knob on the ranking system, devoting effort to ranking Universities like Caltech, Stanford, Harvard, Cambridge and Oxford, who could not, with any credibility, be ranked low – or we’d immediately think that the ranking mechanism was suspect. So let’s stop ranking them, because it’s compressing the ranking at a point where the ranking is not even vaguely informational. What would be more interesting is devoting effort to the bands further down, where a University can assess its global progress against its peers to find out if it’s cutting the mustard.
If I put a bottle of Grange (one of Australia’s best red wines and it is pretty much worthy of its reputation if not its price) into a wine show and it came back with less than 17/20, I’d immediately suspect the rating system and the professionalism of the judges. The question is, of course, why would I put it in other than to win gold medals – what am I actually achieving in this sense? If it’s a commercial decision to sell more wine then I get this but wine is, after all, just wine and you and I drink it the same way. Universities, especially when ranked across complex weighted metrics and by different people, are very different products to different people. The single figure ranking may carry prestige and probably attracts both students and money but should it? Does it make any sense to be so detailed (one significant figure, indeed) about how one stacks up against each other, when in reality you have almost exponentially separated groups – my University will never ‘challenge’ Caltech, and if Caltech ‘drops’ to the graded level of the University of Melbourne (one of our most highly ranked Unis), I’m not sure that the experience will tell Caltech anything other than “Ruh oh!”
If I could summarise all of this, it would be to say that our leader board and ranking obsession would be fine, were it not for the amount of time spent on these things, the weight placed upon what is ultimately highly subjective (even in the choice of weightings), and the lack of any clear definition of how these rankings can be used to make sensible decisions. Perhaps there is something more useful we could be doing with our time?
Educating Women: A Global Priority
Posted: December 13, 2012 Filed under: Education, Opinion | Tags: advocacy, blogging, community, education, empowering women, ethics, Generation Why, higher education, teaching, teaching approaches, women in education Leave a comment
This came through my Facebook feed and I wanted to share it with you (if you haven’t already seen it).
If you can read, write and perform arithmetic, then you are in a far better position, with a much greater chance of being able to control your own life, and that’s why education is such an important tool if we are to empower everyone.
When you are educated, you do not need to depend upon anyone else to tell you what something means or how much of something you are entitled to.
When you are educated, you can ask the difficult questions that are needed to be asked, the first of which is often “Why am I treated this way because of the way that I was born?” And this is an important step towards empowerment.
The global educational initiative depends upon getting education everywhere and upon valuing education. The key to that, in many countries, is to empower women and give them access to education.