When Does Failing Turn You Into a Failure?

The threat of failure is very different from the threat of being a failure. At the Creative Innovations conference I was just at, one of the strongest messages was that we learn more from failure than we do from success, and that failure is inevitable if you are actually trying to be innovative. If you learn from your failures, and your failure is the genuine result of something that didn’t work rather than of sitting around and watching it burn, then this is just something that happens – that was the message from CI, and any other culture makes us overly cautious and risk averse. As most of us know, however, we are more strongly encouraged to cover up our failures than to celebrate them – and, in certain circumstances, we are frequently better off not trying at all than failing.

At the recent Adelaide Conventicle, which I promise to write up very, very soon, Dr Raymond Lister presented an excellent talk on applying Neo-Piagetian concepts and framing to the challenges students face in learning programming. It’s a great talk (which I’ve had the good fortune to see twice, and it’s a mark of the work that I enjoyed it just as much the second time) because it allows us to talk about failure to comprehend, or failure to put into practice, in terms of a lack of the underlying mechanism required to comprehend – at this point in the student’s development. As part of the steps of development, we would expect students to have these head-scratching moments where they are currently incapable of making any progress, but framing this within developmental stages allows us to talk about moving students to the next stage, getting them out of this current failure mode and into something where they will achieve more. Once again, failure in this case is inevitable for most people until we, and they, manage to achieve the level of conceptual understanding where we can build and develop. More importantly, if we track how they fail, then we start to get an insight into which developmental stage they’re at.

One thing that struck me with Raymond’s talk was that he starts off talking about “what ruined Raymond”, discussing the dire outcomes promised to him if he watched too much television – as they were promised to me for playing too many games, and as they are promised to our children for whatever high-tech diversion is the current ‘finger wagging’ harbinger of doom. In this case, ruination is quite clearly the threat of becoming a failure. However, this puts us in a strange position: if failure is almost inevitable, but highly valuable if managed properly and understood, what is it about being a failure that is so terrible? It’s like threatening someone that they’ll become too enthusiastic and unrestrained in their innovation!

I am, quelle surprise, playing with words here, because to be a failure is to be classed as someone for whom success is no longer an option. If we were being precise, then we would class someone as a perpetual failure or, more simply, unsuccessful. This is, usually, the point at which it is acceptable to give up on someone – after all, goes the reasoning, we’re just throwing good money after bad, wasting our time, possibly even rearranging the deck chairs on the Titanic, and all those other expressions that allow us to draw that good old categorical line between us and others, putting our failures into the “Hey, I was trying something new” basket and their failures into the “Well, he’s just so dumb he’d try something like that” basket. The only problem with this is that I’m really not sure that a lifetime of failure is a guaranteed predictor of future failure. Likely? Yeah, probably. So likely we can gamble someone’s life on it? No, I don’t believe so.

When I was failing courses in my first degree, it took me a surprisingly long time to work out how to fix it, most of which was down to the fact that (a) I had no idea how to study and (b) no-one around me was even vaguely interested in the fact that I was failing. I was well on my way to becoming a perpetual failure, someone who had no chance of holding down a job let alone having a career, and it was a kind and fortuitous intervention that helped me. Now, with a degree of experience and knowledge, I can look back at my own patterns and see pretty much what was wrong with me – although, boy, would I have been a difficult cuss to work with. However, failing, which I have done since then and will (no doubt) do again, does not appear to have turned me into a failure. I have more failings than I care to count, but my wife still loves me, my friends are happy to be seen with me and no-one sticks threats on my door at work, so these are obviously in the manageable range. Managing failure has been a challenging thing for me, though, and I was pondering this recently – how people deal with being told that they’re wrong is very important to how they deal with failing to achieve something.

I’m reading a rather interesting, challenging and confronting article on – and I cannot believe there’s a phrase for this – rage murders in American schools and workplaces, which claims that these horrifying acts are, effectively, failed revolts. The article is an interview with Mark Ames, the author of “Going Postal” (2005). Ames seems to believe that everything stems from Ronald Reagan (and I offer no opinion either way, I hasten to add), but he identifies repeated humiliation, bullying and inhumane conditions as taking ordinary people, who would not usually have committed such actions, and turning them into monstrous killing machines. Ames’ thesis is that this is not the rise of psychopathy but a rebellion against the breaking of spirits and the metaphorical enslavement of much of the working and middle class that leads to such a dire outcome. If the dominant fable of life is that success is all, failure is bad, and you are entitled to success, then it should be, as Ames says in the article, exactly those people who are most invested in these cultural fables who are the most likely to break when the lies become untenable. In the language that I used earlier, this is the most awful way to handle the failure of the fabric of your world – a cold and rational journey that looks like madness but is far worse for being a premeditated attempt to destroy the things that lied to you. However, this is only one type of person who commits these acts. The Monash University gunman, for example, was obviously delusional and, while he carried out a rational set of steps to eliminate his main rival, his thinking as to why this needed to happen makes very little sense. The truth is, as always, difficult and muddy, and my first impression is that Ames may be oversimplifying in order to advance a relatively narrow and politicised view.
But his language strikes me: the notion of the “repeated humiliation, bullying and inhumane conditions”, which appears to be a common language among the older, workplace-focused, and otherwise apparently sane humans who carry out such terrible acts.

One of the complaints made against the radio network at the heart of the recent Royal Hoax, 2DayFM, is that they are serial humiliators of human beings and show no regard for the general well-being of the people involved in their pranks – humiliation, inhumanity and bullying. Sound familiar? Here I am, as an educator, knowing that failure is going to happen for my students and working out how to bring them up into success and achievement when, on one hand, I have a possible set of triggers where beating people down leads to apparent madness and, on the other, at least part of our entertainment culture appears to delight in finding the lowest bar and crawling through the filth underneath it. Is telling someone that they’re a failure, and rubbing it in for public enjoyment, of any vague benefit to anyone, or is it really, as I firmly believe, the best way to start someone down a genuinely dark path to ruination and resentment?

Returning to my point at the start of this (rather long) piece, I have met Raymond several times and he doesn’t appear even vaguely ruined to me, despite all of the radio, television and Neo-Piagetian contextual framing he employs. The message from Raymond and CI paints failure as something to be monitored and something that is often just a part of life – a stepping stone to future success – but this is most definitely not the message that generally comes down from our society and, for some people, it’s becoming increasingly obvious that their inability to handle the crushing burden of permanent classification as a failure is something that can have catastrophic results. I think we need to get better at genuinely accepting failure as part of trying, and to really, seriously, try to lose the classification of people as failures just because they haven’t yet succeeded at some arbitrary thing that we’ve defined to be important.


Leading the Innovation Charge: Research and Teachers (NESTA Report on Digital Education)

I’m currently reading the NESTA report “Decoding Learning: The Proof, Promise and Potential of Digital Education”, which talks about ways of learning with technology and sources of innovation. At the start, in scene setting, the two sources of innovation are identified as either research efforts based on large amounts of gathered evidence (research-led) or informal literature such as blogs and teacher networks (teacher-led) – which means, woohoo, if anyone does anything based on what I’ve written in here, it’s a teacher-led innovation. (I realise that there is an argument for overlap here, but it appears that formal research publication denotes the division, and there is no reason why a teacher-led initiative couldn’t be high quality if it was still evidence-based, even without strict formal publication.)

Looking across the world, the report started with 210 cases that were either research- or teacher-led and narrowed this down to a representative sample of 150. What’s interesting, to me, is the split by country between research- and teacher-led projects. The US has 65 ‘innovations’: 28 teacher-led, 37 research-led. The UK has 64: 45 teacher-led, 19 research-led. Australia has 9, all of which are teacher-led. Outside of the UK and Australia, the most likely approach to educational innovation is a research-based one. It appears that our relationship to the UK educational system may be even closer than we thought in this respect. However, to look in more detail at these innovations, we have to look at the breakdown of the ways that we see students learning with technology. The learning themes in this document are:

  • Learning from Experts
  • Learning with Others
  • Learning through Making
  • Learning through Exploring
  • Learning through Inquiry
  • Learning through Practising
  • Learning from Assessment
  • Learning in and from Settings

Most of these are pretty self-explanatory (and highly constructivist, unsurprisingly) but they are based on the learners’ actions and include factors such as the resources employed and the structure – which gives a greater potential depth to the classification as you can’t just say you’re doing X, you have to support it with technological resources and learning design.

A very important point raised early on about the teacher-driven/research-driven dichotomy is that the requirement for large volumes of evidence, in the case of research publication, can tend to make research-led initiatives more risk averse, in that much more information has to be gathered before recommendations can be adopted or conclusions can be drawn. Teacher-led initiatives can highlight serious innovations that are worth trying, but these may not yet have the evidence behind them to actually provide a convincing argument. What a dilemma! I can either have evidence for something that I probably already thought of, or take a chance on something for which I have no evidence – and in the world of technology, where innovation often costs money, good luck getting a solid amount of cash based on a good feeling about an innovation direction. I need to look further in the case of Australia, because I know a great number of excellent educational researchers here who are, as far as I know, proposing solid research-led innovations, but they aren’t showing up on this particular radar. And, being cynical, if it’s not showing up on NESTA’s radar, it’s probably not showing up at the government level and, hearts and minds, we want the government to be aware that the research approaches (often University-driven) are visible, viable and valuable. (Another thing for the to-do list, apart from finding alliterative phrases starting with ‘x’.)

In looking at the themes, I find it interesting to think about how they are both guidelines of good practice and cautionary tales. When we set up technology that enables us to Learn from Experts, which is one of the potential underlying principles of the MOOC, we have to make sure that we’re actually providing experts. There’s an interesting example of the statistics expert who tore apart an on-line stats course and, while it was rapidly corrected, we have that slight worry that the power to set up a course in no way correlates with the ability to actually provide the course information. Of course, I’m not a trained teacher, but my qualification in my academic discipline and prior industry experience do provide me with a level of expected expertise in an area. I’m not allowed to get out in front of students unless I reach a certain bar of qualification – but that is most certainly not always the case elsewhere. Suddenly the technology innovation theme “Learning from Experts” becomes the source of a philosophical reflection on how we are doing this at all – do we even refer to experts in innovation, education or the discipline? If we want a combination of these, how does it work? As noted in the report, it’s not just access to the expert that learners need, it’s the supporting dialogue between them that assists in knowledge construction and learning. How can innovation in technology support this new dialogue in a way that works?

The future is not just about the provision of information; we solved that problem in the first instance with the book, refined it with the library and then did … something … with it when we developed Wikipedia (all joking aside, on-line resources have added immediacy and ubiquity to the information provision solution). The future is about successful learning, which involves the development of knowledge, and thus involves the arrangement, storage, organisation, retrieval, and development of information in order to support that newly constructed knowledge. There’s a lot of scope for the development of innovative technological tools in this space but, as the report clearly indicates through its themes, this involves thinking about how we learn, how we’re going to learn and how the tech can help us to achieve it.

There’s still a lot of research- and teacher-led innovation to come, which is great because we all love a challenge, but I’d like to finish by noting what is not one of the key themes from the NESTA report. There is no “Learning from watching dull videos of uninteresting material presented with the least effort possible, because that’s how it’s always been done” because this is, quite simply, not innovative. We already know how well that works and that’s why we have to innovate now. Viva the glorious fusion of cutting edge innovation and sufficient evidence to allow us to leap off the metaphorical cliff!

Oh good, it’s Monday.
(Photo by John Moore/Getty Images)


Successful Organisms Use Their Environment Well

I saw a fascinating talk at Creative Innovations 2012 from Wade Davis, who has the coolest title in the world: National Geographic’s explorer-in-residence. Wade made the point that successful human civilisations use their environment optimally, rather than fighting it. He gave several examples, but the one that stuck in my head was about the Inuit, who use the cold of their environment as an additional advantage. Instead of using metal rails, their sleds ran on frozen fish – because fish is a low-friction material and, frozen, it’s hard enough to sled on. Get trapped somewhere and unable to get home? You can always eat your runners.

I am writing this in Australia at the start of summer, where we regularly hit 100+ F (38+ C) and, because it’s going to be hot today and I don’t have any serious meetings, I’m wearing a short-sleeved shirt and shorts: I’ll be out of the sun for most of the day and fewer clothes means less heat. This is sensible adaptation. However, what is not sensible is that, if I did have serious meetings, not only would I be wearing at least a long-sleeved shirt and trousers, I might even be in a suit and tie. This is, quite frankly, dumb, and shows no adaptation of work processes to the local environment.

This is, however, not new, as any study of the British Empire in the tropics will show you. Once convention, and conformity to that convention, dominates over adaptation to the environment, you end up making bad decisions. Worse still, if you don’t take the environment into account, you might see perfectly reasonable adaptations as rebellious and unconventional.

In Australia, the trade-off is always how much clothing I have to wear to balance fashion, sun screening, temperature requirements and business requirements, while avoiding prosecution for public nudity. If we don’t wear the right clothing, we put additional demand on our environmental masking technologies, such as air conditioning, or on public transport. If I dress in a way that means I can’t walk to work, then I’m fighting my real environment because of my overlaid work environment.

How is this an educational issue? I think that this ties in with our overlaid assessment environments for our students. If we create an environment that doesn’t actually encourage the behaviours that we want, as natural extensions and relations, then we will start to get adaptive behaviour to the real triggers in the environment, which will appear to us as aberrant and rebellious behaviours.

We want our students to do all of the work because it contributes to their development of knowledge and their ability to apply that knowledge. However, by providing certain commonly used assessment types – for example, mass-produced assignments that don’t vary from year to year – I believe that we risk forming an environment where the assignments are seen as barriers rather than achievements. Humans optimise to get around barriers, and we are very good at finding the easiest way to do this. We have a cultural convention that students shouldn’t cheat or plagiarise, and this is a perfectly reasonable convention. If we build an environment that weakens the perceived sincerity of this convention, or one that implicitly rewards this activity without a high probability of detecting it, then we have set up a conflict. We are asking students to be less than optimal in the way that they work in an environment, with unnatural constraints to keep them in place. With better design, we can create environments that are a better fit to our conventions and are more consistent and integrated – but this takes design. “Because I say so” is never as strong as “because it’s actually necessary”.

Humans adapt to their environment in order to succeed. This is why we dominate and it’s part of who we are. By thinking about this behaviour, I think that we can get a clearer view of why our students sometimes do what they do, even when they are acting at odds with what we’ve explicitly told them to do. I’m most certainly not saying that we can accept students not doing enough work to get the knowledge, or passing off other people’s work as their own, as that’s completely at odds with helping them to build their knowledge. But perhaps it’s worth looking at every assignment we set up to see if the optimised behaviour, in terms of effort, innovation, autonomy, mastery, purpose and enjoyment, in the assignment environment will actually be along the lines that we are after.

I realise that some people will think that I’m putting the blame for cheating on our shoulders. To be clear, I accept the active role students play, and some students will cheat no matter what we do. But some assignment environments and types are better than others at encouraging our students to work as we want them to, and I think it’s worth thinking of this as environmental optimisation.


AAEE 2012 – Yes, Another Conference

In between writing up the conventicle (which I’m not doing yet), the CI Conference (which I’m doing slowly) and sleep (infrequent), I’m attending the Australasian Association for Engineering Education 2012 conference. Today, I presented a paper on e-Enhancing existing courses and, through a co-author, another paper on authentic teaching tool creation experiences.

My first paper gave me a chance to look at the Google analytics and tracking data for the on-line material I created in 2009. Since then, there have been:

  • 11,118 page views
  • 2.99 pages viewed/visit
  • 1,721 unique visitors
  • 3,715 visits overall
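As a quick sanity check on those figures (a sketch only – the variable names are mine, and the numbers are simply the totals from the list above), the pages-per-visit figure follows directly from the other totals:

```python
# Totals copied from the Google Analytics summary above.
page_views = 11_118
visits = 3_715
unique_visitors = 1_721

# Pages viewed per visit, as the analytics summary reports it.
pages_per_visit = page_views / visits
print(f"{pages_per_visit:.2f} pages/visit")  # 2.99, matching the reported figure

# Average number of visits per unique visitor.
visits_per_visitor = visits / unique_visitors
print(f"{visits_per_visitor:.2f} visits/visitor")
```

The second figure is consistent with the observation below that a majority of viewers return to the material more than once.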

The other interesting thing is that roughly 60% of the viewers return to view the podcasts again. The theme of my talk was “Is E-Enhancement Worth It?” and I had the pleasure of pointing out that I felt it was because, as I was presenting, I was simultaneously being streamed giving my thoughts on computer networks to students in Singapore and (strangely enough) Germany. As I said in the talk and in the following discussion, the podcasts are far from perfect and, to increase their longevity, I need to make them shorter and more aligned to a single concept.

Why?

Because, while the order in which I present concepts may change due to sequencing and scaffolding changes, the way that I present an individual concept is more likely to remain the same over time. My next step is to make a series of conceptual podcasts of maybe 3-5 minutes in duration. Then the challenge is how to assemble these – I have ideas but not enough time.

One of the ideas raised today was that we are seeing the rise of the digital native, a new type of human acclimatised to a short gratification loop, multi-tasking, and a non-linear mode of learning. I must be honest and say that everything I’ve read on the multi-tasking aspect, at least, leads me to believe that this new generation doesn’t multi-task any better than anyone else did. If they do two things, then they do them more slowly and don’t achieve the same depth: there’s no shortage of research work on this and, given the limits of working memory and cognition, this makes a great deal of sense. Please note, I’m not saying that Homo Multiplexor can’t emerge, it’s just that I have not yet seen any strong scientific evidence to back up the anecdotes. I’m perfectly willing to believe that default searching activities have changed (storing ways of searching rather than the information itself) because that is a logical way to reduce cognitive load, but I am yet to see strong evidence that my students can do two things at once well and without any loss of time. Either working memory has completely changed, which we should be able to test, or we risk confusing the appearance of doing two things at once with actually doing two things at once.

This is one of those situations that, as one of my colleagues observed, leaves us in that difficult position of being told, with great certainty, about a given student (often someone’s child) who can achieve great things while simultaneously watching TV and playing WoW. Again, I do not rule out the possibility of a significant change in humanity (we’re good at it) but I have often seen that familiar tight smile and the noncommittal nod as someone doesn’t quite acknowledge that your child is somehow the spearhead of a new parallelised human genus.

It’s difficult sometimes to express ideas like this. Compare this to the numbers I cited above. Everyone who reads this will look at those numbers and, while they will think many things, they are unlikely to think “I don’t believe that”. Yet I know that there are people who have read this and immediately snorted (or the equivalent) because they frankly disbelieve me on the multi-tasking, with no more or less hard evidence than that supporting the numbers. I’m actually expecting some comments on this one because the notion of the increasing ability of young people to multi-task is so entrenched. If there is a definitive set of work supporting this, then I welcome it. The only problem is that all I can find supports the original work on working memory and associated concepts – there are only so many things you can focus on and, beyond that, you might be able to function but not at much depth. (There are exceptions, of course, but the 0.1% of society do not define the rule.)

The numbers are pasted straight out of my Google analytics for the learning materials I put up – yet you have no more reason to believe them than if I said “83% of internet statistics are made up”, which is a made-up statistic. (If it is true, it is accidentally true.) We see again one of the great challenges in education: numbers are convincing, evidence that contradicts anecdote is often seen as wrong, and finding evidence in the first place can be hard.

One more day of conference tomorrow! I can only wonder what we’ll be exposed to.


David and Goliath: Who Needs The Strategy? (CI 2012 Masterclass)

I attended a second masterclass on the first day of Creative Innovations, this one entitled “Strategic Diagnosis and Action”, given by Richard Rumelt from UCLA. Richard gave a fascinating and well-polished presentation on how you can actually get a strategy that you can use, as opposed to something that you can print up, put on the wall, and completely ignore. Richard’s definition of strategy is pretty straightforward:

A strategy is a coherent mix of policy and action designed to surmount a high-stakes challenge.

As I believe I’ve noted before, having a strategy is not just useful in terms of knowing where you’re going, it also allows you to make a choice between two (apparently) equal choices. Richard’s question is “What are we going to do in order to meet a challenge?” and, in my application, this makes any choice a matter of “which of these choices will give me the greatest assistance in meeting the challenge?”

As Richard said in his talk, when David met Goliath, if you’re Goliath you don’t think you need a strategy. David, however, has a high-stakes challenge and better be as fast with his mind as he is with his feet. Goliath winning? That’s not a strategy story; a strategy story is about the discovery or creation of strength (where we’re surprised to see it).

David has a post-negotiation debrief with the head of Goliath industries.


Another point that was clearly emphasised is that if you take a challenge focus to your strategy, your strategy can be specific to that challenge and, as a result, clearer and more goal focused. The example of Apple was given. When Steve Jobs returned in 1997, he had to implement a new strategy, but it wasn’t one of growth or market domination – it was a survival strategy. Cutting 15 desktops to 1? Survival. Cutting 5 of the 6 national retailers? Survival. Off-shoring everything possible to reduce expenditure? Survival. This is a coherent strategy with clear and sharp action. The strategy is all about addressing the most important challenge that you have: if it’s survival, fix that first.

The speaker had apparently spoken to Jobs in 1998 and Jobs had said that he was going to “wait for the next big thing”. Well, in 1998 that made sense. Rather than being a second-rate (or small share) PC producer, Apple’s approach was to find a new market where they could dominate. The survival strategy kept the company going for long enough that they could switch to a new strategy of dominating the new music and mobile markets. And, of course, by doing this, Jobs got to set the new rules for that area. There’s a reason that iPods, by default, only work with iTunes and that Apple has complete vertical control. That reason is predominantly because it allows Apple to totally control that market, to avoid having to go through the hard lessons of 1997 and 1998 again.

So, taking this into my Educational Sector setting, what is the strategy that Universities should be employing? Well, first of all, the global tertiary sector is not one business so we’re restricted to individual institution decision making, even where state and federal guidelines are in play. The survival strategy is, to me, effectively off the table. If global education is under an extinction threat then we are facing a catastrophe of such proportion that human survival is probably the requisite strategy. If the MOOC is so successful, and of the required quality, that it can replace the University then a survival strategy for the Unis is ethically questionable as we are spending more money to achieve the same result, assuming that when MOOCs are fully costed they end up being cheaper.

So, either way we slice it, a survival strategy for Universities doesn’t actually look like a valid one. But what does a strategy for a University look like anyway? Let’s step back and ask what Richard thinks a general strategy is and isn’t.

Firstly, strategy is not a set of goals and, according to Richard, you know it’s a bad strategy when it’s all performance goals and no diagnosis and analysis. “We will increase revenue by 20%.” Great. How?

You know it’s a bad strategy when it’s all fluff. “Our fundamental strategy is one of customer-centric intermediation”, from a bank. Good, you’re a retail bank, now what? How do you apply a values statement meaningfully to 30,000 people? Richard sees this as a childish approach – a third-grade recitation of “I will not chew gum in school” – and not productive when it contaminates a strategy.

If no-one has bothered to diagnose what the problem is – bad strategy. To act with intelligence and get a good strategy, you need to define the nature of the problem. (One of the most refreshing things about the new strategy that is about to be released for my Uni is that I know a great deal of problem definition underpins it – so I’m quite looking forward to reading it when it’s released shortly. I am quite hopeful that little, if any, of the critique here will apply.)

What about if you have 47 strategies, 178 action items and Action Item #122 is “Develop a strategic plan”? It’s a dog’s dinner into which everything you could find in the fridge has gone, with no discipline, diagnosis, analysis or thought.

So how do we make a good strategy? Diagnose the challenge. Provide guiding policy. Build a set of coherent actions into the strategy and don’t just provide goals as if they are self-solving problem elements.

In terms of Universities, and the whole higher education sector, this means that we not only have to work out what our challenges are, but we have to pick challenges that we can solve. (A previous Prime Minister of Australia famously declared that “by 1990, no Australian child will be living in poverty.” Given that the definition of poverty in Australia is relative to the affluence enjoyed by other sectors, rather than the ‘true’ international definition of subsistence, this is declaring war on an unwinnable challenge. The goal “Stop the drug trade” is equally fraught as it requires legal powers that do not, and may not, exist.)

What are the key challenges facing Universities? Well, if we take survival off the table for the institutions, we change the challenge focus to “what are the key challenges facing the post-school education market in Australia?” and that gives us an entirely new lens on the problem.

I have a lot of thinking to do but, as I said, I’m looking forward to what our new strategic plan will look like for the next 5 years, because I hope that it will help me to identify a subset of challenges that I can look at. Having done that, then I can ask “which are the ones I can help to solve?”


Killing Your Darlings: The Cost of Innovation (CI 2012)

I’m going to take a slightly more informal approach to some of the themes expressed at CI 2012, because I have a lot of things to do, and you have a lot of things to do, so we can’t sit here waiting for me to write everything up, and you most certainly don’t want to read 100,000 words about What Nick Did In Late Spring In Melbourne. So let’s go forward.

Innovation is the introduction of the new, whether product, service or idea, but we know what this really means – it means that we have to let go of something old. Letting go of something old is not going to be easy, and how difficult it is can be a very complicated and emotional calculus, so innovation, which can already be hard, is made harder because change can hurt.

If you’re a writer, you may have heard the term “Kill your darlings”, which is attributed to Faulkner (the other one) and is a recasting of the following quote from Sir Arthur Quiller-Couch:

“Whenever you feel an impulse to perpetrate a piece of exceptionally fine writing, obey it – whole-heartedly – and delete it before sending your manuscripts to press. Murder your darlings”

On shallow reading, it appears that any attachment to something makes it eligible for extinction when what is really meant is that sentimentality is the enemy of objectivity. Innovative change is full of situations where your attachment to elements of your existing situation, or an entrenched commitment to the status quo (no, not the band), will compromise your ability to objectively assess whether you are making a correct decision.

There is a statement that every industry will go away at some stage – we’ve seen the rise and fall of so many that such a statement appears to have some credibility. But what about education? We have changed a great deal, but will the industry of education ever truly disappear? I honestly can’t say, but I can talk about a simpler problem, which is what the “darlings” are in the traditional Higher Education system. And, sure enough, when we start talking about innovation and the threat of the new, we see these darlings protected in a way that doesn’t always seem objective. Now, we don’t have to kill any of them, but change is inevitable and, if change is to come in, something has to go out. I have a starting list, which I’m planning to work on over time.

  1. Darling #1, The Lecture:

    We know that the traditional 1-to-many broadcast lecture is a successful way to occupy the time of everyone in the room but it is most certainly not the best way to get certain types of information across. There are many different aspects to this but conference talks and seminars are a world away from the traditional “today I will talk slowly about differential equations while I flash hundreds of slides past you at a speed that you can’t record and no you can’t have any notes or recording”.

    Yes, some lecturers are better than others but when information transfer and retention is important, the lecture is not the right delivery mechanism. Yet, it’s almost unassailable in its ubiquity. It’s a darling.

  2. Darling #2, The Exam:

    I was looking back at my Grand Challenges course, which had a 20% final examination of some of the core topics, and thought about what it had achieved. From my marking of the exam and review of how students prepared, my goal for the exam worked for most of the class. Most had reviewed all of the core material and organised it in a useful way to be able to summarise the core content of the course.

    But did it have to be assigned as a 1-hour exam in a giant examination hall? Did it add anything to the course?

    You know, I’m not sure that it did. Next time, I might just assign an exercise to produce an organised portfolio of work from the course, and then assess it through what is effectively a viva voce examination: checking that students had done enough work to produce a useful index and had sufficient familiarity to rapidly contextualise problems and knowledge. But, and this is important, far more conversationally.

    The examination can be made highly objective and has the advantage that you are really pretty sure that the student is doing the work – but we’re already seeing cheating technology that we will have more and more trouble dealing with. If the only supporting argument for the exam is that it’s harder to cheat, we need a better reason. If the argument is that it will force the student to learn the work, then we’ve got that around the wrong way. We need to bring motivation back into the rest of the course. Right now, the vast majority of learning happens 2-3 days before the exam and is forgotten by the following weekend.

    And yet, exams are everywhere. They’re entrenched institutional artefacts. Hello, darling.

  3. Darling #3, Me and my University:

    Oh no! Apostasy! But let’s be honest, the primary question around MOOCs is whether we need the Universities that we’ve had for so many hundreds of years. If we’re questioning the University, then we’re starting to question the role and future of the teaching academic. Teacherless education was a theme that popped up occasionally at CI 2012 and, while I instinctively react to this in terms of ‘well, who builds these experiences’, we can still learn a lot by looking at what we actually need to make things work.

    I have a small office in a big and old University, with my academic robes hanging on the door for when I walk into the graduation ceremony in the giant old sandstone building once or so every year to farewell and congratulate my graduating students. How much of this is necessary recognition of achievement and how much is a darling?

    Let’s face it – we’re darlings ourselves.

Let me stress that I am not saying that everything must go, but innovation needs space and that means something else has to go. Rather than saying that everything is sacrosanct, we should really be looking at what can and should go, which will drive a search for the new and innovative. My hope would be that by looking at these things, we find the reasons why some of these could stay and belong in the future, rather than propping them up with sentimentality and an ultimately weak approach to necessary change and reinvigoration.

What are your darlings?


Systems Thinking (CI 2012 MasterClass on the Change Lab)

I can’t quite believe how much mileage I’m getting out of the first masterclass but it’s taking me almost as long to go through my notes as it did to write them! I should be back into a semi-normal posting cycle fairly soon – thanks for any patience that you have chosen to extend. 🙂

Can we see all of a system if we’re only in contact with one part? The Change Lab facilitators used the old parable of the six blind men and the elephant to remind us that we can be completely correct about our perception but, due to limitations in our horizon, fail to appreciate the whole. Another example that was brought up was the role of the police in the protection of abused women and children. If a police officer can look at a situation and think either “Well, I don’t think that’s my problem” or “I don’t know what to do”, it’s easy to see how the protective role of the police officer becomes focused on the acute and the extraordinary, rather than the chronic and the systemic.

(That theme, a change in thinking and support from acute to chronic, showed up periodically throughout the conference and my notes.)

In the area of study, the police were retrained to identify what they had to do if they attended and thought that there might be a problem. The police had to get involved, their duties now included the assurance of safety for the at-risk family members and, if they couldn’t get involved themselves, their duty was to find someone else who could fix it and make the connection. We do have protective systems and mechanisms for abused people in domestic situations but there was often a disconnect between domestic violence events that police attended (acute and extraordinary events) and the connecting of people into the existing service network.

Of course, this was very familiar to me because we have the same possibility of disconnection in the tertiary sector. It’s easy to say “go and see the Faculty Office” but it’s that bit harder to ring up the Faculty Office, find the right person, brief them on what a student has already discussed with you and then hand the student over. However, that second set of events is what should happen if you want to minimise the risk of disconnection.

It’s possible to do a remarkable job in some parts of your work and a terrible job in others, because you don’t realise that you are supposed to be responsible for other areas. It has taken me years to work out how many more things are required of me as an educator. Yes, scholarship and the practice of learning and teaching are the core, but how do we do that with real, breathing students? Here are my current thoughts, based on the police example:

  1. Getting Involved: If a student comes to me with a problem, then if I can fix it, I should try and fix it. My job does not begin when I walk into the lecture theatre and finish when I leave the room – I do have a real and meaningful commitment to my students while they are in my course. Yes, this is more work. Yes, this takes more time. Yes, I don’t know what to do sometimes and that’s scary. However, I do hope that my students know that I’m trying and, even when I’m moving slowly, I’m still involved.
  2. The Assurance of Safety: Students have a right to feel safe and to be safe when they’re studying. That means a learning space free from discrimination, bullying and fear, working in an atmosphere of mutual respect. If they feel unsafe, then they should feel safe to come to me to talk about it. This also means that students have a right to feel safe in the pursuit of their studies: no indifferently constructed assignments where 60% of students fail and it’s dismissed as ‘dumb students’.
  3. If You Can’t Fix It, Find Someone Who Can: Once you’ve done a PhD, one of the key things you work out is how much you don’t know. My Uni, like most Unis, is a giant and complex administrative structure. I don’t have the answer to all of the questions but I do have a spreadsheet of duties for people in my school and a phone book. However, saying “Go to X” is never going to be as good as trying to help someone by connecting them to another person and handing them over. If I can answer a question, I should try to. If I can’t, I should try and find the right person and then connect the student. The final part of this is that I should follow up where I can to see what happened and learn so I know the answer for next time.

The final point is, to me, fascinating because it has made me aware of how hard it can be to find the answer, even when you’re inside the system as a staff member! I always tell my students that if they need something done and aren’t making headway, get me involved because I have the big, scary signature block on my e-mail. Now, mostly our culture is very good and you don’t have to be a Professor or Associate Dean to get progress made… but it is funny how much more attention you sometimes get. I’m very happy to use my (really quite mild) corner of borrowed status if it will help someone to start on the pathway to fixing a problem, but I’m also very happy to report that it’s rare that I have to use it, except for the occasional person outside of the University.

It’s important to note that I don’t always succeed in doing all of this. I’m always involved and I’m always working to guarantee safety, but the work involved in a connected handover is sometimes so large that I don’t actually have enough time or resources to close the connection. This, to me, illustrates a good place to focus my efforts: improving the entry points to our systems so that we all end up at the right destination with the minimum number of false starts and dead ends.

Like I said, we’re normally pretty good, but I think that we can be better – and thinking about our system as a system makes me aware of how many things I need to do beyond teaching, if I’m going to call myself an educator.