Three Stories: #1 What I Learned from Failure

It’s considered bad form to start ‘business stories’ with “Once upon a time” but there’s a strong edge of bard to my nature and it’s the end of a long year. (Let’s be generous.) So, are you sitting comfortably? (Ok, I’ll spare you ‘Once…’)

Many years ago, I went to university, after a relatively undistinguished career at school. I got into a course that was not my first preference but, rather than wonder why I had not set the world on fire academically, I assumed that it was because I hadn’t really tried. The companion to this sentiment is that I could achieve whatever I wanted academically, as long as I really wanted it and actually tried. This concept got a fairly good workout over the next few years, despite evidence that I was heading in a downward spiral academically. What I became good at was barely avoiding failure, rather than excelling, and while this is a skill, it’s a dangerous line to try to walk. If you’re genuinely aiming to excel, which includes taking the requisite planning steps and making the time commitment you need, and you fall short, then you will probably still do quite well and pass. If you are aiming lower down, then missing that bar means failure.

What I didn’t realise at the time was that I was almost doomed to fail when I tried to set my own interpretation of what constituted the right level of effort and participation. If you are a student who has a good knowledge of the whole course then you will have a pretty good idea of how you have answered questions in exams, what is required for assignments and, if you wanted to, you could choose to answer part of a question and have some idea of how many marks are involved. If you don’t know the material in detail, then your perception of your own performance is going to be heavily filtered by your own lack of knowledge. (A reminder of a previous post on this for those who are new here or are vague post-Christmas.)

After some years out in the workforce, and coming back to do postgraduate study, I finally learned something from what should have been quite clear to me, if it hadn’t been hidden by two things: my firm conviction that I could change things immediately if I wished to, and my completely incorrect assumption that my own performance in a subject could be assessed by someone with my level of knowledge!

I became a good student because I finally worked out three key things (with a lot of help and support from my teachers and my friends):

  1. There is no “lower threshold” of knowledge that allows you to predict if you’re going to pass. If you have enough grasp of the course to know how much you need to do to pass, then you probably know enough to do much better than that! (Terry Pratchett covers this beautifully in a book called “Moving Pictures”, where a student has to know the course better than the teachers to maintain a very specific grade over the years.)
  2. Telling yourself that you “could have done better” is almost completely useless unless you decide to do better and put a plan in place to achieve that. This excuse gets you off the hook but, unless it’s teamed with remedial action, it’s just an excuse.
  3. Setting yourself up for failure is just as effective as setting yourself up for success, but it can be far subtler and comprised of many small actions that you don’t take, rather than a few actions that you do take.

Knowing what is going wrong (or thinking you do) doesn’t change anything unless you actively try to change it. It’s a simple truth that, I hope, is a useful and interesting story.


A Break in the Silence: Time to Tell a Story

It has been a while since I last posted here but that is a natural outcome of focusing my efforts elsewhere – at some stage I had to work out what I had time to do and do it. I always tell my students to cut down to what they need to do and, once I realised that the time I was spending on the blog was having one of the most significant impacts on my ability to juggle everything else, I had to eat my own dogfood and cut back on the blog.

Of course, I didn’t do it correctly because instead of cutting back, I completely cut it out. Not quite what I intended but here’s another really useful piece of information: if you decide to change something then clearly work out how you are going to change things to achieve your goal. Which means, ahem, working out what your goals are first.

I’ve done a lot of interesting stuff over the last 6 months, and there is more to come, which means that I do have things to write about. I shall try to write about one a week as a minimum, rather than one per day. This is a pace that I hope to keep up and one that will mean that more of you will read more of what I write, rather than dreading the daily kiloword delivery.

I’ll briefly reflect here on some interesting work and seminars I’ve been looking at on business storytelling – taking a personal story, something authentic, and using it to emphasise a change in business behaviour or to emphasise a characteristic. I recently attended one of the (now defunct) One Thousand and One’s short seminars on engaging people with storytelling. (I’m reading their book “Hooked” at the moment. It’s quite interesting and refers to other interesting concepts as well.) I realise that such ideas, along with many of my notions of design paired with content, will have a number of readers peering at the screen and preparing a retort along the lines of “Storytelling? STORYTELLING??? Whatever happened to facts?”

Why storytelling? Because bald facts sometimes just don’t work. Without context, without a way to integrate information into existing knowledge and, more importantly, without some sort of established informational relationship, many people will ignore facts unless we do more work than just present them.

How many examples do you want? Climate change, vaccination, 9/11. All of these have heavily weighted bodies of scientific evidence stating what the answer should be, and yet there is powerful and persistent opposition based, largely, on myth and storytelling.

Education has moved beyond the rationing out of approved knowledge from the knowledge rich to those who have less. The tyrannical informational asymmetry of the single text book, doled out in dribs and drabs through recitation and slow scrawling at the front of the classroom, looks faintly ludicrous when anyone can download most of the resources immediately. And yet, as always, owning the book doesn’t necessarily teach you anything and it is the educator’s role as contextualiser, framer, deliverer, sounding board and value enhancer that survives the death of the drip-feed and the opening of the flood gates of knowledge. To think that storytelling is the delivery of fairytales, and that is all it can be, is to sell such a useful technique short.

To use storytelling educationally, however, we need to be focused on being more than just entertaining or engaging. Borrowing heavily from “Hooked”, we need to have a purpose in telling the story, it needs to be supported by data and it needs to be authentic. In my case, I have often shared stories of my time working with computer networks, in short bursts, to emphasise why certain parts of computer networking are interesting or essential (purpose), I provide enough information to show this is generally the case (data) and, because I’m talking about my own experiences, they ring true (authenticity).

If facts alone could sway humanity, we would have adopted Dewey’s ideas in the 1930s, instead of rediscovering the same truths decade after decade. If only the unembellished truth mattered, then our legal system would look very, very different. Our students are surrounded by talented storytellers and, where appropriate, I think those ranks should include us.

Now, I have to keep to the commitment I made 8 months ago, that I would never turn down the chance to have one of my cats on my lap when they wanted to jump up, and I wish you a very happy new year if I don’t post beforehand.


Skill Games versus Money Games: Disguising One Game As Another

I recently ran across a very interesting article on Gamasutra on the top tips for turning a Free To Play (F2P) game into a paying game by taking advantage of the way that humans think and act. F2P games are quite common but, obviously, it costs money to make a game, so there has to be some sort of associated revenue stream. In some cases, the F2P is a Lite version of the pay version, so after being hooked you go and buy the real thing. Sometimes there is an associated advertising stream, where your viewing the ads earns the producer enough money to cover costs. However, these simple approaches pale into insignificance when compared with the top tips in the link.

Ramin identifies two kinds of games for this discussion: games of skill, where it is your ability to make sound decisions that determines the outcome, and money games, where your success is determined by the amount of money you can spend. Games of chance aren’t covered here because, given that we’re talking about motivation and agency, they depend upon one specific blind spot (the inability of humans to deal sensibly with probability) rather than the range of issues identified in the article.

I don’t want to rehash the entire article but the key points that I want to discuss are the notions of manipulating difficulty and fun pain. A game of skill is effectively fun until it becomes too hard. If you want people to keep playing then you have to juggle the difficulty enough to make it challenging but not so hard that they stop playing. Even where you pay for a game up front, a single payment to play, you still want to get enough value out of it – too easy and you finish too quickly and feel that you’ve wasted your money; too hard and you give up in disgust, again convinced that you’ve wasted your money. Ultimately, in a pure game of skill, difficulty manipulation must be carefully considered. As the difficulty ramps up, the player is made uncomfortable – the delightful term fun pain applies here – and resolving the difficulty removes this discomfort.

Or, you can just pay to make the problem go away. Suddenly your game of skill has two possible modes of resolution: play through increasing difficulty, at some level of discomfort or personal inconvenience, or, when things get hard enough, pump in a deceptively small amount of money to remove the obstacle. The secret of the F2P game that becomes successfully monetised is that it was always about the money in the first place, and the initial rounds of the game were just enough to get you engaged to a point where you now have to pay in order to go further.
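To make the mechanics concrete, here is a toy simulation of that loop. It is my own sketch, not anything from Ramin’s article: the exponential difficulty curve, the grind tolerance and the `price_per_skip` are all invented numbers, chosen only to show how a ramp that outpaces skill converts play into payment.

```python
# Toy model of "fun pain" monetisation: difficulty ramps each level; while
# the gap between difficulty and skill is small, the player grinds through
# (and improves); once the gap exceeds what they'll tolerate, the only
# remaining move is to pay to remove the obstacle.

def play_session(skill, levels, grind_tolerance, price_per_skip):
    """Return (levels_cleared, money_spent) for one hypothetical player."""
    spent = 0.0
    for level in range(1, levels + 1):
        difficulty = 1.5 ** level        # exponential ramp: invented curve
        if difficulty <= skill:
            continue                     # still a pure game of skill
        pain = difficulty - skill        # the "fun pain" gap
        if pain <= grind_tolerance:
            skill += pain                # grind through, and improve
        else:
            spent += price_per_skip      # pay to make the problem go away
    return levels, spent

cleared, spent = play_session(skill=5.0, levels=10,
                              grind_tolerance=2.0, price_per_skip=0.99)
print(cleared, round(spent, 2))          # 10 5.94
```

The point the numbers make: the early levels are winnable on skill alone and the middle levels are still grindable, so the player is thoroughly engaged before the first payment is ever demanded.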

You can probably see where I’m going with this. While it would be trite to describe education as a game of skill, it is most definitely the most apt of the different games on offer. Progress in your studies should be a reflection of invested time in study, application and the time spent in developing ideas: not based on being ‘lucky’, so the random game isn’t a choice. The entire notion of public education is founded on the principle that educational opportunities are open to all. So why do some parts of this ‘game’ feel like we’ve snuck in some covert monetisation?

I’m not talking about fees here, because the fee holds the place of the price you pay to buy a game in the first place. You all pay the same fee and you then get the same opportunities – in theory, the only variable in what comes out is what the student then puts in.

But what about textbooks? Unless the fee we charge automatically, and unavoidably, includes the cost of the textbook, we have now broken the game into two pieces: the entry fee and an ‘upgrade’. What about photocopying costs? Field trips? A laptop computer? An iPad? Home internet? Bus fare?

It would be disingenuous to place all of this at the feet of public education – it’s not actually the fault of Universities that financial disparity exists in the world. It is, however, food for thought about those things that we could put into our courses that are useful to our students and provide a paid alternative to allow improvement and progress in our courses. If someone with the textbook is better off than someone without the textbook, because we don’t provide a valid free alternative, then we have provided two-tiered difficulty. This is not the fun pain of playing a game, we are now talking about genuine student stress, a two-speed system and a very high risk that stressed students will disengage and leave.

From my earlier discussions on plagiarism, we can easily tie in Ramin’s notion of the driver of reward removal, where players have made so much progress that, on facing defeat, they will pay a fee to reduce the impact of failure; or, in some cases, to remove it completely. As Ramin notes:

“This technique alone is effective enough to make consumers of any developmental level spend.”

It’s not just lost time people are trying to get back, it’s the things that have been achieved in that time. Combine that with, in our case, the future employability and perception of that piece of paper, and we have a very strong behavioural driver. A number of the tricks Ramin describes don’t work as well on mature and aware thinkers but this one is pretty reliable. If it’s enough to make people pay money, regardless of their developmental level, then there are lots of good design decisions we can draw from this – lower-risk assessment, more checkpointing, steady progress towards achievement. We know lots of good ways to avoid this, if we consider it to be a problem and want to take the time to design around it.

This is one of the greatest lessons I’ve learned about studying behaviour, even as a rank amateur. Observing what people do and trying to build systems that will work despite that makes a lot more sense than building a system that works to some ideal and trying to jam people into it. The linked article shows us how people are making really big piles of money by knowing how people work. It’s worth looking at to make sure that we aren’t, accidentally, manipulating students in the same way.


The defining question.

There has been a lot going on for me recently. A lot of thinking, a lot of work and an amount of getting involved in things because my students trust me and will come to me to ask questions, which sometimes puts me in the uncomfortable position of having to juggle my accommodation for the different approaches of my colleagues and my own beliefs, as well as acting in everyone’s best interests. I’m not going to go into details but I think that I can summarise my position on everything, as an educator, by phrasing it in one question.

Is this course of action to the student’s benefit?

I mean, that’s it, isn’t it? If the job is educating students and developing the citizens of tomorrow, then everything that we do should be to the benefit of the student and/or future graduate. But it’s never simple, is it, because the utilitarian calculus to derive benefit quickly becomes complicated when we consider the effect of institutional reputation or perception on the future benefit to the student. But maybe that’s overthinking things (gasp, I hear regular readers cry). I’m not sure I know how to guide student behaviour to raise my University’s ranking in various measures – but I do know how to guide student behaviour to reduce the number of silly or thoughtless things they do, to enhance their learning and to help them engage. Maybe the simple question is the best? Will the actions I take today improve my students’ knowledge or enhance their capacity to learn? Have I avoided wasting their time doing something that we do because we have always done it, rather than giving them something to do because it is what we should be doing? Am I always considering the benefit to the largest group of students, while considering the needs of the individual?

Every time I see a system that has a fixed measure of success, people optimise for it. If it’s maximum profit, people maximise profit. If it’s minimum space, people cut their space. Guidelines help a lot in working out which course of action to take: when faced with a choice between A and B, choose the option that maximises your objective. This even works without a strong vision of the future, which is good because I’m not sure we have a clear enough view of the long path to graduation to really be specific about this. There is always a risk that people will get the assessment of benefit wrong, which can lead to soft marking or lax standards, but I’m not a believer that post hoc harshness is the solution to inherited laxity from another system (especially where that may be a perception that’s not grounded in reality). Looking at all of my actions in terms of a real benefit, to the student, to their community, to our equality standards, to our society – that shines a bright light on what we do so we can clearly see what we’re doing and, if it requires change, illuminates the path to change.


Let’s not turn “Chalk and Talk” into “Watch and Scratch”

We are now starting to get some real data on what happens when people “take” a MOOC (via Mark’s blog). You’ll note the scare quotes around the word “take”, because I’m not sure that we have really managed to work out what it means to get involved in a course that is offered through the MOOC mechanism. Or, to be more precise, some people think they have but not everyone necessarily agrees with them. I’m going to list some of my major concerns, even in the face of the new clickstream data, and explain why we don’t have a clear view of the true value/approaches for MOOCs yet.

  1. On-line resources are not on-line courses and people aren’t clear on the importance of an overall educational design and facilitation mechanism. Many people have mused on this in the past. If all the average human needed was a set of resources and no framing or assistive pedagogy then our educational resources would be libraries and there would be no teachers. While there are a number of offerings that are actually courses, applying the results of the MIT 6.002x to what are, for the most part, unstructured on-line libraries of lecture recordings is not appropriate. (I’m not even going to get into the cMOOC/xMOOC distinction at this point.) I suspect that this is just part of the general undervaluing of good educational design that rears its head periodically.
  2. Replacing lectures with on-line lectures doesn’t magically improve things. The problem with “chalk and talk”, where it is purely one-way with no class interaction, is that we know that it is not an effective way to transfer knowledge. Reading the textbook at someone and forcing them to slowly transcribe it turns your classroom into an inefficient, flesh-based photocopier. Recording yourself standing in front of a class doesn’t automatically change things. Yes, your students can time-shift you, both to a more convenient time and at a more convenient speed, but what are you adding to the content? How are you involving the student? How can the student benefit from having you there? When we just record lectures and put them up there, then unless they are part of a greater learning design, the student is now sitting in an isolated space, away from other people, watching you talk, and potentially scratching their head while being unable to ask you or anyone else a question. Turning “chalk and talk” into “watch and scratch” is not an improvement. Yes, it scales so that millions of people can now scratch their heads in unison but scaling isn’t everything and, in particular, if we waste time on an activity under the illusion that it will improve things, we’ve gone backwards in terms of quality for effort.
  3. We have yet to establish the baselines for our measurement. This is really important. An on-line system is capable of being very heavily tracked and it’s not just link clicks. The clickstream measurements in the original report record what people clicked on as they worked with the material. But we can only measure that which is set up for measurement – so it’s quite hard to compare the activity in this course to other activities that don’t use technology. But there are three subordinate problems to this (and I apologise to physicists for the looseness of the following):
    1. Heisenberg’s MOOC: At the quantum scale, you can either tell where something is or what it is doing – the act of observation has limits of precision. Borrowing that for the macro scale: measure someone enough and you’ll see how they behave under measurement, but the measurements we pick tend to capture either the stage they’ve reached or the actions they’ve taken. It’s very complex to combine quantitative and qualitative measures to be able to map someone’s stage and their comprehension/intentions/trajectory. You don’t have to accept arguments based on the Hawthorne Effect to understand why this does not necessarily tell you much about unobserved people. There are a large number of people taking these courses out of curiosity, some of whom already have appropriate qualifications, with only 27% being the type of student that you would expect to see at this level of University. Combine that with a large number of researchers and curious academics who are inspecting each other’s courses (I know of at least 12 people in my own University taking MOOCs of various kinds to see what they’re like) and we have the problem that we are measuring people who are merely coming in to have a look around and are probably not as interested in the actual course. Until we can actually shift MOOC demography to match that of our real students, we are always going to have our measurements affected by these observers. The observers might not mind being heavily monitored and observed, but real students might. Either way, numbers are not the real answer here – they show us what but there is still too much uncertainty in the why and the how.
    2. Schrödinger’s MOOC: Oh, that poor reductio ad absurdum cat. Does the nature of the observer change the behaviour of the MOOC and force it to resolve one way or another (successful/unsuccessful)? If so, how and when? Does the fact of observation change the course even more than just in enrolments and uncertainty of validity of figures? The clickstream data tells us that the forums are overwhelmingly important to students: 90% of people viewed threads without commenting, and only 3% of the total students enrolled ever actually posted anything in a thread. What was the make-up of that 3%? Was it actual students, or the over-qualified observers who then provided an environment that the other 90% of their peers found useful?
    3. Numbers need context and unasked questions give us no data: As one example, the authors of the study were puzzled that so few people had logged in from China. Anyone who has anything to do with network measurement is going to be aware that China is almost always an outlier in network terms. My blog, for example, has readers from around the world – but not China. It’s also important to remember that any number of Chinese network users will VPN/SSH to hosts outside China to enjoy unrestricted search and network access. There may have been many Chinese people (who didn’t self-identify, for obvious reasons) who were using proxies from outside China. The numbers on this particular part of the study do not make sense unless they are correctly contextualised. We also see a lack of context in the reporting on why people were doing the course – the numbers had to be augmented from comments in the forum that people ‘wanted to see if they could make it through an MIT course’. Why wasn’t that available from the initial questions?
  4. We don’t know what pass/fail is going to look like in this environment. I can’t base any MOOC plans of my own on how people respond to an MIT-branded course but it is important to note that MIT’s approach was far more than “watch and scratch”, as is reflected by their educational design in providing various forms of materials, discussion forums, homework and labs. But still, 155,000 people signed up for this and only 7,000 received certificates. 2/3 of people who registered went on to do nothing. I don’t think that we can treat a success rate of less than 5% as a success. Even if we exclude the 2/3 who did nothing, this still equates to a pass rate under 14%. Is that good? Is that bad? Taking everything into account from above, my answer is “We don’t know.” If we get 17% next time, is that good or bad? How do we make this better?
  5. The drivers are often wrong. Several US universities have gone on the record with concerns that MOOCs undermine their colleagues and have refused to take part in MOOC-related activities. The reasons for this vary but the greatest fear is that MOOCs will be used to reduce costs by replacing existing lecturing staff with a far smaller group and using MOOCs to handle the delivery. From a financial argument, MOOCs are astounding – 155,000 people contacted for the cost of a few lecturers. Contrast that with me teaching a course to 100 students. If we look at it from a quality perspective, taking into account all of the points so far, we have no argument to say that MOOCs are as good as our good teaching – but we do know that they are easily as good as our bad teaching. But from a financial perspective? MOOC is king. That is, however, not how we guarantee educational quality. Of course, when we scale, we can maintain quality by increasing resources, but this runs counter to a cost-saving argument, so we’re almost automatically being prevented from doing what is required to make the large-scale course work by the same cost driver that led to its production in the first place!
  6. There are a lot of statements but perhaps not enough discussion. These are trying times for higher education and everyone wants an edge, more students, higher rankings, to keep their colleagues and friends in work and, overall, to do the right thing for their students. Senior management, large companies, people worried about money – they’re all talking about MOOCs as if they are an accepted substitute for traditional approaches – at the same time as we are in deep discussion about which of the actual traditional approaches are worthwhile and which new approaches are going to work better. It’s a confusing time as we try to handle large-scale adoption of blended learning techniques at the same time people are trying to push this to the large scale.
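As a quick sanity check on the pass-rate figures in point 4, the arithmetic behind “less than 5%” and “under 14%” is just:

```python
# Pass rates from the 6.002x figures quoted above.
enrolled = 155_000
certificates = 7_000
did_nothing = enrolled * 2 // 3      # "2/3 of people who registered ... did nothing"
active = enrolled - did_nothing

raw_rate = certificates / enrolled   # against everyone who signed up
active_rate = certificates / active  # against those who did anything at all

print(f"raw: {raw_rate:.1%}, active: {active_rate:.1%}")
# raw: 4.5%, active: 13.5%
```

Both numbers are defensible, which is exactly the problem: without a baseline, the same course can be reported as a 4.5% disaster or a 13.5% qualified success.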

I’m worried that I seem to be spending most of my time explaining what MOOCs are to people who are asking me why I’m not using a MOOC. I’m even more worried when I have yet to see any strong evidence that MOOCs are going to provide anything approaching the educational design and integrity that has been building for the past 30 years. I’m positively terrified when I see corporate providers taking over University delivery before we have established actual measurable quality and performance guidelines for this incredibly important activity. I’m also bothered by a statement found at the end of the study, which was given prominence as a pull quote:

[The students] do not follow the norms and rules that have governed university courses for centuries nor do they need to.

I really worry about this because I haven’t yet seen any solid evidence that this is true, yet this is exactly the kind of catchy quote that is going to be used on any number of documents that will come across my desk asking me when I’m going to MOOCify my course, rather than discussing if and why and how we will make a transition to on-line blended learning on the massive scale. The measure of MOOC success is not the number of enrolees, nor is it the number of certificates awarded, nor is it the breadth of people who sign up. MOOCs will be successful once we have worked out how to use this incredibly high potential approach to teaching to deliver education at a suitably high level of quality to as many people as possible, at a reduced or even near-zero cost. The potential is enormous but, right now, so is the risk!


Another semester, more lessons learned (mostly by me).

I’ve just finished the lecturing component for my first year course on programming, algorithms and data structures. As always, the learning has been mutual. I’ve got some longer posts to write on this at some time in the future but the biggest change for this year was dropping the written examination component down and bringing in supervised practical examinations in programming and code reading. This has given us some interesting results that we look forward to going through, once all of the exams are done and the marks are locked down sometime in late July.

Whenever I put in practical examinations, we encounter the strange phenomenon of students who can mysteriously write code in very short periods of time in a practical situation very similar to the practical examination, but suddenly lose the ability to write good code when they are isolated from the Internet, e-Mail and other people’s code repositories. This is, thank goodness, not a large group (seriously, it’s shrinking the more I put prac exams in) but it does illustrate why we do it. If someone has a genuine problem with exam pressure, and it does occur, then of course we set things up so that they have more time and a different environment, as we support all of our students with special circumstances. But to be fair to everyone, and because this can be confronting, we pitch the problems at a level where early achievement is possible and they are also usually simpler versions of the types of programs that have already been set as assignment work. I’m not trying to trip people up, here, I’m trying to develop the understanding that it’s not the marks for their programming assignments that are important, it’s the development of the skills.

I need those people who have not done their own work to realise that it probably didn’t lead to a good level of understanding or the ability to apply the skill as they would in the workforce. However, I need to do so in a way that isn’t unfair, so a lot of careful learning design goes in, even to the selection of how much each component is worth. The reminder that you should be doing your own work is not high stakes – 5-10% of the final mark at most – and builds up to a larger practical examination component, worth 30%, that comes after a total of nine practical programming assignments and a previous prac exam.

This year, I’m happy with the marks design because it takes fairly consistent failure to drop a student to the point where they are no longer eligible for redemption through additional work. The scope for achievement spans knowledge of course materials (on-line quizzes, in-class scratch-card quizzes and the written exam), programming with reference materials (programming assignments over 12 weeks), programming under more restricted conditions (the prac exams) and even group formation and open problem handling (with a team-based report on the use of queues in the real world). To pass, a student needs to do enough in all of these. To excel, they have to have a good broad grasp of both the theoretical and the practical.

This is what I’ve been heading towards for this first-year course: a course that I am confident turns out students who are programmers and who have enough knowledge of core computer science. Yes, students can (and will) fail – but only if they really don’t do enough in more than one of the target areas and then don’t focus on that to improve their results. I will fail anyone who doesn’t meet the standard but I have no wish to do any more of that than I need to. If people can come up to standard within the time and resource constraints we have, then they should pass.
The trick is holding the standard at the right level while you bring up the people – and that takes a lot of help from my colleagues, my mentors and from me constantly learning from my students and being open to changing the learning design until we get it right.

Of course, there is always room for improvement, which means that the course goes back up on blocks while I analyse it. Again. Is this the best way to teach this course? To find out, we will look at results across the course. We’ll track prac exam performance against all practicals, against the two different types of quizzes, against the reports and against the final written exam. We’ll go back into the detail of the written answers to the code-reading question to see whether articulation matches comprehension. We’ll assess the quality of the exam responses, as well as the final marked outcome, to tie this back to developmental level, if possible. We’ll look at previous results, entry points, pre-University marks…

And then we’ll teach it again!


The Continuum of Ethical Challenge: Why the Devil Isn’t Waiting in the Alleyway and The World is Harder than Bioshock.

This must be a record for a post title but I hope to keep the post itself shortish. Years ago, when I was still at school, a life counsellor (who was also a pastor) came to talk to us about life choices and ethics. He was talking about the usual teen cocktail: sex, drugs and rebellion. However, he made an impression on me by talking about his early idea of temptation. Because of the fire-and-brimstone preaching he’d grown up with, he half expected temptation to take the form of the Devil, beckoning him into an alleyway to take an illicit drag on a cigarette. As he grew up, and grew wiser, he realised that living ethically was really a constant set of choices, interlocking or somewhat dependent, rather than an easy life periodically interrupted by strictly defined challenges that could be overcome with a quick burst of willpower.

[Image: an alley with green eyes floating, superimposed over it.]

It’s still a creepy mental image, of course.

I recently started replaying the game Bioshock, which I have previously criticised elsewhere, and was struck by the facile nature of the much-vaunted ethical aspect to game play. For those who haven’t played it, you basically have a choice between slaughtering or saving little girls – apart from that, you have very little agency or ability to change the path you’re on. In fact, rather than provide you with the continual dilemma of whether you should observe, ignore or attack the inhabitants of the game world, you very quickly realise that there are no ‘good’ people in the world (or there are none that you are actually allowed to attack, they are all carefully shielded from you) so you can reduce your ‘choices’ when encountering a figure crouching over a pram to “should I bludgeon her to death, or set her on fire and shoot her in the head”. (It’s ok, if you try anything approaching engagement, she will try and kill you.) In fact, one of the few ‘innocents’ in the game is slaughtered in front of you while you watch impotently. So your ethical engagement is restricted, at very distinctly defined intervals, to either harvesting or rescuing the little girls who have been stolen from orphanages and turned into corpse scavenging monsters. This is as ridiculous as the intermittent Devil in the alleyway, in fact, probably more so!

I completely agree with that counsellor from (goodness) 30 years ago – it would be a nonsense to assume that tests of our ethics can be conveniently compartmentalised to a time when our resolve is strong, or that they can be so easily predicted. The Bioshock model (or models like it, such as Call of Duty 4, where everyone is an enemy or can’t be shot in a way that affects our game beyond a waggled finger and being taken back to a previous save) is flawed because of the limited impact of the choices you make – in fact, Bioshock is particularly egregious because the ‘outcome’ of your moral choice has no serious game impact except to show you a different movie at the end. Before anyone says “it’s only a game”, I agree, but the developers were the ones who imposed the notion that this ethical choice made a difference. Games such as Deus Ex gave you largely un-cued opportunities to intervene or not – with changes to the game world depending on what happened. As a result, people playing Deus Ex had far more moral engagement with the game, and everyone I’ve spoken to felt as if they were making the choices that led to the outcome: autonomy, mastery and purpose, anyone? That was in 2000 – very few games since have actually treated the world as one that you can influence (although some are now coming up to par on this).

I think about this a lot in my learning design. While my students may recognise ethical choices in the real world, I am always concerned that a learning design that reduces their activities to high-stakes hurdle challenges will mimic the situation where we have, effectively, put the Devil in the alleyway, and where you can switch on your ‘ethical’ brain at just that point. In their sample exam, I posed a question in which I proposed that they had commissioned someone to write their software for an assignment – and then asked them to think about the effect that this decision would have on their future self in terms of knowledge development, if we assumed that they would always be better prepared had they done the work themselves. This takes the focus away from the day or so leading up to an individual assignment and starts to encourage continuum thinking, where every action is taken as part of a whole life of ethical actions. I’m a great believer that skills only develop with practice and knowledge only stays in your head when you reinforce it, so any opportunity to encourage further development of ethical thinking is to be encouraged!


“Hi, my name is Nick and I specialise in failure.”

I recently read an article on survivorship bias on the “You Are Not So Smart” website, via Metafilter. While the whole story addressed the World War II Statistical Research Group, it focused on the insight contributed by Abraham Wald, a statistician. Allied bomber losses in World War II were large, very large, and any chance of reducing them was incredibly valuable. The question was “How could the US improve their chances of bringing their bombers back intact?” Bombers landing back after missions were full of holes, but armour just can’t be strapped willy-nilly onto a plane without grounding it. (There’s a reason that birds are so light!) The answer, initially, seemed obvious – survey the fleet, find the places with the most holes, and patch them. Put armour on the colander sections and, voilà, increased survival rate.

No, said Wald. That wouldn’t help.

Wald’s logic is both simple and convincing. If a plane was coming back with those holes in place, then the holes in the skin were not leading to catastrophic failure – they couldn’t have been if the planes were returning! The survivors were not showing the damage that would have led to them becoming lost aircraft. Wald used the already collected information on the damage patterns to work out how much damage could be taken by each component, and the likelihood of this occurring during a bombing run, based on the kind of fire the aircraft encountered.
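Wald’s argument can be made concrete with a toy simulation. The areas, hit counts and loss probabilities below are invented for illustration, not drawn from the article:

```python
import random

random.seed(1)

# Toy model: each plane takes a few hits spread over three areas. A hit
# to the engine is far more likely to bring the plane down than a hit
# to the fuselage or wings. All probabilities here are made up.
AREAS = ["engine", "fuselage", "wings"]
LOSS_PROB = {"engine": 0.6, "fuselage": 0.05, "wings": 0.1}

def fly_mission():
    hits = [random.choice(AREAS) for _ in range(random.randint(1, 5))]
    survived = all(random.random() > LOSS_PROB[h] for h in hits)
    return hits, survived

survivor_hits = {a: 0 for a in AREAS}   # damage visible on returning planes
all_hits = {a: 0 for a in AREAS}        # ground truth, unseen in practice

for _ in range(10000):
    hits, survived = fly_mission()
    for h in hits:
        all_hits[h] += 1
        if survived:
            survivor_hits[h] += 1

# Surviving planes under-report engine damage: the planes hit there
# mostly never came back to be surveyed.
for a in AREAS:
    print(a, survivor_hits[a] / all_hits[a])
```

In reality only `survivor_hits` is observable, and counting holes on survivors would point at armouring the fuselage; comparing against all hits shows why that inference is backwards, which is exactly Wald’s point.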

It’s worth reading the entire article because it’s a simple and powerful idea – attributing magical properties to the set of steps taken by people who have become ultra-successful is not going to be as useful as looking at what happened to take people out of the pathway to success. If you’ve read Steve Jobs’ biography then you’re aware that he had a number of interesting traits, only some of which may have led to him becoming as successful as he did. Of course, if you’ve been reading a lot, you’ll be aware of the importance of Paul Jobs, Steve Wozniak, Robert Noyce, Bill Gates, Jony Ive, John Lasseter, and, of course, his wife, Laurene Powell Jobs. So the whole “only eating fruit” thing, the “reality distortion field” thing and the “not showering” thing (some of which he changed, some he didn’t) – which of these are the important things? Jobs, like many successful people, failed at some of his endeavours, but never in a way that completely wiped him out. Obviously. Now, when he’s not succeeding, he’s interesting, because we can look at the steps that took him down and say “Oh, don’t do that”, assuming that it’s something that can be changed or avoided. When he’s succeeding, so many other factors are in play – what has happened to you so far, who your friends are, how many resources you get to play with – that it’s hard to give good advice on what to do.

I have been studying failure for some time. Firstly in myself, and now in my students. I look for those decisions, or behaviours, that lead to students struggling in their academic achievement, or, in some cases, to falling away completely. Students who come to me with a high level of cultural, financial and social resources are far less likely to struggle because, even when faced with a set-back, they rarely hit the point where they can’t bounce back – although, sadly, it does happen, just in far smaller numbers. When they do fall over, it is for the same reasons as my less-advantaged students, who simply do so in far greater numbers because they have less resilience to the set-backs. By studying failure, and the lessons learned and the things to be avoided, I can help all of my students, and this does not depend upon their starting level. If I were studying the top 5% of students, especially those who had never received a mark less than A+, I would be surprised if I could learn much that I could take and usefully apply to those in the C- bracket. The reverse, however? There’s gold to be mined there.

By studying the borderlines and by looking for patterns in the swirling dust left by those departing, I hope that I can find things which reduce failure everywhere – because every time someone fails, we run the risk of not getting them back simply because failure is disheartening. Better yet, I hope to get something that is immediately usable, defensible and successful. Probably rather a big ask for a protracted study of failure!


Why You Won’t Finish This Post

A friend of mine on Facebook posted a link to a Slate article entitled “You Won’t Finish This Article: Why people online don’t read to the end” and it’s told me everything that I’ve been doing wrong with this blog for about the last 410 hours. Now, this doesn’t even take into account that, by linking to something potentially more interesting on a well-known site, I’ve now buried the bottom of this blog post altogether because a number of you will follow the link and, despite me asking it to appear in a new window, you will never come back to this article. (This has quite obvious implications for the teaching materials we put up, so it’s well worth a look.)

Now, on the off-chance that you did come back (hi!), we have to assume that you didn’t read all of the linked article (if you read any at all) because 28% of you ‘bounced’ immediately and didn’t actually read much of that page at all – you certainly didn’t scroll. Almost none of you read to the bottom. What is, however, amusing is that a number of you will have either Liked or forwarded a link to one or both of these pages – never having stepped through or scrolled once, but because the concept at the start looks cool. Of course, according to the Slate analysis, I’ve lost over half my readers by now. That does assume the Slate layout, where an image breaks things up and forces people to scroll through. So here’s an image that will discourage almost everyone from continuing. However, it is a pretty picture:

This graph shows the relationship between scroll depth and Tweets (from Slate, courtesy of Chartbeat)

What it says is that there is no enormously strong correlation between depth of reading and frequency of tweeting. So, the amount of a story that is read doesn’t really tell you how much people will want to (or actually do) share it. Overall, the Slate article makes it fairly clear that unless I manage to make my point in the first paragraph, I have little chance of being read any further – but if I make that first paragraph (or first image) appealing enough, any number of people will like and share it.

Of course, if people read down this far (thanks!) then they will know that I secretly start advocating the most horrible things known to humanity so, when someone finally follows their link and miraculously reads down this far, survives the Slate link out, and doesn’t end up mired in the picture swamp above, they will discover…

Oh, who am I kidding. I’ll just come back and fill this in later.

(Having stolen a time machine, I can now point out that this is yet another illustration of why we need to be thoughtful about what our students are going to do in response to on-line and hyperlinked materials rather than what we would like them to do. Any system that requires a better human, or a human to act in a way that goes against all the evidence we have of their behaviour, requires modification.)


Note to Self

I’ve mentioned the “Meditations” of the Emperor Marcus Aurelius before – I’ve been writing this blog for over 450 hours, and I’m not sure there’s anything I haven’t mentioned except my feelings on the season finale of Doctor Who, Series 7. (Eh.) Marcus Aurelius, philosopher, statesman, Roman and Emperor, wrote twelve “books” which were apparently never meant to be published. These are the private musings, notes to self, of a thoughtful man, written stoically and Stoically. When he lectures anyone, he lectures himself. He even poses questions to parts of himself: his soul, most notably.

There is much to admire in the simplicity and purpose of Marcus Aurelius’ thoughts. They are brief, because Emperors are busy people, especially when earning titles such as Germanicus (which usually involves squashing a nation state or two). They are direct, because he is talking to himself and he needs to be honest. He repeats himself for emphasis and to indicate importance, not out of forgetfulness.

Best, he writes for himself, for clarity, for the now and without thinking of a future audience.

There is a great deal to think about in this, because if you have read “Meditations”, you will know that every page contains a gem and some pages have jewels cascading from them. Yet these are the private thoughts of a person recording ways to improve himself and to keep himself in check – while he managed the Roman empire.

When I talk about improvement, I’m always trying to improve myself. When I find fault, I’ve usually found it in myself first. Yet, what a lot of words I write! Perhaps it is time to reinvestigate brevity, directness and a generosity towards the self that translates well into a kindness to strangers who might stumble upon this. The last thing I’d want to do is to stop people finding what zircons there are because the preamble is too demanding or the journey to the point too long.

Once again, I give my thanks to the writings of someone who died nearly 2,000 years ago and gave me so much to think about. Vale, Marcus Aurelius.