Post 300 – 2012, the Year of the Plague

As it turns out, this is post 300 and I’m going to use it to make a far more opinionated point than usual. I’m currently in Auckland, New Zealand, and there is a warning up on the wall about a severe outbreak of measles. This is one of the most outrageously stupid signs to see on a wall, anywhere, given that we have had a solid vaccine since 1971 and, despite ill-informed and unscientific studies that try to contradict this, the overall impact of the MMR vaccine is overwhelmingly positive. There is no reasonable excuse for the outbreak of an infectious, dangerous disease 40 years after the development of a reliable (and overwhelmingly safe) vaccine.

Is this really what we want?

My fear is that, rather than celebrating the elimination of measles and polio (under 200 cases this year so far according to the records I’ve seen) in the same way that we eradicated smallpox, we will be seeing more and more of these signs identifying outbreaks of eradicable and controllable diseases, because ignorance is holding sway.

Be in no doubt, if we keep going down this path, the risk increases rapidly that a disease will finish us off because we will not have the correct mental framing and scientific support to quickly respond to a lethal outbreak or mutation. The risk we take is that, one day, our cities lie empty with signs like this up all over the place, doors sealed with crosses on them, a quiet end to a considerable civilisation. All attributable to a rejection of solid scientific evidence and the triumph of ignorance. We have survived massive outbreaks before, even those with high lethality, but we have been, for want of a better word, lucky. We live in much denser environments and are far more connected than we were before. I can step around the world in a day and, with every step, a disease can follow my footsteps.

One of my students recently plotted 2009 Flu cases relative to air routes. While disease used to rely upon true geographical contiguity, we now connect the world with the false adjacency of the air route. Outbreaks in isolated parts of the world map beautifully to the air hubs and their importance and utilisation: more people, more disease.

So, in short, it’s not just the way that we control the controllable diseases that is important, it is accepting that the lower risk of vaccination is justifiable in the light of the much greater risk of infection and pandemic. This acceptance runs up against the human tendency to completely misunderstand probability, our susceptibility to fallacious thinking, and our desperate desire to do no harm to our children. I get this, but we have to be a little bit smarter or we are putting ourselves at a much higher risk – regrettably, this is a future risk, so temporal discounting gets thrown into the mix to make it even harder for people to make a good decision.

Here’s what the Smallpox Wikipedia page says: “Smallpox was an infectious disease unique to humans” (emphasis mine). This is one of the most amazing things that we have achieved. Let’s do it again!

I talk a lot about education, in terms of my thoughts on learning and teaching, but we must never forget why we educate. It’s to enlighten, to inform, to allow us to direct our considerable resources to solving the considerable problems that beset us. It’s helping people to make good decisions. It’s being aware of why people find it so hard to accept scientific evidence: because they’re scared, because someone lied to them, because no-one has gone to the trouble to actually try and explain it to them properly. Ignorance of a subject is the state that we occupy before we become informed and knowledgeable. It’s not a permanent state!

That sign made me angry. But it underlined the importance of what it is that we do.


Conference Blogging! (Redux)

I’m about to head off to another conference and I’ve taken a new approach to my blogging. Rather than my traditional “Pre-load the queue with posts” activity, which tends to feel a little stilted even when I blog other things around it, I’ll be blogging in direct response to the conference and not using my standard posting time.

I’m off to ICER, which is only my second educational research conference, and I’m very excited. It’s a small but highly regarded conference and I’m getting ready for a lot of very smart people to turn their considerably weighty gaze upon the work that I’m presenting. My paper concerns the early detection of at-risk students, based on our analysis of over 200,000 student submissions. In a nutshell, our investigations indicate that paying attention to a student’s initial behaviour gives you some idea of future performance, as you’d expect, but it is the negative (late) behaviour that is the most telling. While there are no astounding revelations in this work, if you’ve read across the area, putting it all together with a large data corpus allows us to approach some myths and gently deflate them.

Our metric is timeliness, or how reliably a student submitted their work on time. Given that late penalties apply (without exception, usually) across the assignments in our school, late submission amounts to an expensive and self-defeating behaviour. We tracked over 1,900 students across all years of the undergraduate program and looked at all of their electronic submissions (all programming code is submitted this way, as are most other assignments.) A lot of the results were not that unexpected – students display hyperbolic temporal discounting, for example – but some things were slightly less expected.

For example, while 39% of my students hand in everything on time, 30% of people who hand in their first assignment late then go on to have a blemish-free future record. However, students who hand up that first assignment late are approximately twice as likely to have problems – which moves this group into a weakly classified at-risk category. Now, I note that this is before any marking has taken place, which means that, if you’re tracking submissions, one very quick and easy way to detect people who might be having problems is to look at the first assignment submission time. This inspection takes about a second and can easily be automated, so it’s a very low burden scheme for picking up people with problems. A personalised response, with constructive feedback or a gentle question, in the zone where the student should have submitted (but didn’t), can be very effective here. You’ll note that I’m working with late submitters not non-submitters. Late submitters are trying to stay engaged but aren’t judging their time or allocating resources well. Non-submitters have decided that effort is no longer worth allocating to this. (One of the things I’m investigating is whether a reminder in the ‘late submission’ area can turn non-submitters into submitters, but this is a long way from any outcomes.)
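To make the mechanics concrete, here’s a minimal sketch in Python of that one-second check. The function name, record format and dates are my own invention for illustration, not from the paper:

```python
from datetime import datetime, timedelta

def flag_at_risk(first_submissions, deadline):
    """Return the students whose first assignment arrived after the deadline."""
    return {student for student, submitted in first_submissions.items()
            if submitted > deadline}

# Hypothetical submission records for a first assignment.
deadline = datetime(2012, 8, 31, 17, 0)
first_submissions = {
    "student_a": deadline - timedelta(minutes=5),  # on time
    "student_b": deadline + timedelta(hours=3),    # late: weakly at-risk
}

print(flag_at_risk(first_submissions, deadline))  # {'student_b'}
```

Anyone flagged this way isn’t condemned, of course – they’re simply a candidate for the personalised nudge described above, and a fair proportion of them will go on to a blemish-free record.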

I should note that the type of assignment work is important here. Computer programs, at least in the assignments that we set, are not just copied in from text. The students are not remembering material or demonstrating understanding; they are using the information in new ways to construct solutions to problems. In Bloom’s revised taxonomic terms, this is the “Applying” phase and it requires that the student be sufficiently familiar with the work to be able to understand how to apply it.

Bloom’s Revised Taxonomy

I’m not measuring my students’ timeliness in terms of their ability to show up to a lecture and sleep or to hand up an essay of three paragraphs that barely meets my requirements because it’s been Frankenwritten from a variety of sources. The programming task requires them to look at a problem, design a solution, implement it and then demonstrate that it works. Their code won’t even compile (turn into a form that a machine can execute) unless they understand enough about the programming language and the problem, so this is a very useful indication of how well the student is keeping up with the demands of the course. By focusing on an “Applying” task, we require the student to undertake a task that is going to take time and the way in which they assess this resource and decide on its management tells us a lot about their metacognitive skills, how they are situated in the course and, ultimately, how at-risk they actually are.

Looking at assignment submission patterns is a crude measure, unashamedly, but it’s a cheap measure, as well, with a reasonable degree of accuracy. I can determine, with 100% accuracy, if a student is at-risk by waiting until the end of the course to see if they fail. I have accuracy but no utility, or agency, in this model. I can assume everyone is at risk at the start and then have the inevitable problem of people not identifying themselves as being in this area until it’s too late. By identifying a behaviour that can lead to problems, I can use this as part of my feedback to illustrate a concrete issue that the student needs to address. I now have the statistical evidence to back up why I should invest effort into this approach.

Yes, you get a lot of excuses as to why something happened, but I have derived a great deal of value from asking students questions like “Why did you submit this late?” and then, when they give me their excuse, asking them “How are you going to avoid it next time?” I am no longer surprised at the slightly puzzled look on the student’s face as they realise that this is a valid and necessary question – I’m not interested in punishing them, I want them to not make the same mistake again. How can we do that?

I’ll leave the rest of this discussion for after my talk on Monday.


And more on the Harvard Scandal: Scandal? Apparently it’s not?

I’ve just read a Salon article regarding the Harvard cheating issue. Apparently, according to Farhad Manjoo, these students should be “celebrated for collaborating”.

Note that word? It’s the one that I picked on in the Crimson article and the reason that I did so is that it’s a very mild word, and a very positive one at that. However, while acknowledging that the students were prevented from any such sharing, Manjoo then asks, to me somewhat disingenuously, “What’s the point of prohibiting these students from working together?”

Urm, well, for most of the course, they don’t prohibit it. At the end of the course, when they want to see how much each individual knows, they attempt to test them individually. That’s not an unusual pattern.

Manjoo’s interpretation of the other articles goes well beyond anything else that I’ve seen, including lumping all of the plagiarism claims together as group work and tutor consultation. I can’t speak to this as I don’t have his sources but, given that this was explicitly forbidden anyway, he’s making an empty argument. It doesn’t matter how you slice it: if students worked together, they did something that they weren’t supposed to do. However Manjoo chooses to argue that their actions were justified, I’m not sure that the argument itself is.

The author obviously disagrees with the nature of the open book test and, to my reading, has no real idea of what he’s talking about. Sentences like “But if you want to determine how well students think, why force them to think alone?” are almost completely self-defeating. It also ignores the need to build knowledge in a way that functions when the group isn’t there. We don’t use social constructivism in the assumption that we will always be travelling in packs; we do it to assist the construction of knowledge inside the individual by leveraging the advantages of the social structure. To evaluate how well it has happened, and to isolate group effects so that we can see the individual performing, we use rules such as those Harvard clearly defined to set these boundaries.

Manjoo waxes rhetorical in this essay. “Rather than punishing these students, shouldn’t we be praising them for solving these problems the only way they could?” Well, no, I think that we shouldn’t. There were many ways that, if they thought this approach was unreasonable or unfair, they could have legitimately protested. I note that half the class managed not to cheat during this test (apparently, judging by the number suspected) – what do we say about these people? Are they worthy of double-plus-praise for somehow transcending the impossible test, or are they fools for not collaborating?

I’m not sure why these articles are providing so much padding for these students, if they have actually done nothing wrong (I hasten to add that they are merely suspected at the moment but if they are to be martyrs then let us assume a bleak outcome). At least, unlike the writers in the Crimson, Manjoo is a Cornell alumnus so he has some distance. I do note that he has a book called “True Enough: Learning to Live in a Post-Fact Society” which, according to the reviews, is about the media establishing views of reality that aren’t necessarily the facts, so he’s aware of the impact that his words have on how people will see this issue. He is also writing in a column whose taglines include “The Conventional Wisdom Debunked”, so it’s not surprising that this article is written this way.

Manjoo has created (another) Harvard bogeyman: scared of collaboration, unfair to students, and out of step with reality. However, his argument is ultimately a series of misdirections and opinions that don’t address the core issue: if these students worked with each other, they shouldn’t have. Until he accepts that, and that what they did was not a legitimate course of action, I’m not sure that his arguments carry much weight with me.

 


Loading the Dice: Show and Tell

I’ve been using a set of four six-sided dice to generate random numbers for one of my classes this year, generally to establish a presentation order or things like that. We’ve had a number of students getting the same number and so we have to have roll-offs. Now in this case, the most common number rolled so far has been in the range of 17-19 but we have only generated about 18-20 rolls so, while that’s a little high, it’s not high enough to arouse suspicion.

Today we rolled again, and one student wasn’t quite there yet so I did it with the rest of the class. Once again, 18 showed up a bit. This time I asked the class about it. Did that seem suspicious? Then I asked them to look at the dice.

Oh.

Only two of the dice are actually standard dice. One has the number five on every face. One has three sixes and three twos. The students have seen these dice numerous times and have never actually examined them – of course, I didn’t leave them lying around for them to examine but, despite one or two starting to think “Hey, that’s a bit weird”, nobody ever twigged to the loading.

All of these dice are loaded through dot alteration, rather than weight manipulation. You can buy them for just about any purpose. Ah, Internet!
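Out of interest, the behaviour of this set is easy to enumerate. Here’s a quick Python sketch (the face values come from the description above; everything else is mine):

```python
from itertools import product

# Two standard dice, one die with five on every face, and one with
# three sixes and three twos, as described in the post.
fair = [range(1, 7)] * 4
loaded = [range(1, 7), range(1, 7), [5] * 6, [6, 6, 6, 2, 2, 2]]

def totals(dice):
    """Enumerate every possible roll of a set of dice and return the sums."""
    return [sum(roll) for roll in product(*dice)]

fair_totals = totals(fair)
loaded_totals = totals(loaded)

print(sum(fair_totals) / len(fair_totals))      # 14.0 - fair average
print(sum(loaded_totals) / len(loaded_totals))  # 16.0 - loaded average
print(min(loaded_totals), max(loaded_totals))   # 9 23 - the real range
```

The loaded set can never actually reach the “theoretical maximum” of 24 (its real maximum is 6+6+5+6 = 23), and its average total sits two pips above an honest set, which is why suspiciously high totals kept turning up.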

Having exposed this trick, to some amusement, the last student knocked on the door and I picked up the dice. He was then asked to roll for his position, with the rest of the class staying quiet. (Well, smirking.) He rolled something in 17-19, I forget what, and I wrote that up on the board. Then I asked him whether it seemed high to him. On reflection, he said that these numbers all seemed pretty high, especially as the theoretical maximum was 24. I then asked if he’d like to inspect the dice.

He then did so as I passed him the dice one at a time, storing the inspected dice in my other hand. (Of course, as he peered at each die to see if it was altered, I quickly swapped one of the ‘real’ dice back into position in my hand and, as the rest of the class watched and kept admirably quiet, forced a real die onto him. Magic is all about misdirection, after all.)

So, having inspected all of them, he was convinced that they were normal. I then plonked them down on the table and asked him to inspect them, to make sure. He lined them up, looked across the top face and, then, looked at the side. Light dawned. Loudly! What, of course, was so startling to him was that he had just inspected the dice and now they weren’t normal.

What was my point?

My students have just completed a project on data visualisation where they provided a static representation of a dataset. There is a main point to present, supported by static analysis and graphs, but the poster is fundamentally explanatory. The only room for exploration is provided by the poster producer and the reader is bound by the inherent limitations in what the producer has made available. Much as with our discussions of fallacies in argument from a recent tutorial, if information is presented poorly or you don’t get enough to go on, you can’t make a good decision.

Enter, the dice.

Because I deliberately kept the students away from them and never made a fuss about them, they assumed that they were normal dice. While the results were high, and suspicion was starting to creep in, I never gave them enough space to explore the dice and discern their true nature. Even today, while handing them to a student to inspect, I controlled the exploration and, by cherry picking and misdirection, managed to convey a false impression.

Now my students are moving into dynamic visualisation and they must prepare for sharing data in a way that can be explored by other people. While the students have a lot of control over how this exploration takes place, they must prepare for people’s inquisitiveness, their desire to assemble evidence and their tendency to want to try everything. They can’t rely upon hiding difficult pieces of data in their representation and they must be ready for users who want to keep exploring through the data in ways that weren’t originally foreseen. Now, in exploratory mode, they must prepare for people who want to try to collect enough evidence to determine if something is true or not, and to be able to interrogate the dataset accordingly.

Now I’m not saying that I believe that their static posters were produced badly, and I did require references to support statements, but the view presented was heavily controlled. They’ve now seen, in a simple analogue, how powerful that can be. Now, it’s time to break out of that mindset and create something that can be freely explored, letting their design guide the user to construct new things rather than to lead them down a particular path.

I can only hope that they’re excited by this because I certainly am!


(Reasonable) Argument, Evidence and (Good) Journalism: Is “Crimson” the Colour of Their Faces?

I ran across a story on the Harvard Crimson about a surprisingly high level of suspected plagiarism in a course, Government 1310. The story opens up simply enough in the realms of fact: the professor suspected behaviour that was against published guidelines in 10-20 take-home exams, and the investigation has now expanded to roughly 125 suspicious final exams. There was a brief discussion of the assessment of the course and the steps taken so far by the faculty.

Then, the article takes a weird turn. Suddenly, we have a student account, an anonymous student who doesn’t wish their name to be associated with the plagiarism, who “suspected that Government 1310 was the course in question”. Hello? Why is this… ahhh. Here’s some more:

Though she said she followed the exam instructions and is not being investigated by the Ad Board, she said she thought the exam format lent itself to improper academic conduct.

“I can understand why it would be very easy to collaborate,” said the student

Oh. Collaborate. Interesting. Next we get the Q Guide rating for the course and this course gets 2.54/5 versus the apparent average of 3.91. Then we get some reviews from the Q Guide that “spoke critically of the course’s organisation and the difficulty of the exam questions”.

Spotting a pattern yet?

Another student said that he/she had joined a group of 15 other students just before the submission date and that they had been up all night trying to understand one of the questions (worth 20%).

I submitted this to my students to read and then asked them how they felt about it. Understandably, by the end of the reading, while my students were still thinking about plagiarism, they were thinking that there may have been some… justification. Then we started pulling the article apart.

When we start to look at the article, it becomes apparent that the facts presented all have a rather definite theme – namely that if cheating has occurred, it has a justification because of the terrible way the course was taught (low Q Guide rating! 16 students confused!).

Now, I cannot see the Q Guide data because, when I go to the page, I get this information (and I need a Harvard login to go further):

Q Guide
The Q Guide was an annually published guide that reported the results of each year’s course evaluations. Formerly called the CUE Guide, it was renamed the Q Guide in 2007 because the evaluations now include the GSAS and are no longer run solely by the Committee on Undergraduate Education (CUE). In 2009, in place of The Q Guide, Harvard College integrated Q data with the online course selection tool (at my.harvard.edu), providing a simple and easy way to access and compare course evaluation data while planning your course schedule.

So if the article, regarding an exam run in 2012, is referring to the Q Guide for Gov 1310, then it’s one of two things: using an old name for new data (admittedly, fairly likely) or referring to old data. The question does arise, however, whether the Q Guide rating refers to this offering or a previous offering. I can’t tell you which it is because I don’t know. It’s not publicly available and the article doesn’t tell me. (Although you’ll note that the Q Guide text refers to this year’s evaluations. There’s a part of me that strongly suspects that this is historical data but, of course, I’m speculating.)

However, the most insidious aspect is the presentation of 16 students who are confused about content in a way that overstates their significance. It’s a blatant example of emotive manipulation and encourages the reader to make a false generalisation. There were 279 students enrolled in Gov 1310. 16 is 5.7%. Would I be surprised if somewhere around 5% of my students weren’t capable of understanding all of the questions or thought that some material wasn’t in the course?

No, of course not. That’s roughly the percentage of my students who sometimes don’t know which Dr Falkner is teaching their class. (Hint: one is male and one is female. Noticeably so in both cases.)

I presented this to my Grand Challenge students as part of our studies of philosophical and logical fallacies, discussing how arguments are made to mislead and misdirect. The terrible shame is that, with a suspected rate of plagiarism this high, I would usually take a very detailed look at the learning and teaching strategies employed (how often exams are rewritten, how information is presented, how teaching is carried out), because this is an amazingly high level of suspected plagiarism.

Despite the misleading journalism presented in the Crimson, the course and its teachers may have to shoulder some responsibility here. As always, just because someone’s argument is badly made, doesn’t mean that it is actually wrong. It’s just disappointing that such a cheap and emotive argument was raised in a way that further fogs an important issue.

As I said to my students today, one of the most interesting ways to try to understand a biased or miscast argument is to understand who the bias favours – cui bono? (To whose benefit? I am somewhat terrified, on looking for images for this phrase, that it has been hijacked by extremists and conspiracy theorists. It’s a shame because it’s historically beautiful.)

So why would the Crimson run this? It’s pretty manipulative so, unless this is just bad journalism, cui bono?

Having looked up how disciplinary boards are constituted at Harvard, I found a reference that there are three appointed faculty members and:

There are three students appointed to the board as full voting members. Two of these will be assigned to specific cases on a case-by-case basis and will not be in the same division as the student facing disciplinary action.

In this case, the Crimson’s story suddenly looks a lot… darker. If, by publishing this article, they reach the right students and convince them that the actions of the suspected plagiarists were overly influenced by academics who were not performing their duties, then we risk suddenly having a deadlocked board and a deleterious effect on what should have been an untainted process.

The Crimson has further distinguished itself with a follow-up article regarding the uncertainty students are feeling because of the process.

“It’s unfair to leave that uncertainty, given that we’re starting lives,” said the alumnus, who was granted anonymity by The Crimson because he said he feared repercussions from Harvard for discussing the case.

Oh, Harvard, you giant monster, unfairly delaying your decision on a plagiarism case because the lecturers were so very, very bad that students had to cheat. And, what’s worse, you are so evil that students are scared of you – they “fear the repercussions”!

Thank you, Crimson, for providing so much rich fodder for my discussion on how the words “logical argument”, “evidence” and “good journalism” can be so hard to fit into the same sentence.


Gamification: What Happens If All Of The Artefacts Already Exist

Win! Win! Win! (via mindjumpers.com)

I was reading an article today in May/June’s “Information Age”, the magazine of the Australian Computer Society, entitled “Gamification Goes Mainstream”. The article identified the gaming mechanics that could be added to businesses to improve engagement and work quality/productivity by employees. These measures are:

  1. Points: Users get points for achievements and can spend the points on prizes.
  2. Levelling: Points get harder to earn as the user masters the system.
  3. Badges: Badges are awarded and become part of the user’s “trophy page”, accompanying any comments made by the user.
  4. Leader Boards: Users are ranked by points or achievement.
  5. Community: Collaborative tools, contests, sharing and forums.

Now, of course, there’s a reason that things like these exist in games, and that’s because most games are outside of the physical world and, in the absence of the natural laws that normally make things happen and ground us, we rely upon these mechanics to help us assess our progress through the game and provide us with some reward for our efforts. Now, while I’m a great believer in using whatever is necessary to make work engaging and to make life more enjoyable, I do wonder about the risk of setting up parallel systems that get people to focus on things other than their actual work.

Yes, yes, we all know I have issues with extrinsic motivations but let’s look again at the list of measures above, which would normally be provided in a game to allow us to make sense of the artificial world in which we find ourselves, and think about how they apply already in a workplace.

  1. Points that can be used to purchase things: I think that we call this money. If I provide a points system for buying company things then I’ve created a second economy that is not actually money.
  2. Levelling: Oh, wait, now it’s hard to spend the special points that I’ve been given so I’ve not only created a second economy, I’ve started down the road towards hyperinflation by devaluing the currency. (Ok, so the promotional system works here in my industry like that – our ranks are our levels, which isn’t that uncommon.)
  3. Badges: Plaques for special achievement, awards, post-nominal letters, Fellowships – anything that goes on the business card is effectively a badge.
  4. Leader Boards: Ok, this is something that we don’t often see in the professional world but, let’s face it, if you’re not on top then you’re not the best. Is that actually motivational or soul-destroying? Of course, if we don’t have it yet, then you do have to wonder why, given every other management trend seems to get a workout occasionally. I should note that I have seen leader boards at my workplace which have been ‘anonymised’ but, given that I can see myself, I can see where I sit – now not only do I know that I am not on top, I don’t know who to ask about how to get better, which has been touted as one of the reasons to identify the stars in the first place.
  5. Community: We do have collaborative tools but they are focussed on helping us achieve our jobs, not on achieving orthogonal goals associated with a gaming system. We also have comment forums, discussion mechanisms such as mailing lists and the like. Contests? No. We don’t have contests. Do we? Oh wait, national competitive grant schemes, local teaching schemes, competitive bidding for opportunities.

Now, if people aren’t engaging with the tasks that are expected of them (let’s assume reasonably), then, yes, we should find ways to make things more interesting to encourage participation. However, as with all of the game mechanics above, it’s obviously going to take more thought than just picking a list of things that we are already doing and providing an alternative system that somehow makes everything really interesting again.

I should note that the article does sound a cautionary note, from one of the participants, who basically says that it’s too soon to see how effective these schemes are and, of course, Kohn is already waggling a finger at setting up a prize/compliance expectation. So perhaps the lesson here is “how can we take what we already have and work out how to make it more interesting”, rather than taking lessons in the required construction of phenomena from a completely artificial environment where we have to define gravity in order to make things fall. Gamification shows promise in certain directions, mainly because there’s a lot of fun implicit in the whole process, but the approaches need to be carefully designed to make sure that we don’t accidentally reinvent the same old wheel.

 

 


The Precipice of “Everything’s Late”

I spent most of today working on the paper that I alluded to earlier where, after over a year of trying to work on it, I hadn’t made any progress. Having finally managed to dig myself out of the pit I was in, I had the mental capacity and the time to sit down for the 6 hours it required and go through it all.

Climbers, eh?

In thinking about procrastination, you have to take into account something important: most of us work in a hyperbolic model where we expend no effort until the deadline is right upon us and then we put everything in. This is temporal discounting: essentially, we place less importance on things in the future than on the things that are important to us now. For complex, multi-stage tasks spread over time, this is an exceedingly bad strategy, especially if we focus on the deadline of delivery rather than the starting point. If we underestimate the time it requires and we construct our ‘panic now’ strategy based on our proximity to the deadline, then we are at serious risk of missing the starting point because, when it arrives, it just won’t be that important.
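The usual form of this, Mazur’s hyperbolic discounting equation, can be sketched in a few lines of Python; the discount rate k and the numbers below are purely illustrative:

```python
# Hyperbolic discounting: perceived value V = A / (1 + k * D),
# where A is the reward, D the delay, and k an individual discount rate.
def discounted_value(amount, delay_days, k=0.1):
    return amount / (1 + k * delay_days)

# A big paper due in 60 days feels less valuable right now than a
# small task due tomorrow, even though the paper matters far more.
print(discounted_value(100, 60))  # ~14.3
print(discounted_value(20, 1))    # ~18.2
```

That inversion – the small, near task outranking the large, distant one – is exactly what makes us attack what is easy now and miss the starting point of what matters.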

Now, let’s increase the difficulty of the whole thing and remember that the more things we have to think about in the present, the greater the risk that we’re going to exceed our capacity for cognitive load and hit the ‘helmet fire’ point – we will be unable to do anything because we’ve run out of the ability to choose what to do effectively. Of course, because we suffer from a hyperbolic discounting problem, we might do things now that are easy to do (because we can see both the beginning and end points inside our window of visibility) and this runs the risk that the things we leave to do later are far more complicated.

This is one of the nastiest implications of poor time management: you might actually not be procrastinating in terms of doing nothing, you might be working constantly but doing the wrong things. Combine this with the pressures of life, the influence of mood and mental state, and we have a pit that can open very wide – and you disappear into it wondering what happened because you thought you were doing so much!

This is a terrible problem for students because, let’s be honest, in your teens there are a lot of important things that are not quite assignments or studying for exams. (Hey, it’s true later too, we just have to pretend to be grownups.) Some of my students are absolutely flat out with activities, a lot of which are actually quite useful, but because they haven’t worked out which ones have to be done now they do the ones that can be done now – the pit opens and looms.

One of the big advantages of reviewing large tasks to break them into components is that you start to see how many ‘time units’ have to be carried out in order to reach your goal. Putting it into any kind of tracking system (even if it’s as simple as an Excel spreadsheet) allows you to see it compared to other things: it reduces the effect of temporal discounting.
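As a toy illustration of the ‘time units’ idea (the task names and hour estimates here are invented for the example, not taken from my actual calendar), a breakdown can be totalled and compared against the hours genuinely available, which is exactly the check that exposes overcommitment:

```python
# Break the week's large tasks into estimated 'time units' (hours).
tasks = {
    "paper draft": 6,
    "marking": 10,
    "grant report": 8,
    "lectures + prep": 15,
}

committed = sum(tasks.values())   # total hours promised: 39
available = 35                    # free working hours this week (assumed)

print(f"{committed} h committed vs {available} h available")
if committed > available:
    print(f"Overcommitted by {committed - available} hours")
```

The point isn’t the tooling, it’s that once the numbers sit side by side, ‘I’ll fit it in somehow’ stops being a defensible plan.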

When I first put in everything that I had to do as appointments in my calendar, I assumed that I had made a mistake because I had run out of time in the week and was, in some cases, triple booked, even after I spilled over to weekends. This wasn’t a mistake in assembling the calendar, this was an indication that I’d overcommitted and, over the past few months, I’ve been streamlining down so that my worst week still has a few hours free. (Yeah, yeah, not perfect, but there you go.) However, there was this little problem that anything that had been pushed into the late queue got later and later – the whole ‘deal with it soon’ became ‘deal with it now’ or ‘I should have dealt with that by now’.

Like students, my overcommitment wasn’t an obvious “Yes, I want to work too hard” commitment; it snuck in as bits and pieces. A commitment here, a commitment there, a ‘yes’, a ‘sure, I can do that’, and because you sometimes have to make decisions on the fly, you suddenly look around and think “What happened?” The last thing I want to do here is lecture; I want to understand how I can take my experience, learn from it, and pass something useful on. The basic message is that we all work very hard and sometimes don’t make the best decisions. For me, the challenge now, knowing this, is how I can construct something that tries to defeat this self-destructive behaviour in my students.

This week marks the time where I hope to have cleared everything on the ‘now/by now’ queue and finally be ahead. My friends know that I’ve said that a lot this year, but it’s hard to read and think in the area of time management without learning something. (Some people might argue, but I don’t write here to tell you that I have everything sorted; I write here to think and, hopefully, pass something on through the processes I’m going through.)


Time Banking: More and more reading.

I’ve spent most of the last week putting together the ideas of time banking, reviewing my reading list and then digging for more papers to read and integrate. It’s always a bit of a worry when you go to see if what you’ve been thinking about for 12 months has just been published by someone else but, fortunately, most people are still using traditional deadlines so I’m safe. I read a lot of papers but none more than when I’m planning or writing a paper: I need to know what else has happened if I’m to frame my work correctly and not accidentally re-invent the wheel. Especially if it’s a triangular wheel that never worked.

My focus is Time Banking so that’s what I’ve been searching for – concepts, names, similarities – to make sure that what I’m doing will make an additional contribution. This isn’t to say that Time Banking hasn’t been used before as a term or even a concept. I’ve been aware of several universities that allow a fixed number of extra days that students can draw on (Stanford being the obvious example) and the concept of banking your time is certainly not new – there’s even a Dilbert cartoon for it! There are papers on time banking, but at low granularity and with little student control – it’s more of a convenient deadline extender than a mechanism for developing metacognition in order to promote self-regulated learning strategies in the student. Which is good, because that’s the approach I’m taking.

The reasoning and methodology that I’m using does appear to be relatively novel and it encompasses a whole range of issues: pedagogy, self-regulation, ethics and evidence-based analysis of how deadlines are currently working for us. It’s a lot to fit into one paper but I have hope that I can at least cover the philosophical background of why what I’m doing is a good idea, not just because I want to convince my peers but because I want volunteers for when pilot schemes start to occur.

It’s not enough that something is a good idea, or that it reads well, it has to work. It has to be able to be deployed, we have to be able to measure it, collect evidence and say “Yes, this is what we wanted.” Then we publish lots more papers and win major awards – Profit! (Actually, if it’s a really good idea then we want everyone to do it. Widespread adoption that enhances education is the real profit.)

Like this but with less underpants collecting and more revolutionising education.

More seriously, I love writing papers because I really have to think deeply about what I’m saying. How does it fit with existing research? Has this been tried before? If so, did it work? Did it fail? What am I doing that is different? What am I really trying to achieve?

How can I convince another educator that this is actually a good idea?

The first draft of the paper is written and now my co-authors are scouring it, playing Devil’s advocate, and seeing how many useful and repairable holes they can tear in it in order to make it worthy of publication. Then it will go off at some point and a number of nice people will push it out to sea and shoot at it with large weapons to see if it sinks or swims. Then I get feedback (and hopefully a publication) and everyone learns something.

I’m really looking forward to seeing the first actual submission draft – I want to see what the polished ideas look like!


Let’s Transform Education! (MOOC Hijinks Hilarity! Jinkies!)

I had one of those discussions yesterday that everyone in Higher Education educational research comes to dread: a discussion with someone who basically doesn’t believe the educational research and, with varying degrees of politeness, comes close to ignoring or denigrating everything that you’re trying to do. Yesterday’s high point was the use of the term “Mr Chips” to describe the (from the speaker’s perspective) incredibly low possibility of actually widening our entrance criteria and turning out “quality” graduates – his point was that more students would automatically mean much larger (70%) failure rates. My counter (and original point) was that, since there is such a low correlation between school marks and University GPA (roughly 40–45%, and it’s very noisy), successful learning and teaching strategies could deal with an influx of supposedly ‘lower quality’ students, because the quality metric that we’re using (terminal high school grade or equivalent) is not a reliable indicator of performance. My fundamental belief is that good education is transformative. We start with the students that schools give us, but good, well-constructed education can, in the vast majority of cases, successfully educate students and transform them into functioning, self-regulating graduates. We have, as a community, carried out a lot of research showing that this works, provided that we are happy to accept that we (academics) are not by any stretch of the imagination the target demographic or majority experience in our classes, and provided that we are willing to look at new teaching methods and approaches that actually work in developing the knowledge and characteristics that we’re after.

The “Mr Chips” thing is a reference to a rather sentimental account of the transformative influence of a schoolmaster, the eponymous Chips. Using it in a discussion of the transformative power of education casts my comments on equality of access, linked with educational design and learning systems as transformative technologies, as naïve and (in a more personal reading) makes me uncomfortably aware that some people might think I’m talking about myself as the key catalyst of some sort. One of the nice things about being an academic is that you can have a discussion like this and not actually come to blows over it – we think and argue for a living, after all. But I find this dismissive and rude. If we’re not trying to educate people and transform them, then what the hell are we doing? Advocating inclusion and transformation shouldn’t be seen as grandstanding – it should be seen as our job. I don’t want to be the keystone; I want systems that work and survive individuals, but that individuals can work within to improve and develop – we know this is possible and it’s happening in a lot of places. There are, however, pockets of resistance: people who are using the same old approaches out of laziness, ignorance or a refusal to update, for what appear to be philosophical reasons that have no evidence to support them.

Frankly, I’m getting a little irritated by people doubting the value of the volumes of educational research. If I was dealing with people who’d read the papers, I’d be happier, but I’m often dealing with people who won’t read the papers because they just don’t believe that there’s a need to change or they refuse to accept what is in there because of a perceived difficulty in making it work. (A colleague demanded a copy of one of our papers showing the impact of our new approaches on retention – I haven’t heard from him since he got it. This probably means that he’s chosen to ignore it and is going to pretend that he never asked.) Over coffee this morning, musing on this, it occurred to me that at the same time that we’re not getting the greatest amount of respect and love in the educational research community, we’re also worried about the trend towards MOOCs. Many of our concerns about MOOCs are founded in the lack of evidence that they are educationally effective. And I saw a confluence.

To all of the educational researchers who are not able to sway people inside their institutions – let’s just route around them and surge into the MOOCs. We can still teach inside our own places, of course, and since MOOCs are free there’s no commercial conflict – but let’s take all of the research and practice and build a brave new world out in MOOC space that is the best of what we know. We can even choose to connect our in-house teaching into that system if we want. (Yes, we still have the face-to-face issue for those without a bricks-and-mortar campus, but how far could we go to make things better in terms of what MOOCs can offer?) We’re transformers, builders and creators. What could we do with the infinite canvas of the Internet and a lot of very clever people, working with a lot of other very clever people who are also driven and entrepreneurial?

The MOOC community will probably have a lot to say about this, which is why we shouldn’t see this as a hijack or a take-over, and I think it’s helpful to think of this very much as a confluence – a flowing together. I am, not for a second, saying that this will legitimise MOOCs, because this implies that they are illegitimate, but rather than keep fighting battles with colleagues and systems that can defeat 40 years of knowledge by saying “Well, I don’t think so”, let’s work with people who have already shown that they are looking to the future. Perhaps, combining people who are building giant engines of change with the people who are being frustrated in trying to bring about change might make something magical happen? I know that this is already happening in some places – but what if it was an international movement across the whole sector?

Jinkies! (Sorry, the title ran to this and I get to use a picture of a t-shirt with Velma on it!)

Relma!

The purpose of this is manifold:

  1. We get to build the systems that we want to, to deliver education to students in the best ways we know.
  2. We (potentially) help to improve MOOCs by providing strong theory to construct evidence gathering mechanisms that allow us to really get inside what MOOCs are doing.
  3. More students get educated. (Ok, maybe not in our host institutions, but what is our actual goal anyway?)
  4. We form a strong international community of educational researchers with common outputs and sharing that isn’t necessarily owned by one company (sorry, iTunesU).
  5. If we get it right, students vote with their feet and employers vote with their wallets. We make educational research important and impossible to ignore through visible success.

Now this is, of course, a pipe dream in many ways. Who will pay for it? How long will it take before even not-for-pay outside education becomes barred under new terms and conditions? Who will pay my mortgage if I get fired because I’m working on a deliberately external set of courses for students who are not paying to come to my institution?

But, the most important thing, for me, is that we should continue what has been proposed and work more and more closely with the MOOC community to develop exemplars of good practice that have strong, evidence-based outcomes that become impossible to ignore. Much as students use temporal discounting to procrastinate about their work, administrators tend to use a more traditional financial discounting when it comes to what they consider important. If it takes 12 papers and two years of study to justify spending $5,000 on a new tool or time spent on learning design – forget about it. If, however, MOOCs show strong evidence of improving student retention (*BING*), student attraction (*BING*), student engagement (*BING*) and employability – well, BINGO. People will pay money for that.

I’ve spoken before about how successful I had to be before I was tolerated in my pursuit of educational research and, while I don’t normally talk about it in detail because it smacks of hubris and I sincerely believe that I am not a role model of any kind, I hope that you will excuse me so that I can explain why I think it’s crazy how successful I had to be in order to become tolerated – and still not really believed. To summarise: I’m in three research groups; I’ve brought in (as part of a group and individually) somewhere in the order of $0.5M in one non-ed research area; I’ve brought in something like $30–50K in educational research money; I’ve published two A-journal papers (one CS research, one CS ed), two A-conference papers (both ed) and one B-conference paper (ed/CS); I have a faculty-level position as an Associate Dean; and I have a national learning and teaching presence. All of the things on that list – that’s 2012. 2011 wasn’t quite as successful but it wasn’t bad by any stretch of the imagination. I think that’s an unreasonably high bar to pass in order to be allowed the luxury of asking questions about what it is that we’re doing with learning and teaching. But if I can leverage that to work with other colleagues who can then refer to what we’ve done in a way that makes administrators and managers accept the real value of an educational revolution – then my effort is shared over many more people and it suddenly looks like a much better investment of my time.

This is more musing than mission, I’m afraid, and I realise that any amount of this could be shot down, but I look forward to some discussion!


Warning: Objects in Mirror May Appear Important Because They Appear Closer

I had an interesting meeting with one of my students who has been trying to allocate his time to various commitments, including the project that he’s doing with me. He had been spending most of his time on an assignment for another course and, while this assignment was important, I had to carry out one of the principal duties of the supervisor: pointing out the obvious when people have their face pressed too close to the window, staring at the things that are close.

There are three major things a project supervisor does: kick things off and give some ideas, tell the student when they’re not making good progress and help them to get back on track, and stop them before they run off into the distance and get them to write it all down as a thesis of some sort.

So, in our last meeting, I asked the student how much the other assignment was worth.

“About 10%.”

How much is your project work in terms of total courses?

“4 courses worth.”

So the project is 40 times the value of that assignment that has taken up most of your time? What’s that – 4,000%?

To his credit, he has been working along and it’s not too late yet, by any stretch of the imagination, but a little perspective is always handy. He has also started to plan his time out better and, most rewardingly, appreciates the perspective. This, to be honest, is the way that I like it: nothing bad has happened, everyone’s learned something. Hooray!
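The back-of-the-envelope comparison in that exchange is simple enough to write down explicitly (assuming, as in the conversation, that each course carries equal weight in the final result):

```python
# Express everything as a percentage of one course's final grade.
assignment = 10        # the other assignment: 10% of one course
project = 4 * 100      # the project: four full courses' worth -> 400%

ratio = project / assignment
print(ratio)           # 40.0: the project is worth 40x the assignment (4,000%)
```

Forty times the value, yet the smaller task was soaking up most of the time, purely because its deadline was nearer.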

I sometimes wonder if this is one of the crucial problems that we face as humans. Things that are close look bigger: optically, because of how eyes work, and temporally, because things that are due tomorrow seem to have so much more importance than much, much bigger tasks due in four weeks. Oh, we could start talking about exponential time distributions or similar things, but I prefer the comparison with the visual illusion.

Just because it looks close doesn’t mean it’s the biggest thing that you have to worry about.

Some close things are worth worrying about, however.