More MOOCs! (Still writing up ICER, sorry!)

The Gates Foundation is offering grants for MOOCs in Introductory Classes. I mentioned in an earlier post that if we can show that MOOCs work, then generally available and cheap teaching delivery is a fantastically transformative technology. You can read the press release, but it’s obvious that this has some key research questions in it, much like those we’ve all been raising:

The foundation wants to know, for instance, which students benefit most from MOOC’s (sic) and which kinds of courses translate best to that format.

Yes! If these courses do work, then for whom do they work, and for which courses? There’s little doubt that the Gates Foundation has been doing some amazing things with its money and this looks promising – of course, now I have to find out if my University has been invited to join and, if so, how I can get involved. (Of course, if it hasn’t, then it’s time to put on my dancing trousers and try to remedy that situation.)

Regardless, money plus research questions is a good direction to go in.


ICER 2012 Day 1 Keynote: How Are We Thinking?

We started off today with a keynote address from Ed Meyer, from the University of Queensland, on the Threshold Concepts Framework (also pedagogy and student learning). I am, regrettably, not as conversant with threshold concepts as I should be, so I’ll try not to embarrass myself too badly. Threshold concepts are central to the mastery of a given subject and are characterised by some key features (Meyer and Land):

  1. Grasping a threshold concept is transformative because it changes the way that we think about something. These concepts become part of who we are.
  2. Once you’ve learned the concept, you are very unlikely to forget it – it is irreversible.
  3. This new concept allows you to make new connections and allows you to link together things that you previously didn’t realise were linked.
  4. This new concept has boundaries – an area over which it applies. You need to be able to question within that area to work out where the concept applies. (Ultimately, this may identify the borders between schools of thought in a field.)
  5. Threshold concepts are ‘troublesome knowledge’. This knowledge can be counter-intuitive, even alien, and will make no sense to people until they grasp the new concept. This is one of the key problems with discussing these concepts with people – they will wish to apply their intuitive understanding, and fighting this tendency may take considerable effort.

Meyer then discussed how we see with new eyes after we integrate these concepts. It can be argued that concepts such as these give us a new way of seeing that, because of inter-individual differences, students will experience in varying degrees as transformative, integrative, and (look out) provocative and troublesome. For this final one, a student experiences this in many ways: the world doesn’t work as I think it should! I feel lost! Helpless! Angry! Why are you doing this to me?

How do you introduce a student to one of these troublesome concepts and, more importantly, how can you describe what you are going to talk about when the concept itself is alien: what do you put in the course description given that you know that the student is not yet ready to assimilate the concept?

Meyer raised a really good point: how do we get someone to think inside the discipline? Do they understand the concept? Yes. Does this mean that they think along the right lines? Maybe, maybe not. If I don’t think like a Computer Scientist, I may not understand why a CS person sees a certain issue as a problem. We have plenty of evidence that people who haven’t dealt with the threshold concepts in CS Education find it alien to contemplate that the lecture is not the be-all and end-all of teaching – their resistance and reliance upon folk pedagogies is evidence of this wrestling with troublesome knowledge.

A great deal to think about from this talk, especially in dealing with key aspects of CS Ed as the threshold concept that is causing many of our non-educational research oriented colleagues so much trouble, as well as our students.



ICER 2012: Day 0 (Workshops)

Well, it’s Sunday so it must be New Zealand (or at least it was Sunday yesterday). I attended that rarest of workshops, one where every session was interesting and made me think – a very good sign for the conference to come.

We started with an on-line workshop on Bloom’s taxonomy, classifying exam questions, with Raymond Lister from UTS. One of the best things about this for me was the discussion about the questions where we disagreed: is this application or synthesis? It really made me think about how I write my examinations and how they could be read.

We then segued into a fascinating discussion of neo-Piagetian theory, in which the developmental stages that we usually associate with children appear in adults as they learn new areas of knowledge. In (very rough) detail, we look at whether we have enough working memory to carry out a task and, if not, weird things happen.

Students can indulge in some weird behaviours when they don’t understand what’s going on – for example, permutation programming, where they type semi-randomly until their program compiles or works. Other examples include shotgun debugging and voodoo programming. What these amount to is the student not having a good, consistent model of what works and, as a result, dabbling in a semi-magical approach.

My notes from the session contain this following excerpt:

“Bizarro” novice programmer behaviours are actually normal stages of intellectual development.
Accept this and then work with this to find ways of moving students from pre-op, to concrete op, to formal operational. Don’t forget the evaluation. Must scaffold this process!

What this translates to is that the strange things we see are just indications that students have not yet moved to what we would normally associate with an ‘adult’ (formal operational) understanding of the area. This shoots several holes in the old “You’re born a programmer” fallacy. Those students who are more able early may just have moved through the stages more quickly.

There was also an amount of derisive description of folk pedagogy: those theories that arise during pontification in the tea room, with no basis in educational theory and not formed from any truly empirical study. Yet these folk pedagogies are very hard to shake and are one of the most frustrating things to deal with if you are in educational research. One “I don’t think so” can apparently ignore the 70 years since Dewey called classrooms prisons.

The sobering thought is that, if we’re not trying to help students to transition, then maybe the transition to concrete operations is happening despite us instead of because of us.

I thought that Ray Lister finished the session with a really good thought regarding why students struggle sometimes:

The problem is not a student’s swimming skill, it’s the strength of the torrent.

As I’ve said before, making hard things easier to understand is part of the job of the educator. Anyone will fail, regardless of their ability, if we make it hard enough for them.


Conference Blogging! (Redux)

I’m about to head off to another conference and I’ve taken a new approach to my blogging. Rather than my traditional “Pre-load the queue with posts” activity, which tends to feel a little stilted even when I blog other things around it, I’ll be blogging in direct response to the conference and not using my standard posting time.

I’m off to ICER, which is only my second educational research conference, and I’m very excited. It’s a small but highly regarded conference and I’m getting ready for a lot of very smart people to turn their considerably weighty gaze upon the work that I’m presenting. My paper concerns the early detection of at-risk students, based on our analysis of over 200,000 student submissions. In a nutshell, our investigations indicate that paying attention to a student’s initial behaviour gives you some idea of future performance, as you’d expect, but it is the negative (late) behaviour that is the most telling. While there are no astounding revelations in this work, if you’ve read across the area, putting it all together with a large data corpus allows us to approach some myths and gently deflate them.

Our metric is timeliness, or how reliably a student submitted their work on time. Given that late penalties apply (without exception, usually) across the assignments in our school, late submission amounts to an expensive and self-defeating behaviour. We tracked over 1,900 students across all years of the undergraduate program and looked at all of their electronic submissions (all programming code is submitted this way, as are most other assignments.) A lot of the results were not that unexpected – students display hyperbolic temporal discounting, for example – but some things were slightly less expected.

For example, while 39% of my students hand in everything on time, 30% of people who hand in their first assignment late then go on to have a blemish-free future record. However, students who hand up that first assignment late are approximately twice as likely to have problems – which moves this group into a weakly classified at-risk category. Now, I note that this is before any marking has taken place, which means that, if you’re tracking submissions, one very quick and easy way to detect people who might be having problems is to look at the first assignment submission time. This inspection takes about a second and can easily be automated, so it’s a very low burden scheme for picking up people with problems. A personalised response, with constructive feedback or a gentle question, in the zone where the student should have submitted (but didn’t), can be very effective here. You’ll note that I’m working with late submitters not non-submitters. Late submitters are trying to stay engaged but aren’t judging their time or allocating resources well. Non-submitters have decided that effort is no longer worth allocating to this. (One of the things I’m investigating is whether a reminder in the ‘late submission’ area can turn non-submitters into submitters, but this is a long way from any outcomes.)
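That first-assignment check is simple enough to sketch in code. The following is a minimal illustration, not the system we actually use; the function name, data structures and example data are all hypothetical.

```python
from datetime import datetime

def flag_at_risk(submissions, deadlines):
    """Return the set of students whose first assignment was submitted late.

    submissions maps student_id -> {assignment_id: submission datetime};
    deadlines maps assignment_id -> deadline datetime.
    """
    at_risk = set()
    for student, subs in submissions.items():
        if not subs:
            continue  # non-submitters are a different group, handled separately
        # The student's first assignment is the one with the earliest deadline.
        first = min(subs, key=lambda a: deadlines[a])
        if subs[first] > deadlines[first]:
            at_risk.add(student)
    return at_risk

# Hypothetical example: two assignments, three students.
deadlines = {"prac1": datetime(2012, 3, 30), "prac2": datetime(2012, 5, 4)}
submissions = {
    "alice": {"prac1": datetime(2012, 3, 29), "prac2": datetime(2012, 5, 6)},
    "bob":   {"prac1": datetime(2012, 3, 31)},  # first assignment late
    "chris": {},                                # submitted nothing yet
}
print(flag_at_risk(submissions, deadlines))  # {'bob'}
```

Because the check only needs submission timestamps and deadlines, it can run the moment the first deadline passes – well before any marking has happened.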

I should note that the type of assignment work is important here. Computer programs, at least in the assignments that we set, are not just copied in from a text. The students are not remembering or demonstrating understanding; they are using the information in new ways to construct solutions to problems. In Bloom’s revised taxonomic terms, this is the “Applying” phase, and it requires that the student be sufficiently familiar with the work to understand how to apply it.

Bloom’s Revised Taxonomy

I’m not measuring my students’ timeliness in terms of their ability to show up to a lecture and sleep or to hand up an essay of three paragraphs that barely meets my requirements because it’s been Frankenwritten from a variety of sources. The programming task requires them to look at a problem, design a solution, implement it and then demonstrate that it works. Their code won’t even compile (turn into a form that a machine can execute) unless they understand enough about the programming language and the problem, so this is a very useful indication of how well the student is keeping up with the demands of the course. By focusing on an “Applying” task, we require the student to undertake a task that is going to take time and the way in which they assess this resource and decide on its management tells us a lot about their metacognitive skills, how they are situated in the course and, ultimately, how at-risk they actually are.

Looking at assignment submission patterns is a crude measure, unashamedly, but it’s a cheap measure, as well, with a reasonable degree of accuracy. I can determine, with 100% accuracy, if a student is at-risk by waiting until the end of the course to see if they fail. I have accuracy but no utility, or agency, in this model. I can assume everyone is at risk at the start and then have the inevitable problem of people not identifying themselves as being in this area until it’s too late. By identifying a behaviour that can lead to problems, I can use this as part of my feedback to illustrate a concrete issue that the student needs to address. I now have the statistical evidence to back up why I should invest effort into this approach.

Yes, you get a lot of excuses as to why something happened, but I have derived a great deal of value from asking students questions like “Why did you submit this late?” and then, when they give me their excuse, asking them “How are you going to avoid it next time?” I am no longer surprised at the slightly puzzled look on the student’s face as they realise that this is a valid and necessary question – I’m not interested in punishing them, I want them to not make the same mistake again. How can we do that?

I’ll leave the rest of this discussion for after my talk on Monday.


And more on the Harvard Scandal: Scandal? Apparently it’s not?

I’ve just read a Salon article regarding the Harvard cheating issue. Apparently, according to Farhad Manjoo, these students should be “celebrated for collaborating”.

Note that word? It’s the one that I picked on in the Crimson article, and the reason that I did so is that it’s a very mild word, and a very positive one at that. While the article acknowledges that the students were prohibited from any such sharing, Manjoo then asks, to me somewhat disingenuously, “What’s the point of prohibiting these students from working together?”

Urm, well, for most of the course, we don’t prohibit it. At the end of the course, when we want to see how much each individual knows, we attempt to test students individually. That’s not an unusual pattern.

Manjoo’s interpretation of the other articles goes well beyond anything else that I’ve seen, including lumping all of the plagiarism claims together as group work and tutor consultation. I can’t speak to this as I don’t have his sources but, given that this was explicitly forbidden anyway, he’s making an empty argument. It doesn’t matter how you slice it: if students worked together, they did something that they weren’t supposed to do. However strongly Manjoo argues that their actions are justified, I’m not sure that his argument is.

The author obviously disagrees with the nature of the open-book test and, to my reading, has no real idea of what he’s talking about. Sentences like “But if you want to determine how well students think, why force them to think alone?” are almost completely self-defeating. The argument also ignores the need to build knowledge in a way that functions when the group isn’t there. We don’t use social constructivism in the assumption that we will always be travelling in packs; we do it to assist the construction of knowledge inside the individual by leveraging the advantages of the social structure. To evaluate how well it has happened, and to isolate group effects so that we can see the individual performing, we use rules such as those Harvard clearly defined to set these boundaries.

Manjoo waxes rhetorical in this essay. “Rather than punishing these students, shouldn’t we be praising them for solving these problems the only way they could?” Well, no, I think that we shouldn’t. There were many ways that, if they thought this approach was unreasonable or unfair, they could have legitimately protested. I note that roughly half the class managed not to cheat during this test (apparently, going by the number suspected) – what do we say about these people? Are they worthy of double-plus-praise for somehow transcending the impossible test, or are they fools for not collaborating?

I’m not sure why these articles are providing so much padding for these students, if they have actually done nothing wrong (I hasten to add that they are merely suspected at the moment but if they are to be martyrs then let us assume a bleak outcome). At least, unlike the writers in the Crimson, Manjoo is a Cornell alumnus so he has some distance. I do note that he has a book called “True Enough: Learning to Live in a Post-Fact Society” which, according to the reviews, is about the media establishing views of reality that aren’t necessarily the facts so he’s aware of the impact that his words have on how people will see this issue. He is also writing in a column with, among its bylines, “The Conventional Wisdom Debunked”, so it’s not surprising that this article is written this way.

Manjoo has created (another) Harvard bogeyman: scared of collaboration, unfair to students, and out of step with reality. However, his argument is ultimately a series of misdirections and opinions that don’t address the core issue: if these students worked with each other, they shouldn’t have. Until he accepts this, and accepts that what they did was not a legitimate course of action, I’m not sure that his arguments carry much weight with me.



Loading the Dice: Show and Tell

I’ve been using a set of four six-sided dice to generate random numbers for one of my classes this year, generally to establish a presentation order or things like that. We’ve had a number of students getting the same number, and so we have had to have roll-offs. So far, the most common number rolled has been in the range of 17-19 but, as we have only generated about 18-20 rolls, while that’s a little high, it’s not high enough to arouse suspicion.

Today we rolled again, and one student wasn’t quite there yet so I did it with the rest of the class. Once again, 18 showed up a bit. This time I asked the class about it. Did that seem suspicious? Then I asked them to look at the dice.

Oh.

Only two of the dice are actually standard dice. One has the number five on every face. One has three sixes and three twos. The students have seen these dice numerous times and have never actually examined them – of course, I didn’t leave them lying around for them to examine but, despite one or two starting to think “Hey, that’s a bit weird”, nobody ever twigged to the loading.
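A quick enumeration shows why the 17-19 band kept appearing. This is just a sketch of the arithmetic, assuming the set described above: two standard dice, one die with five on every face, and one with three sixes and three twos (so that last die rolls an equally likely 2 or 6).

```python
from itertools import product
from collections import Counter

# Two standard dice, one all-fives die, and one die whose six faces
# are three sixes and three twos (an equally likely 2 or 6 per roll).
dice = [range(1, 7), range(1, 7), [5], [2, 6]]

# Enumerate every equally likely outcome: 6 * 6 * 1 * 2 = 72 of them.
totals = Counter(sum(roll) for roll in product(*dice))
outcomes = sum(totals.values())

mean = sum(t * c for t, c in totals.items()) / outcomes
print(min(totals), max(totals), mean)  # 9 23 16.0
```

The totals run from 9 to 23 with a mean of 16 (a set of four fair dice averages 14), and the distribution is flat from 14 to 18, so rolls in the 17-19 band land just under a third of the time: consistently high, but never so high as to give the game away on their own.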

All of the dice in this picture are loaded through weight manipulation, rather than dot alteration. You can buy them for just about any purpose. Ah, Internet!

Having exposed this trick, to some amusement, the last student knocked on the door and I picked up the dice. He was then asked to roll for his position, with the rest of the class staying quiet. (Well, smirking.) He rolled something in the 17-19 range, I forget what, and I wrote that up on the board. Then I asked him if it seemed high to him. On reflection, he said that these numbers all seemed pretty high, especially as the theoretical maximum was 24. I then asked if he’d like to inspect the dice.

He then did so as I passed him the dice one at a time, storing each inspected die in my other hand. (Of course, as he peered at each die to see if it was altered, I quickly swapped one of the ‘real’ dice back into position in my hand and, as the rest of the class watched and kept admirably quiet, I then forced a real die onto him. Magic is all about misdirection, after all.)

So, having inspected all of them, he was convinced that they were normal. I then plonked them down on the table and asked him to inspect them again, to make sure. He lined them up, looked across the top face and then looked at the side. Light dawned. Loudly! What, of course, was so startling to him was that he had just inspected the dice and now they weren’t normal.

What was my point?

My students have just completed a project on data visualisation where they provided a static representation of a dataset. There is a main point to present, supported by statistical analysis and graphs, but the poster is fundamentally explanatory. The only room for exploration is provided by the poster producer, and the reader is bound by the inherent limitations in what the producer has made available. Much as with our discussions of fallacies in argument from a recent tutorial, if information is presented poorly or you don’t get enough to go on, you can’t make a good decision.

Enter, the dice.

Because I deliberately kept the students away from them and never made a fuss about them, they assumed that they were normal dice. While the results were high, and suspicion was starting to creep in, I never gave them enough space to explore the dice and discern their true nature. Even today, while handing them to a student to inspect, I controlled the exploration and, by cherry picking and misdirection, managed to convey a false impression.

Now my students are moving into dynamic visualisation and they must prepare for sharing data in a way that can be explored by other people. While the students have a lot of control over how this exploration takes place, they must prepare for people’s inquisitiveness, their desire to assemble evidence and their tendency to want to try everything. They can’t rely upon hiding difficult pieces of data in their representation, and they must be ready for users who want to keep exploring through the data in ways that weren’t originally foreseen – users who, in exploratory mode, want to collect enough evidence to determine whether something is true and to interrogate the dataset accordingly.

Now I’m not saying that I believe that their static posters were produced badly, and I did require references to support statements, but the view presented was heavily controlled. They’ve now seen, in a simple analogue, how powerful that can be. Now, it’s time to break out of that mindset and create something that can be freely explored, letting their design guide the user to construct new things rather than to lead them down a particular path.

I can only hope that they’re excited by this, because I certainly am!


(Reasonable) Argument, Evidence and (Good) Journalism: Is “Crimson” the Colour of Their Faces?

I ran across a story in the Harvard Crimson about a surprisingly high level of suspected plagiarism in a course, Government 1310. The story opens up simply enough in the realms of fact: the professor suspected plagiarism in 10-20 take-home exams, where collaboration was against published guidelines, and the investigation has now expanded to roughly 125 suspicious final exams. There was a brief discussion of the assessment of the course and the steps taken so far by the faculty.

Then, the article takes a weird turn. Suddenly, we have a student account – an anonymous student, who doesn’t wish their name to be associated with the plagiarism, who “suspected that Government 1310 was the course in question”. Hello? Why is this… ahhh. Here’s some more:

Though she said she followed the exam instructions and is not being investigated by the Ad Board, she said she thought the exam format lent itself to improper academic conduct.

“I can understand why it would be very easy to collaborate,” said the student.

Oh. Collaborate. Interesting. Next we get the Q Guide rating for the course and this course gets 2.54/5 versus the apparent average of 3.91. Then we get some reviews from the Q Guide that “spoke critically of the course’s organisation and the difficulty of the exam questions”.

Spotting a pattern yet?

Another student said that he/she had joined a group of 15 other students just before the submission date and that they had been up all night trying to understand one of the questions (worth 20%).

I submitted this to my students to read and then asked them how they felt about it. Understandably, by the end of the reading, while my students were still thinking about plagiarism, they were thinking that there may have been some… justification. Then we started pulling the article apart.

When we start to look at the article, it becomes apparent that the facts presented all have a rather definite theme – namely, that if cheating has occurred, it has a justification because of the terrible way the course was taught (low Q Guide rating! 16 students confused!).

Now, I cannot see the Q Guide data because, when I go to the page, I get this information (and I need a Harvard login to go further):

Q Guide
The Q Guide was an annually published guide that reported the results of each year’s course evaluations. Formerly called the CUE Guide, it was renamed the Q Guide in 2007 because the evaluations now include the GSAS and are no longer run solely by the Committee on Undergraduate Education (CUE). In 2009, in place of The Q Guide, Harvard College integrated Q data with the online course selection tool (at my.harvard.edu), providing a simple and easy way to access and compare course evaluation data while planning your course schedule.

So if the article, regarding an exam run in 2012, is referring to the Q Guide for Gov 1310, then it’s one of two things: using an old name for new data (admittedly, fairly likely) or referring to old data. The question does arise, however, whether the Q Guide rating refers to this offering or a previous one. I can’t tell you which it is because I don’t know. It’s not publicly available and the article doesn’t tell me. (Although you’ll note that the Q Guide text refers to this year’s evaluations. There’s a part of me that strongly suspects that this is historical data but, of course, I’m speculating.)

However, the most insidious aspect is the presentation of 16 students who are confused about content in a way that overstates their significance. It’s a blatant example of emotive manipulation and encourages the reader to make a false generalisation. There were 279 students enrolled in Gov 1310; 16 is 5.7%. Would I be surprised if somewhere around 5% of my students weren’t capable of understanding all of the questions, or thought that some material wasn’t in the course?

No, of course not. That’s roughly the percentage of my students who sometimes don’t know which Dr Falkner is teaching their class. (Hint: one is male and one is female. Noticeably so in both cases.)

I presented this to my Grand Challenge students as part of our studies of philosophical and logical fallacies, discussing how arguments are made to mislead and misdirect. The terrible shame is that, with a detected rate of plagiarism that is this high, I would usually have a very detailed look at the learning and teaching strategies employed (how often are exams being rewritten, how is information being presented, how is teaching being carried out) because this is an amazingly high level of suspected plagiarism.

Despite the misleading journalism presented in the Crimson, the course and its teachers may have to shoulder some responsibility here. As always, just because someone’s argument is badly made, doesn’t mean that it is actually wrong. It’s just disappointing that such a cheap and emotive argument was raised in a way that further fogs an important issue.

As I said to my students today, one of the most interesting ways to try to understand a biased or miscast argument is to work out whom the bias favours – cui bono? (To whom is it a benefit? I am somewhat terrified, on looking for images for this phrase, that it has been hijacked by extremists and conspiracy theorists. It’s a shame, because it’s historically beautiful.)

So why would the Crimson run this? It’s pretty manipulative so, unless this is just bad journalism, cui bono?

Having looked up how disciplinary boards are constituted at Harvard, I found a reference that there are three appointed faculty members and:

There are three students appointed to the board as full voting members. Two of these will be assigned to specific cases on a case-by-case basis and will not be in the same division as the student facing disciplinary action.

In this case, the Crimson’s story suddenly looks a lot… darker. If, by publishing this article, they reach the right students and convince them that the actions of the suspected plagiarists were really caused by academics who were not performing their duties – then we risk suddenly having a deadlocked board and a deleterious effect on what should have been an untainted process.

The Crimson has further distinguished itself with a follow-up article regarding the uncertainty students are feeling because of the process.

“It’s unfair to leave that uncertainty, given that we’re starting lives,” said the alumnus, who was granted anonymity by The Crimson because he said he feared repercussions from Harvard for discussing the case.

Oh, Harvard, you giant monster, unfairly delaying your decision on a plagiarism case because the lecturers were so very, very bad that students had to cheat. And, what’s worse, you are so evil that students are scared of you – they “fear the repercussions”!

Thank you, Crimson, for providing so much rich fodder for my discussion on how the words “logical argument”, “evidence” and “good journalism” can be so hard to fit into the same sentence.


Gamification: What Happens If All Of The Artefacts Already Exist

Win! Win! Win! (via mindjumpers.com)

I was reading an article today in the May/June issue of “Information Age”, the magazine of the Australian Computer Society, entitled “Gamification Goes Mainstream”. The article identified the gaming mechanics that could be added to businesses to improve employee engagement and work quality/productivity. These measures are:

  1. Points: Users get points for achievements and can spend the points on prizes.
  2. Levelling: Points become harder to earn as the user masters the system.
  3. Badges: Badges are awarded and become part of the user’s “trophy page”, accompanying any comments made by the user.
  4. Leader Boards: Users are ranked by points or achievement.
  5. Community: Collaborative tools, contests, sharing and forums.

Now, of course, there’s a reason that things like these exist in games, and that’s because most games are outside of the physical world; in the absence of the natural laws that normally make things happen and ground us, we rely upon these mechanics to help us assess our progress through the game and to provide some reward for our efforts. Now, while I’m a great believer in using whatever is necessary to make work engaging and to make life more enjoyable, I do wonder about the risk of setting up parallel systems that get people to focus on things other than their actual work.

Yes, yes, we all know I have issues with extrinsic motivations but let’s look again at the list of measures above, which would normally be provided in a game to allow us to make sense of the artificial world in which we find ourselves, and think about how they apply already in a workplace.

  1. Points that can be used to purchase things: I think that we call this money. If I provide a points system for buying company things then I’ve created a second economy that is not actually money.
  2. Levelling: Oh, wait, now it’s hard to spend the special points that I’ve been given, so I’ve not only created a second economy, I’ve started down the road towards hyperinflation by devaluing the currency. (Ok, so the promotional system in my industry works a bit like this – our ranks are our levels, which isn’t that uncommon.)
  3. Badges: Plaques for special achievement, awards, post-nominal letters, Fellowships – anything that goes on the business card is effectively a badge.
  4. Leader Boards: Ok, this is something that we don’t often see in the professional world but, let’s face it, if you’re not on top then you’re not the best. Is that actually motivational or soul-destroying? Of course, if we don’t have it yet, then you do have to wonder why, given that every other management trend seems to get a workout occasionally. I should note that I have seen leader boards at my workplace which have been ‘anonymised’ but, given that I can see myself, I can see where I sit – now not only do I know that I am not on top, I don’t know whom to ask about how to get better, which was touted as one of the reasons to identify the stars in the first place.
  5. Community: We do have collaborative tools but they are focussed on helping us achieve our jobs, not on achieving orthogonal goals associated with a gaming system. We also have comment forums, discussion mechanisms such as mailing lists and the like. Contests? No. We don’t have contests. Do we? Oh wait, national competitive grant schemes, local teaching schemes, competitive bidding for opportunities.

Now, if people aren’t engaging with the tasks that are expected of them (let’s assume the tasks are reasonable) then, yes, we should find ways to make things more interesting and encourage participation. However, given all of the game mechanics above, it’s obviously going to take more thought than picking a list of things that we are already doing and wrapping an alternative system around them in the hope that everything becomes really interesting again.

I should note that the article does sound a cautionary tone, from one of the participants, who basically says that it’s too soon to tell how effective these schemes are and, of course, Kohn is already waggling a finger at setting up a prize/compliance expectation. So perhaps the lesson here is “how can we take what we already have and make it more interesting”, rather than importing the required constructions of a completely artificial environment where we have to define gravity in order to make things fall. Gamification shows promise in certain directions, mainly because there’s a lot of fun implicit in the whole process, but the approaches need to be carefully designed to make sure that we don’t accidentally reinvent the same old wheel.

The Precipice of “Everything’s Late”

I spent most of today working on the paper that I alluded to earlier where, after over a year of trying, I hadn’t made any progress. Having finally managed to dig myself out of the pit I was in, I had the mental capacity, and the space in my timeline, to sit down for the 6 hours it required and work through it all.

Climbers, eh?

In thinking about procrastination, you have to take into account something important: most of us work on a hyperbolic discounting model, where we expend no effort until the deadline is right upon us and then we put everything in. This is temporal discounting: essentially, we place less importance on things in the future than on the things that are important to us now. For complex, multi-stage tasks spread over time this is an exceedingly bad strategy, especially if we focus on the deadline for delivery rather than the starting point. If we underestimate the time the task requires and we construct our ‘panic now’ strategy based on our proximity to the deadline, then we are at serious risk of missing the starting point because, when it arrives, it just won’t seem that important.
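To make the discounting idea concrete, here is a minimal sketch using Mazur’s standard hyperbolic form, V = A / (1 + kD), where A is a task’s real importance, D is the delay until its deadline and k is a discounting rate. The function name, the importance scores and the value of k are all made up for illustration; the point is only that a major task with a distant deadline can feel less urgent than a trivial task due today.

```python
# Hyperbolic discounting: perceived value V = A / (1 + k * D),
# where A is the task's real importance, D is days until its deadline,
# and k controls how steeply we discount the future.

def perceived_value(importance, days_away, k=0.5):
    """Perceived importance of a task whose deadline is days_away."""
    return importance / (1 + k * days_away)

# A major paper due in 60 days versus a trivial email due today.
paper = perceived_value(importance=100, days_away=60)  # about 3.2
email = perceived_value(importance=5, days_away=0)     # 5.0

# The trivial-but-immediate task wins the comparison, so the paper's
# starting point slips past unnoticed.
print(paper < email)  # True
```

Under this model the paper only overtakes the email once the deadline is close, which is exactly the ‘panic now’ behaviour described above.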

Now, let’s increase the difficulty of the whole thing and remember that the more things we have to think about in the present, the greater the risk that we’re going to exceed our capacity for cognitive load and hit the ‘helmet fire’ point – we will be unable to do anything because we’ve run out of the ability to choose what to do effectively. Of course, because we suffer from a hyperbolic discounting problem, we might do things now that are easy to do (because we can see both the beginning and end points inside our window of visibility) and this runs the risk that the things we leave to do later are far more complicated.

This is one of the nastiest implications of poor time management: you might actually not be procrastinating in terms of doing nothing, you might be working constantly but doing the wrong things. Combine this with the pressures of life, the influence of mood and mental state, and we have a pit that can open very wide – and you disappear into it wondering what happened because you thought you were doing so much!

This is a terrible problem for students because, let’s be honest, in your teens there are a lot of important things that are not quite assignments or studying for exams. (Hey, it’s true later too, we just have to pretend to be grownups.) Some of my students are absolutely flat out with activities, a lot of which are actually quite useful, but because they haven’t worked out which ones have to be done now they do the ones that can be done now – the pit opens and looms.

One of the big advantages of reviewing large tasks to break them into components is that you start to see how many ‘time units’ have to be carried out in order to reach your goal. Putting it into any kind of tracking system (even if it’s as simple as an Excel spreadsheet), allows you to see it compared to other things: it reduces the effect of temporal discounting.
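The ‘time units’ comparison can be done with nothing more than a few lines of arithmetic; a spreadsheet works just as well, but here is a sketch with entirely invented component names, hour estimates and dates, to show how quickly the total confronts you with the real starting point.

```python
# Break a large task into components with estimated 'time units' (hours),
# then compare the total against the hours actually free before the
# deadline. All numbers here are made up for illustration.

from datetime import date

components = {
    "literature review": 10,
    "data analysis": 8,
    "first draft": 12,
    "revisions": 6,
}

deadline = date(2012, 10, 1)
today = date(2012, 9, 10)
free_hours_per_week = 6  # realistic spare capacity, not wishful thinking

weeks_left = (deadline - today).days / 7
hours_needed = sum(components.values())
hours_available = weeks_left * free_hours_per_week

print(f"Need {hours_needed}h, have {hours_available:.0f}h")
if hours_needed > hours_available:
    print("Behind already: the starting point matters, not the due date.")
```

Seeing “need 36 hours, have 18” in black and white is what reduces the temporal discounting: the distant deadline has been converted into a present-tense shortfall.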

When I first put everything that I had to do into my calendar as appointments, I assumed that I had made a mistake, because I had run out of time in the week and was, in some cases, triple-booked, even after I spilled over into weekends. This wasn’t a mistake in assembling the calendar; it was an indication that I’d overcommitted and, over the past few months, I’ve been streamlining things down so that my worst week still has a few hours free. (Yeah, yeah, not perfect, but there you go.) However, there was this little problem that anything that had been pushed into the late queue got later and later – the whole ‘deal with it soon’ became ‘deal with it now’ or ‘I should have dealt with that by now’.
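Spotting that kind of triple-booking is a simple interval-overlap check, the sort of thing any calendar tool does internally. A possible sketch, with appointments as made-up (start, end) hour pairs within a single day, is:

```python
# Detect double (or triple) bookings by counting how many appointments
# overlap at once. Appointments are (start_hour, end_hour) pairs within
# one day; the scheme and the numbers are illustrative only.

def max_overlap(appointments):
    """Return the largest number of appointments running simultaneously."""
    events = []
    for start, end in appointments:
        events.append((start, 1))   # a booking begins
        events.append((end, -1))    # a booking ends
    events.sort()  # ties sort ends (-1) before starts (+1): no false overlap
    busiest = current = 0
    for _, change in events:
        current += change
        busiest = max(busiest, current)
    return busiest

day = [(9, 11), (10, 12), (10.5, 11.5), (13, 14)]
print(max_overlap(day))  # 3 -> triple-booked mid-morning
```

A result greater than 1 anywhere in the week is the calendar telling you, unambiguously, that the problem is overcommitment rather than bookkeeping.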

Like students, my overcommitment wasn’t an obvious “Yes, I want to work too hard” commitment; it snuck in as bits and pieces. A commitment here, a commitment there, a ‘yes’, a ‘sure, I can do that’, and, because you sometimes have to make decisions on the fly, you suddenly look around and think “What happened?” The last thing I want to do here is lecture; I want to understand how I can take my experience, learn from it, and pass something useful on. The basic message is that we all work very hard and sometimes don’t make the best decisions. For me, the challenge now, knowing this, is how I can construct something that tries to defeat this self-destructive behaviour in my students.

This week marks the time where I hope to have cleared everything on the ‘now/by now’ queue and finally be ahead. My friends know that I’ve said that a lot this year but it’s hard to read and think in the area of time management without learning something. (Some people might argue but I don’t write here to tell you that I have everything sorted, I write here to think and hopefully pass something on through the processes I’m going through.)


Time Banking: More and more reading.

I’ve spent most of the last week putting together the ideas of time banking, reviewing my reading list and then digging for more papers to read and integrate. It’s always a bit of a worry when you go to see if what you’ve been thinking about for 12 months has just been published by someone else but, fortunately, most people are still using traditional deadlines so I’m safe. I read a lot of papers but none more than when I’m planning or writing a paper: I need to know what else has happened if I’m to frame my work correctly and not accidentally re-invent the wheel. Especially if it’s a triangular wheel that never worked.

My focus is Time Banking so that’s what I’ve been searching for – concepts, names, similarities – to make sure that what I’m doing will make an additional contribution. This isn’t to say that Time Banking hasn’t been used before as a term or even a concept. I’ve been aware of several universities that allow students to draw on a fixed number of extra days (Stanford being the obvious example) and the concept of banking your time is certainly not new – there’s even a Dilbert cartoon about it! There are papers on time banking, but at low granularity and with little student control – it’s more of a convenient deadline extender than a mechanism for developing metacognition in order to promote self-regulating learning strategies in the student. Which is good, because that’s the approach I’m taking.

The reasoning and methodology that I’m using does appear to be relatively novel and it encompasses a whole range of issues: pedagogy, self-regulation, ethics and evidence-based analysis of how deadlines are currently working for us. It’s a lot to fit into one paper but I have hope that I can at least cover the philosophical background of why what I’m doing is a good idea, not just because I want to convince my peers but because I want volunteers for when pilot schemes start to occur.

It’s not enough that something is a good idea, or that it reads well; it has to work. It has to be able to be deployed, and we have to be able to measure it, collect evidence and say “Yes, this is what we wanted.” Then we publish lots more papers and win major awards – Profit! (Actually, if it’s a really good idea then we want everyone to do it. Widespread adoption that enhances education is the real profit.)

Like this but with less underpants collecting and more revolutionising education.

More seriously, I love writing papers because I really have to think deeply about what I’m saying. How does it fit with existing research? Has this been tried before? If so, did it work? Did it fail? What am I doing that is different? What am I really trying to achieve?

How can I convince another educator that this is actually a good idea?

The first draft of the paper is written and now my co-authors are scouring it, playing Devil’s advocate, and seeing how many useful and repairable holes they can tear in it in order to make it worthy of publication. Then it will go off at some point and a number of nice people will push it out to sea and shoot at it with large weapons to see if it sinks or swims. Then I get feedback (and hopefully a publication) and everyone learns something.

I’m really looking forward to seeing the first actual submission draft – I want to see what the polished ideas look like!