ICER 2012: Day 0 (Workshops)

Well, it’s Sunday so it must be New Zealand (or at least it was Sunday yesterday). I attended that rarest of workshops, one where every session was interesting and made me think – a very good sign for the conference to come.

We started with an on-line workshop on Bloom’s taxonomy, classifying exam questions, with Raymond Lister from UTS. One of the best things about this for me was the discussion about the questions where we disagreed: is this application or synthesis? It really made me think about how I write my examinations and how they could be read.

We then segued into a fascinating discussion of neo-Piagetian theory, where we see, in adults learning new areas of knowledge, the developmental stages that we usually associate with children. In (very rough) detail, we look at whether we have enough working memory to carry out a task and, if not, weird things happen.

Students can indulge in some weird behaviours when they don’t understand what’s going on. For example, permutation programming, where they just type semi-randomly until their program compiles or works. Other examples include shotgun debugging and voodoo programming, and what these amount to is the student not having a good, consistent model of what works; as a result, they are basically dabbling in a semi-magical approach.

My notes from the session contain this following excerpt:

“Bizarro” novice programmer behaviours are actually normal stages of intellectual development.
Accept this and then work with this to find ways of moving students from pre-op, to concrete op, to formal operational. Don’t forget the evaluation. Must scaffold this process!

What this translates to is that the strange things we see are just indications that students have not yet moved to what we would normally associate with an ‘adult’ (formal operational) understanding of the area. This shoots several holes in the old “You’re born a programmer” fallacy. Those students who are more able early may just have moved through the stages more quickly.

There was also a certain amount of derisive description of folk pedagogy: those theories that arise during pontification in the tea room, with no basis in educational theory and not formed from any truly empirical study. Yet these folk pedagogies are very hard to shake and are one of the most frustrating things to deal with if you are in educational research. One “I don’t think so” can apparently ignore the 70 years since Dewey called classrooms prisons.

The worst thought is that, if we’re not trying to help the students to transition, then the transition to concrete operations may be happening despite us instead of because of us – a sobering thought.

I thought that Ray Lister finished the session with a really good thought regarding why students sometimes struggle:

The problem is not a student’s swimming skill, it’s the strength of the torrent.

As I’ve said before, making hard things easier to understand is part of the job of the educator. Anyone will fail, regardless of their ability, if we make it hard enough for them.


Conference Blogging! (Redux)

I’m about to head off to another conference and I’ve taken a new approach to my blogging. Rather than my traditional “Pre-load the queue with posts” activity, which tends to feel a little stilted even when I blog other things around it, I’ll be blogging in direct response to the conference and not using my standard posting time.

I’m off to ICER, which is only my second educational research conference, and I’m very excited. It’s a small but highly regarded conference and I’m getting ready for a lot of very smart people to turn their considerably weighty gaze upon the work that I’m presenting. My paper concerns the early detection of at-risk students, based on our analysis of over 200,000 student submissions. In a nutshell, our investigations indicate that paying attention to a student’s initial behaviour gives you some idea of future performance, as you’d expect, but it is the negative (late) behaviour that is the most telling. While there are no astounding revelations in this work, if you’ve read across the area, putting it all together with a large data corpus allows us to approach some myths and gently deflate them.

Our metric is timeliness, or how reliably a student submitted their work on time. Given that late penalties apply (without exception, usually) across the assignments in our school, late submission amounts to an expensive and self-defeating behaviour. We tracked over 1,900 students across all years of the undergraduate program and looked at all of their electronic submissions (all programming code is submitted this way, as are most other assignments.) A lot of the results were not that unexpected – students display hyperbolic temporal discounting, for example – but some things were slightly less expected.

For example, while 39% of my students hand in everything on time, 30% of people who hand in their first assignment late then go on to have a blemish-free future record. However, students who hand up that first assignment late are approximately twice as likely to have problems – which moves this group into a weakly classified at-risk category. Now, I note that this is before any marking has taken place, which means that, if you’re tracking submissions, one very quick and easy way to detect people who might be having problems is to look at the first assignment submission time. This inspection takes about a second and can easily be automated, so it’s a very low burden scheme for picking up people with problems. A personalised response, with constructive feedback or a gentle question, in the zone where the student should have submitted (but didn’t), can be very effective here. You’ll note that I’m working with late submitters not non-submitters. Late submitters are trying to stay engaged but aren’t judging their time or allocating resources well. Non-submitters have decided that effort is no longer worth allocating to this. (One of the things I’m investigating is whether a reminder in the ‘late submission’ area can turn non-submitters into submitters, but this is a long way from any outcomes.)
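The first-assignment check described above is simple enough to automate. As a minimal sketch (the record layout and field names here are my invention for illustration, not those of our actual submission system):

```python
from datetime import datetime

def flag_at_risk(submissions):
    """Flag students whose first assignment arrived after its deadline.

    `submissions` is an iterable of (student_id, assignment_number,
    submitted_at, due_at) tuples - an invented layout for illustration.
    """
    at_risk = set()
    for student_id, assignment_number, submitted_at, due_at in submissions:
        # The heuristic: a late FIRST submission marks a weakly
        # classified at-risk student, before any marking happens.
        if assignment_number == 1 and submitted_at > due_at:
            at_risk.add(student_id)
    return at_risk

due = datetime(2012, 3, 16, 17, 0)
records = [
    ("s1", 1, datetime(2012, 3, 16, 12, 0), due),  # on time
    ("s2", 1, datetime(2012, 3, 17, 9, 30), due),  # a day late -> flagged
    ("s3", 2, datetime(2012, 3, 30, 9, 0), due),   # later assignment, ignored
]
print(flag_at_risk(records))  # {'s2'}
```

Anything fancier – grouping by student, weighting by how late the submission was – builds naturally on the same single pass over the records.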

I should note that the type of assignment work is important here. Computer programs, at least in the assignments that we set, are not just copied in from a text. Students are not remembering material or demonstrating understanding; they are using the information in new ways to construct solutions to problems. In Bloom’s revised taxonomic terms, this is the “Applying” phase and it requires that the student be sufficiently familiar with the work to be able to understand how to apply it.

Bloom’s Revised Taxonomy

I’m not measuring my students’ timeliness in terms of their ability to show up to a lecture and sleep or to hand up an essay of three paragraphs that barely meets my requirements because it’s been Frankenwritten from a variety of sources. The programming task requires them to look at a problem, design a solution, implement it and then demonstrate that it works. Their code won’t even compile (turn into a form that a machine can execute) unless they understand enough about the programming language and the problem, so this is a very useful indication of how well the student is keeping up with the demands of the course. By focusing on an “Applying” task, we require the student to undertake a task that is going to take time and the way in which they assess this resource and decide on its management tells us a lot about their metacognitive skills, how they are situated in the course and, ultimately, how at-risk they actually are.

Looking at assignment submission patterns is a crude measure, unashamedly, but it’s a cheap measure, as well, with a reasonable degree of accuracy. I can determine, with 100% accuracy, if a student is at-risk by waiting until the end of the course to see if they fail. I have accuracy but no utility, or agency, in this model. I can assume everyone is at risk at the start and then have the inevitable problem of people not identifying themselves as being in this area until it’s too late. By identifying a behaviour that can lead to problems, I can use this as part of my feedback to illustrate a concrete issue that the student needs to address. I now have the statistical evidence to back up why I should invest effort into this approach.

Yes, you get a lot of excuses as to why something happened, but I have derived a great deal of value from asking students questions like “Why did you submit this late?” and then, when they give me their excuse, asking them “How are you going to avoid it next time?” I am no longer surprised at the slightly puzzled look on the student’s face as they realise that this is a valid and necessary question – I’m not interested in punishing them, I want them to not make the same mistake again. How can we do that?

I’ll leave the rest of this discussion for after my talk on Monday.


And more on the Harvard Scandal: Scandal? Apparently it’s not?

I’ve just read a Salon article regarding the Harvard cheating issue. Apparently, according to Farhad Manjoo, these students should be “celebrated for collaborating”.

Note that word? It’s the one that I picked on in the Crimson article, and the reason that I did so is that it’s a very mild word, and a very positive one at that. However, while acknowledging that the students were prohibited from any such sharing, Manjoo then asks, to me somewhat disingenuously, “What’s the point of prohibiting these students from working together?”

Urm, well, for most of the course, there is no such prohibition. At the end of the course, when they want to see how much each individual knows, they attempt to test the students individually. That’s not an unusual pattern.

Manjoo’s interpretation of the other articles goes well beyond anything else that I’ve seen, including lumping all of the plagiarism claims together as group work and tutor consultation. I can’t speak to this as I don’t have his sources but, given that this was explicitly forbidden anyway, he’s making an empty argument. It doesn’t matter how you slice it: if students worked together, they did something that they weren’t supposed to do. However Manjoo argues that their actions are justified, I’m not sure that the argument itself is.

The author obviously disagrees with the nature of the open book test and, to my reading, has no real idea of what he’s talking about. Sentences like “But if you want to determine how well students think, why force them to think alone?” are almost completely self-defeating. This also ignores the need to build knowledge in a way that functions when the group isn’t there. We don’t use social constructivism in the assumption that we will always be travelling in packs; we use it to assist the construction of knowledge inside the individual by leveraging the advantages of the social structure. To evaluate how well it has happened, and to isolate group effects so that we can see the individual performing, we use rules such as those Harvard clearly defined to set these boundaries.

Manjoo waxes rhetorical in this essay. “Rather than punishing these students, shouldn’t we be praising them for solving these problems the only way they could?” Well, no, I think that we shouldn’t. There were many ways that, if they thought this approach was unreasonable or unfair, they could have legitimately protested. I note that (apparently, going by the number suspected) half the class managed not to cheat during this test – what do we say about these people? Are they worthy of double-plus-praise for somehow transcending the impossible test, or are they fools for not collaborating?

I’m not sure why these articles are providing so much padding for these students, if they have actually done nothing wrong (I hasten to add that they are merely suspected at the moment but if they are to be martyrs then let us assume a bleak outcome). At least, unlike the writers in the Crimson, Manjoo is a Cornell alumnus so he has some distance. I do note that he has a book called “True Enough: Learning to Live in a Post-Fact Society” which, according to the reviews, is about the media establishing views of reality that aren’t necessarily the facts so he’s aware of the impact that his words have on how people will see this issue. He is also writing in a column with, among its bylines, “The Conventional Wisdom Debunked”, so it’s not surprising that this article is written this way.

Manjoo has created (another) Harvard bogeyman: scared of collaboration, unfair to students, and out of step with reality. However, his argument is ultimately a series of misdirections and opinions that don’t address the core issue: if these students worked with each other, they shouldn’t have. Until he accepts that, and that theirs was not a legitimate course of action, I’m not sure that his arguments carry much weight with me.



Loading the Dice: Show and Tell

I’ve been using a set of four six-sided dice to generate random numbers for one of my classes this year, generally to establish a presentation order or things like that. We’ve had a number of students getting the same number and so we have to have roll-offs. Now in this case, the most common number rolled so far has been in the range of 17-19 but we have only generated about 18-20 rolls so, while that’s a little high, it’s not high enough to arouse suspicion.

Today we rolled again, and one student wasn’t quite there yet so I did it with the rest of the class. Once again, 18 showed up a bit. This time I asked the class about it. Did that seem suspicious? Then I asked them to look at the dice.

Oh.

Only two of the dice are actually standard dice. One has the number five on every face. One has three sixes and three twos. The students have seen these dice numerous times and have never actually examined them – of course, I didn’t leave them lying around for them to examine but, despite one or two starting to think “Hey, that’s a bit weird”, nobody ever twigged to the loading.

All of the trick dice are altered through their faces rather than through weight manipulation. You can buy them for just about any purpose. Ah, Internet!
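For the curious, the expected behaviour of this set can be worked out exactly: two fair dice average 3.5 each, the all-fives die contributes 5, and the sixes-and-twos die averages 4, so the loaded set averages 16 per roll against 14 for four fair dice. A quick enumeration (plain Python, nothing specific to the actual dice beyond the faces described above) confirms it:

```python
from itertools import product
from collections import Counter

fair = [1, 2, 3, 4, 5, 6]
all_fives = [5] * 6              # every face shows five
sixes_twos = [6, 6, 6, 2, 2, 2]  # three sixes, three twos

# Enumerate all 6^4 = 1296 equally likely face combinations.
dice = [fair, fair, all_fives, sixes_twos]
counts = Counter(sum(roll) for roll in product(*dice))
total = 6 ** 4

mean = sum(s * c for s, c in counts.items()) / total
print(mean)  # 16.0 (four fair dice would give 14.0)

# Sums 14-18 form a flat plateau, each turning up 1 roll in 9,
# which is why high results kept appearing.
for s in sorted(counts):
    print(s, round(counts[s] / total, 3))
```

(The loaded set’s actual range is 9 to 23, so the “theoretical maximum” of 24 was never reachable at all.)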

Having exposed this trick, to some amusement, I picked up the dice just as the last student knocked on the door. He was then asked to roll for his position, with the rest of the class staying quiet. (Well, smirking.) He rolled something in the 17-19 range, I forget what, and I wrote it up on the board. Then I asked him if it seemed high to him. On reflection, he said that these numbers all seemed pretty high, especially as the theoretical maximum was 24. I then asked if he’d like to inspect the dice.

He then did so as I passed him the dice one at a time, storing the inspected dice in my other hand. (Of course, as he peered at each die to see if it was altered, I quickly swapped one of the ‘real’ dice back into position in my hand and, as the rest of the class watched and kept admirably quiet, forced a real die onto him. Magic is all about misdirection, after all.)

So, having inspected all of them, he was convinced that they were normal. I then plonked them down on the table and asked him to inspect them again, just to make sure. He lined them up, looked across the top faces and then looked at the sides. Light dawned. Loudly! What, of course, was so startling to him was that he had just inspected the dice and now they weren’t normal.

What was my point?

My students have just completed a project on data visualisation where they provided a static representation of a dataset. There is a main point to present, supported by statistical analysis and graphs, but the poster is fundamentally explanatory. The only room for exploration is what the poster producer provides, and the reader is bound by the inherent limitations of what the producer has made available. Much as with our discussions of fallacies in argument from a recent tutorial, if information is presented poorly or you don’t get enough to go on, you can’t make a good decision.

Enter, the dice.

Because I deliberately kept the students away from them and never made a fuss about them, they assumed that they were normal dice. While the results were high, and suspicion was starting to creep in, I never gave them enough space to explore the dice and discern their true nature. Even today, while handing them to a student to inspect, I controlled the exploration and, by cherry picking and misdirection, managed to convey a false impression.

Now my students are moving into dynamic visualisation and they must prepare for sharing data in a way that can be explored by other people. While the students have a lot of control over how this exploration takes place, they must prepare for people’s inquisitiveness, their desire to assemble evidence and their tendency to want to try everything. They can’t rely upon hiding difficult pieces of data in their representation and they must be ready for users who want to keep exploring through the data in ways that weren’t originally foreseen. In exploratory mode, they must prepare for people who want to collect enough evidence to determine whether something is true or not, and to be able to interrogate the dataset accordingly.

Now I’m not saying that I believe that their static posters were produced badly, and I did require references to support statements, but the view presented was heavily controlled. They’ve now seen, in a simple analogue, how powerful that can be. Now, it’s time to break out of that mindset and create something that can be freely explored, letting their design guide the user to construct new things rather than to lead them down a particular path.

I can only hope that they’re excited by this because I certainly am!


(Reasonable) Argument, Evidence and (Good) Journalism: Is “Crimson” the Colour of Their Faces?

I ran across a story on the Harvard Crimson about a surprisingly high level of suspected plagiarism in a course, Government 1310. The story opens up simply enough in the realms of fact, where the professor suspected plagiarism behaviour in 10-20 take home exams, which was against published guidelines, and has now expanded to roughly 125 suspicious final exams. There was a brief discussion of the assessment of the course and the steps taken so far by the faculty.

Then, the article takes a weird turn. Suddenly, we have a student account, an anonymous student who doesn’t wish their name to be associated with the plagiarism, who “suspected that  Government 1310 was the course in question”. Hello? Why is this… ahhh. Here’s some more:

Though she said she followed the exam instructions and is not being investigated by the Ad Board, she said she thought the exam format lent itself to improper academic conduct.

“I can understand why it would be very easy to collaborate,” said the student.

Oh. Collaborate. Interesting. Next we get the Q Guide rating: this course gets 2.54/5 versus the apparent average of 3.91. Then we get some reviews from the Q Guide that “spoke critically of the course’s organisation and the difficulty of the exam questions”.

Spotting a pattern yet?

Another student said that he/she had joined a group of 15 other students just before the submission date and that they had been up all night trying to understand one of the questions (worth 20%).

I submitted this to my students to read and then asked them how they felt about it. Understandably, by the end of the reading, while my students were still thinking about plagiarism, they were thinking that there may have been some… justification. Then we started pulling the article apart.

When we start to look at the article, it becomes apparent that the facts presented all have a rather definite theme – namely that, if cheating has occurred, it has a justification because of the terrible way the course was taught (low Q Guide rating! 16 students confused!).

Now, I cannot see the Q Guide data because, when I go to the page, I get this information (and I need a Harvard login to go further):

Q Guide
The Q Guide was an annually published guide that reported the results of each year’s course evaluations. Formerly called the CUE Guide, it was renamed the Q Guide in 2007 because the evaluations now include the GSAS and are no longer run solely by the Committee on Undergraduate Education (CUE). In 2009, in place of The Q Guide, Harvard College integrated Q data with the online course selection tool (at my.harvard.edu), providing a simple and easy way to access and compare course evaluation data while planning your course schedule.

So if the article, regarding an exam run in 2012, is referring to the Q Guide for Gov 1310, then it’s one of two things: using an old name for new data (admittedly, fairly likely) or referring to old data. The question does arise, however, whether the Q Guide rating refers to this offering or a previous offering. I can’t tell you which it is because I don’t know. It’s not publicly available and the article doesn’t tell me. (Although you’ll note that the Q Guide text refers to this year’s evaluations. There’s a part of me that strongly suspects that this is historical data but, of course, I’m speculating.)

However, the most insidious aspect is the presentation of 16 students who are confused about content in a way that overstates their significance. It’s a blatant example of emotive manipulation and encourages the reader to make a false generalisation. There were 279 students enrolled in Gov 1310; 16 is 5.7%. Would I be surprised if somewhere around 5% of my students weren’t capable of understanding all of the questions or thought that some material wasn’t in the course?

No, of course not. That’s roughly the percentage of my students who sometimes don’t know which Dr Falkner is teaching their class. (Hint: one is male and one is female. Noticeably so in both cases.)

I presented this to my Grand Challenge students as part of our studies of philosophical and logical fallacies, discussing how arguments are made to mislead and misdirect. The terrible shame is that, with a detected rate of plagiarism that is this high, I would usually have a very detailed look at the learning and teaching strategies employed (how often are exams being rewritten, how is information being presented, how is teaching being carried out) because this is an amazingly high level of suspected plagiarism.

Despite the misleading journalism presented in the Crimson, the course and its teachers may have to shoulder some responsibility here. As always, just because someone’s argument is badly made doesn’t mean that its conclusion is wrong. It’s just disappointing that such a cheap and emotive argument was raised in a way that further fogs an important issue.

As I said to my students today, one of the most interesting ways to try to understand a biased or miscast argument is to work out who the bias favours – cui bono? (Who benefits? I am somewhat terrified, on looking for images for this phrase, that it has been hijacked by extremists and conspiracy theorists. It’s a shame because it’s historically beautiful.)

So why would the Crimson run this? It’s pretty manipulative so, unless this is just bad journalism, cui bono?

Having looked up how disciplinary boards are constituted at Harvard, I found a reference that there are three appointed faculty members and:

There are three students appointed to the board as full voting members. Two of these will be assigned to specific cases on a case-by-case basis and will not be in the same division as the student facing disciplinary action.

In this case, the Crimson’s story suddenly looks a lot… darker. If, by publishing this article, they reach the right students and convince them that the actions of the suspected plagiarists were an understandable response to academics who were not performing their duties – then we risk a deadlocked board and a deleterious effect on what should have been an untainted process.

The Crimson has further distinguished itself with a follow-up article regarding the uncertainty students are feeling because of the process.

“It’s unfair to leave that uncertainty, given that we’re starting lives,” said the alumnus, who was granted anonymity by The Crimson because he said he feared repercussions from Harvard for discussing the case.

Oh, Harvard, you giant monster, unfairly delaying your decision on a plagiarism case because the lecturers were so very, very bad that students had to cheat. And, what’s worse, you are so evil that students are scared of you – they “fear the repercussions”!

Thank you, Crimson, for providing so much rich fodder for my discussion on how the words “logical argument”, “evidence” and “good journalism” can be so hard to fit into the same sentence.


Gamification: What Happens If All Of The Artefacts Already Exist


I was reading an article today in the May/June issue of “Information Age”, the magazine of the Australian Computer Society, entitled “Gamification Goes Mainstream”. The article identified the gaming mechanics that could be added to businesses to improve employee engagement and work quality/productivity. These measures are:

  1. Points: Users get points for achievements and can spend the points on prizes.
  2. Levelling: Points get harder to get as the user masters the system.
  3. Badges: Badges are awarded and become part of the user’s “trophy page”, accompanying any comments made by the user.
  4. Leader Boards: Users are ranked by points or achievement.
  5. Community: Collaborative tools, contests, sharing and forums.
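To make the list concrete, here is a toy sketch of mechanics 1 to 4 (all names, thresholds and the doubling level cost are invented for illustration; the article doesn’t specify any implementation):

```python
from dataclasses import dataclass, field

@dataclass
class Player:
    name: str
    points: int = 0                              # mechanic 1: points
    badges: list = field(default_factory=list)   # mechanic 3: the "trophy page"

    @property
    def level(self) -> int:
        # Mechanic 2, levelling: each successive level costs twice
        # as many points, so points "buy" less as you advance.
        level, cost, remaining = 0, 100, self.points
        while remaining >= cost:
            remaining -= cost
            level += 1
            cost *= 2
        return level

def leaderboard(players):
    # Mechanic 4: rank users by points, highest first.
    return sorted(players, key=lambda p: p.points, reverse=True)

alice = Player("alice", points=350)
alice.badges.append("first-commit")  # badges accumulate on the profile
bob = Player("bob", points=500)
print(alice.level)                                   # 2: 100 + 200 spent, 50 left
print([p.name for p in leaderboard([alice, bob])])   # ['bob', 'alice']
```

The doubling cost is the levelling mechanic in miniature: the same effort is worth visibly less as the player advances, which is exactly the second-economy effect I worry about below.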

Now, of course, there’s a reason that things like this exist in games: most games take place outside the physical world and, in the absence of the natural laws that normally make things happen and ground us, we rely upon these mechanics to help us assess our progress through the game and to provide some reward for our efforts. While I’m a great believer in using whatever is necessary to make work engaging and life more enjoyable, I do wonder about the risk of setting up parallel systems that get people to focus on things other than their actual work.

Yes, yes, we all know I have issues with extrinsic motivations but let’s look again at the list of measures above, which would normally be provided in a game to allow us to make sense of the artificial world in which we find ourselves, and think about how they apply already in a workplace.

  1. Points that can be used to purchase things: I think that we call this money. If I provide a points system for buying company things then I’ve created a second economy that is not actually money.
  2. Levelling: Oh, wait, now it’s hard to spend the special points that I’ve been given so I’ve not only created a second economy, I’ve started down the road towards hyperinflation by devaluing the currency. (Ok, so the promotional system works here in my industry like that – our ranks are our levels, which isn’t that uncommon.)
  3. Badges: Plaques for special achievement, awards, post-nominal letters, Fellowships – anything that goes on the business card is effectively a badge.
  4. Leader Boards: Ok, this is something that we don’t often see in the professional world but, let’s face it, if you’re not on top then you’re not the best. Is that actually motivational or soul-destroying? Of course, if we don’t have it yet, then you do have to wonder why, given that every other management trend seems to get a workout occasionally. I should note that I have seen leader boards at my workplace which have been ‘anonymised’ but, given that I can see myself, I can see where I sit – now not only do I know that I’m not on top, I don’t know whom to ask about how to get better, which has been touted as one of the reasons for identifying the stars in the first place.
  5. Community: We do have collaborative tools but they are focussed on helping us achieve our jobs, not on achieving orthogonal goals associated with a gaming system. We also have comment forums, discussion mechanisms such as mailing lists and the like. Contests? No. We don’t have contests. Do we? Oh wait, national competitive grant schemes, local teaching schemes, competitive bidding for opportunities.

Now if people aren’t engaging with the tasks that are expected of them (let’s assume reasonably), then, yes, we should find ways to make things more interesting and to encourage participation. However, given the game mechanics above, it’s obviously going to take more thought than just picking a list of things that we are already doing and providing an alternative system that somehow makes everything really interesting again.

I should note that the article does sound a cautionary note, from one of the participants, who basically says that it’s too soon to tell how effective these schemes are and, of course, Kohn is already waggling a finger at setting up a prize/compliance expectation. So perhaps the lesson here is “how can we take what we already have and work out how to make it more interesting?” rather than importing lessons from a completely artificial environment, where phenomena have to be explicitly constructed and we must define gravity in order to make things fall. Gamification shows promise in certain directions, mainly because there’s a lot of fun implicit in the whole process, but the approaches need to be carefully designed to make sure that we don’t accidentally reinvent the same old wheel.



Time Banking: More and more reading.

I’ve spent most of the last week putting together the ideas of time banking, reviewing my reading list and then digging for more papers to read and integrate. It’s always a bit of a worry when you go to see if what you’ve been thinking about for 12 months has just been published by someone else but, fortunately, most people are still using traditional deadlines so I’m safe. I read a lot of papers but none more than when I’m planning or writing a paper: I need to know what else has happened if I’m to frame my work correctly and not accidentally re-invent the wheel. Especially if it’s a triangular wheel that never worked.

My focus is Time Banking so that’s what I’ve been searching for – concepts, names, similarities – to make sure that what I’m doing will make an additional contribution. This isn’t to say that Time Banking hasn’t been used before as a term or even a concept. I’m aware of several universities that allow students to draw on a fixed number of extra days (Stanford being the obvious example) and the concept of banking your time is certainly not new – there’s even a Dilbert cartoon for it! There are papers on time banking, but at low granularity and with little student control – it’s more of a convenient deadline extender than a mechanism for developing metacognition in order to promote self-regulating learning strategies in the student. Which is good, because that’s the approach I’m taking.

The reasoning and methodology that I’m using does appear to be relatively novel and it encompasses a whole range of issues: pedagogy, self-regulation, ethics and evidence-based analysis of how deadlines are currently working for us. It’s a lot to fit into one paper but I have hope that I can at least cover the philosophical background of why what I’m doing is a good idea, not just because I want to convince my peers but because I want volunteers for when pilot schemes start to occur.

It’s not enough that something is a good idea, or that it reads well – it has to work. It has to be able to be deployed; we have to be able to measure it, collect evidence and say “Yes, this is what we wanted.” Then we publish lots more papers and win major awards – Profit! (Actually, if it’s a really good idea then we want everyone to do it. Widespread adoption that enhances education is the real profit.)


More seriously, I love writing papers because I really have to think deeply about what I’m saying. How does it fit with existing research? Has this been tried before? If so, did it work? Did it fail? What am I doing that is different? What am I really trying to achieve?

How can I convince another educator that this is actually a good idea?

The first draft of the paper is written and now my co-authors are scouring it, playing Devil’s advocate, and seeing how many useful and repairable holes they can tear in it in order to make it worthy of publication. Then it will go off at some point and a number of nice people will push it out to sea and shoot at it with large weapons to see if it sinks or swims. Then I get feedback (and hopefully a publication) and everyone learns something.

I’m really looking forward to seeing the first actual submission draft – I want to see what the polished ideas look like!


Warning: Objects in Mirror May Appear Important Because They Appear Closer

I had an interesting meeting with one of my students who has been trying to allocate his time to various commitments, including the project that he’s doing with me. He had been spending most of his time on an assignment for another course and, while this assignment was important, I had to carry out one of the principal duties of the supervisor: pointing out the obvious when people have their face pressed too close to the window, staring at the things that are close.

There are three major things a project supervisor does: kick things off and give some ideas, tell the student when they’re not making good progress and help them to get back on track, and stop them before they run off into the distance and get them to write it all down as a thesis of some sort.

So, in our last meeting, I asked the student how much the other assignment was worth.

“About 10%.”

How much is your project work in terms of total courses?

“4 courses worth.”

So the project is 40 times the value of that assignment that has taken up most of your time? What’s that – 4,000%?
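The back-of-the-envelope arithmetic behind that exchange can be sketched as follows (a minimal illustration only – the variable names are mine, and I’m assuming one course as the unit of weight):

```python
# Comparing the two commitments, using the figures from the
# conversation above. Units: one course = 1.0.

assignment_weight = 0.10 * 1.0   # the assignment: 10% of a single course
project_weight = 4.0             # the project: worth four whole courses

ratio = project_weight / assignment_weight
print(f"The project is worth {ratio:.0f} times the assignment – "
      f"that is, {ratio * 100:.0f}% of its value.")
```

Forty times the value, or 4,000%: the thing that was soaking up most of his time was, proportionally, a rounding error.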

To his credit, he has been working away and it’s not too late yet, by any stretch of the imagination, but a little perspective is always handy. He has also started to plan his time out better and, most rewardingly, appreciates the perspective. This, to be honest, is the way that I like it: nothing bad has happened, and everyone’s learned something. Hooray!

I sometimes wonder if this is one of the crucial problems that we face as humans. Things that are close look bigger: optically, because of how our eyes work, and temporally, because things that are due tomorrow seem to have so much more importance than much, much bigger tasks due in four weeks. We could start talking about exponential time distributions or similar things, but I prefer the comparison with the visual illusion.

Just because it looks close doesn’t mean it’s the biggest thing that you have to worry about.

Some close things are worth worrying about, however.


Six Heads Are Better Than One

We had the final Project One group feedback session today for the Grand Challenges course. Lots of very impressive posters, as I would have expected given all the work we’ve done on them, but the best outcome was the quality and quantity of useful feedback from the group. There were a number of useful suggestions that identified key improvements to each of the posters.

The framing was important: look at the poster, then discuss how we could improve the presentation, or the underlying analysis. We went through the group in a variety of sequences to get the feedback, so that the last word generally belonged to a different person each time, as did the first. My voice was heard a fair bit, no surprise, but some of the best solutions came from the students, without question.

One of the grand challenges is the formation of a community that can solve the problems and the importance of inter-disciplinary cooperation. By providing an atmosphere where everyone’s voice can be heard, and having the rare opportunity to be able to run a course like this, I’ve been able to demonstrate exactly why this is so important.

Put simply, by yourself you may think of some amazing things, but a group view, with appropriate preparation and framing, will give you the extra things that you didn’t think of – the things that other people will see and that, down the track, you might even kick yourself for not seeing.

I don’t want to single out any of the students, because they’re all doing great things, but this is one that’s the closest to completion at the moment. There’s work to do, because the group suggestions put some really good ideas on the table, but the big advantage is that the producer (Heya, M!) was open to suggestions from his peer group and, of course, contributed as much to his peers – including offering to help people develop their expertise in the D3/JavaScript programming combination he used to make this.

250 Internet Maps, a lot of work by a PhD student, a postdoc, several lecturers and a rather busy Grand Challenges student: one picture of the awe-inspiring randomness that is the Internet.

I’m very happy with the progress that these students have made, both in terms of their knowledge development and in terms of their overall demonstration of the importance of collaboration and cooperation. We still have a way to go, including some of the most difficult reports and projects, and the first really big marking stage is possibly going to introduce some strain – but I’m optimistic that things will keep going along good lines because I’ve been nothing other than honest about what has to happen, what I’m trying to do and why I believe it’s important.

I think I can sleep well tonight. 🙂


Musing on scaffolding: Why Do We Keep Needing Deadlines?

One of the things about being a Computer Science researcher who is on the way to becoming a Computer Science Education Researcher is the sheer volume of educational literature that you have to read up on. There’s nothing more embarrassing than having an “A-ha!” moment that turns out to have been covered 50 years ago – the equivalent of saying “Water – when it freezes – becomes this new solid form I call Falkneranium!”

Ahem. So my apologies to all who read my ravings and think “You know, X said that … and a little better, if truth be told.” However, a great way to pick up on other things is to read other people’s blogs because they reinforce and develop your knowledge, as well as giving you links to interesting papers. Even when you’ve seen a concept before, unsurprisingly, watching experts work with that concept can be highly informative.

I was reading Mark Guzdial’s blog some time ago and his post on the Khan Academy’s take on Computer Science appealed to me for a number of reasons, not least for his discussion of scaffolding; in this case, a tutor-guided exploration of a space with students that is based upon modelling, coaching and exploration. Importantly, however, this scaffolding fades over time as the student develops their own expertise and needs our help less. It’s like learning to ride a bike – start with trainer wheels, progress to a running-alongside parent, aspire to free wheeling! (But call a parent if you fall over or it’s too wet to ride home.)

One of my key areas of interest is self-regulation in students – producing students who no longer need me because they are self-aware, reflective, critical thinkers, conscious of how they fit into the discipline and (sufficiently) expert to be able to go out into the world. My thinking around Time Banking is one of the ways that students can become self-regulating – they manage their own time in a mature and aware fashion without me having to waggle a finger at them to get them to do something.

Today, R (a postdoc in the Computer Science Education Research Group) and I spent about two hours brainstorming ideas for upcoming papers. I love a good brainstorm because, for some time afterwards, ideas and phrases come to me that allow me to really think about what I’m doing. Combining my reading of Mark’s blog and the associated links, especially about the deliberate reduction of scaffolding over time, with my thoughts on time management and pedagogy, I had this thought:

If imposed deadlines have any impact upon the development of student timeliness, why do we continue to need them into the final year of undergraduate and beyond? When do the trainer wheels come off?

Now, of course, the first response is that they are an administrative requirement, a necessary evil, so they are (somehow) exempt from a pedagogical critique. Hmm. For detailed reasons that will go into the paper I’m writing, I don’t really buy that. Yes, every course (and program) has a final administrative requirement. Yes, we need time to mark and return assignments (or to provide feedback on those assignments, depending on the nature of the assessment obviously). But all of the data I have says that not only do the majority of students hand up on the last day (if not later), but that they continue to do so into later years – getting later and later as they progress, rather than earlier and earlier. Our administrative requirement appears to have no pedagogical analogue.

So here is another reason to look at these deadlines, or at least at the way that we impose them in my institution. If an entry test didn’t correlate at all with performance, we’d change it. If a degree turned out students who couldn’t function in the world, industry consultation would pretty smartly suggest that we change it. Yet deadlines, which we accept with little comment most of the time, only appear to work when they are imposed but, over time, appear to show no development of the related skill that they supposedly practice – timeliness. Instead, we appear to enforce compliance and, as we would expect from behavioural training on external factors, we must continue to apply the external stimulus in order to elicit the appropriate compliance.

Scaffolding works. Is it possible to apply a deadline system that also fades out over time as our students become more expert in their own time management?

I have two days of paper writing on Thursday and Friday and I’m very much looking forward to further exploration of these ideas, especially as I continue to delve into the deep literature pile that I’ve accumulated!