Student Reflections – The End of Semester Process Report
Posted: June 27, 2012 Filed under: Education | Tags: authenticity, education, feedback, Generation Why, higher education, in the student's head, learning, measurement, principles of design, reflection, resources, student perspective, teaching, teaching approaches, thinking, tools

I’ve mentioned before that I have two process awareness reports in one of my first-year courses. One comes just after the monster “Library” prac, and one is right at the end of the course. These encourage the students to reflect on their assignment work and think about their software development process. I’ve just finished marking the final one and, as last year, it’s a predominantly positive and rewarding experience.
When faced with 2-4 pages of text to produce, most of my students sit down and write several, fairly densely packed pages telling me about the things that they’ve discovered along the way: lessons learned, pit traps avoided and (interestingly) the holes that they did fall into. It’s rare that I get cynical replies and for this course, from over 100 responses, I think that I had about 5 disappointing ones.
The disappointing ones included some that protested that I had to give them marks for something that was rubbish (uh, no I didn’t; read the assignment spec and the forum carefully), some that were thrown together in about a minute and said nothing, and some that were the outpourings of someone who wasn’t really happy with where they were, rather than something I could easily fix. Let’s move on from these.
I want to talk about the ones who had crafted beautiful diagrams where they proudly displayed their software process. The ones who shared great ideas about how to help students in the next offering. The ones who shared the links that they found useful with me, in case other students would like them. The ones who were quietly proud of mastering their areas of difficulty and welcomed the opportunity to tell someone about it. The one who used this quote from Confucius:
“A man without distant care must have near sorrow”
(人无远虑 必有近忧)
To explain why you had to look into the future when you did software design – don’t leave your assignments to the last minute, he was saying, look ahead! (I am, obviously, going to use that for teaching next semester!)
Overall, I find these reports to be a resolutely uplifting experience. The vast majority of my students have learnt what I wanted them to learn and have improved their professional skills but, as well, a large number of them have realised that the assignments, together with the lectures, develop their knowledge. Here is one of my favourite student quotes about the assignments themselves, which tells me that we’re starting to get the design right:
The real payoff was towards the end of the assignment. Often it would be possible to “just type code” and earn at least half the marks fairly easily. However there was always a more complex final-part to the assignment, one that I could not complete unless I approached it in a systematic, well thought out way. The assignments made it easy to see that a program of any real complexity would be nearly impossible to build without a well-defined design.
But students were also thinking about how they were going to take more general lessons out of this. Here’s another quote I like:
Three improvements that I am aiming to take on board for future subjects are: putting together a study timetable early on in the game; taking the time to read and understand the problem I’ve been given; and put enough time aside to produce a concise design which includes testing strategies.
The exam for this course has just been held and we’re assembling the final marks for inspection on Friday, which will tell us how this new offering has gone. But, at this stage, I have an incredibly valuable resource of student feedback to draw on when I have to do any minor adjustments to make this course better for the next offering.
From a load perspective, yes, having two essays in an otherwise computationally based course does put load on the lecturer/marker but I am very happy to pay that price. It’s such a good way to find out what my students are thinking and, from a personal perspective, be a little more confident that my co-teaching staff and I are making a positive change in these students’ lives. Better still, by sharing comments from cohort to cohort, we provide an authenticity to the advice that I would be hard pressed to achieve.
I think that this course, the first one I’ve really designed from the ground up and I’m aware of how rare that opportunity is, is actually turning into something good. And that, unsurprisingly, makes me very happy.
Time Banking and Plagiarism: Does “Soul Destroying” Have An Ethical Interpretation?
Posted: June 25, 2012 Filed under: Education | Tags: advocacy, blogging, design, education, educational problem, feedback, higher education, in the student's head, learning, plagiarism, resources, student perspective, teaching, teaching approaches, time banking, tools, work/life balance, workload

Yesterday, I wrote a post on the 40 hour week, to give an industrial basis for the notion of time banking, and I talked about the impact of overwork. One of the things I said was:
The crunch is a common feature in many software production facilities and the ability to work such back-breaking and soul-destroying shifts is often seen as a badge of honour or mark of toughness. (Emphasis mine.)
Back-breaking is me being rather overly emphatic regarding the impact of work, although in manual industries workplace accidents caused by fatigue and overwork can and do break backs – and worse – on a regular basis.
But soul-destroying? Am I just saying that someone will perform their tasks as an automaton or zombie, or am I saying something more about the benefit of full cognitive function – the soul as an amalgam of empathy, conscience, consideration and social factors? Well, the answer is that, when I wrote it, I was talking about mindlessness and the removal of the ability to take joy in work, which is on the zombie scale, but as I’ve reflected on the readings more, I am now convinced that there is an ethical dimension to fatigue-related cognitive impairment that is important to talk about. Basically, the more tired you get, the more likely you are to focus on the task itself, and this can have some serious professional and ethical consequences. I’ll provide a basis for this throughout the rest of this post.
The paper I was discussing, on why Crunch Mode doesn’t work, listed many examples from industry and one very interesting paper from the military. The paper, which had a broken link in the Crunch mode paper, may be found here and is called “Sleep, Sleep Deprivation, and Human Performance in Continuous Operations” by Colonel Gregory Belenky. Now, for those who don’t know, in 1997 I was a commissioned Captain in the Royal Australian Armoured Corps (Reserve), on detachment to the Training Group to set up and pretty much implement a new form of Officer Training for Army Reserve officers in South Australia. Officer training is a very arduous process and places candidates, the few who make it in, under a lot of stress and does so quite deliberately. We have to have some idea that, if terrible things happen and we have to deploy a human being to a war zone, they have at least some chance of being able to function. I had been briefed on most of the issues discussed in Colonel Belenky’s paper but it was only recently that I read through the whole thing.
And, to me today as an educator (I resigned my commission years ago), there are still some very important lessons, guidelines and warnings for all of us involved in the education sector. So stay with me while I discuss some of Belenky’s terminology and background. The first term I want to introduce is droning: the loss of cognitive ability through lack of useful sleep. As Belenky puts it, in the context of US Army Ranger training:
…the candidates can put one foot in front of another and respond if challenged, but have difficulty grasping their situation or acting on their own initiative.
What was most interesting, and may surprise people who have never served with the military, is that the higher the rank, the less sleep people got – and the higher level the formation, the less sleep people got. A Brigadier in charge of a Brigade is going to, on average, get less sleep than the more junior officers in the Brigade and a lot less sleep than a private soldier in a squad. As an officer, my soldiers were fed before me, rested before me and a large part of my day-to-day concern was making sure that they were kept functioning. This keeps on going up the chain and, as you go further up, things get more complex. Sadly, the people shouldering the most complex cognitive functions with the most impact on the overall battlefield are also the people getting the least fuel for their continued cognitive endeavours. They are the most likely to be droning: going about their work in an uninspired way and not really understanding their situation. So here is more evidence from yet another place: lack of sleep and fatigue lead to bad outcomes.
One of the key issues Belenky talks about is the loss of situational awareness caused by the accumulated sleep debt, fatigue and overwork suffered by military personnel. He gives an example of an Artillery Fire Direction Centre – this is where requests for fire support (big guns firing large shells at locations some distance away) come to and the human plotters take your requests, transform them into instructions that can be given to the gunners and then firing starts. Let me give you a (to me) chilling extract from the report, which the Crunch Mode paper also quoted:
Throughout the 36 hours, their ability to accurately derive range, bearing, elevation, and charge was unimpaired. However, after circa 24 hours they stopped keeping up their situation map and stopped computing their pre-planned targets immediately upon receipt. They lost situational awareness; they lost their grasp of their place in the operation. They no longer knew where they were relative to friendly and enemy units. They no longer knew what they were firing at. Early in the simulation, when we called for simulated fire on a hospital, etc., the team would check the situation map, appreciate the nature of the target, and refuse the request. Later on in the simulation, without a current situation map, they would fire without hesitation regardless of the nature of the target. (All emphasis mine.)
Here, perhaps, is the first inkling of what I realised I meant by soul destroying. Yes, these soldiers are overworked to the point of droning and are now shuffling towards zombiedom. But, worse, they have no real idea of their place in the world and, perhaps most frighteningly, despite knowing that accidents happen when fire missions are requested and having direct experience of rejecting what would have resulted in accidental hospital strikes, these soldiers have moved to a point of function where the only thing that matters is doing the work and calling the task done. This is an ethical aspect because, from their previous actions, it is quite obvious that there was both a professional and ethical dimension to their job as the custodians of this incredibly destructive weaponry – deprive them of enough sleep and they calculate and fire, no longer having the cognitive ability (or perhaps the will) to be ethical in their delivery. (I realise a number of you will have choked on your coffee slightly at the discussion of military ethics but, in the majority of cases, modern military units have a strong ethical code, even to the point of providing a means for soldiers to refuse to obey illegal orders. Most failures of this system in the military can be traced to failures in a unit’s ethical climate or to undetected instability in the soldiers: much as in the rest of the world.)
The message, once again, is clear. Overwork, fatigue and sleeplessness reduce the ability to perform as you should. Belenky even notes that the ability to benefit from training quite clearly deteriorates as the fatigue levels increase. Work someone hard enough, or let them work themselves hard enough, and not only aren’t they productive, they can’t learn to do anything else.
The notion of situational awareness is important because it’s a measure of your sense of place, in an organisational sense, in a geographical sense, in a relative sense to the people around you and also in a social sense. Get tired enough and you might swear in front of your grandma because your social situational awareness is off. But it’s not just fatigue over time that can do this: overloading someone with enough complex tasks can stress cognitive ability to the point where similar losses of situational awareness can occur.
Helmet fire is a vivid description of what happens when you have too many tasks to do, under highly stressful situations, and you lose your situational awareness. If you are a military pilot flying on instruments alone, especially with low or zero visibility, then you have to follow a set of procedures, while regularly checking the instruments, in order to keep the plane flying correctly. If the number of tasks that you have to carry out gets too high, and you are facing the stress of effectively flying the plane visually blind, then your cognitive load limits will be exceeded and you are now experiencing helmet fire. You are now very unlikely to be making any competent contributions at all at this stage but, worse, you may lose your sense of what you were doing, where you are, what your intentions are, which other aircraft are around you: in other words, you lose situational awareness. At this point, you are now at a greatly increased risk of catastrophic accident.
To summarise, if someone gets tired, stressed or overworked enough, whether acutely or over time, their performance goes downhill, they lose their sense of place and they can’t learn. But what does this have to do with our students?
A while ago I posted thoughts on a triage system for plagiarists – allocating our resources to those students we have the most chance of bringing back to legitimate activity. I identified the three groups as: sloppy (unintentional) plagiarism, deliberate (but desperate and opportunistic) plagiarism and systematic cheating. I think that, from the framework above, we can now see exactly where the majority of my ‘opportunistic’ plagiarists are coming from: sleep-deprived, fatigued and (by their own hands or not) over-worked students losing their sense of place within the course and becoming focused only on the outcome. Here, the sense of place is not just geographical; it is their role in the social and formal contracts that they have entered into with lecturers, other students and their institution. Their place in the agreements for ethical behaviour: doing the work themselves and submitting only that.
If professional soldiers who have received very large amounts of training can forget where their own forces are, sometimes to the tragic extent that they fire upon and destroy them, or become so cognitively impaired that they carry out the mission, and only the mission, with little of their usual professionalism or ethical concern, then it is easy to see how a student can become so task-focused that they start to think only about ending the task, by any means, to reduce the cognitive load and to allow themselves to get the sleep that their body desperately needs.
As always, this does not excuse their actions if they resort to plagiarism and cheating – it explains them. It also provides yet more incentive for us to try and find ways to reach our students and help them form systems for planning and time management that bring them closer to the 40 hour ideal, that reduce the all-nighters and the caffeine binges, and that allow them to maintain full cognitive function as ethical, knowledgeable and skilled professional practitioners.
If we want our students to learn, it appears that (for at least some of them) we first have to help them to marshal their resources more wisely and keep their awareness of exactly where they are, what they are doing and, in a very meaningful sense, who they are.
Time Banking: Aiming for the 40 hour week.
Posted: June 24, 2012 Filed under: Education | Tags: education, educational problem, higher education, in the student's head, learning, measurement, MIKE, principles of design, resources, student perspective, teaching, teaching approaches, time banking, tools, universal principles of design, work/life balance

I was reading an article on MetaFilter on the perception of future leisure from earlier last century and one of the commenters linked to a great article on “Why Crunch Mode Doesn’t Work: Six Lessons” via the International Game Designers Association. This article was partially in response to the quality of life discussions that ensued after ea_spouse outed the lifestyle (LiveJournal link) caused by her spouse’s ludicrous hours working for Electronic Arts, a game company. One of the key quotes from ea_spouse was this:
Now, it seems, is the “real” crunch, the one that the producers of this title so wisely prepared their team for by running them into the ground ahead of time. The current mandatory hours are 9am to 10pm — seven days a week — with the occasional Saturday evening off for good behavior (at 6:30pm). This averages out to an eighty-five hour work week. Complaints that these once more extended hours combined with the team’s existing fatigue would result in a greater number of mistakes made and an even greater amount of wasted energy were ignored.
This is an incredible workload and, as Evan Robinson notes in the “Crunch Mode” article, it is not only incredible but downright stupid, because every serious investigation into the effects of working more than 40 hours a week for extended periods, or of reducing sleep and accumulating a sleep deficit, has come to the same conclusion: hours worked after a certain point are not just worthless, they subtract worth from hours already worked.
Robinson cites studies and practices from industrialists such as Henry Ford, who reduced shift lengths to a 40-hour work week in 1926, attracting huge criticism, because 12 years of research had shown that the shorter work week meant more output, not less. These studies have been going on since the 18th century and continued well into the 1960s at least, and they all show the same thing: working eight hours a day, five days a week gives you more productivity because you make fewer mistakes, you accumulate less fatigue, and your workers are producing during their optimal production times (the first 4-6 hours of work) without sliding into their negatively productive zones.
As Robinson notes, the games industry doesn’t seem to have got the memo. The crunch is a common feature in many software production facilities and the ability to work such back-breaking and soul-destroying shifts is often seen as a badge of honour or mark of toughness. The fact that you can get fired for having the audacity to try and work otherwise also helps a great deal in motivating people to adopt the strategy.
Why spend so many hours in the office? Remember when I said that it’s sometimes hard for people to see what I’m doing because, when I’m thinking or planning, I can look like I’m sitting in the office doing nothing? Imagine what it looks like if, two weeks before a big deadline, someone walks into the office at 5:30pm and everyone’s gone home. What does this look like? Because of our conditioning, which I’ll talk about shortly, it looks like we’ve all decided to put our lives before the work – it looks like less than total commitment.
As a manager, if you can tell everyone above you that you have people at their desks 80+ hours a week and will have for the next three months, then you’re saying that “this work is important and we can’t do any more.” The fact that people were probably only useful for the first 6 hours of every day, and even then only for the first couple of months, doesn’t matter because it’s hard to see what someone is doing if all you focus on is the output. Those 80+ hour weeks are probably only now necessary because everyone is so tired, so overworked and so cognitively impaired, that they are taking 4 times as long to achieve anything.
Yes, that’s right. All the evidence says that more than 2 months of overtime and you would have been better off staying at 40 hours/week in terms of measurable output and quality of productivity.
Robinson lists six lessons, which I’ll summarise here because I want to talk about it in terms of students and why forward planning for assignments is good practice for smoother time management in the future. Here are the six lessons:
- Productivity varies over the course of the workday, with greatest productivity in the first 4-6 hours. After enough hours, you become unproductive and, eventually, destructive in terms of your output.
- Productivity is hard to quantify for knowledge workers.
- Five-day weeks of eight-hour days maximise long-term output in every industry that has been studied in the past century.
- At 60 hours per week, the loss of productivity caused by working longer hours overwhelms the extra hours worked within a couple of months.
- Continuous work reduces cognitive function 25% for every 24 hours. Multiple consecutive overnighters have a severe cumulative effect.
- Error rates climb with hours worked and especially with loss of sleep.
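The fourth and fifth lessons are essentially arithmetic, and the compounding is easy to underestimate. As a quick illustration, here is a toy model of the “25% per 24 hours” figure – the multiplicative (compounding) assumption is mine, not Robinson’s, but it shows how fast capacity collapses:

```python
# Toy model of Robinson's "continuous work reduces cognitive function
# 25% for every 24 hours" lesson. The compounding assumption is an
# illustration, not a claim from the article.

def remaining_capacity(all_nighters: int, loss_per_day: float = 0.25) -> float:
    """Fraction of baseline cognitive function left after a number of
    consecutive 24-hour periods without sleep, compounding the loss."""
    return (1.0 - loss_per_day) ** all_nighters

for nights in range(4):
    print(f"{nights} all-nighter(s): {remaining_capacity(nights):.0%} of baseline")
# 0 all-nighter(s): 100% of baseline
# 1 all-nighter(s): 75% of baseline
# 2 all-nighter(s): 56% of baseline
# 3 all-nighter(s): 42% of baseline
```

Two consecutive all-nighters already leave you, on this model, at barely half your baseline – which is roughly where the “taking 4 times as long to achieve anything” observation below comes from.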
My students have approximately 40 hours of assigned work a week, consisting of contact time and assignments, but many of them never really think about that. Most plan in other things around their ‘free time’ (they may need to work, they may play in a band, they may be looking after families or they may have an active social life) and they fit the assignment work and other study into the gaps that are left. Immediately, they will be over the 40 hour marker for work. If they have a part-time job, the three months of one of my semesters will, if not managed correctly, give them a lumpy time schedule alternating between some work and far too much work.
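To make the “over the 40 hour marker” arithmetic concrete, here is a back-of-the-envelope weekly budget. The categories and hours are hypothetical, invented for illustration – swap in your own numbers:

```python
# Hypothetical weekly time budget for a student. All numbers are
# illustrative assumptions, not figures from any real cohort.
ASSIGNED_STUDY = 40  # contact hours plus assignment work per week

other_commitments = {
    "part-time job": 15,
    "band practice": 4,
    "family and social": 10,
}

total = ASSIGNED_STUDY + sum(other_commitments.values())
print(f"Committed hours per week: {total}")
if total > 40:
    print(f"Over the 40-hour marker by {total - 40} hours: "
          "without planning, the semester will be a lumpy alternation "
          "of some work and far too much work.")
```

The point of the exercise isn’t the exact numbers; it’s that most students never do even this crude sum, so the overload only becomes visible as fatigue.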
Many of my students don’t know how they are spending their time. They switch on the computer, look at the assignment, Skype, browse, try something, compile, walk away, grab a bite, web surf, try something else – wow, three hours of programming! This assignment is really hard! That’s not all of them but it’s enough of them that we spend time on process awareness: working out what you do so you know how to improve it.
Many of my students see sports drinks, energy drinks and caffeine as a licence to not sleep. It doesn’t work long term, as most of us know, for exactly the reasons that long-term overwork and sleeplessness don’t work. Stimulants can keep you awake but you will still be carrying most, if not all, of your cognitive impairment.
Finally, and most importantly, enough of my students don’t realise that everything I’ve said up until now means that they are trying to sit my course with half a brain after about the halfway point – sooner, if they didn’t rest much between semesters.
I’ve talked about the theoretical basis for time banking and the pedagogical basis for time banking: this is the industrial basis for time banking. One day I hope that at least some of my students will be running parts of their industries and that we have taught them enough about sensible time management and work/life balance that, as people in control of a company, they look at real measures of productivity, they look at all of the masses of data supporting sensible ongoing work rates and that they champion and adopt these practices.
As Robinson says towards the end of the article:
Managers decide to crunch because they want to be able to tell their bosses “I did everything I could.” They crunch because they value the butts in the chairs more than the brains creating games. They crunch because they haven’t really thought about the job being done or the people doing it. They crunch because they have learned only the importance of appearing to do their best instead of really doing their best. And they crunch because, back when they were programmers or artists or testers or assistant producers or associate producers, that was the way they were taught to get things done. (Emphasis mine.)
If my students can see all of their requirements ahead of time, know what is expected, have been given enough process awareness, and have the will and the skill to undertake the activities, then we can potentially teach them a better way to get things done if we focus on time management in a self-regulated framework, rather than on imposed deadlines in a rigid authority-based framework. Of course, I still have a lot of work to do to demonstrate that this will work but, from industrial experience, we have yet another very good reason to try.
Time Banking: Foresightedness and Reward
Posted: June 23, 2012 Filed under: Education | Tags: advocacy, authenticity, curriculum, design, education, educational problem, higher education, learning, teaching, time banking, work/life balance, workload

You may have noticed that I’ve stopped numbering the time banking posts – you may not have noticed that they were numbered in the first place! The reason is fairly simple and revolves around the fact that the numbers are actually meaningless. It’s not as if I have a grand plan for the final sequence of the time banking posts. I do have a general idea but the order can change as one idea or another takes me, and I feel that numbering them makes it look as if there is some grand sequence.
There isn’t. That’s why they all tend to have subtitles after them so that they can be identified and classified in a cognitive sequence. So, why am I telling you this? I’m telling you this so that you don’t expect “Time Banking 13” to be something special, or (please, no) “Time Banking 100” to herald the apocalypse.

The Druids invented time banking but could never find a sufficiently good Oracle to make it work. The Greeks had the Oracle but not the bank. This is why the Romans conquered everywhere. True story!
If I’m going to require students to self-regulate then, whether through operant or phenomenological mechanisms, the outcomes that they receive are going to have to be shaped to guide the student towards a self-regulating model. In simple terms, they should never feel that they have wasted their time, that they are under-appreciated or that they have been stupid to follow a certain path.
In particular, if we’re looking at time management, then we have to ensure that time spent in advance is never considered to be wasted time. What does that mean to me as a teacher, if I set an assignment in advance and students put work towards it – I can’t change the assignment arbitrarily. This is one of the core design considerations for time banking: if deadlines are seen as arbitrary (and extending them in case of power failures or class-wide lack of submission can show how arbitrary they are) then we allow the students to make movement around the original deadlines, in a way that gives them control without giving us too much extra work. If I want my students to commit to planning ahead and doing work before the due date then some heavy requirements fall on me:
- I have to provide the assignment work ahead of schedule and, preferably, for the entire course at the start of the semester.
- The assignments stay the same throughout that time. No last minute changes or substitutions.
- The oracle is tied to the assignment and is equally reliable.
This requires a great deal of forward planning and testing but, more importantly, it requires a commitment from me. If I am asking my students to commit, I have to commit my time, planning and attention to detail to my students. It’s that simple. Nobody likes to feel like a schmuck. Like they invested time under false pretences. Like they worked on what they thought was a commitment, only to find that someone just hadn’t really thought things through.
Wasting time and effort discourages people. It makes people disengage. It makes them less trustful of you as an educator. It makes them less likely to trust you in the future. It reduces their desire to participate. This is the antithesis of what I’m after with increasing self-regulation and motivation to achieve this, which I label under the banner of my ‘time banking’ project.
But, of course, it’s not as if we’re not already labouring under this commitment to our students, at least implicitly. If we don’t follow the three requirements above then, at some stage, students will waste effort and, believe me, they’re going to question what they’re doing and why they’re bothering, and some of them will drop out, drift away and be lost to us forever. Never thinking that you’ve wasted your time, never feeling like a schmuck, seeing your ideas realised, achieving goals: that’s how we reward students, that’s what can motivate students and that’s how we can move them on to higher levels of function and achievement.
Flow, Happiness and the Pursuit of Significance
Posted: June 22, 2012 Filed under: Education | Tags: Csíkszentmihályi, curriculum, education, educational research, flow, higher education, learning, measurement, MIKE, reflection, resources, student perspective, teaching, teaching approaches, time banking, tools, universal principles of design, vygotsky, Zone of proximal development

I’ve just been reading Deirdre McCloskey’s article on “Happyism” in The New Republic. While there are a number of points I could pick at in the article (I question her specific example of statistical significance, and I think she’s oversimplified a number of the philosophical points), there are a lot of interesting thoughts and arguments within it.
One of my challenges in connecting with my students is that of making them understand what the benefit is to them of adopting, or accepting, suggestions from me as to how to become better as discipline practitioners, as students and, to some extent, as people. It would be nice if doing the right thing in this regard could give the students a tangible and measurable benefit that they could accumulate on some sort of meter – I have performed well, my “success” meter has gone up by three units. As McCloskey points out, this effectively requires us to have a meter for something that we could call happiness, but it is then tied directly to events that give us pleasure, rather than a sequence of events that could give us happiness. Workflows (chains of actions that lead to an eventual outcome) can be assessed for accuracy and then the outcome measured, but it is only when the workflow is complete that we can assess the ‘success’ of the workflow and then derive pleasure, and hence happiness, from the completion of the workflow. Yes, we can compose a workflow from sub-workflows but we will hit the same problem if we focus on an outcome-based model – at some stage, we are likely to be carrying out an action that can lead to an event from which we can derive a notion of success, but this requires us to be foresighted and see the events as a chain that results in this outcome.
And this is very hard to meter and display in a way that says anything other than “Keep going!” Unsurprisingly, this is not really the best way to provide useful feedback, reward or fodder for self-actualisation.
I have a standing joke that, as a runner, I go to a sports doctor because if I go to a General Practitioner and say “My leg hurts after I run”, the GP will just say “Stop running.” I am enough of a doctor to say that to myself – so I seek someone who is trained to deal with my specific problems and who can give me a range of feedback that may include “stop running” because my injuries are serious or chronic, but can provide me with far more useful information from which I can make an informed choice. The happiness meter must be able to work with workflow in some way that is useful – keep going is not enough. We therefore need to look at the happiness meter.
McCloskey identifies Bentham, founder of utilitarianism, as the original "pleasure meter" proponent, and implicitly criticises his felicific calculus for subverting our assessment of "happiness units" (utils) into a form that assumes both that we can reasonably compare utils between different people and that we can assemble all of our life's experiences in a meaningful way in terms of utils in the first place!
To address the issue of workflow itself, McCloskey refers to the work of Mihály Csíkszentmihályi on flow: "the absorption in a task just within our competence". I have talked about this before, in terms of Vygotsky's zone of proximal development and the use of a group to assist people who are just outside of the zone of flow. The string of activities can now be measured in terms of satisfaction or immersion, as well as the outcomes of this process. Of course, we have the outcomes of the process in terms of direct products and we have outcomes in terms of personal achievement at producing those products. Which of these go onto the util meter, given that they are utterly self-assessed, subjective and, arguably, orthogonal in some cases? (If you have ever done your best, been proud of what you did, but failed in your objective, you know what I'm talking about.)
My reading of McCloskey is probably a little generous because I find her overall argument appealing. I believe that her argument may be distilled thus:
- If we are going to measure, we must measure sensibly and be very clear in our context and the interpretation of significance.
- If we are going to base any activity on our measurement, then the activity we create or change must be related to the field of measurement.
Looking at the student experience in this light, asking students if they are happy with something is, ultimately, a pointless activity unless I either provide well-defined training in my measurement system and scale, or I am looking for a measurement of better or worse. This is confounded by simple cognitive biases including, but not limited to, the Hawthorne Effect and confirmation bias. However, measuring what my students are doing, as Csíkszentmihályi did in the flow experiments, will show me if they are so engaged with their activities that they are staying in the flow zone. Similarly, looking at participation and measuring outputs in collaborative activities where I would expect the zone of proximal development to be in effect is going to be far more revealing than asking students if they liked something or not.
As McCloskey discusses, there is a point at which we don't seem to get any happier, but it is very hard to tell if this is a fault in our measurement and our presumption of a three-point non-interval scale, and it then often degenerates into a form of intellectual snobbery that, unsurprisingly, favours the elites who will be studying the non-elites. (As an aside, I learnt a new word. Clerisy: "a distinct class of learned or literary people". If you're going to talk about the literate elites, it's nice to have a single word to do so!) In student terms, does this mean that there is a point at which even the keenest of our best and brightest will not try some of our new approaches? The question, of course, is whether the pursuit of happiness is paralleling the quest for knowledge, or whether this is all one long endured workflow that results in a pleasure quantum labelled 'graduation'.
As I said, I found it to be an interesting and thoughtful piece, despite some problems, and I recommend it to you, even if we must then start a large debate in the comments on how much I misled you!
Your love is like bad measurement.
Posted: June 19, 2012 Filed under: Education, Opinion | Tags: advocacy, data visualisation, education, educational problem, ethics, higher education, learning, measurement, MIKE, teaching, teaching approaches, thinking, universal principles of design, workload Leave a comment(This is my 200th post. I’ve allowed myself a little more latitude on the opinionated scale. Educational content is still present but you may find some of the content slightly more confronting than usual. I’ve also allowed myself an awful pun in the title.)
People like numbers. They like solid figures, percentages, clear statements and certainty. It’s a great shame that mis-measurement is so easy to do, when you search for these figures, and so much a part of our lives. Today, I’m going to discuss precision and recall, because I eventually want to talk about bad measurement. It’s very easy to get measurement wrong but, even when it’s conducted correctly, the way that we measure or the reasons that we have for measuring can make even the most precise and delicate measurements useless to us for an objective scientific purpose. This is still bad measurement.
I'm going to give you a big bag of stones. Some of the stones have diamonds hidden inside them. Some of the stones are red on the outside. Let's say that you decide to assume that all stones that have been coloured red contain diamonds. You pull out all of the red stones, but what you actually want is diamonds. The number of red stones is referred to as the number of retrieved instances – the things that you have selected out of that original bag of stones. Now, you get to crack them open and find out how many of them have diamonds. Let's say you have R red stones and D1 diamonds that you found once you opened up the red stones. The precision is the fraction D1/R: what percentage of the stones that you selected (red) were actually the ones that you wanted (diamonds). Now let's say that there are D2 diamonds (where D2 is greater than or equal to zero) left back in the bag. The total number of diamonds in that original bag was D1+D2, right? The recall is the fraction of the total number of things that you wanted (diamonds, given by D1+D2) that you actually got (diamonds that were also painted red, which is D1). So this fraction is D1/(D1+D2), the number you got divided by the number that was there for you to actually get.
If I don’t have any other mechanism that I can rely upon for picking diamonds out of the bag (assuming no-one has conveniently painted them red), and I want all of the diamonds, then I need to take all of them out. This will give me a recall of 100% (D2 will be 0 as there will be nothing left in the bag and the fraction will be D1/D1). Hooray! I have all of the diamonds! There’s only one problem – there are still only so many diamonds in that bag and (maybe) a lot more stones, so my precision may be terrible. More importantly, my technique sucks (to use an official term) and I have no actual way of finding diamonds. I just happen to have used a mechanism that gets me everything so it must, as a side effect, get me all of the diamonds. I haven’t actually done anything except move everything from one bag to another.
One of the things about selection mechanisms is that people often seem happy to talk about one side of the precision/recall issue. “I got all of them” is fine but not if you haven’t actually reduced your problem at all. “All the ones I picked were the right ones” sounds fantastic until you realise that you don’t know how many were left behind that were also the ones that you wanted. If we can specify solutions (or selection strategies) in terms of their precision and their recall, we can start to compare them. This is an example of how something that appears to be straightforward can actually be a bad measurement – leave out one side of precision or recall and you have no real way of assessing the utility of what it is that you’re talking about, despite having some concrete numbers to fall back on.
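For the programmers in the audience, the stones-and-diamonds arithmetic can be sketched in a few lines of Python. The contents of the bag below are invented purely for illustration:

```python
# Precision and recall for the red-stones-and-diamonds example.
# Each stone is a (is_red, has_diamond) pair; the mix is made up.
bag = ([(True, True)] * 6 + [(True, False)] * 4 +
       [(False, True)] * 2 + [(False, False)] * 8)

retrieved = [s for s in bag if s[0]]              # all red stones (R)
d1 = sum(1 for s in retrieved if s[1])            # diamonds among the red stones
d2 = sum(1 for s in bag if s[1] and not s[0])     # diamonds left behind in the bag

precision = d1 / len(retrieved)                   # D1 / R
recall = d1 / (d1 + d2)                           # D1 / (D1 + D2)

print(precision)  # 0.6  -> 6 of the 10 red stones held diamonds
print(recall)     # 0.75 -> 6 of the 8 diamonds were retrieved
```

Changing the mix shows the one-sidedness problem directly: take every stone out of the bag and recall jumps to 1.0 while precision collapses to the base rate of diamonds.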
You may have heard this expressed in another way. Let's assume that you can have a mechanism for determining if people are innocent or guilty of a crime. If it was a perfect mechanism, then only innocent people would go free and only guilty people would go to jail. (Let's assume it's a crime for which a custodial sentence is appropriate.) Now, let's assume that we don't have a perfect mechanism so we have to make a choice – either we set up our system so that no innocent person goes to jail, or we set up our system so that no guilty person is set free. It's fairly easy to see how our interpretation of the presumption of innocence, the notion of reasonable doubt and even evidentiary laws would be constructed in different ways under either of these assumptions. Ultimately, this is an issue of precision and recall and by understanding these concepts we can define what we are actually trying to achieve. (The foundation of most modern law is that innocent people don't go to jail. A number of changes in certain areas are moving more towards a 'no one who may be guilty of crimes of a certain type will escape us' model and, unsurprisingly, this is causing problems due to inconsistent applications of our simple definitions from above.)
The reason that I brought all of this up was to talk about bad measurement, where we measure things and then over-interpret (torture the data) or over-assume (the only way that this could have happened was…) or over-claim (this always means that). It is possible to have a precise measurement of something and still be completely wrong about why it is occurring. It is possible that all of the data that we collect is the wrong data – collected because our fundamental hypothesis is in error. Data gives us information but our interpretative framework is crucial in determining what use we can make of this data. I talked about this yesterday and stressed the importance of having enough data, but you really have to know what your data means in order to be sure that you can even start to understand what 'enough data' means.
One example is the miasma theory of disease – the idea that bad smells caused disease outbreaks. You could construct a gadget that measured smells and then, say in 18th Century England, correlate this with disease outbreaks – and get quite a good correlation. This is still a bad measurement because we're actually measuring two effects, rather than a cause (dead mammals introducing decaying matter, faecal bacteria and so on into water or food pathways) and the effects (the smell of decomposition, and diseases like cholera and E. coli contamination). We can collect as much 'smell' data as we like, but we're unlikely to learn much more, because any techniques that focus on the smell and reducing it will only work if we do things like remove the odiferous elements, rather than just using scent bags and pomanders to mask the smell.
To look at another example, let’s talk about the number of women in Computer Science at the tertiary level. In Australia, it’s certainly pretty low in many Universities. Now, we can measure the number of women in Computer Science and we can tell you exactly how many are in a given class, what their average marks are, and all sorts of statistical data about them. The risk here is that, from the measurements alone, I may have no real idea of what has led to the low enrolments for women in Computer Science.
I have heard, far too many times, that there are too few women in Computer Science because women are 'not good at maths/computer science/non-humanities courses' and, as I also mentioned recently when talking about the work of Professor Seron, this doesn't appear to be the reason at all. When we look at female academic performance, reasons for doing the degree and try to separate men and women, we don't get the clear separation that would support this assertion. In fact, what we see is that the representation of women in Computer Science is far lower than we would expect to see from the (marginally small) difference that does appear at the very top end of the data. Interesting. Once we actually start measuring, we have to question our hypothesis.
Or we can abandon our principles and our heritage as scientists and just measure something else that agrees with us.
You don't have to get your measurement methods wrong to conduct bad measurement. You can also be looking for the wrong thing and measuring it precisely, because you are attempting to find data that verifies your hypothesis: rather than being open to change if you find contradiction, you can twist your measurements to meet your hypothesis, collect only the data that supports your assumptions, and over-generalise from a small scale, or from another area.
When we look at the data, and survey people to find out the reasons behind the numbers, we reduce the risk that our measurements don't actually serve a clear scientific purpose. For example, and as I've mentioned before, the reason that there are too few women studying Computer Science appears to be unpleasantly circular and relates to the fact that there are too few women in the discipline overall, reducing support in the workplace, development opportunities and producing a two-speed system that excludes the 'newcomers'. Sorry, Ada and Grace (to name but two), it turns out that we seem to have very short memories.
Too often, measurement is conducted to reassure ourselves of our confirmed and immutable beliefs – people measure to say that ‘this race of people are all criminals/cheats/have this characteristic’ or ‘women cannot carry out this action’ or ‘poor people always perform this set of actions’ without necessarily asking themselves if the measurement is going to be useful, or if this is useful pursuit as part of something larger. Measuring in a way that really doesn’t provide any more information is just an empty and disingenuous confirmation. This is forcing people into a ghetto, then declaring that “all of these people live in a ghetto so they must like living in a ghetto”.
Presented a certain way, poor and misleading measurement can only lead to questionable interpretation, usually to serve a less than noble and utterly non-scientific goal. It’s bad enough when the media does it but it’s terrible when scientists, educators and academics do it.
Without valid data, collected on the understanding that a world-changing piece of data could actually change our minds, all our work is worthless. A world based on data collection purely for the sake of propping up existing beliefs, with no possibility of discovery and adaptation, is a world of very bad measurement.
The Many Types of Failure: What Does Zero Mean When Nothing Is Handed Up?
Posted: June 18, 2012 Filed under: Education | Tags: advocacy, blogging, curriculum, design, education, educational problem, educational research, higher education, in the student's head, learning, measurement, MIKE, principles of design, reflection, resources, student perspective, teaching, teaching approaches, thinking, time banking, tools, workload 3 CommentsYou may have read about the Edmonton, Canada, teacher who expected to be sacked for handing out zeros. It’s been linked to sites as diverse as Metafilter, where a long and interesting debate ensued, and Cracked, where it was labelled one of the ongoing ‘pussifications’ of schools. (Seriously? I know you’re a humour site but was there some other way you could have put that? Very disappointed.)
Basically, the Edmonton Public School Board decided that, rather than just give a zero for a missed assignment, this would be used as a cue for follow-up work and additional classes at school or home. Their argument – you can't mark work that hasn't been submitted, so let's use this as a trigger to try and get submission, in case the source is external or behavioural. This, of course, puts the onus on the school to track the students, get the additional work completed, and then mark out of sequence. Lynden Dorval, the high school teacher who is at the centre of this, believes that there is too much manpower involved in doing this and that giving the student a zero forces them to come to you instead.

Some of you may never have seen one of these before. This is a zero, which is the lowest mark you can be awarded for any activity. (I hope!)
Now, of course, this has split people into two fairly neat camps – those who believe that Dorval is the “hero of zero” and those who can see the benefit of the approach, including taking into account that students still can fail if they don’t do enough work. (Where do I stand? I’d like to know a lot more than one news story before I ‘pick a side’.) I would note that a lot of tired argument and pejorative terminology has also come to the fore – you can read most of the buzzwords used against ‘progressives’ in this article, if you really want to. (I can probably summarise it for you but I wouldn’t do it objectively. This is just one example of those who are feting Dorval.)
Of course, rather than get into a heated debate where I really don’t have enough information to contribute, I’d rather talk about the basic concept – what exactly does a zero mean? If you hand something in and it meets none of my requirements, then a zero is the correct and obvious mark. But what happens if you don’t hand anything in?
With the marking approach that I practice and advertise, which uses time-based mark penalties for late submission, students are awarded marks for what they get right, rather than have marks deducted for what they do wrong. Under this scheme, "no submission" gives me nothing to mark, which means that I cannot give you any marks legitimately – so is this a straightforward zero situation? The time penalties are in place as part of the professional skill requirements and are clearly advertised, and consistently policed. I note that I am still happy to give students the same level of feedback on late work, including their final mark without penalty, which meets all of the pedagogical requirements, but the time management issues can cost a student some, most or all of their marks. (Obviously, I'm actively working on improving engagement with time management through mechanisms that are not penalty based but that's for other posts.)
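A time-based penalty of this general shape can be sketched in a few lines; note that the per-day figure and the zero floor below are invented for illustration, not the actual scheme from my course:

```python
import math

def penalised_mark(raw_mark: float, hours_late: float,
                   penalty_per_day: float = 10.0) -> float:
    """Apply a per-day late penalty to a raw mark.

    raw_mark is the mark earned for what the student got right;
    penalty_per_day (marks deducted per started 24-hour period)
    is a hypothetical figure. The student still sees raw_mark as
    feedback; this function only computes the awarded mark.
    """
    days_late = math.ceil(max(hours_late, 0) / 24)
    return max(raw_mark - days_late * penalty_per_day, 0.0)

print(penalised_mark(85, 0))    # 85.0 -> on time, full raw mark
print(penalised_mark(85, 30))   # 65.0 -> 30 hours counts as 2 days late
print(penalised_mark(15, 100))  # 0.0  -> penalty floors at zero
```

The zero floor is the interesting design point: with it, a very late but submitted piece of work and a non-submission both score zero, yet only the former gives me something to mark and feed back on.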
As an aside, we have three distinct fail grades for courses at my University:
- Withdraw Fail (WF), where a student has dropped the course but after the census date. They pay the money, it stays on their record, but as a WF.
- Fail (F), student did something but not enough to pass.
- Fail No Submission (FNS), student submitted no work for assessment throughout the course.
Interestingly, for my Uni, FNS has a numerical grade of 0, although this is not shown on the transcript. Zero, in the course sense, means that you did absolutely nothing. In many senses, this represents the nadir of student engagement, given that many courses have somewhere from 1-5, maybe even 10%, of marks available for very simple activities that require very little effort.
My biggest problem with late work, or no submission, is that one of the strongest messages I have from that enormous corpus of student submission data that I keep talking about is that starting a pattern of late or no submission is an excellent indicator of reduced overall performance and, with recent analysis, a sharply decreased likelihood of making it to third year (final year) in your studies. So I really want students to hand something in – which brings me to the crux of the way that we deal with poor submission patterns.
Whichever approach I take should be the one that is most likely to bring students back into a regular submission pattern.
If the Public School Board's approach is increasing completion rates now, and this has a knock-on effect that increases completion in future years? Maybe it's time to look at that resourcing profile and put the required money into this project. If it's a transient peak that falls off because we're just passing people who should be failing? Fuhgeddaboutit.
To quote Sherlock Holmes (Conan Doyle, naturally):
It is a capital mistake to theorize before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to suit facts. (A Scandal in Bohemia)
“Data! Data! Data!” he cried impatiently. “I can’t make bricks without clay.” (The Adventure of the Copper Beeches)
It is very easy to take a side on this and it is very easy to see how both sides could have merit. The issue, however, is what each of these approaches actually does to encourage students to submit their assignment work in a more timely fashion. Experiments, experimental design, surveys, longitudinal analysis, data, data, data!
If I may end by waxing lyrical for a moment (and you will see why I stick to technical writing):
If zeroes make Heroes, then zeroes they must have! If nulls make for dulls, then we must seek other ways!
Time Banking IV: The Role of the Oracle
Posted: June 14, 2012 Filed under: Education | Tags: education, educational problem, educational research, feedback, Generation Why, higher education, learning, measurement, principles of design, resources, student perspective, teaching, teaching approaches, time banking 1 CommentI’ve never really gone into much detail on how I would make a system like Time Banking work. If a student can meet my requirements and submit their work early then, obviously, I have to provide some sort of mechanism that allows the students to know that my requirements have been met. The first option is that I mark everything as it comes in and then give the student their mark, allowing them to resubmit until they get 100%.
That's not going to work, unfortunately, as, like so many people, I don't have the time to mark every student's assignment over and over again. I wait until all assignments have been submitted, review them as a group, mark them as a group and get the best use out of staying in the same contextual framework and working on the same assignments. If I took a piecemeal approach to marking, it would take me longer and, especially if the student still had some work to do, I could end up marking the same assignment 3, 4, however many times, multiplying my load in an unsupportable way.
Now, of course, I can come up with simple measures that the students can check for themselves. The trouble is setting something that a student can mis-measure as easily as they can measure it. If I say "You must have at least three pages for an essay", I risk getting three pages of rubbish or triple-spaced 18-point print. It's the same for any measure of quantity (number of words, number of citations, length of comments and so on) instead of quality. The problem is, once again, that if the students were capable of determining the quality of their own work and the effort and quality required to pass, they wouldn't need time banking, because their processes are already mature!
So I'm looking for an indicator of quality that a student can use to check their work and that costs me only (at most) a small amount of effort. In Computer Science, I can ask the students to test their work against a set of known inputs, running their programs to see what outputs we get. There is then the immediate problem of students hacking their code and just throwing it against the testing suite to see if they can fluke their way to a solution. So, even when I have an idea of how my oracle, my measure of meeting requirements, is going to work, there are still many implementation details to sort out.
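A minimal sketch of such an oracle might look like the following; the exercise ("read two integers, print their sum"), the test cases and the deliberately buggy student submission are all invented for illustration:

```python
# Hypothetical (input, expected output) pairs for a student exercise:
# given two integers, return their sum.
test_cases = [((2, 3), 5), ((10, -4), 6), ((0, 0), 0)]

def run_oracle(solution, cases):
    """Run `solution` against each test case and report a pass count.

    Only the count is returned, not which cases failed, which is one
    cheap way to discourage students from hacking their code directly
    against the testing suite."""
    passed = sum(1 for args, expected in cases if solution(*args) == expected)
    return passed, len(cases)

# A hypothetical (and deliberately buggy) student submission:
def student_add(a, b):
    return a + b if b >= 0 else b - a   # wrong when b is negative

print(run_oracle(student_add, test_cases))  # (2, 3): one case fails
```

How much of the failure to reveal (count only, first failing input, full diff) is exactly the "detailed oracle" versus "limited oracle" design decision, and it should change how students respond to the feedback.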
Fortunately, to help me, I have over five years' worth of student data through our automated assignment submission gateway, where some assignments have an oracle, some have a detailed oracle, some have a limited oracle and some just say "Thanks for your submission." The next stage in the design of the oracle is to go back and see what impact the indications of progress and completeness had on the students. Most important, for me, is the indication of how many marks a student had to achieve before they stopped making fresh submissions. Before the due date, did they always strive for 100%? If late, did they tend to stop at more than 50% of achieved marks, or more than 40% when trying to avoid receiving a failing grade based on low assignment submission?
Are there significant and measurable differences between assignments with an oracle and those that have none (or a ‘stub’, so to speak)? I know what many people expect to find in the data, but now I have the data and I can go and interrogate that!
Every time that I have questions like this about the implementation, I have a large advantage in that I already have a large control body of data, before any attempts were made to introduce time banking. I can look at this to see what student behaviour is like and try to extract these elements and use them to assist students in smoothing out their application of effort and develop more mature time management approaches.
Now to see what the data actually says – I hope to post more on this particular aspect in the next week or so.
Time Banking III: Cheating and Meta-Cheating
Posted: June 13, 2012 Filed under: Education | Tags: authenticity, blogging, curriculum, design, education, educational problem, ethics, games, higher education, in the student's head, teaching, teaching approaches, thinking, time banking Leave a commentOne of the problems with setting up any new marking system is that, especially when you're trying to do something a bit out of the ordinary, you have to make sure that you don't produce a system that can be gamed or manipulated to let people get an unfair advantage. (Students are very resourceful when it comes to this – anyone who has received, on more than one occasion from the same student, a mysteriously corrupted Word document of precisely the right length and with enough relevant strings to look convincing, only for a working version to be handed up the next Monday, knows exactly what I'm talking about.)
As part of my design, I have to be clear to the students what I do and don’t consider to be reasonable behaviour (returning to Dickinson and McIntyre, I need to be clear in my origination and leadership role). Let me illustrate this with an anecdote from decades ago.
In the early 90s, I helped to write and run a number of Multi User Dungeons (MUDs) – the text-based forerunners of the Massively Multiplayer On-line Role Playing Games, such as World of Warcraft. The games had very little graphical complexity and we spent most of our time writing the code that drove things like hitting orcs with swords or allowing people to cast spells. Because of the many interactions between the software components in the code, it was possible for unexpected things to happen – not just bugs where code stopped working but strange 'features' where things kept working but in an odd way. I knew a guy, let's call him K, who was a long-term player of MUDs. If the MUD was any good, he'd not only played it, he'd effectively beaten it. He knew every trick, every lurk, the best way to attack a monster but, more interestingly, he had a nose for spotting errors in the code and taking advantage of them. One time, in a game we were writing, we spotted K walking around with something like 20-30 'empty' water bottles on him. (As game writers, wizards, we could examine any object in the game, which included seeing what players were carrying.)
This was weird. Players had a limited amount of stuff that they could carry, and K should have had no reason to carry those bottles. When we examined him, we discovered that we’d made an error in the code so that, when you drank from a bottle and emptied it, the bottle ended up weighing LESS THAN NOTHING. (It was a text game and our testing wasn’t always fantastic – I learnt!) So K was carrying around the in-game equivalent of helium balloons that allowed him to carry a lot more than he usually would.
Of course, once we detected it, we fixed the code and K stopped carrying so many empty bottles. (Although, I have no doubt that he personally checked each and every container we put into the game from that point on to see if he could get it to happen again.) Did we punish him? No. We knew that K would need some 'flexibility' in his exploration of the game, knowing that he would press hard against the rubber sheet to see how much he could bend reality, but also knowing that he would spot problems that would take us weeks or months of time to find on our own. We took him into our new and vulnerable game knowing that if he tried to actually break or crash the game, or share the things he'd learned, we'd close off his access. And he knew that too.
Had I placed a limit in play that said “Cheating detected = Immediate Booting from the game”, K would have left immediately. I suspect he would have taken umbrage at the term ‘cheating’, as he generally saw it as “this is the way the world works – it’s not my fault that your world behaves strangely”. (Let’s not get into this debate right now, we’re not in the educational plagiarism/cheating space right now.)
We gave K some exploration space, more than many people would feel comfortable with, but we maintained some hard pragmatic limits to keep things working and we maintained the authority required to exercise these limits. In return, K helped us although, of course, he played for the fun of the game and, I suspect, the joy of discovering crazy bugs. However, overall, this approach saved us effort and load, and allowed us to focus on other things with our limited resources. Of course, to make this work required careful orientation and monitoring on our behalf. Nothing, after all, comes for free.
If I'd asked K to fill out forms describing the bugs he'd found, he'd never have done it. If I'd had to write detailed test documents for him, I wouldn't have had time to do anything else. But it also illustrates something that I have to be very cautious of, which I've embodied as the 'no cheating/gaming' guideline for Time Banking. One of the problems with students at early development stages is that they can assume that their approach is right, or even assert that their approach is the correct one, when it is not aligned with our goals or intentions at all. Therefore, we have to be clear on the goals and open about our intentions. Given that the goal of Time Banking is to develop a mature approach to time management, using the team approach I've already discussed, I need to be very clear in the guidance I give to students.
However, I also need to be realistic. There is a possibility that, especially on the first run, I introduce a feature in either the design or the supporting system that allows students to do something that they shouldn’t. So here’s my plan for dealing with this:
- There is a clear no-cheating policy. Get caught doing anything that tries to subvert the system or get you more hours in any other way than submitting your own work early and it’s treated as a cheating incident and you’re removed from the time bank.
- Reporting a significant fault in the system, that you have either deduced, or observed, is worth 24 hours of time to the first person who reports it. (Significant needs definition but it’s more than typos.)
I need the stick. Some of my students need to know that the stick is there, even if the stick is never needed, but I really can’t stand the stick. I have always preferred the carrot. Find me a problem and you get an automatic one-day extension, good for any assignment in the bank. Heck, I could even see my way clear to making this ‘liftable’ hours – 24 hours you can hand on to a friend if you want. If part of your team thinking extends to other people and, instead of a gifted student handing out their assignment, they hand out some hours, I have no problem with that. (Mr Pragmatism, of course, places a limit on the number of unearned hours you can do this with, from the recipient’s, not the donor’s perspective. If I want behaviour to change, then people have to act to change themselves.)
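Hand-on hours with a recipient-side cap could be sketched as below; the 24-hour gift size and the 48-hour cap are hypothetical numbers chosen for illustration, not values from the actual design:

```python
# Hypothetical sketch: gifting banked hours with a recipient-side cap
# on unearned hours, so behaviour change still rests with the recipient.
RECIPIENT_GIFT_CAP = 48  # assumed max unearned hours one student may receive

def gift_hours(donor_balance: int, recipient_balance: int,
               recipient_gifted_so_far: int, hours: int = 24):
    """Transfer `hours` from donor to recipient, refusing the transfer
    if it would push the recipient past the unearned-hours cap.
    Returns the new (donor, recipient, gifted-so-far) totals."""
    if hours > donor_balance:
        raise ValueError("donor does not have enough banked hours")
    if recipient_gifted_so_far + hours > RECIPIENT_GIFT_CAP:
        raise ValueError("recipient would exceed the unearned-hours cap")
    return (donor_balance - hours, recipient_balance + hours,
            recipient_gifted_so_far + hours)

print(gift_hours(72, 0, 0))  # (48, 24, 24)
```

Putting the cap on the recipient rather than the donor matches the intent above: a generous student can give freely, but no one can coast indefinitely on gifted time.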
My design needs to keep the load down, the rewards up but, most importantly, the rewards have to move the students towards the same goals as the primary activity or I will cause off-task optimisation and I really don’t want to do that.
I’m working on a discussion document to go out to people who think this is a great idea, a terrible idea, the worst idea ever, or something that they’d like to try, so that I can bring all of those thoughts back together and, as a group of people dedicated to education, come up with something that might be useful – OR, and it’s a big or, come up with the dragon-slaying notion that kills time banking stone dead and provides the sound theoretical and evidence-based support for why we must and always should use deadlines. I’m prepared for one, the other, both or neither to be true, along with degrees along the axis.
Time Banking II: We Are a Team
Posted: June 12, 2012 Filed under: Education | Tags: curriculum, design, education, educational problem, educational research, feedback, higher education, learning, measurement, reflection, resources, teaching, teaching approaches, time banking, tools, vygotsky

In between getting my camera-ready copy together for ICER – and I’m still pumped that our paper got into ICER – I’ve been delving deep into the literature and the psychological and pedagogical background that I need to confirm before I go too much further with Time Banking. (I first mentioned this concept here. The term is already used in a general sense to talk about an exchange of services based on time as a currency; I use it here within the framework of student assignment submission.) I’m not just reading in CS Ed, of course, but across Ed, sociology, psychology and just about anywhere else where people have started to consider time as a manageable or tradable asset. I thought I’d take this post to outline some of the most important concepts behind it and provide some rationale for decisions that have already been made. I’ve already posted the guidelines for this, which can be distilled down to “not all events can be banked”, “additional load must be low”, “pragmatic limits apply”, “bad (cheating or gaming) behaviour is actively discouraged” and “it must integrate with our existing systems”.

Time/Bank currency design by Lawrence Weiner. Photo by Julieta Aranda. (Question for Nick – do I need something like this for my students?)
Our goal, of course, is to get students to think about their time management in a more holistic fashion and to start thinking about their future activities sometime sooner than the 24 hours before the due date. Rather than students being receivers and storers of deadlines, can we allow them to construct their own timelines, within a set of limits? (Ben-Ari, 1998, “Constructivism in Computer Science Education”, SIGCSE, although Ben-Ari referred to knowledge in this context and I’m adapting it to a knowledge of temporal requirements, which depends upon a mature assessment of the work involved and a sound knowledge of your own skill level.) The model that I am working with is effectively a team-based model, drawing on Dickinson and McIntyre’s 1997 work “Team Performance Assessment and Measurement: Theory, Methods and Applications”, but where the team consists of a given student, my marking team and me. Ultimately our product is the submitted artefact and we are all trying to facilitate its timely production, but if I want students to be constructive and participative, rather than merely compliant and receptive, I have to involve them in the process. Dickinson and McIntyre identified seven roles in their model: orientation, leadership, monitoring, feedback, back-up (assisting/supporting), coordination and communication. Some of these roles are obviously mine, as the lecturer, such as orientation (establishing norms and keeping the group cohesive) and monitoring (observing performance and recognising correct contribution). However, a number of these can easily be shared between lecturer and student, although we must be clear as to who holds each role at a given time. In particular, if I hold onto deadlines and make them completely immutable, then I have taken the coordination role and handed over only a very small fragment of it to the student. By holding onto that authority, whether it makes sense or not, I’m forcing the student into an authority-dependent mode.
(We could, of course, get into quite a discussion as to whether the benefit is primarily Piagetian, because we are connecting new experiences with established ideas, or Vygotskian, because of the contact with the More Knowledgeable Other and time spent in the Zone of Proximal Development. Let’s just say that either approach supports the importance of me working with a student in a more fluid and interactive manner than a more rigid and authoritarian relationship allows.)
Yes, I know, some deadlines are actually fixed and I accept that. I’m not saying that we abandon all deadlines or any notion of immutability. What I am saying, however, is that we want our students to function in working teams, to collaborate, to produce good work, to know when to work harder earlier to make it easier for themselves later on. Rather than give them a tiny sandpit in which to play, I propose that we give them a larger space to work with. It’s still a space with edges, limits and defined acceptable behaviour – our monitoring and feedback roles are among our most important contributions to our students, after all – but it is a space in which a student can have more freedom of action and, for certain roles including coordination, start to construct their own successful framework for achievement.
Much as reading Vygotsky gives you useful information and theoretical background without necessarily telling you how to teach, reading through all of these ideas doesn’t immediately give me a fully-formed implementation. This is why the guidelines were the first things I developed once I had some grip on the ideas, because I needed to place some pragmatic limits that would allow me to think about this within a teaching framework. The goal is to get students to use the process to improve their time management and process awareness, and we need to set limits on possible behaviour to make sure that they are meeting that goal. “Hacks” to their own production process that legitimately reduce their development time (such as starting the work early, or going through an early prototype design) are the point of the exercise. “Hacks” that allow them to artificially generate extra hours in the time bank are not the point at all. So this places a requirement on the design to be robust and not susceptible to gaming, and on the orientation, leadership and monitoring roles as practised by me and my staff. But it also requires the participants to enter into the spirit of it, or choose not to participate, rather than attempting to undermine it or act to spite it.
The spontaneous generation of hours was something that I really wanted to avoid. When I sketched out my first solution, I realised that I had made the system far too complex by granting time credits immediately, when a ‘qualifying’ submission was made, so that later submissions required retraction of the original grant, followed by a subsequent addition operation. In fact, I had set up a potential race condition that made it much more difficult to guarantee that a student was using genuine extension credit time. The current solution? Students don’t get credit added to their account until a fixed point has passed, beyond which no further submissions can take place. This was the first of the pragmatic limits – there does exist a ‘no more submissions’ point, but we are relatively elastic up to that point. (It also stops students trying to use credit obtained for assignment X to hand up an improved version of X after the due date. We’re not being picky here, but this isn’t the behaviour we want – we want students to think more than a week in advance, because that is the skill that, if practised correctly, will really improve their time management.)
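The deferred-credit rule can be sketched in a few lines of Python. This is a minimal sketch under my own assumptions – the class and method names are invented, and real credit rules would be richer – but it shows how posting credit only after the hard cutoff, from each student’s final submission, removes the grant-then-retract race entirely: nothing is ever credited that might later need to be taken back.

```python
# Minimal sketch (all names assumed) of the deferred-credit rule:
# submissions are recorded immediately, but time credit is only
# computed once the hard 'no more submissions' point has passed,
# from each student's FINAL submission. Nothing is granted early,
# so nothing ever has to be retracted.

from datetime import datetime, timedelta


class Assignment:
    def __init__(self, due: datetime, final_cutoff: datetime):
        self.due = due                    # nominal due date
        self.final_cutoff = final_cutoff  # absolute 'no more submissions' point
        self.submissions = {}             # student -> time of latest submission

    def submit(self, student: str, when: datetime) -> None:
        if when > self.final_cutoff:
            raise ValueError("no more submissions accepted")
        # A later submission simply replaces the earlier one; no credit
        # has been posted yet, so there is nothing to unwind.
        self.submissions[student] = when

    def post_credits(self, now: datetime) -> dict:
        """Compute banked hours only after the cutoff: hours ahead of
        the due date for each student's final submission, floored at 0."""
        if now <= self.final_cutoff:
            return {}
        return {
            student: max((self.due - when) // timedelta(hours=1), 0)
            for student, when in self.submissions.items()
        }
```

A student who submits 48 hours early and then resubmits 24 hours early ends up with 24 hours banked, computed once, after the cutoff – no retraction, no race.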
My first and most immediate concern was that students might adapt to this ‘last hand-in barrier’, but our collected data doesn’t support this hypothesis, although there are some concerning subgroups that we are currently pulling apart to see if we can get more evidence on the small group of students who do seem to work to a final marks barrier that occurs after the main submission date.
I hope to write more on this over the next few days, discussing in more detail my support for requiring a ‘no more submissions’ point at all. As always, discussion is very welcome!





