How Do We Recognise Mastery? What Is My Masterpiece?

An artwork entitled “Masterpiece”.

A few posts ago, and my goodness that’s a lot of words, I posted on issues of identity and examined the PhD in the light of it being a journeyman qualification, one that indicates the end of an apprenticeship and a readiness to go out into the world. That, however, is only half of the overall story of the apprentice, because there is a level above journeyman and that is, in all of its gendered glory, “master”. In the world of the trade and craft guilds, the designation of Mastery was only given when a journeyman applied to the guild and provided a piece of work that demonstrated their mastery of the appropriate craft. These works, if accepted, paved the way for the journeyman to become a Master, capable of training more apprentices and retaining their own journeymen, and were referred to as “Masterpieces”.

We use the term a bit more loosely these days, especially when coupled with the word “theatre”, but the sense remains. A Masterpiece is a piece of work that demonstrates your mastery of the craft and any sensible group of experts within your discipline would recognise it as such and declare you worthy to join them.

On reflection, after my last post on identity, I realised that I had placed the PhD into a very specific place, based on the PhD culture of my own discipline and my own experience. There are people who work their way up through a discipline for years, advancing steadily through their craft via diploma, recognition of prior learning and finally degree. Then, having functioned as practitioners, they move into the academy to make their definitive contribution, and it is as practitioner-academics that they create their final thesis, which has more than a hint of the mastery of the craft about it and is far more likely to be a masterpiece than, say, my three-year musing on big systems and XML. I regard myself more as an academic-practitioner: while I had previous knowledge, my research work began afresh, and my PhD formed the basis of my qualification for entry into the profession of academic (journeyman) rather than the condensation of my life’s contribution as a practitioner, placed within the academic sphere to change teaching, research and policy (masterpiece).

However, this really doesn’t clear the issue up at all; all it does is emphasise that it is the recognition of the masterpiece that determines one’s mastery, which in turn requires that we have strong “guilds”, or their equivalent, that can clearly state when something has been produced to a level that meets this particular skill barrier.

Now, in terms of supervising other PhD students, I can do that now but, until my first student completes successfully (fingers crossed for December), I cannot be a principal supervisor. I am, in effect, apprenticed again until I have demonstrated sufficient mastery. So my PhD qualification is, again, rendered at the journeyman level. If I still had my network certifications from my previous life, I could instruct people in networking within certain corporate frameworks, but there too I only had journeyman qualifications. I have a friend who has achieved mastery in the networking discipline and the difference in our skill levels is amazing but, rather sadly, he has no masterpiece to show for his efforts. He worked to solve some difficult problems, and sat some very hard exams, and provided that he repeats this performance every two years, he will make lots of money doing interesting things involving networks. There is not, however, a single artefact of his that he can point to and say that, from that point on, he had mastery of a certain set of skills.

And this is very much the way of modern mastery. Why does my friend have to resit his exams? Because things are changing very quickly these days and, because of the Internet, we can propagate those changes almost immediately. A master craftsman of the 17th Century would learn new techniques, certainly, but having achieved mastery, he would enjoy maybe 20-30 more years of relatively low change until he died of some unspeakable disease or a falling giraffe. These days, while master craftsmen certainly exist and are recognised as such, in many scientific disciplines we tend to award this recognition towards the end of someone’s life, at a time when their practical career is relatively close to over, and I wonder if that is to avoid the embarrassment of a recognised master who knows nothing about what has happened in the field because it has all moved on.

How do we recognise mastery in science, literature or academia? Well, there are significant Fellowships (the Royal Society springs to mind), important prizes (the Nobel, the Pulitzer) and awards (the Turing and the like). Of course, there is one award that recognises early achievement, the Fields Medal in mathematics, which may only be awarded to someone who is not yet 40, specifically to try and encourage the recipients to go further and do more. A lot of these awards and prizes, however, allow the luxury of a Masterpiece, especially those awards which are given for a specific piece of work. But which of J. M. Coetzee’s works was the definitive masterpiece that granted him the Nobel in Literature, the one that tipped the balance? Where is the specific masterpiece that I can pass to other guild members (not that I am one) and admire, wish that I had created, and learn from? Even where we have the books, we still don’t have a clear notion of what we are looking at. (I realise that Coetzee’s skills were clearly identified in the award, as well as his focus, and I am certainly not disputing the validity – but which is the book I give to someone to explain why he is a master?)

It is much harder to see where we give our students the ability to produce master works of any kind, even within our capstone courses. The works produced under capstone are more likely to be fit-for-purpose, complete but unremarkable, and therefore fit to judge for the end of apprenticeship, but no further. If they then progress to Honours, Masters or PhD, they do not so much have an opportunity to produce a masterpiece as conduct an apprenticeship for a new trade. (This varies by profession and intent. I can quite happily see that a PhD in Creative Writing has a masterpiece component attached to it, whereas a PhD in other disciplines may not.)

But, given that the international recognition of mastery is in a highly refined atmosphere and can, at most, accommodate a very small number of people, how do we even recognise those few masterpieces that will occur outside of the defining masterworks of a generation? For me, as a personal reflection, I am coming to terms with the fact that any masterpiece that I do produce, a work of great import or even a student (in some respects) who goes on to change the world, may have a very short shelf-life compared to other crafts. I also have to accept that the guild that accepts it as a master work may never even contact me to tell me what they think – I’ll just have to watch my citation index go up and use it to get myself promoted.

I don’t have a complete answer to this, and I know that there’s a lot more thinking to do, but are we looking at the end of masterpieces or do we just have to adopt a different lens for seeing them, as well as a different group for judging them?


The Invisible War – How Do You Find What You Don’t Know You’re Missing?

Photo: jasonEscapist, CC licence.

[T]here are known knowns; there are things we know that we know.
There are known unknowns; that is to say, there are things that we now know we don’t know.
But there are also unknown unknowns – there are things we do not know we don’t know.

Donald Rumsfeld, when United States Secretary of Defense

I realise that this quote has been mocked before but I have always found it to be both clear and interesting, mainly because accepting that there are things that you don’t know that you don’t know is important. Because of the way our world works now, where most information is heavily filtered in one form or another, it is becoming more a world of unknown unknowns (things that are so filtered that you didn’t even know that you could have known about them) than a world of known unknowns (things that you have yet to look into but know exist).

I have a student who is undertaking a project exploring ways of exposing the revision history of Wikipedia in a way that makes it immediately obvious if you’re reading something that is generally agreed upon or in massive dispute. The History and Discussion tabs in Wikipedia are, for most people, equivalent to unknown unknowns – not only do they not even realise what they are there for, they don’t think to look. This illustrates one of the most insidious forms of filter, one where the information is presented in a way that appears static and reliable, relying upon the mechanism that you use to give that impression.
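To make the idea concrete, a surface like my student’s could start from a simple heuristic over revision metadata. The sketch below is purely illustrative and is my own invention, not the student’s design: the function names, the thirty-day window and the threshold of twenty edits are all assumptions. Given the timestamps of an article’s revisions (the sort of data the revision history exposes), it flags the article as “contested” when recent edit activity is unusually dense.

```python
from datetime import datetime, timedelta

def stability_label(revision_times, now=None, window_days=30, threshold=20):
    """Label an article 'contested' if it has many recent revisions.

    revision_times: datetimes of each revision in the article's history.
    A crude proxy: lots of edits in the last `window_days` suggests the
    content is still in dispute rather than generally agreed upon.
    """
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=window_days)
    recent = sum(1 for t in revision_times if t >= cutoff)
    return "contested" if recent >= threshold else "stable"

# Hypothetical revision histories: one article edited 25 times in the
# last week, another whose 25 edits all happened months ago.
now = datetime(2012, 6, 1)
busy = [now - timedelta(hours=6 * i) for i in range(25)]
quiet = [now - timedelta(days=90 + i) for i in range(25)]

print(stability_label(busy, now=now))   # contested
print(stability_label(quiet, now=now))  # stable
```

A real version would obviously need to look at reverts and talk-page churn as well as raw edit counts, but even this much, surfaced on the article page itself, turns an unknown unknown into a known one.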

How, for example, can a person inside the Chinese web search zone find pictures of Tank Man at Tiananmen Square, if all legitimate searches that might turn up anything to do with it have been altered? If no picture of Tiananmen shows protests or tanks, how do you even know to search for Tank Man? Even if you find a picture of a man standing there, in front of tanks, how do you then discover the meaning of the picture?

I was reminded of the impact of filtering while I was reading Metafilter the other day. One of the Front Page Posts (FPPs) dealt with the call to boycott a Fantasy writer/game article contributor who had advocated the use of rape in fantasy literature as an awesome way to make the story better (in a variety of ways). I started reading the article, because I assumed that I would take issue with this Fantasy author but wanted to read the whole story, and left the page up to see what sort of comments unfolded. Because this could take time, I ignored the page for about 30 minutes.

Then, when I reloaded later, the post had been deleted. Now, because of the way that Metafilter works, a deleted FPP still exists and can be located in the database, but it is no longer linked to the front page and can no longer be modified. So, suddenly, I had an island of effectively hidden and frozen information. Having read the contents, the comments so far, and the write-up, I was still quite interested to follow the story but the unfolding and contribution of other people in the comments thread, which is the greatest strength of Metafilter, was no longer going to happen.

Now there are many of these deleted FPPs in Metafilter, easily accessible if you search for them by number, but they are closed to comment. They are fragments of conversations, hanging in space, incomplete, cast in amber. You can see them but you can’t see the final comments that would have closed the debate, the petering out as the arguments faded, the additional links that would have been added to this shard of the data corpus by the 12,000 active account holders of Metafilter.

Now, of course, whenever you look at Metafilter, you’ll know that for every few stories that you see on the front page, there’s probably at least one deleted one. Whenever you look at Wikipedia’s illusion of a clean white page where everything looks like it’s just been printed, you may realise that this could hide hundreds or millions of updates and corrections behind the scenes.

How does this change your perception of the information that is contained in there?

While it is easy to point to traditional publishing, especially for text and reference books, and call out the elitist cabals and intellectual thuggery that permeated some of these avenues, we must accept that the printed book never changed once it had been printed. To change a printed book, you must excise, burn, overprint, paint, or physically retrieve and then re-insert. There is no remote update. There is no way that an invisible war can be waged against the contents of your copy of uncorrected Biggles, or that someone thousands of kilometres away can stop you from opening the pages of your history text that describe the Tiananmen Square protests.

We have always had filter bubbles but, at the same time, we had history and the ability to compare fixed and concrete entities with each other. Torn out pages left holes, holes gave us questions, unknowns were discovered. I try very hard to read across and out of my filter bubble, and I strongly encourage my students to do the same, but at the same time I have to remind both myself and them that we are doing what we can within an implicit filter bubble of known knowns and known unknowns.

By definition, even though I’m aware of the possible existence of things that have already been so well hidden from me that I will never find them in my lifetime, I have no idea where to look to find these unknown unknowns. Maybe that’s why I’m buying more books and magazines at the moment, reading so very widely across the written and the electronic, and trying to commit as much as possible to here?

Do we need to know what we don’t know? How will we achieve this? Is this just another twinge as we move towards a different way of managing information?

What will this post say tomorrow?


You’re Welcome On My Lawn But Leaf Blowers Are Not

I was looking at a piece of software the other day and, despite it being well used and having a large user base, I was musing that I had never found it to be particularly fit for purpose. (No, I won’t tell you what it is – I’m allergic to defamation suits.) However, my real objections to it, in simple terms, sound a bit trivial to my own ears and I’ve never really had the words or metaphors to describe it to other people.

Until today.

My wife and I were walking in to work today and saw, in the distance, a haze of yellow dust, rising up in front of three men who were walking towards us, line abreast, as a street sweeping unit slowly accompanied them along the road. Each of the men had a leaf blower that they were swinging around, kicking up all of the Plane Tree pollen/dust (which is highly irritating) and pushing it towards us in a cloud. They did stop when they saw us coming but, given how much dust was in the air, it’s 8 hours later and I’m still getting grit out of my eyes.

Weirdly enough, this image comes from a gaming site, discussing mecha formations. The Internet constantly amazes me.

Now, I have no problem with streets being kept clean and free of debris and I have a lot of respect for the sweepers, cleaners and garbage removal people who stop us from dying in a MegaCholera outbreak from living in cities – but I really don’t like leaf blowers. On reflection, there are a number of things that I don’t like for similar reasons so let me refer back to the piece of software I was complaining about and call it a leaf blower.

Why? Well, primarily, it’s because leaf blowers are a noisy and inefficient way to not actually solve the problem. Leaf blowers move the problem to someone else. Leaf blowers are the socially acceptable face of picking up a bag of garbage and throwing it on your neighbour’s front porch. Today was a great example – all of the dust and street debris was being blown out of the city towards the Park lands where, presumably, this would become someone else’s problem. The fact that a public thoroughfare was a pollen-ridden nightmare for 30 minutes or so was also, apparently, collateral damage.

Now, of course, there are people who use leaf blowers to push leaves into big piles that they then pick up, but there are leaf vacuums and brooms and similar tools that will do a more effective job, with less noise or greater efficiency. (And a lot of people just blow the debris off their property as if it will magically disappear.) The catch is, of course, that better solutions generally require more effort.

The problem with a broom is that pushing one is laborious and tiring, and it’s quite reasonable that we have mechanical alternatives for large-scale tasks like this. For brief tidy-ups and small spaces, however, the broom is king. The problem with the leaf vacuum is that it has to be emptied and that, because of their size and nature, leaf vacuums are often more expensive than leaf blowers. You probably couldn’t afford to have as many of these on your cleanup crew’s equipment roster. So brooms are cheap but hard manual labour, compared to expensive leaf vacuums which fulfil the social contract but require regular emptying.

Enter the leaf blower – low effort, relatively low cost, no need to empty the bag, just blow it off the property. It is, however, an easy way to not actually solve the problem.

And this, funnily enough, describes the software that I didn’t like (and many other things in a similar vein). Cost-wise it’s a sensible decision, compared to building it yourself and in terms of maintenance. It’s pretty easy to use. There’s no need to worry about being sensible or parsimonious with resources. You just do stuff in it with a small amount of time and you’re done.

The only problem is that what you are encouraged to produce by default, the affordance of the software, is not actually the solution to the problem that the software theoretically solves. It is an approximation to the answer but, in effect, you’ve handed the real problem to someone else – in my case, the student, because it’s software of an educational nature. This then feeds load straight back to you, your teaching assistants and support staff. Any effort you’ve expended is wasted and you didn’t even solve the problem.

I’ve talked before about trying to assess what knowledge workers are doing, rather than concentrating on the number of hours that they are spending at their desk, and the ‘desk hours’ metric is yet another example of leaf blowing. Cheap and easy metric, neither effective nor useful, and realistically any sensible interpretation requires you to go back and work out what people are actually doing during those hours – problem not solved, just shunted along, with a bit of wasted effort and a false sense of achievement.

Solving problems is sometimes difficult and it regularly requires careful thought and effort. There may be a cost involved. If we try to come up with something that looks like a solution, but all it does is blow the leaves around, then we probably haven’t actually solved anything.


Student Reflections – The End of Semester Process Report

I’ve mentioned before that I have two process awareness reports in one of my first-year courses. One comes just after the monster “Library” prac, and one is right at the end of the course. These encourage the students to reflect on their assignment work and think about their software development process. I’ve just finished marking the final one and, as last year, it’s a predominantly positive and rewarding experience.

When faced with 2-4 pages of text to produce, most of my students sit down and write several, fairly densely packed pages telling me about the things that they’ve discovered along the way: lessons learned, pit traps avoided and (interestingly) the holes that they did fall into. It’s rare that I get cynical replies and for this course, from over 100 responses, I think that I had about 5 disappointing ones.

The disappointing ones included ones that posted about how I had to give them marks for something that was rubbish (uh, no I didn’t, read the assignment spec and the forum carefully), ones that were scrawled together in about a minute and said nothing, and the ones that were the outpourings of someone who wasn’t really happy with where they were, rather than something I could easily fix. Let’s move on from these.

I want to talk about the ones who had crafted beautiful diagrams where they proudly displayed their software process. The ones who shared great ideas about how to help students in the next offering. The ones who shared the links that they found useful with me, in case other students would like them. The ones who were quietly proud of mastering their areas of difficulty and welcomed the opportunity to tell someone about it. The one who used this quote from Confucius:

“A man without distant care must have near sorrow”

(人无远虑 必有近忧)

to explain why you have to look into the future when you do software design. Don’t leave your assignments to the last minute, he was saying; look ahead! (I am, obviously, going to use that for teaching next semester!)

The Confucian Symbol. Something else to put in my lecture slides for Semester 2, 2012.

Overall, I find these reports to be a resolutely uplifting experience. The vast majority of my students have learnt what I wanted them to learn and have improved their professional skills but, as well, a large number of them have realised that the assignments, together with the lectures, develop their knowledge. Here is one of my favourite student quotes about the assignments themselves, which tells me that we’re starting to get the design right:

The real payoff was towards the end of the assignment. Often it would be possible to “just type code” and earn at least half the marks fairly easily. However there was always a more complex final part to the assignment, one that I could not complete unless I approached it in a systematic, well thought out way. The assignments made it easy to see that a program of any real complexity would be nearly impossible to build without a well-defined design.

But students were also thinking about how they were going to take more general lessons out of this. Here’s another quote I like:

Three improvements that I am aiming to take on board for future subjects are: putting together a study timetable early on in the game; taking the time to read and understand the problem I’ve been given; and put enough time aside to produce a concise design which includes testing strategies.

The exam for this course has just been held and we’re assembling the final marks for inspection on Friday, which will tell us how this new offering has gone. But, at this stage, I have an incredibly valuable resource of student feedback to draw on when I have to do any minor adjustments to make this course better for the next offering.

From a load perspective, yes, having two essays in an otherwise computationally based course does put load on the lecturer/marker, but I am very happy to pay that price. It’s such a good way to find out what my students are thinking and, from a personal perspective, to be a little more confident that my co-teaching staff and I are making a positive change in these students’ lives. Better still, by sharing comments from cohort to cohort, we give the advice an authenticity that I would be hard pressed to achieve otherwise.

I think that this course (the first one I’ve really designed from the ground up, and I’m aware of how rare that opportunity is) is actually turning into something good. And that, unsurprisingly, makes me very happy.


Who Knew That the Slippery Slope Was Real?

Take a look at this picture.

Dan Ariely. Photo: poptech/Flickr, via wired.com.

One thing you might have noticed, if you’ve looked carefully, is that this man appears to have had some reconstructive surgery on the right side of his face and there is a colour difference, slightly accentuated by the lack of beard stubble. What if I were to tell you that this man was offered the chance to have fake stubble tattooed onto that section and that, when he declined because he felt strange about it, he received a higher level of pressure and, in his words, guilt trip than for any other procedure during the extensive time he spent in hospital receiving skin grafts and burn treatments? Why was the doctor pressuring him?

Because he had already performed the tattooing remediation on two people and needed a third for the paper. In Dan’s words, again, the doctor was a fantastic physician, thoughtful and caring, but he had a conflict of interest that moved him to a different mode of behaviour. For me, I had to look a couple of times because the asymmetry that the doctor referred to is not that apparent at first glance. Yet the doctor felt compelled, by interests that were not Dan’s, to make Dan self-conscious about the perceived problem.

A friend on Facebook (thanks, Bill!) posted a link to an excellent article in Wired, entitled “Why We Lie, Cheat, Go to Prison and Eat Chocolate Cake” by Dan Ariely, the man pictured above. Dan is a professor of behavioural economics and psychology at Duke and his new book explores the reasons that we lie to each other. I was interested in this because I’m always looking for explanations of student behaviour and I want to understand their motivations. I know that my students will rationalise and do some strange things but, if I’m forewarned, maybe I can construct activities and courses in a way that heads this off at the pass.

There were several points of interest to me. The first was the question of whether a cost/benefit analysis of dishonesty – do something bad, go to prison – actually has the effect that we intend. As Ariely points out, if you talk to the people who got caught, the long-term outcome of their actions was never something that they thought about. He also discusses the notion of someone taking small steps, a little each time, that move them from law-abiding, for want of a better word, to dishonest. Rather than set out to do bad things in one giant leap, people tend to take small steps, rationalising each one, with each step opening up a range of darker and darker options.

Welcome to the slippery slope – beloved argument of rubicund conservative politicians since time immemorial. Except that, in this case, it appears that the slope is piecewise composed of tiny little steps. Yes, each step requires a decision, so there isn’t the momentum that we commonly associate with the slope, but each step, in some sense, takes you further and further away from the honest place from which you started.

Ariely discusses an experiment where he gave two groups designer sunglasses, told one group that they had the real thing and the other that they had fakes, and then asked them to complete a test that gave them a chance to cheat. The people who had been randomly assigned to the ‘fake sunglasses’ group cheated more than the others. Now there are many possible reasons for this. One of them, which is Ariely’s argument, is that if you know that you are signalling your status deceptively to the world, you are in a mindset where you have already taken a step towards dishonesty, and cheating a little more is an easier step. I can see many interpretations of this, because the cheating lay in reporting how many questions you completed on the test, where self-esteem issues caused by being in the ‘fake’ group may lead you to over-promote yourself in reporting your success on the quiz – but it’s still cheating. Ultimately, whatever is motivating people to take that step, the step appears to be easier if you are already inside the dishonest space, even to a degree.

[Note: Previous paragraph was edited slightly after initial publication due to terrible auto-correcting slipping by me. Thanks, Gary!]

Where does something like copying software or illicitly downloading music come into this? Does this constant reminder of your small, well-rationalised step into low-level lawlessness have any impact on the other decisions that you make? It’s an interesting question because, according to the outline in Ariely’s sunglasses experiment, we would expect it to be more of a problem if the products became part of your projected image. We know that having developed a systematic technological solution for downloading is the first hurdle in terms of achieving downloads, but is it also the first hurdle in making steadily less legitimate decisions? I actually have no idea, but would be very interested to see some research in this area. I feel it’s too glib to assume a relationship, because it is such a ‘slippery slope’ argument, but Ariely’s work now makes me wonder. Is it possible that, after downloading enough music or software, you could actually rationalise the theft of a car? Especially if you were only ‘borrowing’ it? (Personally, I doubt it, because I think that there are several steps in between.) I don’t have a stake in this fight – I have a personal code for behaviour in this sphere that I can live with – but I see some benefits in asking and trying to answer these questions from something other than personal experience.

Returning to the article, of particular interest to me was the discussion of an honour code, such as Princeton’s, where students sign a pledge. Ariely sees its benefit as a reminder that is active for some time but that, ultimately, would have little value over several years because, as we’ve already discussed, people rationalise in small increments over the short term rather than constructing long-term models where the pledge would make a difference. Sign a pledge in 2012 and it may just not have any impact on you by the middle of 2012, let alone at the end of 2015 when you’re trying to graduate. Potentially, at almost any cost.

In terms of ongoing reminders, and a signature on a piece of work saying (in effect) “I didn’t cheat”, Ariely asks what happens if you have to sign the honour clause after you’ve finished a test – well, if you’ve finished, any cheating has already occurred and the honour clause is useless. If you remind people at the start of every assignment and every test, and get them to pledge at the beginning, then this should have an impact – a halo effect to an extent, or a reminder of expectation that will make it harder for you to rationalise your dishonesty.

In our school we have an electronic submission system that students are required to use to submit their assignments. It has boilerplate ‘anti-plagiarism’ text and you must accept the conditions to submit. However, this is your final act before submission, when you have already finished the code, which falls immediately into the trap mentioned in the previous paragraph. Dan Ariely’s answers have made me think about how we can change this to make it more of an upfront reminder, rather than an ‘after the fact – oh, it may be too late now’ auto-accept at the end of the activity. And, yes, reminder structures and behaviour modifiers in time banking are also being reviewed and added in the light of these new ideas.
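Stripped of the detail, the change is purely one of ordering. The sketch below is a toy illustration, not our actual submission system (the function names and flows are invented): the same two steps, pledge and work, with the pledge moved from the last position to the first.

```python
# Two orderings of the same two steps. In the current flow the pledge is
# the final act, after any cheating could have happened; in the reordered
# flow it frames the work before it begins, as Ariely suggests.

def current_flow(do_work, accept_pledge):
    work = do_work()          # assignment is completed first
    accept_pledge()           # pledge is an afterthought at submit time
    return work

def reminder_first_flow(do_work, accept_pledge):
    accept_pledge()           # pledge up front, before any work is done
    return do_work()          # work proceeds under the stated expectation

events = []
submission = reminder_first_flow(
    do_work=lambda: events.append("work") or "assignment",
    accept_pledge=lambda: events.append("pledge"),
)
print(events)  # ['pledge', 'work']
```

Trivial as code, but the point is that the fix costs nothing technically; it is entirely about when the reminder fires.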

The Wired Q&A is very interesting and covers a lot of ground but, realistically, I think I have to go and buy Dan Ariely’s book(s), prepare myself for some harsh reflection and thought, and plan for a long weekend of reading.


Time Banking and Plagiarism: Does “Soul Destroying” Have An Ethical Interpretation?

Yesterday, I wrote a post on the 40 hour week, to give an industrial basis for the notion of time banking, and I talked about the impact of overwork. One of the things I said was:

The crunch is a common feature in many software production facilities and the ability to work such back-breaking and soul-destroying shifts is often seen as a badge of honour or mark of toughness. (Emphasis mine.)

Back-breaking is me being rather overly emphatic regarding the impact of work, although in manual industries workplace accidents caused by fatigue and overwork can and do break backs – and worse – on a regular basis.

Is it Monday morning already?

But soul-destroying? Am I just saying that someone will perform their tasks as an automaton or zombie, or am I saying something more about the loss of full cognitive function – the soul as an amalgam of empathy, conscience, consideration and social factors? Well, the answer is that, when I wrote it, I was talking about mindlessness and the removal of the ability to take joy in work, which is on the zombie scale, but as I’ve reflected on the readings more, I am now convinced that there is an ethical dimension to fatigue-related cognitive impairment that is important to talk about. Basically, the more tired you get, the more narrowly you focus on the task itself, and this can have some serious professional and ethical implications. I’ll provide a basis for this throughout the rest of this post.

The paper I was discussing, on why Crunch Mode doesn’t work, listed many examples from industry and one very interesting paper from the military. The paper, which had a broken link in the Crunch mode paper, may be found here and is called “Sleep, Sleep Deprivation, and Human Performance in Continuous Operations” by Colonel Gregory Belenky. Now, for those who don’t know, in 1997 I was a commissioned Captain in the Royal Australian Armoured Corps (Reserve), on detachment to the Training Group to set up and pretty much implement a new form of Officer Training for Army Reserve officers in South Australia. Officer training is a very arduous process and places candidates, the few who make it in, under a lot of stress and does so quite deliberately. We have to have some idea that, if terrible things happen and we have to deploy a human being to a war zone, they have at least some chance of being able to function. I had been briefed on most of the issues discussed in Colonel Belenky’s paper but it was only recently that I read through the whole thing.

And, to me today as an educator (I resigned my commission years ago), there are still some very important lessons, guidelines and warnings for all of us involved in the education sector. So stay with me while I discuss some of Belenky’s terminology and background. The first term I want to introduce is droning: the loss of cognitive ability through lack of useful sleep. As Belenky puts it, in the context of US Army Ranger training:

…the candidates can put one foot in front of another and respond if challenged, but have difficulty grasping their situation or acting on their own initiative.

What was most interesting, and may surprise people who have never served with the military, is that the higher the rank, the less sleep people got – and the higher level the formation, the less sleep people got. A Brigadier in charge of a Brigade is going to, on average, get less sleep than the more junior officers in the Brigade and a lot less sleep than a private soldier in a squad. As an officer, my soldiers were fed before me, rested before me and a large part of my day-to-day concern was making sure that they were kept functioning. This keeps on going up the chain and, as you go further up, things get more complex. Sadly, the people shouldering the most complex cognitive functions with the most impact on the overall battlefield are also the people getting the least fuel for their continued cognitive endeavours. They are the most likely to be droning: going about their work in an uninspired way and not really understanding their situation. So here is more evidence from yet another place: lack of sleep and fatigue lead to bad outcomes.

One of the key issues Belenky talks about is the loss of situational awareness caused by the accumulated sleep debt, fatigue and overwork suffered by military personnel. He gives an example of an Artillery Fire Direction Centre – this is where requests for fire support (big guns firing large shells at locations some distance away) come to and the human plotters take your requests, transform them into instructions that can be given to the gunners and then firing starts. Let me give you a (to me) chilling extract from the report, which the Crunch Mode paper also quoted:

Throughout the 36 hours, their ability to accurately derive range, bearing, elevation, and charge was unimpaired. However, after circa 24 hours they stopped keeping up their situation map and stopped computing their pre-planned targets immediately upon receipt. They lost situational awareness; they lost their grasp of their place in the operation. They no longer knew where they were relative to friendly and enemy units. They no longer knew what they were firing at. Early in the simulation, when we called for simulated fire on a hospital, etc., the team would check the situation map, appreciate the nature of the target, and refuse the request. Later on in the simulation, without a current situation map, they would fire without hesitation regardless of the nature of the target. (All emphasis mine.)

Here, perhaps, is the first inkling of what I realised I meant by soul destroying. Yes, these soldiers are overworked to the point of droning and are now shuffling towards zombiedom. But, worse, they have no real idea of their place in the world and, perhaps most frighteningly, despite knowing that accidents happen when fire missions are requested and having direct experience of rejecting what would have resulted in accidental hospital strikes, these soldiers have moved to a point of function where the only thing that matters is doing the work and calling the task done. This is an ethical aspect because, from their previous actions, it is quite obvious that there was both a professional and ethical dimension to their job as the custodians of this incredibly destructive weaponry – deprive them of enough sleep and they calculate and fire, no longer having the cognitive ability (or perhaps the will) to be ethical in their delivery. (I realise a number of you will have choked on your coffee slightly at the discussion of military ethics but, in the majority of cases, modern military units have a strong ethical code, even to the point of providing a means for soldiers to refuse to obey illegal orders. Most failures of this system in the military can be traced to failures in a unit’s ethical climate or to undetected instability in the soldiers: much as in the rest of the world.)

The message, once again, is clear. Overwork, fatigue and sleeplessness reduce the ability to perform as you should. Belenky even notes that the ability to benefit from training quite clearly deteriorates as the fatigue levels increase. Work someone hard enough, or let them work themselves hard enough, and not only aren’t they productive, they can’t learn to do anything else.

The notion of situational awareness is important because it’s a measure of your sense of place, in an organisational sense, in a geographical sense, in a relative sense to the people around you and also in a social sense. Get tired enough and you might swear in front of your grandma because your social situational awareness is off. But it’s not just fatigue over time that can do this: overloading someone with enough complex tasks can stress cognitive ability to the point where similar losses of situational awareness can occur.

Helmet fire is a vivid description of what happens when you have too many tasks to do, under highly stressful situations, and you lose your situational awareness. If you are a military pilot flying on instruments alone, especially with low or zero visibility, then you have to follow a set of procedures, while regularly checking the instruments, in order to keep the plane flying correctly. If the number of tasks that you have to carry out gets too high, and you are facing the stress of effectively flying the plane visually blind, then your cognitive load limits will be exceeded and you are now experiencing helmet fire. You are now very unlikely to be making any competent contributions at all at this stage but, worse, you may lose your sense of what you were doing, where you are, what your intentions are, which other aircraft are around you: in other words, you lose situational awareness. At this point, you are now at a greatly increased risk of catastrophic accident.

To summarise, if someone gets tired, stressed or overworked enough, whether acutely or over time, their performance goes downhill, they lose their sense of place and they can’t learn. But what does this have to do with our students?

A while ago I posted thoughts on a triage system for plagiarists – allocating our resources to those students we have the most chance of bringing back to legitimate activity. I identified the three groups as: sloppy (unintentional) plagiarism, deliberate (but desperate and opportunistic) plagiarism and systematic cheating. I think that, from the framework above, we can now see exactly where the majority of my ‘opportunistic’ plagiarists are coming from: sleep-deprived, fatigued and (by their own hands or not) overworked students losing their sense of place within the course and becoming focused only on the outcome. Here, the sense of place is not just geographical: it is their role in the social and formal contracts that they have entered into with lecturers, other students and their institution, and their place in the agreements for ethical behaviour that require them to do the work themselves and to submit only that.

If professional soldiers who have received very large amounts of training can forget where their own forces are, sometimes to the tragic extent that they fire upon and destroy them, or become so cognitively impaired that they carry out the mission, and only the mission, with little of their usual professionalism or ethical concern, then it is easy to see how a student can become so task-focussed that they start to think only about ending the task, by any means, to reduce the cognitive load and to allow themselves to get the sleep that their body desperately needs.

As always, this does not excuse their actions if they resort to plagiarism and cheating – it explains them. It also provides yet more incentive for us to try and find ways to reach our students and help them form systems for planning and time management that bring them closer to the 40-hour ideal, that reduce the all-nighters and the caffeine binges, and that allow them to maintain full cognitive function as ethical, knowledgeable and professionally skilled practitioners.

If we want our students to learn, it appears that (for at least some of them) we first have to help them to marshal their resources more wisely and keep their awareness of exactly where they are, what they are doing and, in a very meaningful sense, who they are.


Time Banking: Aiming for the 40 hour week.

I was reading an article on metafilter on the perception of future leisure from earlier last century and one of the commenters linked to a great article on “Why Crunch Mode Doesn’t Work: Six Lessons” via the International Game Developers Association. This article was partially in response to the quality of life discussions that ensued after ea_spouse outed the lifestyle (LiveJournal link) caused by her spouse’s ludicrous hours working for Electronic Arts, a game company. One of the key quotes from ea_spouse was this:

Now, it seems, is the “real” crunch, the one that the producers of this title so wisely prepared their team for by running them into the ground ahead of time. The current mandatory hours are 9am to 10pm — seven days a week — with the occasional Saturday evening off for good behavior (at 6:30pm). This averages out to an eighty-five hour work week. Complaints that these once more extended hours combined with the team’s existing fatigue would result in a greater number of mistakes made and an even greater amount of wasted energy were ignored.

The badge is fastened with two pins that go straight into your chest.

This is an incredible workload and, as Evan Robinson notes in the “Crunch Mode” article, it is not only incredible but downright stupid, because every serious investigation into the effects of working more than 40 hours a week for extended periods, and of reducing sleep and accumulating sleep deficit, has come to the same conclusion: hours worked after a certain point are not just worthless, they reduce the worth of hours already worked.

Robinson cites studies and practices coming from industrialists such as Henry Ford, who reduced shift length to give a 40-hour work week in 1926, attracting huge criticism, because 12 years of research had shown that the shorter work week meant more output, not less. These studies have been going on since the 18th century, and well into the 1960s at least, and they all show the same thing: working eight hours a day, five days a week gives you more productivity because you make fewer mistakes, you accumulate less fatigue and you have workers producing during their optimal production time (the first 4–6 hours of work) without sliding into their negatively productive zones.

As Robinson notes, the games industry doesn’t seem to have got the memo. The crunch is a common feature in many software production facilities and the ability to work such back-breaking and soul-destroying shifts is often seen as a badge of honour or mark of toughness. The fact that you can get fired for having the audacity to try and work otherwise also helps a great deal in motivating people to adopt the strategy.

Why spend so many hours in the office? Remember when I said that it’s sometimes hard for people to see what I’m doing because, when I’m thinking or planning, I can look like I’m sitting in the office doing nothing? Imagine what it looks like if, two weeks before a big deadline, someone walks into the office at 5:30pm and everyone’s gone home. What does this look like? Because of our conditioning, which I’ll talk about shortly, it looks like we’ve all decided to put our lives before the work – it looks like less than total commitment.

As a manager, if you can tell everyone above you that you have people at their desks 80+ hours a week and will have for the next three months, then you’re saying that “this work is important and we can’t do any more.” The fact that people were probably only useful for the first 6 hours of every day, and even then only for the first couple of months, doesn’t matter because it’s hard to see what someone is doing if all you focus on is the output. Those 80+ hour weeks are probably only now necessary because everyone is so tired, so overworked and so cognitively impaired, that they are taking 4 times as long to achieve anything.

Yes, that’s right. All the evidence says that more than 2 months of overtime and you would have been better off staying at 40 hours/week in terms of measurable output and quality of productivity.

Robinson lists six lessons, which I’ll summarise here because I want to talk about them in terms of students and why forward planning for assignments is good practice for smoother time management in the future. Here are the six lessons:

  1. Productivity varies over the course of the workday, with greatest productivity in the first 4-6 hours. After enough hours, you become unproductive and, eventually, destructive in terms of your output.
  2. Productivity is hard to quantify for knowledge workers.
  3. Five-day weeks of eight-hour days maximise long-term output in every industry that has been studied in the past century.
  4. At 60 hours per week, the loss of productivity caused by working longer hours overwhelms the extra hours worked within a couple of months.
  5. Continuous work reduces cognitive function 25% for every 24 hours. Multiple consecutive overnighters have a severe cumulative effect.
  6. Error rates climb with hours worked and especially with loss of sleep.
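Lesson 5 lends itself to a back-of-the-envelope calculation. Here is a minimal sketch in Python, purely illustrative – the linear decay and the function name are my own, not Robinson’s or Belenky’s – applying the 25%-per-24-hours rule to continuous work:

```python
def cognitive_capacity(hours_awake: int) -> float:
    """Fraction of baseline cognitive function after continuous work.

    Illustrative only: applies the '25% per 24 hours' rule of thumb
    from lesson 5 as a simple linear decay, floored at zero.
    """
    decay_per_day = 0.25
    days = hours_awake // 24  # whole 24-hour periods without sleep
    return max(0.0, 1.0 - decay_per_day * days)
```

On this crude model, a student coming off two consecutive all-nighters (48 hours awake) is attempting their assignment at half capacity, and by day five there is, effectively, nobody home.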

My students have approximately 40 hours of assigned work a week, consisting of contact time and assignments, but many of them never really think about that. Most plan in other things around their ‘free time’ (they may need to work, they may play in a band, they may be looking after families or they may have an active social life) and they fit the assignment work and other study into the gaps that are left. Immediately, they will be over the 40-hour mark for work. If they have a part-time job, the three months of one of my semesters will, if not managed correctly, give them a lumpy time schedule alternating between some work and far too much work.

Many of my students don’t know how they are spending their time. They switch on the computer, look at the assignment, Skype, browse, try something, compile, walk away, grab a bite, web surf, try something else – wow, three hours of programming! This assignment is really hard! That’s not all of them but it’s enough of them that we spend time on process awareness: working out what you do so you know how to improve it.
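One low-tech way to build that process awareness is simply to log what you actually do and add it up afterwards. A minimal sketch (the helper below is hypothetical, not a tool we actually hand out):

```python
from collections import Counter

def summarise_log(entries):
    """Total minutes per activity from a simple (activity, minutes) log.

    Seeing that only 40 of 180 minutes went to actual coding is the
    whole point of keeping the log in the first place.
    """
    totals = Counter()
    for activity, minutes in entries:
        totals[activity] += minutes
    return dict(totals)
```

Three hours at the desk that summarise to 40 minutes of coding and two hours of browsing is a very different conversation from “this assignment is really hard”.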

Many of my students see sports drinks, energy drinks and caffeine as a licence to not sleep. It doesn’t work long term as most of us know, for exactly the reasons that long term overwork and sleeplessness don’t work. Stimulants can keep you awake but you will still be carrying most if not all of your cognitive impairment.

Finally, and most importantly, enough of my students don’t realise that everything I’ve said up until now means that they are trying to sit my course with half a brain after about the halfway point – or even sooner, if they didn’t rest much between semesters.

I’ve talked about the theoretical basis for time banking and the pedagogical basis for time banking: this is the industrial basis for time banking. One day I hope that at least some of my students will be running parts of their industries and that we have taught them enough about sensible time management and work/life balance that, as people in control of a company, they look at real measures of productivity, they look at all of the masses of data supporting sensible ongoing work rates and that they champion and adopt these practices.

As Robinson says towards the end of the article:

Managers decide to crunch because they want to be able to tell their bosses “I did everything I could.” They crunch because they value the butts in the chairs more than the brains creating games. They crunch because they haven’t really thought about the job being done or the people doing it. They crunch because they have learned only the importance of appearing to do their best instead of really doing their best. And they crunch because, back when they were programmers or artists or testers or assistant producers or associate producers, that was the way they were taught to get things done. (Emphasis mine.)

If my students can see all of their requirements ahead of time, know what is expected, have been given enough process awareness, and have the will and the skill to undertake the activities, then we can potentially teach them a better way to get things done if we focus on time management in a self-regulated framework, rather than imposed deadlines in a rigid authority-based framework. Of course, I still have a lot of work to do to demonstrate that this will work but, from industrial experience, we have yet another very good reason to try.


Flow, Happiness and the Pursuit of Significance

I’ve just been reading Deirdre McCloskey’s article on “Happyism” in The New Republic. While there are a number of points I could pick at in the article (I question her specific example of statistical significance, and I think she’s oversimplified a number of the philosophical points), there are a lot of interesting thoughts and arguments within it.

One of my challenges in connecting with my students is that of making them understand what the benefit is to them of adopting, or accepting, suggestions from me as to how to become better as discipline practitioners, as students and, to some extent, as people. It would be nice if doing the right thing in this regard could give the students a tangible and measurable benefit that they could accumulate on some sort of meter – I have performed well, my “success” meter has gone up by three units. As McCloskey points out, this effectively requires us to have a meter for something that we could call happiness, but it is then tied directly to events that give us pleasure, rather than a sequence of events that could give us happiness. Workflows (chains of actions that lead to an eventual outcome) can be assessed for accuracy and then the outcome measured, but it is only when the workflow is complete that we can assess the ‘success’ of the workflow and then derive pleasure, and hence happiness, from the completion of the workflow. Yes, we can compose a workflow from sub-workflows but we will hit the same problem if we focus on an outcome-based model – at some stage, we are likely to be carrying out an action that can lead to an event from which we can derive a notion of success, but this requires us to be foresighted and see the events as a chain that results in this outcome.

And this is very hard to meter and display in a way that says anything other than “Keep going!” Unsurprisingly, this is not really the best way to provide useful feedback, reward or fodder for self-actualisation.

I have a standing joke that, as a runner, I go to a sports doctor because if I go to a General Practitioner and say “My leg hurts after I run”, the GP will just say “Stop running.” I am enough of a doctor to say that to myself – so I seek someone who is trained to deal with my specific problems and who can give me a range of feedback that may include “stop running” because my injuries are serious or chronic, but can provide me with far more useful information from which I can make an informed choice. The happiness meter must be able to work with workflow in some way that is useful – keep going is not enough. We therefore need to look at the happiness meter.

McCloskey identifies Bentham, founder of utilitarianism, as the original “pleasure meter” proponent, and implicitly criticises his felicific calculus for subverting our assessment of “happiness units” (utils) into a form that assumes that we can reasonably compare utils between different people – and that we can assemble all of our life’s experiences in a meaningful way in terms of utils in the first place!

To address the issue of workflow itself, McCloskey refers to the work of Mihály Csíkszentmihályi on flow: “the absorption in a task just within our competence”. I have talked about this before, in terms of Vygotsky’s zone of proximal development and the use of a group to assist people who are just outside of the zone of flow. The string of activities can now be measured in terms of satisfaction or immersion, as well as the outcomes of this process. Of course, we have the outcomes of the process in terms of direct products and we have outcomes in terms of personal achievement at producing those products. Which of these goes onto the util meter, given that they are utterly self-assessed, subjective and, arguably, orthogonal in some cases? (If you have ever done your best, been proud of what you did, but failed in your objective, you know what I’m talking about.)

My reading of McCloskey is probably a little generous because I find her overall argument appealing. I believe that her argument may be distilled as:

  • If we are going to measure, we must measure sensibly and be very clear in our context and the interpretation of significance.
  • If we are going to base any activity on our measurement, then the activity we create or change must be related to the field of measurement.

Looking at the student experience in this light, asking students if they are happy with something is, ultimately, a pointless activity unless I either provide well-defined training in my measurement system and scale, or I am looking for a measurement of better or worse. This is confounded by simple cognitive biases including, but not limited to, the Hawthorne Effect and confirmation bias. However, measuring what my students are doing, as Csíkszentmihályi did in the flow experiments, will show me if they are so engaged with their activities that they are staying in the flow zone. Similarly, looking at participation and measuring outputs in collaborative activities where I would expect the zone of proximal development to be in effect is going to be far more revealing than asking students if they liked something or not.

As McCloskey discusses, there is a point at which we don’t seem to get any happier, but it is very hard to tell if this is a fault in our measurement and our presumption of a three-point non-interval scale, and it then often degenerates into a form of intellectual snobbery that, unsurprisingly, favours the elites who will be studying the non-elites. (As an aside, I learnt a new word. Clerisy: “a distinct class of learned or literary people”. If you’re going to talk about the literate elites, it’s nice to have a single word to do so!) In student terms, does this mean that there is a point at which even the most keen of our best and brightest will not try some of our new approaches? The question, of course, is whether the pursuit of happiness is paralleling the quest for knowledge, or whether this is all one long endured workflow that results in a pleasure quantum labelled ‘graduation’.

As I said, I found it to be an interesting and thoughtful piece, despite some problems, and I recommend it to you – even if we must then start a large debate in the comments on how much I misled you!


Time Banking: Foundations

Short post today because I’ve spent so much time looking at research and fixing papers and catching up on things that I haven’t left myself much time to blog. Sorry about that! Today’s post is talking about one of the most vital aspects of time banking and one that I’ve been working on slightly under the radar – the theoretical underpinnings based on work in education, psychology and economics.

Foundations: More complex than they first appear.

Today we’ve been looking at key papers in educational psychology on motivation – but the one that stood out today was Zimmerman (90), “Self-regulated learning and academic achievement: An overview.” in Educational Psychologist, 25.  I want my students to become their own time managers but that’s really just a facet of self-regulation. It’s important to place all of this “let’s get rubber with time” into context and build on the good science that has gone before. I want my students to have the will to learn and practice, and the skill to do so competently – one without the other is no good to me.

This is, of course, just one of the aspects that we have to look at. Do I even know how I’m planning to address the students? Within an operant framework of punishment and reward or a phenomenological framework of self-esteem? How am I expecting them to think? These seem like rather theoretical matters but I need to know how existing timeliness issues are being perceived. If students think that they’re working in a reward/punishment framework then my solution has to take that into account. Of course, this takes us well into the world of surveying and qualitative analysis, but to design this survey we need sound theory and good hypotheses so that we can start in the ballpark of the right answer and iteratively improve.

We’re looking at motivation as the key driver here. Yes, we’re interested in student resilience and performance, but it’s the motivation to move to self-regulation that is what we’re trying to maximise. Today’s readings and sketching will be just one day out of many more to come as we further refine our search from the current broader fan to a more concentrated beam.

What of the economic factors? There is no doubt that the time bank forms a primitive economy out of ‘hours’ of a student’s time, but it’s one where the budget doesn’t have to balance across the class, just across an individual student. This makes things easier to an extent as I don’t have to consider a multi-agent market beyond two people: the student and me. However, the student still holds private information that is hidden from me – the quality, progress and provenance of their work – and I hold private information hidden from them, in terms of the final mark. Can I make the system strategy-proof, so that students have no incentive to lie about how much work they’ve done and don’t try to present their private information in a way that is inherently non-truthful? Can I also produce a system where I don’t overly manipulate the market through the construction of the oracle or my support mechanisms? There’s a lot of great work out there on markets and economies so I have a great deal of reading to do here as well.
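To make the “balances per individual, not across the class” property concrete, here is a toy per-student ledger. Everything here – the class name, the deposit/spend rules – is a hypothetical sketch, not the actual time banking system under design:

```python
from dataclasses import dataclass, field

@dataclass
class TimeBank:
    """Toy per-student time bank.

    Hours banked by finishing work early can later be spent on
    extensions. The budget balances for this one student only;
    no other student's account is involved.
    """
    balance_hours: float = 0.0
    ledger: list = field(default_factory=list)

    def deposit(self, hours: float, note: str) -> None:
        self.balance_hours += hours
        self.ledger.append(("deposit", hours, note))

    def spend(self, hours: float, note: str) -> bool:
        # A student can only spend what they have banked.
        if hours > self.balance_hours:
            return False
        self.balance_hours -= hours
        self.ledger.append(("spend", hours, note))
        return True
```

Even in this toy form, the strategy-proofness question is visible: the deposits depend on self-reported hours, which is exactly the private information a student might be tempted to misrepresent.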

So, short post – but a long and fascinating day.


The Many Types of Failure: What Does Zero Mean When Nothing Is Handed Up?

You may have read about the Edmonton, Canada, teacher who expected to be sacked for handing out zeros. It’s been linked to sites as diverse as Metafilter, where a long and interesting debate ensued, and Cracked, where it was labelled one of the ongoing ‘pussifications’ of schools. (Seriously? I know you’re a humour site but was there some other way you could have put that? Very disappointed.)

Basically, the Edmonton Public School Board decided that, rather than just give a zero for a missed assignment, the missed assignment would be used as a cue for follow-up work and additional classes at school or home. Their argument: you can’t mark work that hasn’t been submitted, so let’s use the non-submission as a trigger to try and get submission, in case the cause is external or behavioural. This, of course, puts the onus on the school to track the students, get the additional work completed, and then mark out of sequence. Lynden Dorval, the high school teacher who is at the centre of this, believes that there is too much manpower involved in doing this and that giving the student a zero forces them to come to you instead.

Some of you may never have seen one of these before. This is a zero, which is the lowest mark you can be awarded for any activity. (I hope!)

Now, of course, this has split people into two fairly neat camps – those who believe that Dorval is the “hero of zero” and those who can see the benefit of the Board’s approach, including taking into account that students can still fail if they don’t do enough work. (Where do I stand? I’d like to know a lot more than one news story before I ‘pick a side’.) I would note that a lot of tired argument and pejorative terminology has also come to the fore – you can read most of the buzzwords used against ‘progressives’ in this article, if you really want to. (I can probably summarise it for you but I wouldn’t do it objectively. This is just one example of those who are feting Dorval.)

Of course, rather than get into a heated debate where I really don’t have enough information to contribute, I’d rather talk about the basic concept – what exactly does a zero mean? If you hand something in and it meets none of my requirements, then a zero is the correct and obvious mark. But what happens if you don’t hand anything in?

With the marking approach that I practise and advertise, which uses time-based mark penalties for late submission, students are awarded marks for what they get right, rather than having marks deducted for what they do wrong. Under this scheme, “no submission” gives me nothing to mark, which means that I cannot legitimately give you any marks – so is this a straightforward zero situation? The time penalties are in place as part of the professional skill requirements, and are clearly advertised and consistently policed. I note that I am still happy to give students the same level of feedback on late work, including their final mark without penalty, which meets all of the pedagogical requirements, but the time management issues can cost a student some, most or all of their marks. (Obviously, I’m actively working on improving engagement with time management through mechanisms that are not penalty based but that’s for other posts.)
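The scheme above can be sketched in a few lines. The 25%-per-day rate and the function name here are hypothetical examples, not my actual policy; the point is the shape of the calculation, with the raw mark preserved for feedback and only the recorded mark penalised:

```python
def final_mark(raw_mark: float, days_late: int,
               penalty_per_day: float = 0.25) -> float:
    """Apply a time-based late penalty to a raw mark.

    The student still receives feedback against raw_mark; only the
    recorded mark is reduced, capped so it never goes below zero.
    """
    penalty = min(1.0, days_late * penalty_per_day)
    return raw_mark * (1.0 - penalty)
```

So a mark of 80 submitted on time stays 80, the same work two days late records 40, and anything beyond the penalty window records zero while the feedback remains unchanged.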

As an aside, we have three distinct fail grades for courses at my University:

  • Withdraw Fail (WF), where a student has dropped the course but after the census date. They pay the money, it stays on their record, but as a WF.
  • Fail (F), student did something but not enough to pass.
  • Fail No Submission (FNS), student submitted no work for assessment throughout the course.

Interestingly, for my Uni, FNS has a numerical grade of 0, although this is not shown on the transcript. Zero, in the course sense, means that you did absolutely nothing. In many senses, this represents the nadir of student engagement, given that many courses have somewhere from 1–5%, maybe even 10%, of marks available for very simple activities that require very little effort.

My biggest problem with late work, or no submission, is that one of the strongest messages I have from that enormous data corpus of student submissions that I keep talking about is that starting a pattern of late or no submission is an excellent indicator of reduced overall performance and, with recent analysis, a sharply decreased likelihood of making it to third year (final year) in your college studies. So I really want students to hand something in – which brings me to the crux of the way that we deal with poor submission patterns.
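An early-warning check based on that indicator could be as simple as the sketch below. The window size and threshold are illustrative guesses, not the values from my actual analysis:

```python
def at_risk(submissions, window=3):
    """Flag a student whose early submissions show a late/missing pattern.

    `submissions` is a list of days-late values for each assignment,
    in order, with None meaning no submission at all. A student is
    flagged if at least two of their first `window` submissions were
    late or missing.
    """
    early = submissions[:window]
    problems = sum(1 for s in early if s is None or s > 0)
    return problems >= 2
```

The value of a flag like this is not the flag itself but what you do next: it is the trigger for a conversation, not a verdict.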

Whichever approach I take should be the one that is most likely to bring students back into a regular submission pattern. 

If the Public School Board’s approach is increasing completion rates and this has a knock-on effect which increases completion rates in the future? Maybe it’s time to look at that resourcing profile and put the required money into this project. If it’s a transient peak that falls off because we’re just passing people who should be failing? Fuhgeddaboutit.

To quote Sherlock Holmes (Conan Doyle, naturally): 

It is a capital mistake to theorize before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to suit facts. (A Scandal in Bohemia)

“Data! Data! Data!” he cried impatiently. “I can’t make bricks without clay.” (The Adventure of the Copper Beeches)

It is very easy to take a side on this and it is very easy to see how both sides could have merit. The issue, however, is what each of these approaches actually does to encourage students to submit their assignment work in a more timely fashion. Experiments, experimental design, surveys, longitudinal analysis, data, data, data!

If I may end by waxing lyrical for a moment (and you will see why I stick to technical writing):

If zeroes make Heroes, then zeroes they must have! If nulls make for dulls, then we must seek other ways!