HERDSA 2012: Integrating concepts across common engineering first year courses

I attended a talk by Dr Andrew Guzzomi on “Interdisciplinary threshold concepts in engineering”, where he talked about University of Western Australia’s reconfiguration of their first year common engineering program in the face of their new 3+2 course roll-out across the University. Most Unis have a common engineering first year that is the basis for all disciplines. This is usually a collection of individual units each focusing on one discipline, developed and taught by academics from that discipline. For example, civil engineers teach statics, mechs teach dynamics, but there is no guaranteed connection or conceptual linkage between the two areas. This is despite the fact that statics is effectively dynamics with some key variables set to zero. (Engineers, you may now all howl in dismay!)

This work looked at what the threshold concepts were for engineering. These threshold concepts are transformative, in that understanding them changes the way that you think about the discipline, but they are also troublesome: they take work to teach and effort to learn. But, in theory, if we identify the right threshold concepts then we can:

  • Focus teaching, learning and assessment activities
  • Renew otherwise crowded curricula

This is a big issue as we balance the requirements of our students, our discipline, our professional bodies and industry – we have to make sure that whatever we teach is appropriate and the most useful (in all of the objective spaces) thing that we can be teaching.

Dr Guzzomi then discussed the ALTC (Australian Learning and Teaching Council) project that supported the basic investigation: an inventory of what all groups considered to be the core threshold concepts. UWA was the case study, with an aim of producing a guide for other educators, and of adding back to threshold concept theory. This is one of the main contributions of the large-scale Australia-wide educational research support bodies: they can give enough money and influence to a project to allow change to occur.

(I picked up from the talk that, effectively, it helped to have a Chair of Engineering Education on board to get an initiative like this through. Even then, there was still resistance from some quarters. This isn’t surprising. If we all agreed with each other, I’d be shocked.)

The threshold concept identification required a very large set of workshops and consultative activities, across students and staff both within and without the discipline, starting with a diversification phase as concepts were added, and then moving to an integration phase that rationalised these concepts down into the set that really expressed the key threshold concepts of engineering for first year.

The implementation in Syllabus terms required the implementors to:

  • Focus teaching and learning on TCs
  • Address troublesome features
  • Provide opportunities to experience variation (the motion unit is taught using variation theory: students work at individual tables, doing different problems at different tables, then pool similar answers for comparison to show the difference in approach and answer)
The team then developed concept maps for each unit, showing inclusion, requirements, examples, ‘used with’ relationships, dependencies and so on.
This was then turned into a course implementation that had no lectures at all: courses were composed of four individual units that had readings, tutorial-like information sessions and two-hour studio sessions comprising practicals and more interactive sessions. I did ask Andrew about the assessment mechanisms in use and, while they’ve been completely rebuilt for the new course, they are still reviewing these to make sure that they exercise the threshold concepts appropriately. (I’ll be sending him e-mail to get more detail on this.)
Their findings so far are that these concept identification exercises have revealed the connections between the disciplines and the application of the same concepts across the whole of the discipline. Three concepts were identified as good examples of concepts whose reach spreads across all disciplines (integrating threshold concepts):
  1. System identification: where you work out which system the problem fits into, to allow you to simplify analysis
  2. Modelling and abstraction: where quantitative analysis is facilitated through translation into mathematical language, and students use judgement to break the system into salient components for modelling
  3. Dimensional reasoning: Identifying the variables needed to describe a complex system – making sure that equations balance.
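The third concept, dimensional reasoning, is concrete enough to sketch. A minimal illustration (my own, not from the talk) is to treat units as exponent vectors over the base dimensions of mass, length and time, so that “making sure that equations balance” becomes a simple equality check:

```python
# A minimal sketch of dimensional reasoning: units as exponent vectors
# over the base dimensions mass (M), length (L) and time (T).
# All names here are illustrative assumptions, not from the talk.

from dataclasses import dataclass

@dataclass(frozen=True)
class Dim:
    M: int = 0  # mass exponent
    L: int = 0  # length exponent
    T: int = 0  # time exponent

    def __mul__(self, other):
        # Multiplying quantities adds their dimensional exponents
        return Dim(self.M + other.M, self.L + other.L, self.T + other.T)

    def __truediv__(self, other):
        # Dividing quantities subtracts exponents
        return Dim(self.M - other.M, self.L - other.L, self.T - other.T)

MASS = Dim(M=1)
LENGTH = Dim(L=1)
TIME = Dim(T=1)
VELOCITY = LENGTH / TIME
ACCEL = LENGTH / (TIME * TIME)
FORCE = MASS * ACCEL  # kg·m/s², the newton

# F = m·a balances dimensionally...
assert MASS * ACCEL == FORCE
# ...but F = m·v does not, which is exactly the kind of check
# a first-year student can use to catch a wrong equation.
assert MASS * VELOCITY != FORCE
```

The point of the exercise is that the same balancing check applies whether the system is mechanical, electrical or fluid, which is why it qualifies as an integrating concept.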
The conclusions were relatively straightforward:
  • Rather than a traditional and relatively unlinked common foundation, teaching integrating concepts is showing promise
  • Threshold concepts provided the lens, and the approach developed from them, for integrating the disciplines
  • Teaching through variation supports student diversity in solutions
  • This approach reveals connections across engineering disciplines beyond the one in which students later choose to specialise
UWA and U Melbourne run a very different degree program from the rest of us, so it’s always interesting to see what they are up to. In this case, there’s a lot going on. Not only have they done a great deal of surveying in order to find the new threshold concepts upon which their courses are now built, but they’ve also completely changed their teaching style to support it, with much greater use of collaboration and team work. I’ll be very interested to see some more follow-up on this after it’s run for the full year.

When the Stakes are High, the Tests Had Better Be Up to It.

(This is on the stronger opinion side but, in the case of standardised testing as it is currently practised, this will be a polarising issue. Please feel free to read the next article and not this one.)

If you make a mistake, please erase everything from the worksheet, and then leave the room, as you have just wasted 12 years of education.

A friend on FB (thanks, Julie!) linked me to an article in the Washington Post that some of you may have seen. The article is called “The Complete List of Problems with High-Stakes Standardised Tests” by Marion Brady, in the words of the article, a “teacher, administrator, curriculum designer and author”. (That’s attribution, not scare quotes.)

Brady provides a (rather long but highly interesting) list of problems with the now very widespread standardised testing regime that is an integral part of student assessment in some countries. Here, Brady focuses on the US, but there is little doubt that the same problems would exist in other areas. From my readings and discussions with US teachers, he is discussing issues that are well-known problems in the area, but they are slightly intimidating when presented as a block.

So many problems are covered here, from an incorrect focus on simplistic repetition of knowledge because it’s easier to assess, to the way that it encourages extrinsic motivations (bribery or punishment in the simplest form), to the focus on test providers as the stewards and guides of knowledge rather than the teachers. There are some key problems, and phrases, that I found most disturbing, and I quote some of them here:

[Teachers oppose the tests because they]

“unfairly advantage those who can afford test prep; hide problems created by margin-of-error computations in scoring; penalize test-takers who think in non-standard ways”

“wrongly assume that what the young will need to know in the future is already known; emphasize minimum achievement to the neglect of maximum performance; create unreasonable pressures to cheat.”

“are open to massive scoring errors with life-changing consequences”

“because they provide minimal to no useful feedback”

This is completely at odds with what we would consider to be reasonable education practice in any other area. If I had comments from students that identified that I was practising 10% of this, I would be having a most interesting discussion with my Head of School concerning what I was doing – and a carpeting would be completely fair! This isn’t how we should teach and we know it.

I spoke yesterday about an assault on critical thinking as being an assault on our civilisation, short-sightedly stabbing away at helping people to think, as if doing so will really achieve what those trying to undermine critical thinking actually want. I don’t think that anyone can actually permanently stop information spreading, when that information can be observed in the natural world, but short-sightedness, malign manipulation of the truth and ignorance can certainly prevent individuals from gaining access to information – especially if we are peddling the lie that “everything which needs to be discovered is already known.”

We can, we have and we probably (I hope) always will work around these obstacles to information, these dark ages as I referred to them yesterday, but at what cost to the great minds who cannot be applied to important problems because they were born to poor families, in the ‘wrong’ state, in a district with no budget for schools, or had to compete against a system that never encouraged them to actually think?

The child who would have developed free safe power, starship drives, applicable zero-inflation stable economic models, or the “cure for cancer” may be sitting at the back of a poorly maintained, un-airconditioned, classroom somewhere, doodling away, and slowly drifting from us. When he or she encounters the standardised test, unprepared, untrained, and tries to answer it to the extent of his or her prodigious intellect, what will happen? Are you sufficiently happy with the system that you think that this child will receive a fair hearing?

We know that students learn from us, in every way. If we teach something in one way but we reward them for doing something else in a test, is it any surprise that they learn for the test and come to distrust what we talk about outside of these tests? I loathe the question “will this be in the exam” as much as the next teacher but, of course, if that is how we have prioritised learning and rewarded the student, then they would be foolish not to ask this question. If the standardised test is the one that decides your future, then, without doubt, this is the one that you must set as your goal, whether student, teacher, district or state!

Of course, it is the future of the child that is most threatened by all of this, as well as the future of the teaching profession. Poor results on a standardised test for a student may mean significantly reduced opportunity, and reduced opportunity, unless your redemptive mechanisms are first class, means limited pathways into the future. The most insidious thread through all of this is the idea that a standardised test can be easily manipulated through a strategy of learning what the answer should be, to a test question, rather than what it is, within the body of knowledge. We now combine the disadvantaged student having their future restricted, competing against the privileged student who has been heavily channeled into a mode that allows them to artificially excel, with no guarantee that they have the requisite aptitude to enjoy or take advantage of the increased opportunities. This means that both groups are equally in trouble, as far as realising their ambitions, because one cannot even see the opportunity while the other may have no real means for transforming opportunity into achievement.

The desire to control the world, to change the perception of inconvenient facts, to avoid hard questions, to never be challenged – all of these desires appear to be on the rise. This is the desire to make the world bend to our will, the real world’s actual composition and nature apparently not mattering much. It always helps me to remember that Cnut stood in the waves and commanded them not to come in order to prove that he could not control the waves – many people think that Cnut was defeated in his arrogance, when he was attempting to demonstrate his mortality and humility, in the face of his courtiers telling him that he had power above that of mortal men.

How unsurprising that so many people misrepresent this.


You’re Welcome On My Lawn But Leaf Blowers Are Not

I was looking at a piece of software the other day and, despite it being a well-used and large-userbase piece of code, I was musing that I had never found it to be particularly fit for purpose. (No, I won’t tell you what it is – I’m allergic to defamation suits.) However, my real objections to it, in simple terms, sound a bit trivial to my own ears and I’ve never really had the words or metaphors to describe it to other people.

Until today.

My wife and I were walking in to work today and saw, in the distance, a haze of yellow dust, rising up in front of three men who were walking towards us, line abreast, as a street sweeping unit slowly accompanied them along the road. Each of the men had a leaf blower that they were swinging around, kicking up all of the plane tree pollen/dust (which is highly irritating) and pushing it towards us in a cloud. They did stop when they saw us coming but, given how much dust was in the air, it’s 8 hours later and I’m still getting grit out of my eyes.

Weirdly enough, this image comes from a gaming site, discussing mecha formations. The Internet constantly amazes me.

Now, I have no problem with streets being kept clean and free of debris and I have a lot of respect for the sweepers, cleaners and garbage removal people who stop us from dying in a MegaCholera outbreak from living in cities – but I really don’t like leaf blowers. On reflection, there are a number of things that I don’t like for similar reasons so let me refer back to the piece of software I was complaining about and call it a leaf blower.

Why? Well, primarily, it’s because leaf blowers are a noisy and inefficient way to not actually solve the problem. Leaf blowers move the problem to someone else. Leaf blowers are the socially acceptable face of picking up a bag of garbage and throwing it on your neighbour’s front porch. Today was a great example – all of the dust and street debris was being blown out of the city towards the Park lands where, presumably, this would become someone else’s problem. The fact that a public thoroughfare was a pollen-ridden nightmare for 30 minutes or so was also, apparently, collateral damage.

Now, of course, there are people who use leaf blowers to push leaves into big piles that they then pick up, but there are leaf vacuums and brooms and things like that which will do a more effective job, with less noise or more efficiency. (And a lot of people just blow it off their property as if it will magically disappear.) The catch is, of course, better solutions generally require more effort.

The problem with a broom is that pushing a broom is a laborious and tiring task, and it’s quite reasonable for large-scale tasks like this that we have mechanical alternatives. For brief tidy up and small spaces, however, the broom is king. The problem with the leaf vacuum is that it has to be emptied and they are, because of their size and nature, often more expensive than the leaf blower. You probably couldn’t afford to have as many of these on your cleanup crew’s equipment roster. So brooms are cheap but hard manual labour compared to expensive leaf vacuums which fulfil the social contract but require regular emptying.

Enter the leaf blower – low effort, relatively low cost, no need to empty the bag, just blow it off the property. It is, however, an easy way to not actually solve the problem.

And this, funnily enough, describes the software that I didn’t like (and many other things in a similar vein). Cost-wise it’s a sensible decision, compared to building it yourself and in terms of maintenance. It’s pretty easy to use. There’s no need to worry about being sensible or parsimonious with resources. You just do stuff in it with a small amount of time and you’re done.

The only problem is that what you are encouraged to produce by default, the affordance of the software, is not actually the solution to the problem that the software theoretically solves. It is an approximation to the answer but, in effect, you’ve handed the real problem to someone else – in my case, the student, because it’s software of an educational nature. This then feeds load straight back to you, your teaching assistants and support staff. Any effort you’ve expended is wasted and you didn’t even solve the problem.

I’ve talked before about trying to assess what knowledge workers are doing, rather than concentrating on the number of hours that they are spending at their desk, and the ‘desk hours’ metric is yet another example of leaf blowing. Cheap and easy metric, neither effective nor useful, and realistically any sensible interpretation requires you to go back and work out what people are actually doing during those hours – problem not solved, just shunted along, with a bit of wasted effort and a false sense of achievement.

Solving problems is sometimes difficult and it regularly requires careful thought and effort. There may be a cost involved. If we try to come up with something that looks like a solution, but all it does is blow the leaves around, then we probably haven’t actually solved anything.


Who Knew That the Slippery Slope Was Real?

Take a look at this picture.

Dan Ariely. Photo: poptech/Flickr, via wired.com.

One thing you might have noticed, if you’ve looked carefully, is that this man appears to have had some reconstructive surgery on the right side of his face and there is a colour difference, which is slightly accentuated by the lack of beard stubble. What if I were to tell you that this man was offered the chance to have fake stubble tattooed onto that section and, when he declined because he felt strange about it, received a higher level of pressure and, in his words, guilt trip than for any other procedure during the extensive time he spent in hospital receiving skin grafts and burn treatments? Why was the doctor pressuring him?

Because he had already performed the tattooing remediation on two people and needed a third for the paper. In Dan’s words, again, the doctor was a fantastic physician, thoughtful and caring, but he had a conflict of interest that moved him to a different mode of behaviour. For me, I had to look a couple of times because the asymmetry that the doctor referred to is not that apparent at first glance. Yet the doctor felt compelled, by interests that were not Dan’s, to make Dan self-conscious about the perceived problem.

A friend on Facebook (thanks, Bill!) posted a link to an excellent article in Wired, entitled “Why We Lie, Cheat, Go to Prison and Eat Chocolate Cake” by Dan Ariely, the man pictured above. Dan is a professor of behavioural economics and psychology at Duke and his new book explores the reasons that we lie to each other. I was interested in this because I’m always looking for explanations of student behaviour and I want to understand their motivations. I know that my students will rationalise and do some strange things but, if I’m forewarned, maybe I can construct activities and courses in a way that heads this off at the pass.

There were several points of interest to me. The first was the question whether a cost/benefit analysis of dishonesty – do something bad, go to prison – actually has the effect that we intend. As Ariely points out, if you talk to the people who got caught, the long-term outcome of their actions was never something that they thought about. He also discusses the notion of someone taking small steps, a little each time, that move them from law abiding, for want of a better word, to dishonest. Rather than set out to do bad things in one giant leap, people tend to take small steps, rationalising each one, and after each step opening up a range of darker and darker options.

Welcome to the slippery slope – beloved argument of rubicund conservative politicians since time immemorial. Except that, in this case, it appears that the slope is piecewise composed of tiny little steps. Yes, each step requires a decision, so there isn’t the momentum that we commonly associate with the slope, but each step, in some sense, takes you further and further away from the honest place from which you started.

Ariely discusses an experiment where he gave two groups designer sunglasses and told one group that they had the real thing, and the other that they had fakes, and then asked them to complete a test and then gave them a chance to cheat. The people who had been randomly assigned into the ‘fake sunglasses’ group cheated more than the others. Now there are many possible reasons for this. One of them is the idea that if you know that you are signalling your status deceptively to the world, which is Ariely’s argument, you are in a mindset where you have taken a step towards dishonesty. Cheating a little more is an easier step. I can see many interpretations of this, because of the nature of the cheating, which is in reporting how many questions you completed on the test, where self-esteem issues caused by being in the ‘fake’ group may lead to you over-promoting yourself in the reporting of your success on the quiz – but it’s still cheating. Ultimately, whatever is motivating people to take that step, the step appears to be easier if you are already inside the dishonest space, even to a degree.

[Note: Previous paragraph was edited slightly after initial publication due to terrible auto-correcting slipping by me. Thanks, Gary!]

Where does something like copying software or illicitly downloading music come into this? Does this constant reminder of your small, well-rationalised, step into low-level lawlessness have any impact on the other decisions that you make? It’s an interesting question because, according to the outline in Ariely’s sunglasses experiment, we would expect it to be more of a problem if the products became part of your projected image. We know that having developed a systematic technological solution for downloading is the first hurdle in terms of achieving downloads but is it also the first hurdle in making steadily less legitimate decisions? I actually have no idea but would be very interested to see some research in this area. I feel it’s too glib to assume a relationship, because it is such a ‘slippery slope’ argument, but Ariely’s work now makes me wonder. Is it possible that, after downloading enough music or software, you could actually rationalise the theft of a car? Especially if you were only ‘borrowing’ it? (Personally, I doubt it because I think that there are several steps in between.) I don’t have a stake in this fight – I have a personal code for behaviour in this sphere that I can live with but I see some benefits in asking and trying to answer these questions from something other than personal experience.

Returning to the article, of particular interest to me was the discussion of an honour code, such as Princeton’s, where students sign a pledge. Ariely sees its benefit as a reminder that stays active for some time but that, ultimately, would have little value over several years because, as we’ve already discussed, people rationalise in small increments over the short term rather than constructing long-term models where the pledge would make a difference. Sign a pledge in 2012 and it may just not have any impact on you by the middle of 2012, let alone at the end of 2015 when you’re trying to graduate. Potentially, at almost any cost.

In terms of ongoing reminders, and a signature on a piece of work saying (in effect) “I didn’t cheat”, Ariely asks what happens if you have to sign the honour clause after you’ve finished a test – well, if you’ve finished then any cheating has already occurred, so the honour clause is useless. If you remind people at the start of every assignment, every test, and get them to pledge at the beginning then this should have an impact – a halo effect to an extent, or a reminder of expectation that will make it harder for you to rationalise your dishonesty.

In our school we have an electronic submission system that students are required to use to submit their assignments. It has boilerplate ‘anti-plagiarism’ text and you must accept the conditions to submit. However, this is your final act before submission and you have already finished the code, which falls immediately into the trap mentioned in the previous paragraph. Dan Ariely’s answers have made me think about how we can change this to make it more of an upfront reminder, rather than an ‘after the fact – oh it may be too late now’ auto-accept at the end of the activity. And, yes, reminder structures and behaviour modifiers in time banking are also being reviewed and added in the light of these new ideas.

The Wired Q&A is very interesting and covers a lot of ground but, realistically, I think I have to go and buy Dan Ariely’s book(s), prepare myself for some harsh reflection and thought, and plan for a long weekend of reading.


Time Banking and Plagiarism: Does “Soul Destroying” Have An Ethical Interpretation?

Yesterday, I wrote a post on the 40 hour week, to give an industrial basis for the notion of time banking, and I talked about the impact of overwork. One of the things I said was:

The crunch is a common feature in many software production facilities and the ability to work such back-breaking and soul-destroying shifts is often seen as a badge of honour or mark of toughness. (Emphasis mine.)

Back-breaking is me being rather overly emphatic regarding the impact of work, although in manual industries workplace accidents caused by fatigue and overwork can and do break backs – and worse – on a regular basis.

Is it Monday morning already?

But soul-destroying? Am I just saying that someone will perform their tasks as an automaton or zombie, or am I saying something more about the benefit of full cognitive function – the soul as an amalgam of empathy, conscience, consideration and social factors? Well, the answer is that, when I wrote it, I was talking about mindlessness and the removal of the ability to take joy in work, which is on the zombie scale, but as I’ve reflected on the readings more, I am now convinced that there is an ethical dimension to fatigue-related cognitive impairment that is important to talk about. Basically, the more tired you get, the more narrowly you focus on the task itself, and this can have some serious professional and ethical implications. I’ll provide a basis for this throughout the rest of this post.

The paper I was discussing, on why Crunch Mode doesn’t work, listed many examples from industry and one very interesting paper from the military. The paper, which had a broken link in the Crunch mode paper, may be found here and is called “Sleep, Sleep Deprivation, and Human Performance in Continuous Operations” by Colonel Gregory Belenky. Now, for those who don’t know, in 1997 I was a commissioned Captain in the Royal Australian Armoured Corps (Reserve), on detachment to the Training Group to set up and pretty much implement a new form of Officer Training for Army Reserve officers in South Australia. Officer training is a very arduous process and places candidates, the few who make it in, under a lot of stress and does so quite deliberately. We have to have some idea that, if terrible things happen and we have to deploy a human being to a war zone, they have at least some chance of being able to function. I had been briefed on most of the issues discussed in Colonel Belenky’s paper but it was only recently that I read through the whole thing.

And, to me today as an educator (I resigned my commission years ago), there are still some very important lessons, guidelines and warnings for all of us involved in the education sector. So stay with me while I discuss some of Belenky’s terminology and background. The first term I want to introduce is droning: the loss of cognitive ability through lack of useful sleep. As Belenky puts it, in the context of US Army Ranger training:

…the candidates can put one foot in front of another and respond if challenged, but have difficulty grasping their situation or acting on their own initiative.

What was most interesting, and may surprise people who have never served with the military, is that the higher the rank, the less sleep people got – and the higher level the formation, the less sleep people got. A Brigadier in charge of a Brigade is going to, on average, get less sleep than the more junior officers in the Brigade and a lot less sleep than a private soldier in a squad. As an officer, my soldiers were fed before me, rested before me and a large part of my day-to-day concern was making sure that they were kept functioning. This keeps on going up the chain and, as you go further up, things get more complex. Sadly, the people shouldering the most complex cognitive functions with the most impact on the overall battlefield are also the people getting the least fuel for their continued cognitive endeavours. They are the most likely to be droning: going about their work in an uninspired way and not really understanding their situation. So here is more evidence from yet another place: lack of sleep and fatigue lead to bad outcomes.

One of the key issues Belenky talks about is the loss of situational awareness caused by the accumulated sleep debt, fatigue and overwork suffered by military personnel. He gives an example of an Artillery Fire Direction Centre – this is where requests for fire support (big guns firing large shells at locations some distance away) come to and the human plotters take your requests, transform them into instructions that can be given to the gunners and then firing starts. Let me give you a (to me) chilling extract from the report, which the Crunch Mode paper also quoted:

Throughout the 36 hours, their ability to accurately derive range, bearing, elevation, and charge was unimpaired. However, after circa 24 hours they stopped keeping up their situation map and stopped computing their pre-planned targets immediately upon receipt. They lost situational awareness; they lost their grasp of their place in the operation. They no longer knew where they were relative to friendly and enemy units. They no longer knew what they were firing at. Early in the simulation, when we called for simulated fire on a hospital, etc., the team would check the situation map, appreciate the nature of the target, and refuse the request. Later on in the simulation, without a current situation map, they would fire without hesitation regardless of the nature of the target. (All emphasis mine.)

Here, perhaps, is the first inkling of what I realised I meant by soul destroying. Yes, these soldiers are overworked to the point of droning and are now shuffling towards zombiedom. But, worse, they have no real idea of their place in the world and, perhaps most frighteningly, despite knowing that accidents happen when fire missions are requested and having direct experience of rejecting what would have resulted in accidental hospital strikes, these soldiers have moved to a point of function where the only thing that matters is doing the work and calling the task done. This is an ethical aspect because, from their previous actions, it is quite obvious that there was both a professional and ethical dimension to their job as the custodians of this incredibly destructive weaponry – deprive them of enough sleep and they calculate and fire, no longer having the cognitive ability (or perhaps the will) to be ethical in their delivery. (I realise a number of you will have choked on your coffee slightly at the discussion of military ethics but, in the majority of cases, modern military units have a strong ethical code, even to the point of providing a means for soldiers to refuse to obey illegal orders. Most failures of this system in the military can be traced to failures in a unit’s ethical climate or to undetected instability in the soldiers: much as in the rest of the world.)

The message, once again, is clear. Overwork, fatigue and sleeplessness reduce the ability to perform as you should. Belenky even notes that the ability to benefit from training quite clearly deteriorates as the fatigue levels increase. Work someone hard enough, or let them work themselves hard enough, and not only aren’t they productive, they can’t learn to do anything else.

The notion of situational awareness is important because it’s a measure of your sense of place, in an organisational sense, in a geographical sense, in a relative sense to the people around you and also in a social sense. Get tired enough and you might swear in front of your grandma because your social situational awareness is off. But it’s not just fatigue over time that can do this: overloading someone with enough complex tasks can stress cognitive ability to the point where similar losses of situational awareness can occur.

Helmet fire is a vivid description of what happens when you have too many tasks to do, under highly stressful situations, and you lose your situational awareness. If you are a military pilot flying on instruments alone, especially with low or zero visibility, then you have to follow a set of procedures, while regularly checking the instruments, in order to keep the plane flying correctly. If the number of tasks that you have to carry out gets too high, and you are facing the stress of effectively flying the plane visually blind, then your cognitive load limits will be exceeded and you are now experiencing helmet fire. You are now very unlikely to be making any competent contributions at all at this stage but, worse, you may lose your sense of what you were doing, where you are, what your intentions are, which other aircraft are around you: in other words, you lose situational awareness. At this point, you are now at a greatly increased risk of catastrophic accident.

To summarise, if someone gets tired, stressed or overworked enough, whether acutely or over time, their performance goes downhill, they lose their sense of place and they can’t learn. But what does this have to do with our students?

A while ago I posted thoughts on a triage system for plagiarists – allocating our resources to those students we have the most chance of bringing back to legitimate activity. I identified the three groups as: sloppy (unintentional) plagiarism, deliberate (but desperate and opportunistic) plagiarism and systematic cheating. I think that, from the framework above, we can now see exactly where the majority of my ‘opportunistic’ plagiarists are coming from: sleep-deprived, fatigued and (by their own hands or not) over-worked students losing their sense of place within the course and becoming focused only on the outcome. Here, the sense of place is not just geographical, it is their role in the social and formal contracts that they have entered into with lecturers, other students and their institution. Their place in the agreements for ethical behaviour in terms of doing the work yourself and submitting only that.

If professional soldiers who have received very large amounts of training can forget where their own forces are, sometimes to the tragic extent that they fire upon and destroy them, or become so cognitively impaired that they carry out the mission, and only the mission, with little of their usual professionalism or ethical concern, then it is easy to see how a student can become so task focussed that they start to think only about ending the task, by any means, to reduce the cognitive load and to allow themselves to get the sleep that their body desperately needs.

As always, this does not excuse their actions if they resort to plagiarism and cheating – it explains them. It also provides yet more incentive for us to try and find ways to reach our students and help them form systems for planning and time management that bring them closer to the 40 hour ideal, that reduce the all-nighters and the caffeine binges, and that allow them to maintain full cognitive function as ethical, knowledgeable and professional practitioners.

If we want our students to learn, it appears that (for at least some of them) we first have to help them to marshal their resources more wisely and keep their awareness of exactly where they are, what they are doing and, in a very meaningful sense, who they are.


Time Banking: Foresightedness and Reward

You may have noticed that I’ve stopped numbering the time banking posts – you may not have noticed that they were numbered in the first place! The reason is fairly simple and revolves around the fact that the numbers are actually meaningless. It’s not as if I have a grand plan for the final sequence of the time banking posts. I do have a general idea but the order can change as one idea or another takes me, and I feel that numbering them makes it look as if there is some grand sequence.

There isn’t. That’s why they all tend to have subtitles after them so that they can be identified and classified in a cognitive sequence. So, why am I telling you this? I’m telling you this so that you don’t expect “Time Banking 13” to be something special, or (please, no) “Time Banking 100” to herald the apocalypse.

The Druids invented time banking but could never find a sufficiently good Oracle to make it work. The Greeks had the Oracle but not the bank. This is why the Romans conquered everywhere. True story!

If I’m going to require students to self-regulate then, whether through operant or phenomenological mechanisms, the outcomes that they receive are going to have to be shaped to guide the student towards a self-regulating model. In simple terms, they should never feel that they have wasted their time, that they are under-appreciated or that they have been stupid to follow a certain path.

In particular, if we’re looking at time management, then we have to ensure that time spent in advance is never considered to be wasted time. What does that mean for me as a teacher? If I set an assignment in advance and students put work towards it, I can’t change the assignment arbitrarily. This is one of the core design considerations for time banking: if deadlines are seen as arbitrary (and extending them in case of power failures or class-wide lack of submission can show just how arbitrary they are), then we can allow the students some movement around the original deadlines, in a way that gives them control without giving us too much extra work. If I want my students to commit to planning ahead and doing work before the due date, then some heavy requirements fall on me:

  1. I have to provide the assignment work ahead of schedule and, preferably, for the entire course at the start of the semester.
  2. The assignments stay the same throughout that time. No last minute changes or substitutions.
  3. The oracle is tied to the assignment and is equally reliable.

This requires a great deal of forward planning and testing but, more importantly, it requires a commitment from me. If I am asking my students to commit, I have to commit my time and planning and attention to detail to my students. It’s that simple. Nobody likes to feel like a schmuck. Like they invested time under false pretences. That they had worked on what they thought was a commitment but it turned out that someone just hadn’t really thought things through.

Wasting time and effort discourages people. It makes people disengage. It makes them less trustful of you as an educator. It makes them less likely to trust you in the future. It reduces their desire to participate. This is the antithesis of what I’m after with increasing self-regulation and motivation to achieve this, which I label under the banner of my ‘time banking’ project.

But, of course, it’s not as if we’re not already labouring under this commitment to our students, at least implicitly. If we don’t follow the three requirements above then, at some stage, students will waste effort and, believe me, they’re going to question what they’re doing, why they’re bothering, and some of them will drop out, drift away and be lost to us forever. Never thinking that you’ve wasted your time, never feeling like a schmuck, seeing your ideas realised, achieving goals: that’s how we reward students, that’s what can motivate students and that’s how we can move them on to higher levels of function and achievement.

 


The Many Types of Failure: What Does Zero Mean When Nothing Is Handed Up?

You may have read about the Edmonton, Canada, teacher who expected to be sacked for handing out zeros. It’s been linked to sites as diverse as Metafilter, where a long and interesting debate ensued, and Cracked, where it was labelled one of the ongoing ‘pussifications’ of schools. (Seriously? I know you’re a humour site but was there some other way you could have put that? Very disappointed.)

Basically, the Edmonton Public School Board decided that, rather than just give a zero for a missed assignment, this would be used as a cue for follow-up work and additional classes at school or home. Their argument – you can’t mark work that hasn’t been submitted, so let’s use this as a trigger to try and get submission, in case the source of the problem is external or behavioural. This, of course, puts the onus on the school to track the students, get the additional work completed, and then mark out of sequence. Lynden Dorval, the high school teacher who is at the centre of this, believes that there is too much manpower involved in doing this and that giving the student a zero forces them to come to you instead.

Some of you may never have seen one of these before. This is a zero, which is the lowest mark you can be awarded for any activity. (I hope!)

Now, of course, this has split people into two fairly neat camps – those who believe that Dorval is the “hero of zero” and those who can see the benefit of the approach, including taking into account that students still can fail if they don’t do enough work. (Where do I stand? I’d like to know a lot more than one news story before I ‘pick a side’.) I would note that a lot of tired argument and pejorative terminology has also come to the fore – you can read most of the buzzwords used against ‘progressives’ in this article, if you really want to. (I can probably summarise it for you but I wouldn’t do it objectively. This is just one example of those who are feting Dorval.)

Of course, rather than get into a heated debate where I really don’t have enough information to contribute, I’d rather talk about the basic concept – what exactly does a zero mean? If you hand something in and it meets none of my requirements, then a zero is the correct and obvious mark. But what happens if you don’t hand anything in?

With the marking approach that I practice and advertise, which uses time-based mark penalties for late submission, students are awarded marks for what they get right, rather than have marks deducted for what they do wrong. Under this scheme, “no submission” gives me nothing to mark, which means that I cannot give you any marks legitimately – so is this a straight-forward zero situation? The time penalties are in place as part of the professional skill requirements and are clearly advertised, and consistently policed. I note that I am still happy to give students the same level of feedback on late work, including their final mark without penalty, which meets all of the pedagogical requirements, but the time management issues can cost a student some, most or all of their marks. (Obviously, I’m actively working on improving engagement with time management through mechanisms that are not penalty based but that’s for other posts.)
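To make the scheme concrete, here is a minimal sketch of a time-based late-penalty calculation. The 10%-per-day rate, the per-day rounding and the function names are all my illustration, not figures stated in the post; the only properties the post commits to are that marks are awarded for what is right, penalties accrue with lateness, and the result never goes below zero.

```python
# Hypothetical sketch of a time-based late-penalty scheme: the awarded
# mark reflects what the student got right, then a per-day deduction is
# applied for late submission. The 10%-per-day rate is illustrative.

def final_mark(awarded: float, hours_late: float,
               penalty_per_day: float = 0.10) -> float:
    """Return the mark after any late penalty; never below zero."""
    if hours_late <= 0:
        return awarded
    days_late = -(-hours_late // 24)  # round part-days up to whole days
    penalty = awarded * penalty_per_day * days_late
    return max(0.0, awarded - penalty)
```

Note that the student’s un-penalised mark (`awarded`) is still available for feedback purposes, which matches the practice described above of giving full feedback even on heavily penalised work.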

As an aside, we have three distinct fail grades for courses at my University:

  • Withdraw Fail (WF), where a student has dropped the course but after the census date. They pay the money, it stays on their record, but as a WF.
  • Fail (F), student did something but not enough to pass.
  • Fail No Submission (FNS), student submitted no work for assessment throughout the course.

Interestingly, for my Uni, FNS has a numerical grade of 0, although this is not shown on the transcript. Zero, in the course sense, means that you did absolutely nothing. In many senses, this represents the nadir of student engagement, given that many courses have somewhere from 1–5%, maybe even 10%, of the marks available for very simple activities that require very little effort.

My biggest problem with late work, or no submission, is that one of the strongest messages I have from that enormous data corpus of student submissions that I keep talking about is that starting a pattern of late or no submission is an excellent indicator of reduced overall performance and, with recent analysis, a sharply decreased likelihood of making it to third year (final year) in your college studies. So I really want students to hand something in – which brings me to the crux of the way that we deal with poor submission patterns.

Whichever approach I take should be the one that is most likely to bring students back into a regular submission pattern. 

If the Public School Board’s approach is increasing completion rates and this has a knock-on effect which increases completion rates in the future? Maybe it’s time to look at that resourcing profile and put the required money into this project. If it’s a transient peak that falls off because we’re just passing people who should be failing? Fuhgeddaboutit.

To quote Sherlock Holmes (Conan Doyle, naturally): 

It is a capital mistake to theorize before one has data. Insensibly one begins to twist facts to suit theories, instead of theories to suit facts. (A Scandal in Bohemia)

“Data! Data! Data!” he cried impatiently. “I can’t make bricks without clay.” (The Adventure of the Copper Beeches)

It is very easy to take a side on this and it is very easy to see how both sides could have merit. The issue, however, is what each of these approaches actually does to encourage students to submit their assignment work in a more timely fashion. Experiments, experimental design, surveys, longitudinal analysis, data, data, data!

If I may end by waxing lyrical for a moment (and you will see why I stick to technical writing):

If zeroes make Heroes, then zeroes they must have! If nulls make for dulls, then we must seek other ways!


Time Banking III: Cheating and Meta-Cheating

One of the problems with setting up any new marking system is that, especially when you’re trying to do something a bit out of the ordinary, you have to make sure that you don’t produce a system that can be gamed or manipulated to let people get an unfair advantage. (Students are very resourceful when it comes to this – anyone who has received, on more than one occasion from the same student, a mysteriously corrupted Word document of precisely the right length and with enough relevant strings to look convincing, followed by a working version the next Monday, knows exactly what I’m talking about.)

As part of my design, I have to be clear to the students what I do and don’t consider to be reasonable behaviour (returning to Dickinson and McIntyre, I need to be clear in my origination and leadership role). Let me illustrate this with an anecdote from decades ago.

In the early 90s, I helped to write and run a number of Multi User Dungeons (MUDs) – the text-based forerunners of the Massively Multiplayer On-line Role Playing Games, such as World of Warcraft. The games had very little graphical complexity and we spent most of our time writing the code that drove things like hitting orcs with swords or allowing people to cast spells. Because of the many interactions between the software components in the code, it was possible for unexpected things to happen – not just bugs where code stopped working but strange ‘features’ where things kept working but in an odd way. I knew a guy, let’s call him K, who was a long-term player of MUDs. If the MUD was any good, he’d not only played it, he’d effectively beaten it. He knew every trick, every lurk, the best way to attack a monster but, more interestingly, he had a nose for spotting errors in the code and taking advantage of them. One time, in a game we were writing, we spotted K walking around with something like 20-30 ’empty’ water bottles on him. (As game writers, wizards, we could examine any object in the game, which included seeing what players were carrying.)

A bit like this, but all on one person’s shoulders and no wheels.

This was weird. Players had a limited amount of stuff that they could carry, and K should have had no reason to carry those bottles. When we examined him, we discovered that we’d made an error in the code so that, when you drank from a bottle and emptied it, the bottle ended up weighing LESS THAN NOTHING. (It was a text game and our testing wasn’t always fantastic – I learnt!) So K was carrying around the in-game equivalent of helium balloons that allowed him to carry a lot more than he usually would.
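The bug is easy to reproduce in any inventory system where a derived weight is computed by subtraction. This is a hypothetical Python reconstruction (the original MUD was certainly not written in Python, and these numbers are invented) of how an empty container can end up weighing less than nothing:

```python
# Hypothetical reconstruction of the bottle bug: the empty-container
# weight is derived by subtraction, and a constants error leaves an
# empty bottle with negative weight. All values are illustrative.

BOTTLE_WEIGHT = 2   # weight of the glass bottle itself
WATER_WEIGHT = 3    # weight of the water it holds

class Bottle:
    def __init__(self):
        self.full = True

    @property
    def weight(self):
        if self.full:
            return BOTTLE_WEIGHT + WATER_WEIGHT
        # Bug: subtracting the FULL weight instead of just the water,
        # so an empty bottle weighs 2 - 5 = -3.
        return BOTTLE_WEIGHT - (BOTTLE_WEIGHT + WATER_WEIGHT)

def carried_weight(items):
    """Total encumbrance of everything a player is carrying."""
    return sum(item.weight for item in items)
```

With this bug, thirty empty bottles reduce your encumbrance by 90 units: exactly K’s in-game helium balloons.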

Of course, once we detected it, we fixed the code and K stopped carrying so many empty bottles. (Although, I have no doubt that he personally checked each and every container we put into the game from that point on to see if he could get it to happen again.) Did we punish him? No. We knew that K would need some ‘flexibility’ in his exploration of the game, knowing that he would press hard against the rubber sheet to see how much he could bend reality, but also knowing that he would spot problems that would take us weeks or months of time to find on our own. We took him into our new and vulnerable game knowing that if he tried to actually break or crash the game, or share the things he’d learned, we’d close off his access. And he knew that too.

Had I placed a limit in play that said “Cheating detected = Immediate Booting from the game”, K would have left immediately. I suspect he would have taken umbrage at the term ‘cheating’, as he generally saw it as “this is the way the world works – it’s not my fault that your world behaves strangely”. (Let’s not get into this debate right now, we’re not in the educational plagiarism/cheating space right now.)

We gave K some exploration space, more than many people would feel comfortable with, but we maintained some hard pragmatic limits to keep things working and we maintained the authority required to exercise these limits. In return, K helped us although, of course, he played for the fun of the game and, I suspect, the joy of discovering crazy bugs. However, overall, this approach saved us effort and load, and allowed us to focus on other things with our limited resources. Of course, to make this work required careful orientation and monitoring on our behalf. Nothing, after all, comes for free.

If I’d asked K to fill out forms describing the bugs he’d found, he’d never have done it. If I’d had to write detailed test documents for him, I wouldn’t have had time to do anything else. But it also illustrates something that I have to be very cautious of, which I’ve embodied as the ‘no cheating/gaming’ guideline for Time Banking. One of the problems with students at early development stages is that they can assume that their approach is right, or even assert that their approach is the correct one, when it is not aligned with our goals or intentions at all. Therefore, we have to be clear on the goals and open about our intentions. Given that the goal of Time Banking is to develop a mature approach to time management, using the team approach I’ve already discussed, I need to be very clear in the guidance I give to students.

However, I also need to be realistic. There is a possibility that, especially on the first run, I introduce a feature in either the design or the supporting system that allows students to do something that they shouldn’t. So here’s my plan for dealing with this:

  1. There is a clear no-cheating policy. Get caught doing anything that tries to subvert the system or get you more hours in any other way than submitting your own work early and it’s treated as a cheating incident and you’re removed from the time bank.
  2. Reporting a significant fault in the system, that you have either deduced, or observed, is worth 24 hours of time to the first person who reports it. (Significant needs definition but it’s more than typos.)

I need the stick. Some of my students need to know that the stick is there, even if the stick is never needed, but I really can’t stand the stick. I have always preferred the carrot. Find me a problem and you get an automatic one-day extension, good for any assignment in the bank. Heck, I could even see my way clear to making this ‘liftable’ hours – 24 hours you can hand on to a friend if you want. If part of your team thinking extends to other people and, instead of a gifted student handing out their assignment, they hand out some hours, I have no problem with that. (Mr Pragmatism, of course, places a limit on the number of unearned hours you can do this with, from the recipient’s, not the donor’s perspective. If I want behaviour to change, then people have to act to change themselves.)
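The ‘liftable’ hours idea can be sketched very simply. This is a hypothetical model, with all names and the 24-hour recipient-side cap as my assumptions (the post caps unearned hours on the recipient’s side but doesn’t fix a number):

```python
# Hypothetical sketch of 'liftable' hours: a donor may gift banked
# hours to another student, but the recipient can only ever hold a
# limited number of unearned hours. The 24-hour cap is an assumption.

UNEARNED_CAP = 24

class Account:
    def __init__(self):
        self.earned = 0   # hours earned by early submission
        self.gifted = 0   # hours received from other students

def gift(donor: Account, recipient: Account, hours: int) -> int:
    """Transfer up to `hours` of the donor's earned time; returns the
    amount actually moved after the recipient-side cap is applied."""
    room = max(0, UNEARNED_CAP - recipient.gifted)
    amount = min(hours, donor.earned, room)
    donor.earned -= amount
    recipient.gifted += amount
    return amount
```

Capping on the recipient side, rather than the donor side, is the point of the design: a generous student can give away as much as they like, but no student can coast on other people’s hours, so behaviour change still has to come from the recipient’s own work.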

My design needs to keep the load down, the rewards up but, most importantly, the rewards have to move the students towards the same goals as the primary activity or I will cause off-task optimisation and I really don’t want to do that.

I’m working on a discussion document to go out to people who think this is a great idea, a terrible idea, the worst idea ever, something that they’d like to do, so that I can bring all of the thoughts back together and, as a group of people dedicated to education, come up with something that might be useful – OR, and it’s a big or, come up with the dragon slaying notion that kills time banking stone dead and provides the sound theoretical and evidence-based support as to why we must and always should use deadlines. I’m prepared for one, the other, both or neither to be true, along with degrees along the axis.

 


Time Banking II: We Are a Team

In between getting my camera ready copy together for ICER, and I’m still pumped that our paper got into ICER, I’ve been delving deep into the literature and the psychological and pedagogical background that I need to confirm before I go too much further with Time Banking. (I first mentioned this concept here. The term is already used in a general sense to talk about an exchange of services based on time as a currency. I use it here within the framework of student assignment submission.) I’m not just reading in CS Ed, of course, but across Ed, sociology, psychology and just about anywhere else where people have started to consider time as a manageable or tradable asset. I thought I’d take this post to outline some of the most important concepts behind it and provide some rationale for decisions that have already been made. I’ve already posted the guidelines for this, which can be distilled down to “not all events can be banked”, “additional load must be low”, “pragmatic limits apply”, “bad (cheating or gaming) behaviour is actively discouraged” and “it must integrate with our existing systems”.

Time/Bank currency design by Lawrence Weiner. Photo by Julieta Aranda. (Question for Nick – do I need something like this for my students?)

Our goal, of course, is to get students to think about their time management in a more holistic fashion and to start thinking about their future activities sometime sooner than 24 hours before the due date. Rather than students being receivers and storers of deadlines, can we allow them to construct their own timelines, within a set of limits? (Ben-Ari, 1998, “Constructivism in Computer Science Education”, SIGCSE, although Ben-Ari referred to knowledge in this context and I’m adapting it to a knowledge of temporal requirements, which depends upon a mature assessment of the work involved and a sound knowledge of your own skill level.)

The model that I am working with is effectively a team-based model, drawing on Dickinson and McIntyre’s 1997 work “Team Performance Assessment and Measurement: Theory, Methods and Applications”, but where the team consists of a given student, my marking team and me. Ultimately our product is the submitted artefact and we are all trying to facilitate its timely production, but if I want students to be constructive and participative, rather than merely compliant and receptive, I have to involve them in the process. Dickinson and McIntyre identified seven roles in their model: orientation, leadership, monitoring, feedback, back-up (assisting/supporting), coordination and communication. Some of these roles are obviously mine, as the lecturer, such as orientation (establishing norms and keeping the group cohesive) and monitoring (observing performance and recognising correct contribution). However, a number of these can easily be shared between lecturer and student, although we must be clear as to who holds each role at a given time. In particular, if I hold onto deadlines and make them completely immutable then I have taken the coordination role and handed over only a very small fragment of it to the student. By holding onto that authority, whether it makes sense or not, I’m forcing the student into an authority-dependent mode.

(We could, of course, get into quite a discussion as to whether the benefit is primarily Piagetian, because we are connecting new experiences with established ideas, or Vygotskian, because of the contact with the More Knowledgeable Other and time spent in the Zone of Proximal Development. Let’s just say that either approach supports the importance of me working with a student in a more fluid and interactive manner than a more rigid and authoritarian relationship.)

Yes, I know, some deadlines are actually fixed and I accept that. I’m not saying that we abandon all deadlines or notion of immutability. What I am, however, saying is that we want our students to function in working teams, to collaborate, to produce good work, to know when to work harder earlier to make it easier for themselves later on. Rather than give them a tiny sandpit in which to play, I propose that we give them a larger space to work with. It’s still a space with edges, limits, defined acceptable behaviour – our monitoring and feedback roles are one of our most important contributions to our students after all – but it is a space in which a student can have more freedom of action and, for certain roles including coordination, start to construct their own successful framework for achievement.

Much as reading Vygotsky gives you useful information and theoretical background, without necessarily telling you how to teach, reading through all of these ideas doesn’t immediately give me a fully-formed implementation. This is why the guidelines were the first things I developed once I had some grip on the ideas, because I needed to place some pragmatic limits that would allow me to think about this within a teaching framework.  The goal is to get students to use the process to improve their time management and process awareness and we need to set limits on possible behaviour to make sure that they are meeting the goal. “Hacks” to their own production process, such as those that allow them to legitimately reduce their development time (such as starting the work early, or going through an early prototype design) are the point of the exercise. “Hacks” that allow them to artificially generate extra hours in the time bank are not the point at all. So this places a requirement on the design to be robust and not susceptible to gaming, and on the orientation, leadership and monitoring roles as practiced by me and my staff. But it also requires the participants to enter into the spirit of it or choose not to participate, rather than attempting to undermine it or act to spite it.

The spontaneous generation of hours was something that I really wanted to avoid. When I sketched out my first solution, I realised that I had made the system far too complex by granting time credits immediately, when a ‘qualifying’ submission was made, and that later submissions required retraction of the original grant, followed by a subsequent addition operation. In fact, I had set up a potential race condition that made it much more difficult to guarantee that a student was using genuine extension credit time. The current solution? Students don’t get credit added to their account until a fixed point has passed, beyond which no further submissions can take place. This was the first of the pragmatic limits – there does exist a ‘no more submissions’ point but we are relatively elastic to that point. (It also stops students trying to use obtained credit for assignment X to try and hand up an improved version of X after the due date. We’re not being picky here but this isn’t the behaviour we want – we want students to think more than a week in advance because that is the skill that, if practised correctly, will really improve their time management.)
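The race-free version of the credit-granting design can be sketched as follows. This is a minimal model with invented names and datatypes, assuming credits are computed once, after the ‘no more submissions’ point, from the final state of the submission log, so no grant ever has to be retracted:

```python
# Hypothetical sketch of race-free credit granting: credit is computed
# only after the cut-off, from the last submission, so there is never
# a grant-then-retract sequence to race against. Names are illustrative.

from datetime import datetime, timedelta

def banked_hours(submissions, due, cutoff, now):
    """Hours of time-bank credit earned for one assignment.

    submissions: list of submission timestamps (datetime)
    due: the advertised due date
    cutoff: the 'no more submissions' point, after `due`
    now: the current time; nothing is credited before `cutoff`
    """
    if now < cutoff or not submissions:
        return 0
    last = max(submissions)        # only the final submission counts
    if last > due:
        return 0                   # late work spends credit, never earns it
    return int((due - last).total_seconds() // 3600)
```

Because the function is only ever evaluated once the window is closed, an early ‘qualifying’ submission followed by a later one simply changes `last` before anything is granted, which removes the retraction step and the race condition it created.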

My first and most immediate concern was that students might adapt to this ‘last hand-in barrier’, but our collected data doesn’t support this hypothesis, although there are some concerning subgroups that we are currently tearing apart to see if we can get more evidence on the small group of students who do seem to go to a final marks barrier that occurs after the main submission date.

I hope to write more on this over the next few days, discussing in more detail my support for requiring a ‘no more submissions’ point at all. As always, discussion is very welcome!


What are the Fiction and Non-Fiction Equivalents of Computer Science?

I commented yesterday that I wanted to talk about something covered in Mark’s blog, namely if it was possible to create an analogy between Common Core standards in different disciplines with English Language Arts and CS as the two exemplars. In particular, Mark pondered, and I quote him verbatim:

”Students should read as much nonfiction as fiction.”  What does that mean in terms of the notations of computing? Students should read as many program proofs as programs?  Students should read as much code as comments?

This is a great question and I’m not sure that I have much of an answer but I’ve been enjoying thinking about it. We bandy the terms syntax and semantics around in Computer Science a lot: the legal structures of the programs we write and the meanings of the components and the programs. Is it even meaningful to talk about fiction and non-fiction in these terms and where do these fit? I’ve gone in a slightly different direction from Mark but I hope to bring it back to his suggestions later on.

I’m not an English specialist, so please forgive me or provide constructive guidance as you need to, but both fiction and non-fiction rely upon the same syntactic elements and the same semantic elements in linguistic terms – so the fact that we must have legal programs with well-defined syntax and semantics poses no obstacle to a fictional/non-fictional interpretation.

Forgive me as I go to Wikipedia for definitions for fiction and non-fiction for a moment:

“Non-fiction (or nonfiction) is the form of any narrative, account, or other communicative work whose assertions and descriptions are understood to be factual.” (Warning, embedded Wikipedia links)

“Fiction is the form of any narrative or informative work that deals, in part or in whole, with information or events that are not factual, but rather, imaginary—that is, invented by the author” (Again, beware Wikipedia).

Now here we can start to see something that we can get our teeth into. Many computer programs model reality and are computerised representations of concrete systems, while others may have no physical analogue at all or model a system that has never existed or may never exist. Are our simulations and emulations of large-scale systems non-fiction? If so, is a virtual reality fictional because it has never existed, or non-fictional because we are simulating realistic gravity? (But, of course, fiction is often written in a real world setting but with imaginary elements.)

From a software engineering perspective, I can see an advantage to making statements regarding abstract representations and concrete analogues, much as I can see a separation in graphics and game design between narrative/event engine construction and the physics engine underneath.

Is this enough of a separation? Mark’s comments on proof versus program is an interesting one: if we had an idea (an author’s creation) then it is a fiction until we can determine that it exists, but proof or implementation provides this proof of existence. In my mind, a proof and a program are both non-fiction in terms of their reification, but the idea that they span may still be fictional. Comments versus code is also very interesting – comments do not change the behaviour of code but explain, from the author’s mind, what has happened. (Given some student code and comment combinations, I can happily see a code as non-fiction, comment as fiction modality – or even comment as magical reality!)

Of course, this is all an enjoyable mental exercise, but what can I take from this and use in my teaching? Is there a particular set of code or comments that students should read for maximum benefit, and can we make a separation that, even if not partitioned so neatly across two sets, gives us the idea of what constitutes a balanced diet of the products of our discipline?

I’d love to see some discussion on this but, if not here, then I’m happy to buy the first round of drinks at HERDSA or ICER to get a really good conversation going!