Take a look at this picture.
One thing you might have noticed, if you’ve looked carefully, is that this man appears to have had some reconstructive surgery on the right side of his face, and there is a colour difference, slightly accentuated by the lack of beard stubble. What if I were to tell you that this man was offered the chance to have fake stubble tattooed onto that section and, when he declined because he felt strange about it, received more pressure and, in his words, more of a guilt trip than for any other procedure during the extensive time he spent in hospital receiving skin grafts and burn treatments? Why was the doctor pressuring him?
Because he had already performed the tattooing remediation on two people and needed a third for the paper. In Dan’s words, again, the doctor was a fantastic physician, thoughtful, and he cared, but he had a conflict of interest that moved him to a different mode of behaviour. For me, I had to look a couple of times because the asymmetry that the doctor referred to is not that apparent at first glance. Yet the doctor felt compelled, by interests that were not Dan’s, to make Dan self-conscious about the perceived problem.
A friend on Facebook (thanks, Bill!) posted a link to an excellent article in Wired, entitled “Why We Lie, Cheat, Go to Prison and Eat Chocolate Cake” by Dan Ariely, the man pictured above. Dan is a professor of behavioural economics and psychology at Duke and his new book explores the reasons that we lie to each other. I was interested in this because I’m always looking for explanations of student behaviour and I want to understand their motivations. I know that my students will rationalise and do some strange things but, if I’m forewarned, maybe I can construct activities and courses in a way that heads this off at the pass.
There were several points of interest to me. The first was the question of whether a cost/benefit analysis of dishonesty – do something bad, go to prison – actually has the effect that we intend. As Ariely points out, if you talk to the people who got caught, the long-term outcome of their actions was never something that they thought about. He also discusses the notion of someone taking small steps, a little each time, that move them from law-abiding, for want of a better word, to dishonest. Rather than set out to do bad things in one giant leap, people tend to take small steps, rationalising each one, with each step opening up a range of darker and darker options.
Welcome to the slippery slope – beloved argument of rubicund conservative politicians since time immemorial. Except that, in this case, it appears that the slope is piecewise composed of tiny little steps. Yes, each step requires a decision, so there isn’t the momentum that we commonly associate with the slope, but each step, in some sense, leads to larger and larger steps away from the honest place from which you started.
Ariely discusses an experiment in which he gave two groups designer sunglasses, told one group that they had the real thing and the other that they had fakes, and then asked them to complete a test and gave them a chance to cheat. The people who had been randomly assigned to the ‘fake sunglasses’ group cheated more than the others. Now, there are many possible reasons for this. One of them, which is Ariely’s argument, is that if you know you are signalling your status deceptively to the world, you are in a mindset where you have already taken a step towards dishonesty, and cheating a little more is an easier step. I can see other interpretations because of the nature of the cheating, which was in reporting how many questions you completed on the test: self-esteem issues caused by being in the ‘fake’ group may lead you to over-promote yourself when reporting your success on the quiz – but it’s still cheating. Ultimately, whatever motivates people to take that step, the step appears to be easier if you are already inside the dishonest space, even to a small degree.
[Note: Previous paragraph was edited slightly after initial publication due to terrible auto-correcting slipping by me. Thanks, Gary!]
Where does something like copying software or illicitly downloading music come into this? Does this constant reminder of your small, well-rationalised step into low-level lawlessness have any impact on the other decisions that you make? It’s an interesting question because, according to the outline in Ariely’s sunglasses experiment, we would expect it to be more of a problem if the products became part of your projected image. We know that developing a systematic technological solution for downloading is the first hurdle in terms of achieving downloads, but is it also the first hurdle in making steadily less legitimate decisions? I actually have no idea but would be very interested to see some research in this area. I feel it’s too glib to assume a relationship, because it is such a ‘slippery slope’ argument, but Ariely’s work now makes me wonder. Is it possible that, after downloading enough music or software, you could actually rationalise the theft of a car? Especially if you were only ‘borrowing’ it? (Personally, I doubt it, because I think that there are several steps in between.) I don’t have a stake in this fight – I have a personal code for behaviour in this sphere that I can live with – but I see some benefits in asking and trying to answer these questions from something other than personal experience.
Returning to the article, of particular interest to me was the discussion of an honour code, such as Princeton’s, where students sign a pledge. Ariely sees its benefit as a reminder that stays active for some time but that, ultimately, would have little value over several years because, as we’ve already discussed, people rationalise in small increments over the short term rather than constructing long-term models where the pledge would make a difference. Sign a pledge at the start of 2012 and it may have no impact on you by the middle of 2012, let alone at the end of 2015 when you’re trying to graduate. Potentially, at almost any cost.
In terms of ongoing reminders, and a signature on a piece of work saying (in effect) “I didn’t cheat”, Ariely asks what happens if you have to sign the honour clause after you’ve finished a test – well, if you’ve finished, then any cheating has already occurred and the honour clause is useless. If you remind people at the start of every assignment and every test, and get them to pledge at the beginning, then this should have an impact – a halo effect to an extent, or a reminder of expectation that will make it harder for you to rationalise your dishonesty.
In our school we have an electronic submission system that students are required to use to submit their assignments. It has boilerplate ‘anti-plagiarism’ text and you must accept the conditions to submit. However, this is your final act before submission, when you have already finished the code, which falls immediately into the trap mentioned in the previous paragraph. Dan Ariely’s answers have made me think about how we can change this to make it more of an upfront reminder, rather than an ‘after the fact – oh, it may be too late now’ auto-accept at the end of the activity. And, yes, reminder structures and behaviour modifiers in time banking are also being reviewed and added in the light of these new ideas.
The Wired Q&A is very interesting and covers a lot of ground but, realistically, I think I have to go and buy Dan Ariely’s book(s), prepare myself for some harsh reflection and thought, and plan for a long weekend of reading.
Yesterday, I wrote a post on the 40 hour week, to give an industrial basis for the notion of time banking, and I talked about the impact of overwork. One of the things I said was:
The crunch is a common feature in many software production facilities and the ability to work such back-breaking and soul-destroying shifts is often seen as a badge of honour or mark of toughness. (Emphasis mine.)
Back-breaking is me being rather overly emphatic regarding the impact of work, although in manual industries workplace accidents caused by fatigue and overwork can and do break backs – and worse – on a regular basis.
But soul-destroying? Am I just saying that someone will perform their tasks as an automaton or zombie, or am I saying something more about the benefit of full cognitive function – the soul as an amalgam of empathy, conscience, consideration and social factors? Well, the answer is that, when I wrote it, I was talking about mindlessness and the removal of the ability to take joy in work, which is on the zombie scale, but as I’ve reflected on the readings more, I am now convinced that there is an ethical dimension to fatigue-related cognitive impairment that is important to talk about. Basically, the more tired you get, the more likely you are to focus narrowly on the task itself, and this can have some serious professional and ethical consequences. I’ll provide a basis for this throughout the rest of this post.
The paper I was discussing, on why Crunch Mode doesn’t work, listed many examples from industry and one very interesting paper from the military. The paper, which had a broken link in the Crunch mode paper, may be found here and is called “Sleep, Sleep Deprivation, and Human Performance in Continuous Operations” by Colonel Gregory Belenky. Now, for those who don’t know, in 1997 I was a commissioned Captain in the Royal Australian Armoured Corps (Reserve), on detachment to the Training Group to set up and pretty much implement a new form of Officer Training for Army Reserve officers in South Australia. Officer training is a very arduous process and places candidates, the few who make it in, under a lot of stress and does so quite deliberately. We have to have some idea that, if terrible things happen and we have to deploy a human being to a war zone, they have at least some chance of being able to function. I had been briefed on most of the issues discussed in Colonel Belenky’s paper but it was only recently that I read through the whole thing.
And, to me today as an educator (I resigned my commission years ago), there are still some very important lessons, guidelines and warnings for all of us involved in the education sector. So stay with me while I discuss some of Belenky’s terminology and background. The first term I want to introduce is droning: the loss of cognitive ability through lack of useful sleep. As Belenky puts it, in the context of US Army Ranger training:
…the candidates can put one foot in front of another and respond if challenged, but have difficulty grasping their situation or acting on their own initiative.
What was most interesting, and may surprise people who have never served with the military, is that the higher the rank, the less sleep people got – and the higher level the formation, the less sleep people got. A Brigadier in charge of a Brigade is going to, on average, get less sleep than the more junior officers in the Brigade and a lot less sleep than a private soldier in a squad. As an officer, my soldiers were fed before me, rested before me and a large part of my day-to-day concern was making sure that they were kept functioning. This keeps on going up the chain and, as you go further up, things get more complex. Sadly, the people shouldering the most complex cognitive functions with the most impact on the overall battlefield are also the people getting the least fuel for their continued cognitive endeavours. They are the most likely to be droning: going about their work in an uninspired way and not really understanding their situation. So here is more evidence from yet another place: lack of sleep and fatigue lead to bad outcomes.
One of the key issues Belenky talks about is the loss of situational awareness caused by the accumulated sleep debt, fatigue and overwork suffered by military personnel. He gives an example of an Artillery Fire Direction Centre – this is where requests for fire support (big guns firing large shells at locations some distance away) arrive; the human plotters take your requests, transform them into instructions that can be given to the gunners, and then firing starts. Let me give you a (to me) chilling extract from the report, which the Crunch Mode paper also quoted:
Throughout the 36 hours, their ability to accurately derive range, bearing, elevation, and charge was unimpaired. However, after circa 24 hours they stopped keeping up their situation map and stopped computing their pre-planned targets immediately upon receipt. They lost situational awareness; they lost their grasp of their place in the operation. They no longer knew where they were relative to friendly and enemy units. They no longer knew what they were firing at. Early in the simulation, when we called for simulated fire on a hospital, etc., the team would check the situation map, appreciate the nature of the target, and refuse the request. Later on in the simulation, without a current situation map, they would fire without hesitation regardless of the nature of the target. (All emphasis mine.)
Here, perhaps, is the first inkling of what I realised I meant by soul destroying. Yes, these soldiers are overworked to the point of droning and are now shuffling towards zombiedom. But, worse, they have no real idea of their place in the world and, perhaps most frighteningly, despite knowing that accidents happen when fire missions are requested and having direct experience of rejecting what would have resulted in accidental hospital strikes, these soldiers have moved to a point of function where the only thing that matters is doing the work and calling the task done. This is an ethical aspect because, from their previous actions, it is quite obvious that there was both a professional and ethical dimension to their job as the custodians of this incredibly destructive weaponry – deprive them of enough sleep and they calculate and fire, no longer having the cognitive ability (or perhaps the will) to be ethical in their delivery. (I realise a number of you will have choked on your coffee slightly at the discussion of military ethics but, in the majority of cases, modern military units have a strong ethical code, even to the point of providing a means for soldiers to refuse to obey illegal orders. Most failures of this system in the military can be traced to failures in a unit’s ethical climate or to undetected instability in the soldiers: much as in the rest of the world.)
The message, once again, is clear. Overwork, fatigue and sleeplessness reduce the ability to perform as you should. Belenky even notes that the ability to benefit from training quite clearly deteriorates as the fatigue levels increase. Work someone hard enough, or let them work themselves hard enough, and not only aren’t they productive, they can’t learn to do anything else.
The notion of situational awareness is important because it’s a measure of your sense of place, in an organisational sense, in a geographical sense, in a relative sense to the people around you and also in a social sense. Get tired enough and you might swear in front of your grandma because your social situational awareness is off. But it’s not just fatigue over time that can do this: overloading someone with enough complex tasks can stress cognitive ability to the point where similar losses of situational awareness can occur.
Helmet fire is a vivid description of what happens when you have too many tasks to do, under highly stressful situations, and you lose your situational awareness. If you are a military pilot flying on instruments alone, especially with low or zero visibility, then you have to follow a set of procedures, while regularly checking the instruments, in order to keep the plane flying correctly. If the number of tasks that you have to carry out gets too high, and you are facing the stress of effectively flying the plane visually blind, then your cognitive load limits will be exceeded and you are now experiencing helmet fire. You are now very unlikely to be making any competent contributions at all at this stage but, worse, you may lose your sense of what you were doing, where you are, what your intentions are, which other aircraft are around you: in other words, you lose situational awareness. At this point, you are now at a greatly increased risk of catastrophic accident.
To summarise, if someone gets tired, stressed or overworked enough, whether acutely or over time, their performance goes downhill, they lose their sense of place and they can’t learn. But what does this have to do with our students?
A while ago I posted thoughts on a triage system for plagiarists – allocating our resources to those students we have the most chance of bringing back to legitimate activity. I identified the three groups as: sloppy (unintentional) plagiarism, deliberate (but desperate and opportunistic) plagiarism and systematic cheating. I think that, from the framework above, we can now see exactly where the majority of my ‘opportunistic’ plagiarists are coming from: sleep-deprived, fatigued and (by their own hands or not) over-worked students losing their sense of place within the course and becoming focused only on the outcome. Here, the sense of place is not just geographical; it is their role in the social and formal contracts that they have entered into with lecturers, other students and their institution – their place in the agreements for ethical behaviour that say you do the work yourself and submit only that.
If professional soldiers who have received very large amounts of training can forget where their own forces are, sometimes to the tragic extent that they fire upon and destroy them, or become so cognitively impaired that they carry out the mission, and only the mission, with little of their usual professionalism or ethical concern, then it is easy to see how a student can become so task-focussed that they start to think only about ending the task, by any means, to reduce the cognitive load and to allow themselves to get the sleep that their body desperately needs.
As always, this does not excuse their actions if they resort to plagiarism and cheating – it explains them. It also provides yet more incentive for us to try and find ways to reach our students and help them form systems for planning and time management that bring them closer to the 40-hour ideal, that reduce the all-nighters and the caffeine binges, and that allow them to maintain full cognitive function as ethical, knowledgeable and professional practitioners.
If we want our students to learn, it appears that (for at least some of them) we first have to help them to marshal their resources more wisely and keep their awareness of exactly where they are, what they are doing and, in a very meaningful sense, who they are.
Triage is a process used in hospitals where a patient’s condition is assessed and this assessment is used to assign them a priority. The term, and the practice, originally comes from battlefield medicine where patients were sorted into:
- those who were going to live, regardless of what doctors did
- those who were going to die, regardless
- those who might live if they received immediate attention
You’ll notice that there are three basic categories, but the word triage isn’t a reference to the three categories (tri): it’s a French word that refers to sorting or selection based on quality. (The practice stems from the Napoleonic wars and the work of French doctors in the Great War.)
Battlefield medicine is hard medicine in many respects. Under-resourced, extreme injuries, a requirement to maintain fighting power because it might stop your own position from being overrun – it’s incredibly stressful. You’ve all seen triage in M*A*S*H episodes, no doubt, where the doctors try and group the injured into the ones that need them straight away, the ones who will probably need a patch-up later and the ones that no-one can save.
The core of triage is that, with limited resources, you have to select where to apply them or you risk wasting your effort. It’s pretty unemotional stuff.
I’ve spent a lot of time looking at student plagiarism activity. I’ve been involved in teaching for a long time now and I spent a few years as an Assessment and Examinations co-ordinator, which meant that every single plagiarism case went by me. One of the things about plagiarism and cheating detection is that it is resource intensive. If I’m going to carry out a systematic program to reduce and detect plagiarism, I’m going to have to:
1. Have strong policies in place that I adhere to, from the institutional level, for consistency.
2. Refresh and alter all of my assignments, every year, to reduce any incentive to re-use a previous assignment.
3. Brief my students and tell them that I’m serious about plagiarism.
4. Apply detection methods to every submission (to detect global plagiarism or cheating).
5. Check every submission against every other submission (to detect local plagiarism or cheating).
6. Investigate every case that triggers my detection threshold.
7. Prepare all of the evidence that is required for a hearing.
8. Attend the hearing and present the evidence.
9. Incorporate any changed marks, do any follow-up, counsel the student.
Now, 1, 2, 3 and 4 are either things that I should be doing anyway or, in the case of 4, I can make the students involved in their own process (by telling them to submit their work and a Turnitin report, for example). Number 5 is actually hard, because comparing all assignments to each other carries a large burden. As you add assignments, the amount of checking required grows with the square of the total number: with n assignments there are exactly n(n-1)/2 distinct pairs to compare, which is the number of edges in a complete graph with n nodes.
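To make the quadratic burden of step 5 concrete, here is a minimal sketch of all-pairs checking. The shingle size, the similarity threshold and the use of Jaccard similarity are my own illustrative choices, not a description of how Turnitin or any other real tool works; the point is simply that every submission must be compared with every other, giving n(n-1)/2 comparisons.

```python
from itertools import combinations

def shingles(text, k=5):
    """Break text into overlapping k-word 'shingles' for comparison."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(max(1, len(words) - k + 1))}

def jaccard(a, b):
    """Similarity of two shingle sets: |intersection| / |union|."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def flag_similar_pairs(submissions, threshold=0.5):
    """Compare every submission against every other - n(n-1)/2 pairs in all."""
    sets = {name: shingles(text) for name, text in submissions.items()}
    flagged = []
    for (name1, s1), (name2, s2) in combinations(sets.items(), 2):
        if jaccard(s1, s2) >= threshold:
            flagged.append((name1, name2))
    return flagged
```

With 30 submissions this loop runs 435 comparisons; with 300 it runs 44,850 – which is why doubling a class size quadruples the checking load, and why step 5 is the expensive one.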
From 6 onwards, the load on me is proportional to the number of students that I catch. If 1-3 have had any effect, and I’m serious about 4 and 5, then 6-9 should actually be for a far smaller number of students. But there are two big “if”s in there and, I know from experience, not everyone takes the same approach to plagiarism that I do, for a range of reasons.
But let’s drill down into the types of plagiarism and cheating I’m talking about. Plagiarism can be as ‘light’ as forgetting to add a reference (not attributing work correctly) or as ‘heavy’ as handing up someone else’s essay with your name on it (or a similar net-sourced equivalent). Cheating is a different beast in many ways and can be harder to define, but carrying illicit notes into exams is an obvious one, as is obtaining a solution guide outside of legitimate channels. These days, of course, we have an entirely new form of work avoidance – work for hire. A student can submit a piece of work that is original and yet it is not their own work, as they have paid someone else to do it. Similarly, they could ostensibly pay someone else to sit an exam for them.
At this stage, my detection efforts in 4 and 5 start to fall apart. I have no way to detect work-for-hire as it can’t be determined by comparison. If someone else studies for the exam and shows up, and we don’t detect it, then we won’t get the usual cues that indicate that other materials have been brought in.
Now, of course, we build our courses so that assignments interlock with knowledge, supporting the student’s development, and we also test things in different ways. For example, my first year course has in-class quizzes, programming assignments, tutorials and on-line quizzes that test the same things in different ways. We then have an exam that tests understanding over everything else, including coding and theory. To cheat across the whole thing would require a quite systematic approach to cheating – you would have to be fairly well organised to arrange for successful work-for-hire or cheating across the entire course. Which brings me to my point.
I believe that students who plagiarise or cheat fall into roughly three categories:
- Students who are sloppy or careless: they occasionally forget to attribute some text, or drop the odd bit of code in from the internet because they’re being lazy. These students, with policy framing and reminders from me, are not really a high risk in terms of cheating. In triage terms, these are the ones who are going to live, so I can put a little effort in here, but it’s mostly structural and self-sustaining.
- Students who get rushed and panic, and then start doing stupid things like copying code from other students wholesale, or copying large slabs of text. They start late, don’t prepare properly, panic, and rush to submit something rather than take a legitimate 0. These are the students who need to be found, counselled, and retrained in better practices so that they use their time more effectively. Rather than being lazy and accidental plagiarists, they are intentional but opportunistic plagiarists. These are the students who I can bring back to life with enough effort.
- Finally, we have the students who have made plagiarism and cheating a part of their success plan. They build in timeframes for work-for-hire, scour sources for illicit advantage, spend four hours writing cheat notes for an exam rather than studying. This is pre-meditated and systematic cheating and, despite everything I say, I’m unlikely to reach these students.
Now, whenever I detect someone cheating or plagiarising, they’re going to get the full process as defined – steps 6-9, regardless of which of the student categories they’re in. But how I deal with them to bring them back will vary from group to group. The only problem is that I have limited time. I have to conduct triage to work out who I can bring back and where I can spend my effort most usefully.
I don’t know if there’s any heresy in this but I believe that my duty lies to the majority of my students and the ones for whom I can achieve the best results. The Group 1s (accidental) will be woken up if they get caught but careful policy framing, education and changing assignments will deal with most of them in advance. The Group 2s (opportunistic) need shepherding and feedback, positive reinforcement and encouragement to stay righteous. They’ll take a lot of effort but the net result may be good. Group 3… if I have any time left, I might try and work with Group 3 but how can I? They have set a path which includes cheating as a definitive strategy for success. I’m not sure they’re going to do anything other than nod solemnly at me and snicker behind my back.
Worse, Group 3 may have been formed this way by experience, by systems that encourage this behaviour, or by academics who slyly ignore the early signs of cheating. Group 3 students may have taken years to solidify into this form and my 6-month exposure to them is a brief inconvenience in their overall plan to achieve graduate status through other people’s effort.
We make a distinction in our society between murder and manslaughter, because pre-meditation makes a difference. Group 3, pre-meditated cheating, is easier to define if we accept that “work-for-hire”, “solution scraping”, “organising someone to sit the exam” all require a deliberate and concentrated effort to subvert our academic quality requirements. Perhaps Group 3 even get a different set of outcomes from plagiarism detection? Usually, we’d give a student 0 for the assignment when first detected, 0 for the entire course on second detection and it escalates from there. Group 3 have set out on a deliberate path and we have (unfortunately) probably only detected a fraction of what they’ve done, especially if work-for-hire is involved. If we detect work-for-hire should this be an immediate course-level failure, given that it is so obviously a core part of their strategy? Is there such a thing as an opportunistic “work-for-hire” retainer?
But, as it stands, Group 3 are the students least likely to benefit from my intervention. I think, with regret, I’m going to have to leave them until last and hope that I have enough effort left over after dealing with everyone else to do something about them.
Of course, the dead in a battlefield situation aren’t actually ignored; they’re buried. To be more precise, they’re passed to another group of people who attend to their needs at a different pace. So, a more positive view of Group 3 is that they are taken out of my load and moved to somewhere where they can be dealt with more appropriately. This is where a good Transitions and Advisory Service can come in but, obviously, a lot of effort will be required to bring some students back from an ingrained and systematic pattern of negative behaviour. But there are benefits to centralisation of resources, and this may be somewhere that, rather than burying the dead, we look after them for a while to see if there’s any chance, any remaining spark of life, even though the surgeons didn’t have enough time on their first pass.
I have no real solutions here but I’d be interested to see the discussion. Can we actually make these distinctions clearly? Do we risk moving students into other categories if we change outcomes based on what we detect? Can I make such a clear distinction based on what I detect or am I being unfair?
Right now, I’m trying to be as rigorous about 1-5 as I can, going to 6-9 when I have to, and building my courses so that there’s enough interlock that it deters everyone except the most dedicated and systematic cheat from trying to use work-for-hire. That’s probably the best use of my time and I hope it gives the best results in terms of knowledge and the student experience.
(This, once again, is a little more opinion/political but it does touch on some important teaching points and might be useful for a class in ethics. However, some of you might find my editorial stance disagrees with your perspective.)
Some of you will have seen that the Chronicle of Higher Education recently fired one of their blogging staff because she “did not meet The Chronicle’s basic editorial standards for reporting and fairness in opinion articles”. You can find the story in a number of places, and there’s a reasonable summary here, but, despite people trying to turn this into a debate on “left-wing victimisation clap trap” versus “freedom of speech” versus any number of the quite offensive straw men that were put up in the original blog, Naomi Schaefer Riley committed the cardinal sin.
She published work that made a claim which could not be substantiated by the references.
The title of her blog was “The Most Persuasive Case for Eliminating Black Studies? Just Read the Dissertations.” but, as it turned out, she hadn’t. The dissertations weren’t available to read so she wrote a scathing, dismissive and quite unpleasant article on incomplete knowledge. Then, when called on it, she claimed that she didn’t need to read them to write a 500 word blog post.
Regardless of everything else in the post, regardless of who is right, this is just not acceptable. Had she started from a position of assessing the abstracts, drawing a long bow and then saying “But, of course, we have to see the dissertations”, I suspect she’d still have her job. Journalists do this all the time. However, like scientists, there comes a point where you have to be able to pick up the grain of truth that you’re standing on and point to it. If it turns out that you’ve, effectively, made something up or, worse, misrepresented what you’ve read, then that’s unacceptable and, in this case, the Chronicle quite rightly asked her to go.

Years ago, when I was a junior PhD student, I needed to look up a paper that is seminal in our field: “On the Translation of Languages from Left to Right” by Knuth. It is a cracker of a paper: concise, accurate, well-written and easy to understand. I went to get it from the library, because it wasn’t on the Internet back then (*gasp*), and discovered that the volume that held it was stored in the joint store – a warehouse with a long delay for retrieving works. No matter, I arranged for it to be pulled and discovered that I was the first person in many years to grab that volume. So what had people been using? Their own photocopies? Other sources? As it turns out, most people were citing a paper that cited Knuth. A survey paper – which I won’t name, and that’s good camouflage because at least 712 other papers cite it – had pulled together some other key papers, and people referred to it as the resource. That, in itself, isn’t a problem. The problem occurs when you read the survey paper and then place a citation reference to the original paper.
Of course, you know that I discovered that people had done this. How? The survey paper, to avoid plagiarising Knuth, had rephrased one of his clear and concise explanations – and, in doing so, had introduced a distinctive way of representing the problem. (I still found the original much clearer.) It got to the stage where I could tell who had read the original and who had read only the survey from the twist in their framing paragraph for a key point, without having to spend time looking at the references.
Why had people done this? Because Knuth wasn’t readily available. Being in a 1965 publication meant that many libraries had shunted these ‘old books’ to stores as newer volumes came in, and it took a week or two to get it back, sometimes longer. Sometimes these volumes were lost forever. (These days, I’m happy to say, there are many on-line sources for this paper. So there’s no excuse: if you’re in CS, go off now and read yourself some Knuth.) The survey paper was easy to find and was pretty well written. It was just unfortunate that a wrinkle had crept in that allowed us to tell Knuth from Knuth-prime.
It’s still no excuse. It’s a pretty basic rule for us – if you’ve only read the abstract, you haven’t read the paper. If you haven’t read the paper, you can’t cite the paper. If you’ve read a survey, then you can cite the survey but not one of the surveyed papers. But, categorically and set in stone, if you haven’t read the paper then you can’t criticise the paper.
Personally, I think that Naomi Schaefer Riley’s article was pretty badly written, unnecessarily vicious and the kind of article I’d describe as “written by the food critic before they entered the restaurant”. But that’s only my opinion of the worth of the article. Should she lose her job for that? No, of course not – we differ, that’s life. But for writing an article that insinuated in the text, and stated in the heading, that she had read something upon which she based a vitriolic criticism, and then recanting on the grounds that she didn’t have enough time?
I could lose my job for that. I could even lose my PhD for that.
My Vice Chancellor could lose his job for that.
It’s a bit of a shame that it took some community nudging for the Chronicle to do something here, but I think they did the right thing. If you want to write about our world and our standards, then I think you pretty much have to exemplify them yourself. It’s all about authenticity. Fairness. Ethics. Something that I hope Naomi Schaefer Riley can think about and learn from, so that she can go forward constructively from it. Maybe no-one has ever called her on it before? Either way, the next time she shows up, I’ll happily read what she’s written – but I will be checking her references.
For those who don’t know, Turnitin is automated plagiarism detection software that scans submitted documents and looks for matches to text found inside its databases. And, it should be noted, Turnitin’s databases are large. It’s a great tool, although it can be pricey to access. You can use it as a verification and detection tool – the obvious use – or as a teaching tool, where students submit their own work to see how much cut-and-paste and unattributed material they have included. The latter approach allows students to improve their work; you can then have them submit their work AND their Turnitin report for the final submission. This makes the student an active participant in their own development – a very good thing.
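To give a feel for what “looking for matches” means under the hood, here is a toy sketch of the general idea: break a submission into word n-grams and measure how many of them also appear in a source document. This is purely illustrative – Turnitin’s actual matching is proprietary and far more sophisticated – and the function names and texts below are my own invention.

```python
# Toy sketch of n-gram overlap matching, NOT Turnitin's actual algorithm.

def ngrams(text, n=3):
    """Return the set of word n-grams in a text (case-insensitive)."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(submission, source, n=3):
    """Fraction of the submission's n-grams that also appear in the source."""
    sub_grams = ngrams(submission, n)
    if not sub_grams:
        return 0.0
    return len(sub_grams & ngrams(source, n)) / len(sub_grams)

original = "the quick brown fox jumps over the lazy dog"
suspect = "my essay notes the quick brown fox jumps over the lazy dog indeed"
print(f"overlap: {overlap_score(suspect, original):.0%}")
```

A real system would, of course, normalise punctuation, hash the n-grams for speed, and compare against millions of sources rather than one – but the core idea of reporting a “similarity” percentage based on shared text fragments is the same.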
I’m on the Turnitin mailing list so I receive regular updates, and the one that came through today had a really nice graphic that I’m going to share here, although I note that it is associated with the Turnitin webcast “Why Students Plagiarize?” by Jason Stephens. In it, he summarises the three common motivational factors.
I love this diagram. It gets to the core of the problem and, unsurprisingly given that I’ve linked it here, completely agrees with my thinking and experience on this. Let’s go through these points. (I haven’t watched the talk, I just liked the graphic. I’m planning to watch the talk next week and I hope to have something to share here from that.)
- Not Engaged
If a student isn’t engaged, they won’t take the work seriously and they won’t really care about allocating enough time to do it – or to do it properly. Worse, if the assignment is seen (fairly or not) as make-work, or if the educator is seen to be under-interested, then the lack of value associated with the assignment may allow some students to rationalise a decision to grab someone else’s work, put a quick gloss on it and hand it up. No interest, no engagement, no pride – no worth. Students have to be shown that the work is valuable and that we are interested – which means that we have to be interested and the work has to be worth doing!
- Under Pressure
Students tend to allocate their effort based on the proximity of deadlines. Wait, let me correct that. People tend to allocate their effort based on the proximity of deadlines. Given that students are not yet mature in many of their professional skills, their ability to estimate how long a task will take is also not guaranteed to be mature. As a result, many of our students are under a cascade of time pressures. This is never a justification for plagiarism, but it is often the foundation of a rationalisation for plagiarism: “I’m in a hurry and I really need to get this done, so I’ll take shortcuts.” Training students to improve their time management, and encouraging them to start and submit work early, are the best ways to fight this, in conjunction with plagiarism awareness.
- Lacking the Skills
Students who don’t have the skills can’t do the work themselves. To complete assignments without having the understanding yourself, you have to use the work of other people. For us, this means that we have to quickly identify when students don’t have the knowledge to proceed and try to remedy it, while still maintaining our academic standards and keeping our pass bars firm and at the right level. Sometimes this is just a perception, rather than the truth, and guidance and encouragement can help. Sometimes we need remedial work, pre-testing and hurdles to make sure that students are at the right level to proceed. It’s a complex juggling act that forms the basis of what we do – catering to everyone across the range of abilities.
The main reason that I like this diagram so much is that it doesn’t say anything about where the student comes from or who they are; it talks about the characteristics that are common to most students who plagiarise. Let’s give up the demonisation and work on the problems.