EduTech AU 2015, Day 2, Higher Ed Leaders, “Assessment: The Silent Killer of Learning”, #edutechau @eric_mazur
Posted: June 3, 2015
No surprise that I’m very excited about this talk as well. Eric is a world-renowned educator and physicist, having developed Peer Instruction in 1990 for his classes at Harvard as a way to deal with students not developing a working physicist’s approach to the content of his course. I should note that Eric also gave this talk yesterday and the inimitable Steve Wheeler blogged that one, so you should read Steve as well. But after me. (Sorry, Steve.)
I’m not an enormous fan of most of the assessment we use as most grades are meaningless, assessment becomes part of a carrot-and-stick approach and it’s all based on artificial timelines that stifle creativity. (But apart from that, it’s fine. Ho ho.) My pithy statement on this is that if you build an adversarial educational system, you’ll get adversaries, but if you bother to build a learning environment, you’ll get learning. One of the natural outcomes of an adversarial system is activities like cheating and gaming the system, because people start to treat beating the system as the goal itself, which is highly undesirable. You can read a lot more about my views on plagiarism here, if you like. (Warning: that post links to several others and is a bit of a wormhole.)
Now, let’s hear what Eric has to say on this! (My comments from this point on will attempt to contain themselves in parentheses. You can find the slides for his talk – all 62MB of them – from this link on his website.) It’s important to remember that one of the reasons that Eric’s work is so interesting is that he is looking for evidence-based approaches to education.
Eric discussed the use of flashcards. A week after flashcard study, students retain 35%. After two weeks, it’s almost gone. He tried to communicate this to someone who was launching a cloud-based flashcard app. Her response was “we only guarantee they’ll pass the test”.
*low, despairing chuckle from the audience*
Of course most students study to pass the test, not to learn, and they are not the same thing. For years, Eric has been bashing the lecture (yes, he noted the irony) but now he wants to focus on changing assessment and getting it away from rote learning and regurgitation. The assessment practices we use now are not 21st century focused, they are used for ranking and classifying but, even then, doing it badly.
So why are we assessing? What are the problems that are rampant in our assessment procedure? What are the improvements we can make?
How many different purposes of assessment can you think of? Eric gave us 90 seconds to come up with a list. Katrina and I came up with about 10, most of which were serious, but it was an interesting question to reflect upon. (Eric snuck his own list in:)
- Rate and rank students
- Rate professor and course
- Motivate students to keep up with work
- Provide feedback on learning to students
- Provide feedback to instructor
- Provide instructional accountability
- Improve the teaching and learning.
Ah, but look at the verbs – they are multi-purpose and in conflict. How can one thing do so much?
So what are the problems? Many tests are fundamentally inauthentic – regurgitation in useless and inappropriate ways. Many problem-solving approaches are inauthentic as well (a big problem for computing, we keep writing “Hello, World”). What does a real problem look like? It’s an interruption in our pathway to our desired outcome – it’s not the outcome that’s important, it’s the pathway and the solution to reach it that are important. Typical student problem? Open the book to chapter X to apply known procedure Y to determine an unknown answer.
Shout out to Bloom’s! Here’s Eric’s slide to remind you.
Eric doesn’t think that many of us, including Harvard, even reach the Applying stage. He referred to a colleague in physics who used baseball problems throughout the course in assignments, until he reached the final exam where he ran out of baseball problems and used football problems. “Professor! We’ve never done football problems!” Eric noted that, while the audience were laughing, we should really be crying. If we can’t apply what we’ve learned then we haven’t actually learned it.
Eric sneakily put more audience participation into the talk with an open-ended question that appeared not to have enough information to come up with a solution, as it required assumptions and modelling. From a Bloom’s perspective, this is right up the top.
Students loathe assumptions? Why? Mostly because we’ll give them bad marks if they get it wrong. But isn’t the ability to make assumptions a really important skill? Isn’t this fundamental to success?
Eric demonstrated how to tame the problem by adding in more constraints but this came at the cost of the creating stage of Bloom’s and then the evaluating and analysing. (Check out his slides, pages 31 to 40, for details of this.) If you add in the memorisation of the equation, we have taken all of the guts out of the problem, dropping down to the lowest level of Bloom’s.
But, of course, computers can do most of the hard work that is mechanistic. Problems at the bottom layer of Bloom’s are going to be solved by machines – this is not something we should train 21st Century students for.
But… real problem solving is erratic. Riddled with fuzziness. Failure prone. Not guaranteed to succeed. Most definitely not guaranteed to be optimal. The road to success is littered with failures.
But, if you make mistakes, you lose marks. But if you’re not making mistakes, you’re very unlikely to be creative and innovative and this is the problem with our assessment practices.
Eric showed us a shot of a traditional exam room: stressful, isolated, deprived of calculators and devices. Eric’s joke was that we are going to have to take exams naked to ensure we’re not wearing smart devices. We are in a time and place where we can look up whatever we want, whenever we want. But it’s how you use that information that makes a difference. Why are we testing and assessing students under such a set of conditions? Why do we imagine that the result we get here is going to be any indicator at all of the likely future success of the student with that knowledge?
Cramming for exams? Great, we store the information in short-term memory. A few days later, it’s all gone.
Assessment produces a conflict, which Eric noticed when he started teaching a team and project based course. He was coaching for most of the course, switching to a judging role for the monthly fair. He found it difficult to judge them because he had a coach/judge conflict. Why do we combine these roles in education when it would be unfair or unpleasant in every other area of human endeavour? We hide behind the veil of objectivity and fairness. It’s not a matter of feelings.
But… we go back to Bloom’s. The only thinking skill that can be evaluated truly objectively is remembering, at the bottom again.
But let’s talk about grade inflation and cheating. Why do people cheat at education when they don’t generally cheat at learning? But educational systems often conspire to rob us of our ownership and love of learning. Our systems set up situations where students cheat in order to succeed.
- Mimic real life in assessment practices!
Open-book exams. Information sticks when you need it and use it a lot. So use it. Produce problems that need it. Eric’s thought is you can bring anything you want except for another living person. But what about assessment on laptops? Oh no, Google access! But is that actually a problem? Any question to which the answer can be Googled is not an authentic question to determine learning!
Eric showed a video of excited students doing a statistics test as a team-based learning activity. After an initial pass at the test, the individual response is collected (for up to 50% of the grade), and then students work as a group to confirm the questions against an IF-AT scratch card for the rest of the marks. Discussion, conversation, and the students do their own grading for you. They’ve also had the “A-ha!” moment. Assessment becomes a learning opportunity.
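As a back-of-envelope illustration of how the marks combine in this sort of two-stage test, here is a short Python sketch. The 50/50 weighting is the figure from the talk; the declining-credit-per-scratch scheme is a common IF-AT convention and an assumption on my part, not something Eric specified.

```python
# Sketch of a two-stage (individual + group IF-AT) test score.
# Assumed partial-credit scheme: first scratch = full credit,
# second = half, third = quarter, fourth or more = nothing.
SCRATCH_CREDIT = {1: 1.0, 2: 0.5, 3: 0.25}

def combined_score(individual_correct, scratches_per_question,
                   individual_weight=0.5):
    """individual_correct: list of bools, one per question.
    scratches_per_question: scratches the group needed per question."""
    n = len(individual_correct)
    indiv = sum(individual_correct) / n
    group = sum(SCRATCH_CREDIT.get(s, 0.0)
                for s in scratches_per_question) / n
    return individual_weight * indiv + (1 - individual_weight) * group

# A student who got 6/10 alone, in a group that found 8 answers on the
# first scratch and 2 on the second:
print(combined_score([True] * 6 + [False] * 4, [1] * 8 + [2] * 2))
```

Note how the group stage pulls the weaker individual result upwards, which is exactly the point: the discussion, not the ranking, is where the learning happens.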
Eric’s not a fan of multiple choice so his Learning Catalytics software allows similar comparison of group answers without having to use multiple choice. Again, the team-based activities are social, interactive and much less stressful.
- Focus on feedback, not ranking.
Objective ranking is a myth. The amount of, and success with, advanced education is no indicator of overall success in many regards. So why do we rank? Eric showed some graphs of his students (in earlier courses) plotting final grades in physics against the conceptual understanding of force. Some people still got top grades without understanding force as it was redefined by Newton. (For those who don’t know, Aristotle was wrong on this one.) Worse still is the student who mastered the concept of force and got a C, when a student who didn’t master force got an A. Objectivity? Injustice?
- Focus on skills, not content
Eric referred to Wiggins and McTighe, “Understanding by Design.” Traditional approach is course content drives assessment design. Wiggins advocates identifying what the outcomes are, formulate these as action verbs, ‘doing’ x rather than ‘understanding’ x. You use this to identify what you think the acceptable evidence is for these outcomes and then you develop the instructional approach. This is totally outcomes based.
- Resolve coach/judge conflict
In his project-based course, Eric brought in external evaluators, leaving his coach role unsullied. This also validates Eric’s approach in the eyes of his colleagues. Peer- and self-evaluation are also crucial here. Reflective time to work out how you are going is easier if you can see other people’s work (even anonymously). Calibrated peer review, cpr.molsci.ucla.edu, is another approach but Eric ran out of time on this one.
If we don’t rethink assessment, the result of our assessment procedures will never actually provide vital information to the learner or us as to who might or might not be successful.
I really enjoyed this talk. I agree with just about all of this. It’s always good when an ‘internationally respected educator’ says it as then I can quote him and get traction in change-driving arguments back home. Thanks for a great talk!
Nick Falkner is an Australian academic with a pretty interesting career path. He is also a culture sponge and, given that he’s (very happily) never going to be famous enough for a magazine-style interview, he interviews himself on one of the more confronting images to come across the wires recently. For clarity, he’s the Interviewer (I) when he’s asking the questions.
Interviewer: As you said, “I want you to be sad. I want you to be angry. I want you to understand.” I’ve looked at the picture and I can see a lot of people who are being associated with a mass cheating scandal in the Indian state of Bihar. This appears to be a systematic problem, especially as even more cheating has been exposed in the testing for the police force! I think most people would agree that it’s certainly a sad state of affairs and there are a lot of people I’ve heard speaking who are angry about this level of cheating – does this mean I understand? Basically, cheating is wrong?
Nick: No. What’s saddening me is that most of the reaction I’ve seen to this picture is lacking context, lacking compassion and, worse, perpetrating some of the worst victim blaming I’ve ever seen. I’m angry because people still don’t get that the system in place in Bihar, and wherever else we put systems like this, is going to lead to behaviour like this out of love and a desire for children to have opportunity, rather than some grand criminal scheme of petty advancement.
Interviewer: Well, ok, that’s a pretty strong set of statements. Does this mean that you think cheating is ok?
Nick: (laughs) Well, we’ve got the most usual response out of the way. No, I don’t support “cheating” in any educational activity because it means that the student is bypassing the learning design and, if we’ve done our job, this will be to their detriment. However, I also strongly believe that some approaches to large-scale education naturally lead to a range of behaviours where external factors can affect the perceived educational benefit to the student. In other words, I don’t want students to cheat but I know that we sometimes set things up so that cheating becomes a rational response and, in some cases, the only difference between a legitimate advantage and “cheating” is determined by privilege, access to funds and precedent.
Interviewer: Those are big claims. And you know what that means…
Nick: You want evidence! Ok. Let’s start with some context. Bihar is the third-largest state in India by population, with over 100 million people, the highest density of population in India, the largest number of people under 25 (nearly 60%), a heavily rural population (~85%) and a literacy rate around 64%. Bihar is growing very quickly but has put major work into its educational systems. From 2001 to 2011, literacy jumped from 48% to 64%, with women’s literacy alone rising by around 20 percentage points.
If we took India out of the measurement, Bihar is in the top 11 countries in the world by population. And it’s accelerating in growth. At the same time, Bihar has lagged behind other Indian states in socio-economic development (for a range of reasons – it’s very … complicated). Historically, Bihar has been a seat of learning but recent actions, including losing an engineering college in 2000 due to boundary re-alignment, means that they are rebuilding themselves right now. At the same time, Bihar has a relatively low level of industrialisation by Indian standards although it’s redefining itself away from agriculture to services and industry at the moment, with some good economic growth. There are some really interesting projects on the horizon – the Indian Media Hub, IT centres and so on – which may bring a lot more money into the region.
Interviewer: Ok, Bihar is big, relatively poor … and?
Nick: And that’s the point. Bihar is full of people, not all of whom are literate, and many of whom still live in Indian agricultural conditions. The future is brightening for Bihar but if you want to be able to take advantage of that, then you’re going to have to be able to get into the educational system in the first place. That exam that the parents are “helping” their children with is one that is going to have an almost incomprehensibly large impact on their future…
Interviewer: Their future employment?
Nick: Not just that! This will have an impact on whether they live in a house with 24 hour power. On whether they will have an inside toilet. On whether they will be able to afford good medicine when they or their family get sick. On whether they will be able to support their parents when they get old. On how good the water they drink is. As well, yes, it will help them to get into a University system where, unfortunately, existing corruption means that money can smooth a path where actual ability has led to a rockier road. The article I’ve just linked to mentions pro-cheating rallies in Uttar Pradesh in the early 90s but we’ve seen similar arguments coming from areas where rote learning, corruption and mass learning systems are thrown together and the students become grist to a very hard stone mill. And, by similar arguments, I mean pro-cheating riots in China in 2013. Student assessment on the massive scale. Rote learning. “Perfect answers” corresponding to demonstrating knowledge. Bribery and corruption in some areas. Angry parents because they know that their children are being disadvantaged while everyone else is cheating. Same problem. Same response.
Interviewer: Well, it’s pretty sad that those countries…
Nick: I’m going to stop you there. Every time that we’ve forced students to resort to rote learning and “perfect answer” memorisation to achieve good outcomes, we’ve constructed an environment where carrying in notes, or having someone read you answers over a wireless link, suddenly becomes a way to successfully reach that outcome. The fact that this is widely used in the two countries that have close to 50% of the world’s population is a reflection of the problem of education at scale. Are you volunteering to sit down and read the 50 million free-form student essays that are produced every year in China under a fairer system? The US approach to standardised testing isn’t any more flexible. Here’s a great article on what’s wrong with the US approach because it identifies that these tests are good for measuring conformity to the test and its protocol, not the quality of any education received or the student’s actual abilities. But before we get too carried away about which countries cheat most, here are some American high school students sharing answers on Twitter.
Every time someone talks about the origin of a student, rather than the system that a student was trained under, we start to drift towards a racist mode of thinking that doesn’t help. Similar large-scale, unimaginative, conform-or-perish tests that you have to specifically study for across India, China and the US. What do we see? No real measurement of achievement or aptitude. Cheating. But let’s go back to India because the scale of the number of people involved really makes the high stakes nature of these exams even more obvious. Blow your SATs or your GREs and you can still do OK, if possibly not really well, in the US. In India… let’s have a look.
State Bank of India advertised some entry-level vacancies back in 2013. They wanted 1,500 people. 17 million applied. That’s roughly the adult population of Australia applying for some menial work at the bank. You’ve got people who are desperate to work, desperate to do something with their lives. We often think of cheats as being lazy or deceitful when it’s quite possible to construct a society so that cheating is part of a wider spectrum of behaviour that helps you achieve your goals. Performing well in exams in India and China is a matter of survival when you’re talking about those kinds of odds, not whether you get a great or an ok job.
Interviewer: You’d use a similar approach to discuss the cheating on the police exam?
Nick: Yes. It’s still something that shouldn’t be happening but the police force is a career and, rather sadly, can also be a lucrative source of alternative income in some countries. It makes sense that this is also something that people consider to be very, very high stakes. I’d put money on similar things happening in countries where obtaining a driving licence is a high-stakes activity. (Oh, good, I just won money.)
Interviewer: So why do you want us to be sad?
Nick: I don’t actually want people to be sad, I’d much prefer it if we didn’t need to have this discussion. But, in a nutshell, every parent in that picture is actually demonstrating their love and support for their children and family. That’s what the human tragedy is here. These Biharis probably don’t have the connections or money to bypass the usual constraints so the best hope that their kids have is for their parents to risk their lives climbing walls to slip them notes.
I mean, everyone loves their kids. And, really, even those of us without children would be stupid not to realise that all children are our children in many ways, because they are the future. I know a lot of parents who saw this picture and they didn’t judge the people on the walls because they could see themselves there once they thought about it.
But it’s tragic. When the best thing you can do for your child is to help them cheat on an exam that controls their future? How sad is that?
Interviewer: Do you need us to be angry? Reading back, it sounds like you have enough anger for all of us.
Nick: I’m angry because we keep putting these systems in place despite knowing that they’re rubbish. Rousseau knew it hundreds of years ago. Dewey knew it in the 1930s. We keep pretending that exams like this sort people on merit when all of our data tells us that the best indicator of performance is the socioeconomic status of the parents, rather than which school they go to. But, of course, choosing a school is a kind of “legal” cheating anyway.
Interviewer: Ok, now there’s a controversial claim.
Nick: Not really. Studies show us that students at private schools tend to get higher University entry marks, which is the gateway to getting into courses and also means that they’ve completed their studies. Of course, the public school students who do get in go on to get higher GPAs… (This article contains the data.)
Interviewer: So it all evens out?
Nick: (laughs) No, but I have heard people say that. Basically, sending your kids to a “better” school, one of the private schools or one of the high-performing publics, especially those that offer International Baccalaureate, is not going to hurt your child’s chances of getting a good Tertiary entry mark. But, of course, the amount of money required to go to a private school is not… small… and the districting of public schools means that you have to be in the catchment to get one of these more desirable schools. And, strangely enough, once you factor in the socio-economic factors and outlook for a school district, it’s amazing how often that the high-performing schools map into higher SEF areas. Not all of them and there are some magnificent efforts in innovative and aggressive intervention in South Australia alone but even these schools have limited spaces and depend upon the primary feeder schools. Which school you go to matters. It shouldn’t. But it does.
So, you could bribe someone to make the exam easier or you could pay up to AUD $24,160 in school fees every year to put your child into a better environment. You could go to your local public school or, if you can manage the difficulty and cost of upheaval, you could relocate to a new suburb to get into a “better” public school. Is that fair to the people that your child is competing against to get into limited places at University if they can’t afford that much or can’t move? That $24,000 figure is from this year’s fees for one of South Australia’s most highly respected private schools. That figure is 10 times the nominal median Indian income and roughly the same as an experienced University graduate would make in Bihar each year. In Australia, the 2013 median household income was about twice that figure, before tax. So you can probably estimate how many Australian families could afford to put one, or more, children through that kind of schooling for 5-12 years and it’s not a big number.
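The fee arithmetic here is easy to check. A quick sketch using only the figures quoted in this post (the AUD $24,160 annual fee, a pre-tax median household income of roughly twice that, and the 5-12 year schooling range):

```python
# Back-of-envelope check of the schooling cost figures quoted above.
# Both inputs come from the post: the annual fee for one child at one
# private school, and a pre-tax median household income of roughly
# twice the fee.
annual_fee = 24_160
median_household_income = 2 * annual_fee  # ~AUD 48,320 pre-tax

for years in (5, 12):
    total = annual_fee * years
    ratio = total / median_household_income
    print(f"{years} years: AUD {total:,} "
          f"({ratio:.1f}x a year's gross household income)")
```

Even the shorter run consumes two and a half years of gross household income for one child, which is the point: this is not an option for most families.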
The Biharis in the picture don’t have a better option. They don’t have the money to bribe or the ability to move. Do you know how I know? Because they are hanging precariously from a wall trying to help their children copy out a piece of information in perfect form in order to get an arbitrary score that could add 20 years to their lifespan and save their own children from dying of cholera or being poisoned by contaminated water.
Some countries, incredibly successful education stories like Finland (seriously, just Google “Finland Educational System” and prepare to have your mind blown), take the approach that every school should be excellent, every student is valuable, every teacher is a precious resource and worthy of respect and investment and, for me, these approaches are the only way to actually produce a fair system. Excellence in education that is only available to the few makes everyone corrupt, to a greater or lesser degree, whether they realise it or not. So I’m angry because we know exactly what happens with high stakes exams like this and I want everyone to be angry because we are making ourselves do some really awful things to ourselves by constantly bending to conform to systems like this. But I want people to be angry because the parents in the picture have a choice of “doing the right thing” and watching their children suffer, or “doing the wrong thing” and getting pilloried by a large and judgemental privileged group on the Internet. You love your kids. They love their kids. We should all be angry that these people are having to scramble for crumbs at such incredibly high stakes.
But demanding that the Indian government do something is hypocritical while we use similar systems and we have the ability to let money and mobility influence the outcome for students at the expense of other students. Go and ask Finland what they do, because they’re happy to tell you how they fixed things but people don’t seem to want to actually do most of the things that they have done.
Interviewer: We’ve been talking for a while so we had better wrap up. What do you want people to understand?
Nick: What I always want people to understand – I want them to understand “why”. I want them to be able to think about and discuss why these images from a collapsing educational system are so sad. I want them to understand why our system is really no better. I want them to think about why struggling students do careless, thoughtless and, by our standards, unethical things when they see all the ways that other people are sliding by in the system or when we don’t go to the trouble to construct assessment that actually rewards creative and innovative approaches.
I want people to understand that educational systems can be hard to get right but it is both possible and essential. It takes investment, it takes innovation, it takes support, it takes recognition and it takes respect. Why aren’t we doing this? Delaying investment will only make the problem harder!
Really, I want people to understand that we would have to do a very large amount of house cleaning before we could have the audacity to criticise the people in that photo and, even then, it would be an action lacking in decency and empathy.
We have never seen enough of a level playing field to make a meritocratic argument work because of ingrained privilege and disparity in opportunity.
Interviewer: So, basically, everything most people think about how education and exams work is wrong? There are examples of a fairer system but most of us never see it?
Nick: Pretty much. But I have hope. I don’t want people to stay sad or angry, I want those to ignite the next stages of action. Understanding, passion and action can change the world.
Interviewer: And that’s all we have time for. Thank you, Nick Falkner!
I recently ran across a very interesting article on Gamasutra on the top tips for turning a Free To Play (F2P) game into a Paying game by taking advantage of the way that humans think and act. F2P games are quite common but, obviously, it costs money to make a game so there has to be some sort of associated revenue stream. In some cases, the F2P is a Lite version of the pay version, so after being hooked you go and buy the real thing. Sometimes there is an associated advertising stream, where you viewing the ads earns the producer enough money to cover costs. However, these simple approaches pale into insignificance when compared with the top tips in the link.
Ramin identifies two types of games for this discussion: games of skill, where it is your ability to make sound decisions that determines the outcome, and money games, where your success is determined by the amount of money you can spend. Games of chance aren’t covered here because, given that we’re talking about motivation and agency, they depend upon one specific blindspot (the inability of humans to deal sensibly with probability) rather than the range of issues identified in the article.
I don’t want to rehash the entire article but the key points that I want to discuss are the notion of manipulating difficulty and fun pain. A game of skill is effectively fun until it becomes too hard. If you want people to keep playing then you have to juggle the difficulty enough to make it challenging but not so hard that you stop playing. Even where you pay for a game up front, a single payment to play, you still want to get enough value out of it – too easy and you finish too quickly and feel that you’ve wasted your money; too hard and you give up in disgust, again convinced that you’ve wasted your money. Ultimately, in a pure game of skill, difficulty manipulation must be carefully considered. As the difficulty ramps up, the player is made uncomfortable, the delightful term fun pain is applied here, and resolving the difficulty removes this.
Or, you can just pay to make the problem go away. Suddenly your game of skill has two possible modes of resolution: play through increasing difficulty, at some level of discomfort or personal inconvenience, or, when things get hard enough, pump in a deceptively small amount of money to remove the obstacle. The secret of the F2P game that becomes successfully monetised is that it was always about the money in the first place and the initial rounds of the game were just enough to get you engaged to a point where you now have to pay in order to go further.
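The loop described here – difficulty ramps, skill clears the early levels for free, then money removes the obstacle – can be sketched as a toy model. Every number in it (the ramp rate, the pay threshold) is an illustrative assumption of mine, not something from Ramin’s article:

```python
# Toy model of the "fun pain" monetisation loop: difficulty ramps with
# level; a player clears levels their skill covers for free, and at the
# wall either pays away the gap or quits. All parameters are made up
# for illustration.

def play_session(skill, wallet, pay_threshold=2.0, levels=20):
    """Returns (levels_cleared, money_spent)."""
    spent = 0.0
    for level in range(1, levels + 1):
        difficulty = 1.0 + 0.3 * level    # steadily ramping difficulty
        if difficulty <= skill:
            continue                       # cleared by skill alone
        cost = difficulty - skill          # price to remove the obstacle
        if wallet - spent >= cost and cost <= pay_threshold:
            spent += cost                  # pay to make the problem go away
        else:
            return level - 1, spent        # hit the wall and quit
    return levels, spent

free_player = play_session(skill=2.5, wallet=0.0)
paying_player = play_session(skill=2.5, wallet=10.0)
print(free_player, paying_player)
```

With these made-up parameters, the free player stalls as soon as skill runs out, while the paying player is drip-charged a growing amount per level until even paying stops being attractive – the "deceptively small" payments quietly accumulate.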
You can probably see where I’m going with this. While it would be trite to describe education as a game of skill, it is most definitely the most apt of the different games on offer. Progress in your studies should be a reflection of invested time in study, application and the time spent in developing ideas: not based on being ‘lucky’, so the random game isn’t a choice. The entire notion of public education is founded on the principle that educational opportunities are open to all. So why do some parts of this ‘game’ feel like we’ve snuck in some covert monetisation?
I’m not talking about fees, here, because that’s holding the place of the fee you pay to buy a game in the first place. You all pay the same fee and you then get the same opportunities – in theory, what comes out is based on what the student then puts in as the only variable.
But what about textbooks? Unless the fee we charge automatically, and unavoidably, includes the cost of the textbook, we have now broken the game into two pieces: the entry fee and an ‘upgrade’. What about photocopying costs? Field trips? A laptop computer? An iPad? Home internet? Bus fare?
It would be disingenuous to place all of this at the feet of public education – it’s not actually the fault of Universities that financial disparity exists in the world. It is, however, food for thought about those things that we could put into our courses that are useful to our students and provide a paid alternative to allow improvement and progress in our courses. If someone with the textbook is better off than someone without the textbook, because we don’t provide a valid free alternative, then we have provided two-tiered difficulty. This is not the fun pain of playing a game, we are now talking about genuine student stress, a two-speed system and a very high risk that stressed students will disengage and leave.
From my earlier discussions on plagiarism, we can easily tie in Ramin’s notion of the driver of reward removal, where players have made so much progress that, on facing defeat, they will pay a fee to reduce the impact of failure; or, in some cases, to remove it completely. As Ramin notes:
“This technique alone is effective enough to make consumers of any developmental level spend.”
It’s not just lost time people are trying to get back, it’s the things that have been achieved in that time. Combine that with, in our case, the future employability and perception of that piece of paper, and we have a very strong behavioural driver. A number of the tricks Ramin describes don’t work as well on mature and aware thinkers but this one is pretty reliable. If it’s enough to make people pay money, regardless of their development level, then there are lots of good design decisions we can make from this – lower risk assessment, more checkpointing, steady progress towards achievement. We know lots of good ways to avoid this, if we consider it to be a problem and want to take the time to design around it.
This is one of the greatest lessons I’ve learned about studying behaviour, even as a rank amateur. Observing what people do and trying to build systems that will work despite that makes a lot more sense than building a system that works to some ideal and trying to jam people into it. The linked article shows us how people are making really big piles of money by knowing how people work. It’s worth looking at to make sure that we aren’t, accidentally, manipulating students in the same way.
I’ve just finished the lecturing component for my first year course on programming, algorithms and data structures. As always, the learning has been mutual. I’ve got some longer posts to write on this at some time in the future but the biggest change for this year was dropping the written examination component down and bringing in supervised practical examinations in programming and code reading. This has given us some interesting results that we look forward to going through, once all of the exams are done and the marks are locked down sometime in late July.
Whenever I put in practical examinations, we encounter the strange phenomenon of students who can mysteriously write code in very short periods of time in situations very similar to the practical examination, but suddenly lose the ability to write good code when they are isolated from the Internet, e-mail and other people’s code repositories. This is, thank goodness, not a large group (seriously, it’s shrinking the more I put prac exams in) but it does illustrate why we do it. If someone has a genuine problem with exam pressure, and it does occur, then of course we set things up so that they have more time and a different environment, as we support all of our students with special circumstances. But to be fair to everyone, and because this can be confronting, we pitch the problems at a level where early achievement is possible and they are also usually simpler versions of the types of programs that have already been set as assignment work. I’m not trying to trip people up, here, I’m trying to develop the understanding that it’s not the marks for their programming assignments that are important, it’s the development of the skills.
I need those people who have not done their own work to realise that it probably didn’t lead to a good level of understanding or the ability to apply the skill as they would in the workforce. However, I need to do so in a way that isn’t unfair, so there’s a lot of careful learning design that goes in, even to the selection of how much each component is worth. The reminder that you should be doing your own work is not high stakes – 5-10% of the final mark at most – and builds up to a larger practical examination component, worth 30%, that comes after a total of nine practical programming assignments and a previous prac exam. This year, I’m happy with the marks design because it takes fairly consistent failure to drop a student to the point where they are no longer eligible for redemption through additional work. The scope for achievement is across knowledge of course materials (on-line quizzes, in-class scratchy card quizzes and the written exam), programming with reference materials (programming assignments over 12 weeks), programming under more restricted conditions (the prac exams) and even group formation and open problem handling (with a team-based report on the use of queues in the real world). To pass, a student needs to do enough in all of these. To excel, they have to have a good broad grasp of both the theoretical and the practical. This is what I’ve been heading towards for this first-year course, a course that I am confident turns out students who are programmers and have enough knowledge of core computer science. Yes, students can (and will) fail – but only if they really don’t do enough in more than one of the target areas and then don’t focus on that to improve their results. I will fail anyone who doesn’t meet the standard but I have no wish to do any more of that than I need to. If people can come up to standard in the time and resource constraints we have, then they should pass.
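The shape of this marks design – several weighted components, with a pass requiring both an overall mark and “enough in all of these” – can be sketched in a few lines. To be clear, the component names and all of the weights below other than the 30% prac exam are hypothetical placeholders for illustration, not the actual course figures:

```python
# Hypothetical sketch of a multi-component marks design. Only the 30%
# prac exam weight comes from the course description above; every
# other weight and name is invented for the example.
WEIGHTS = {
    "quizzes": 0.10,        # knowledge of course materials
    "written_exam": 0.30,
    "assignments": 0.20,    # programming with reference materials
    "prac_exams": 0.30,     # programming under restricted conditions
    "team_report": 0.10,    # group formation and open problem handling
}

def final_mark(component_marks: dict) -> float:
    """Weighted aggregate of per-component marks (each out of 100)."""
    assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9
    return sum(WEIGHTS[name] * component_marks[name] for name in WEIGHTS)

def passes(component_marks: dict,
           overall_cutoff: float = 50.0,
           component_floor: float = 40.0) -> bool:
    """'Enough in all of these': a passing weighted total AND a minimum
    standard in every component, so consistent failure in one area
    cannot be hidden by strength in another."""
    return (final_mark(component_marks) >= overall_cutoff
            and all(m >= component_floor
                    for m in component_marks.values()))
```

With this shape, a student who tanks a single component can fail the hurdle check even with a passing weighted total – which is exactly the “fairly consistent failure” property: no single slip is fatal, but no single strength is a free pass either.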
The trick is holding the standard at the right level while you bring up the people – and that takes a lot of help from my colleagues, my mentors and from me constantly learning from my students and being open to changing the learning design until we get it right.
Of course, there is always room for improvement, which means that the course goes back up on blocks while I analyse it. Again. Is this the best way to teach this course? Well, of course, what we will do now is to look at results across the course. We’ll track Prac Exam performance across all practicals, across the two different types of quizzes, across the reports and across the final written exam. We’ll go back into detail on the written answers to the code reading question to see if there’s a match for articulation and comprehension. We’ll assess the quality of response to the exam, as well as the final marked outcome, to tie this back to developmental level, if possible. We’ll look at previous results, entry points, pre-University marks…
And then we’ll teach it again!
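The cross-assessment tracking described above – does prac exam performance line up with written exam performance, quizzes, reports? – boils down to correlating mark series. A minimal sketch, with entirely made-up marks standing in for the real (and not yet locked-down) data:

```python
import statistics

def pearson(xs, ys):
    """Pearson correlation coefficient between two mark series."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Invented marks for illustration only: do the same (anonymised)
# students who do well under prac exam conditions also do well in
# the written exam?
prac_exam = [55, 70, 82, 40, 91, 63]
written_exam = [50, 65, 80, 45, 88, 60]
print(f"prac vs written: r = {pearson(prac_exam, written_exam):.2f}")
```

A strong positive correlation would suggest the two formats are measuring related things; a weak one would be the interesting result, pointing at students who can articulate but not apply, or vice versa.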
I’ve been taught by, met, taught and am colleagues with a wide range of educators. The more people I meet, the more I realise how similar people are and the more I realise that one of the key differences in educators is how much they care. Caring more about your students is generally a good thing, as is caring about your commitment to scholarship and ethics, but caring is also a terrible amplifier of thoughtlessness and, regrettably, people can be truly thoughtless at times. When people are thoughtful, then being a caring educator is fantastic because you get that great feeling from finding out that people valued what you did, the effort that was expended and the final result that was achieved. I love it when students get back in touch with me, sometime down the track, or send me e-mail to let me know that something has really resonated with them. Sadly, the people who are thoughtless, or attempt to be unpleasant in some way, seem to stick in my mind a lot more than the success stories do.
In a way this makes sense because a successful student, or a successful course, doesn’t require any changes to be made. However, given that my job is to educate, anytime something goes wrong, it not only means that there is something to be fixed, it means that I didn’t do my job properly – or, at least, someone is perceiving that I have not done my job properly. You don’t have to care much to feel that fairly deeply. Caring about what you do is great because it makes you take work seriously and responsibly, but it also leaves you vulnerable. It saddens me that I have seen a handful of students who have gone out of their way to exploit that – but it saddens me more that they could have been through the educational experience that we still have to offer (it may not be perfect but it is still pretty impressive) and come out the end so determined to make somebody else unhappy or so utterly ignorant of the impact of their thoughtlessness.
I can clearly remember the first time, years ago, that a student’s relatively thoughtless act had a big impact on me, when I received really nasty student evaluations from three students in a group of 140+. I had enforced some penalties for plagiarism and, mysteriously, a number of my students equal to the number of plagiarists had decided that I was awful, that I hated my students, that I had acted unfairly and that I was bigoted and discriminatory. It really shook my faith in my ability to teach. Overall, my figures were fine, but I usually attribute the depth of passion to the extremity of the commitment, and the fact that three people took the trouble to label me as a completely unacceptable teacher hit me hard.
When I first applied for Federal research funding, I received a reviewer’s comment that was so manifestly unpleasant, dismissive and vindictive that I went to the head of school pretty much assuming that I would have to resign and go and find other work. The reviewer all but told me to get out of academia or, maybe, in a decade’s time, I might not bring down another grant too badly. Those words, which I would laugh off in other arenas or at other times, came through a channel and at a time when I was going to place great import upon them.
There is a lot of difference in how you can say things and, the older I get, the more I realise that some things just don’t have to be said. There is no shortage of people who are happy to tell people things “for their own good” when, in reality, they are telling them for far less altruistic reasons. I have seen a lot of vindictiveness over the years dressed up as thoughtlessness, pretending to be an accidental overstatement. Of course, being human, I’ve sometimes made the mistake myself and I unreservedly apologise to anyone that I ever offended – if I haven’t already found you to apologise!
I sometimes wonder what some of my students want. If I didn’t care, if I showed up with the same slides from the last 20 years and rattled through them, never updating, handing all marking off to inexperienced TAs, failing people just because I’ve classified them as ‘dumb’, then I would be untouchable. I’d be untouchable because I would have divided the world into people who matter and people who don’t, slotting myself clearly into the ‘matter’ while leaving all of my students elsewhere. My students couldn’t matter to me and have me still teach them so badly. The problem arises when you do care about your students and some people, for whatever reason, decide that this is a weakness. Something to game for their own advantage or for their own amusement.
I suspect that I have taught fewer than 10 such people over my years in education, which is great in a way because it means that there aren’t that many of them, but it’s terrible to consider that such a small percentage of the students I’ve seen could still stick so much in my mind. However, these students, despite themselves, help to make me better at what I do. Yes, they get under my skin but I turn around and work out if any of what was said was valid. Could I improve? Could I help other people? This doesn’t defend unpleasantness – a positive outcome ascribed through moral accident is no validation of vindictiveness. But, by digging through the comments, sometimes I have looked at myself and thought “Well, I’m not that bad but I could make some improvements here.”
Despite everything, we probably never will give up on these students. They may not understand that and they will probably never appreciate it, but the community of educators is one of the most inclusive, forgiving and amazing groups I’ve seen. Because we know what it’s like to learn and some things take longer than others, and sometimes people do dumb and thoughtless things. Fortunately, it turns out that caring can amplify thoughtfulness just as well as it amplifies thoughtlessness.
I have written previously about classifying plagiarists into three groups (accidental, panicked and systematic), trying to get the student to focus on the journey rather than the objective, and how overwork can produce situations in which human beings do very strange things. Recently, I was asked to sit in on another plagiarism hearing and, because I’ve been away from the role of Assessment Coordinator for a while, I was able to look at the process with an outsider’s eye, a slightly more critical view, to see how it measures up.
Our policy is now called an Academic Honesty Policy and is designed to support one of our graduate attributes: “An awareness of ethical, social and cultural issues within a global context and their importance in the exercise of professional skills and responsibilities”. The principles are pretty straightforward for the policy:
- Assessment is an aid to learning and involves obligations on the part of students to make it effective.
- Academic honesty is an essential component of teaching, learning and research and is fundamental to the very nature of universities.
- Academic writing is evidence-based, and the ideas and work of others must be acknowledged and not claimed or presented as one’s own, either deliberately or unintentionally.
The policy goes on to describe what student responsibilities are, why they should do the right thing for maximum effect of the assessment and provides some handy links to our Writing Centre and applying for modified arrangements. There’s also a clear statement of what not to do, followed by lists of clarifications of various terms.
Sitting in on a hearing and watching the process unfold, I can see that the overall thrust of this policy clearly tells students that they must do their own work. Reading through the policy and its implementation guide, however, I don’t really see how it sufficiently scaffolds the process of retraining or re-educating students if they are detected doing the wrong thing.
There are many possible outcomes from the application of this policy, starting with “Oh, we detected something but we turned out to be wrong”, going through “Well, you apparently didn’t realise, so we’ll record your name for next time, now submit something new” (misunderstanding), “You knew what you were doing so we’re going to give you zero for the assignment and (will/won’t) let you resubmit it (with a possible mark cap)” (first offence), “You appear to make a habit of this so we’re giving you zero for the course” (second offence) and “It’s time to go.” (much later on in the process, after several confirmed breaches).
Let me return to my discussions on load and the impact on people from those earlier posts. If you accept my contention that the majority of plagiarism and cheating is minor omission or last-minute ‘helmet fire’ thinking under pressure, then we have to look at what requiring students to resubmit will do. In the case of the ‘misunderstanding’, students may also be referred to relevant workshops or resources to attend in order to improve their practices. However, considering that this may have occurred because the student was under time pressure, we have just added more work and a possible requirement to go and attend extra training. There’s an old saying from software development called Brooks’s Law:
“…adding manpower to a late software project makes it later.” (Brooks, The Mythical Man-Month, 1975)
In software, that’s generally because of ramp-up time (the time required for people to become productive) and communication overheads (which grow with the square of the number of people). Every assignment we set has its own effective ramp-up time and, as plagiarising/cheating students have probably not done the requisite work before (or they could simply have completed the assignment themselves), we have just added extra ramp-up into their lives for any re-issued assignments and/or any additional improvement training. We have also greatly increased the communication burden, because the communication between lecturers and peers has implicit context based on where we are in the semester. All of the student discussion (on-line or face-to-face) from points A to B will be based around the assignment work in that zone, and all lecturing staff will also have that assignment in their heads. A significantly out-of-sequence assignment not only isolates the student from their community, it increases the level of context switching required by the staff, decreasing the amount of effective time that they have with the student and increasing the amount of wall-clock time. Once again, we have increased the potential burden on a student who, we suspect, is already acting this way because of over-burdening or poor time management!
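Brooks’s communication-overhead point comes from simple pairwise counting: a team of n people has n(n-1)/2 distinct communication channels, so overhead grows quadratically while capacity only grows linearly. A minimal sketch:

```python
def channels(n: int) -> int:
    """Number of distinct pairwise communication channels among n
    people: n choose 2 = n*(n-1)/2. Quadratic growth is why adding
    people to a late project adds more overhead than capacity."""
    return n * (n - 1) // 2

for n in (2, 5, 10, 20):
    print(f"{n:2d} people -> {channels(n):3d} channels")
```

Doubling a team from 10 to 20 people roughly quadruples the channels (45 to 190) – the same reason an out-of-sequence assignment is so expensive: every extra context a staff member must hold multiplies, rather than adds to, the coordination cost.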
Later stages in the policy increase the burden on students by either increasing the requirement to perform at a higher level, due to the reduction of available marks through giving a zero, or by removing an entire course from their progress and, if they wish to complete the degree, requiring them to overload or spend an additional semester (at least) to complete their degree.
My question here is, as always: are any of these outcomes actually going to stop the student from cheating, or do they risk increasing the likelihood of either the student cheating again or the student dropping out? I completely agree with the principles and focus of our policy, and I also don’t believe that people should get marks for work that they haven’t done, but I don’t see how increasing burden is actually going to lead to the behaviour that we want. (Dan Pink on TED can tell you many interesting things about motivation, extrinsic factors and cognitive tasks, far more effectively than I can.)
This is, to many people, not an issue because this kind of policy is really treated as being punitive rather than remedial. There are some excellent parts in our policy that talk about helping students but, once we get beyond the misunderstanding, this language of support drops away and we head swiftly into the punitive, with the possibility of controlled resubmission. The problem, however, is that we have evidence that light punishment is interpreted as a licence to repeat the action, because it doesn’t discourage. This does not surprise me, because our current policy frames the decision as a risk/reward calculation. We have resorted to a punishment modality and, as a result, we have people looking at the punishments to optimise their behaviour rather than changing their behaviour to achieve our actual goals.
This policy is a strange beast as there’s almost no way that I can take an action under the current approach without causing additional work to students at a time when it is their ability to handle pressure that is likely to have led them here. Even if it’s working, and it appears that it does, it does so by enforcing compliance rather than actually leading people to change the way that they think about their work.
My conjecture is that we cannot isolate the problems to just this policy. This spills over into our academic assessment policies, our staff training and our student support, and the key difference between teaching ethics and training students in ethical behaviour. There may not be a solution in this space that meets all of our requirements but if we are going to operate punitively then let us be honest about it and not over-burden the student with remedial work that they may not be supported for. If we are aiming for remediation then let us scaffold it properly. I think that our policy, as it stands, can actually support this but I’m not sure that I’ve seen the broad spread of policy and practice that is required to achieve this desirable, but incredibly challenging, goal of actually changing student behaviour because the students realise that it is detrimental to their learning.
I’ve just read a Salon article regarding the Harvard cheating issue. Apparently, according to Farhad Manjoo, these students should be “celebrated for collaborating”.
Note that word? It’s the one that I picked on in the Crimson article, and the reason I did so is that it’s a very mild word, and a very positive one at that. However, while acknowledging that the students were prevented from any such sharing, Manjoo then asks, to me somewhat disingenuously, “What’s the point of prohibiting these students from working together?”
Urm, well, for most of the course, they don’t. At the end of the course, when they want to see how much each individual knows, they attempt to test them individually. That’s not an unusual pattern.
Manjoo’s interpretation of the other articles goes well beyond anything else that I’ve seen, including putting all of the plagiarism claims together as group work and tutor consultation. I can’t speak to this as I don’t have his sources but, given that this was explicitly forbidden anyway, he’s making an empty argument. It doesn’t matter how you slice it: if students worked together, they did something that they weren’t supposed to do. However much Manjoo argues that their actions were justified, I’m not sure that his argument is.
The author obviously disagrees with the nature of the open book test and, to my reading, has no real idea of what he’s talking about. Sentences like “But if you want to determine how well students think, why force them to think alone?” are almost completely self-defeating. It also ignores the need to build knowledge in a way that functions when the group isn’t there. We don’t use social constructivism in the assumption that we will always be travelling in packs; we do it to assist the construction of knowledge inside the individual by leveraging the advantages of the social structure. To evaluate how well it has happened, and to isolate group effects so that we can see the individual performing, we use rules, such as those Harvard clearly defined, to set these boundaries.
Manjoo waxes rhetorical in this essay: “Rather than punishing these students, shouldn’t we be praising them for solving these problems the only way they could?” Well, no, I think that we shouldn’t. There were many ways that, if they thought this approach was unreasonable or unfair, they could have legitimately protested. I note that (apparently, going by the number suspected) half the class managed not to cheat during this test – what do we say about these people? Are they worthy of double-plus-praise for somehow transcending the impossible test, or are they fools for not collaborating?
I’m not sure why these articles are providing so much padding for these students, if they have actually done nothing wrong (I hasten to add that they are merely suspected at the moment but if they are to be martyrs then let us assume a bleak outcome). At least, unlike the writers in the Crimson, Manjoo is a Cornell alumnus so he has some distance. I do note that he has a book called “True Enough: Learning to Live in a Post-Fact Society” which, according to the reviews, is about the media establishing views of reality that aren’t necessarily the facts so he’s aware of the impact that his words have on how people will see this issue. He is also writing in a column with, among its bylines, “The Conventional Wisdom Debunked”, so it’s not surprising that this article is written this way.
Manjoo has created (another) Harvard bogeyman: scared of collaboration, unfair to students, and out of step with reality. However, his argument is ultimately a series of misdirections and opinions that don’t address the core issue: if these students worked with each other, they shouldn’t have. Until he accepts this, and that what they did was not a legitimate course of action, I’m not sure that his arguments have much weight with me.