Dead man’s curve: Adventures in overdriving
Posted: January 28, 2012
Looking at my previous posts, you’ll know that I have a strong message for my students: do the work, do it yourself, get the knowledge and you’ll most likely (at least) pass. Part of my commitment to my side of the bargain is that a student almost exclusively controls the inputs that determine his or her own mark. Yes, very occasionally we will scale a bit if we think that students have been disadvantaged, but I have never seen a situation where scaling has increased the number of students who fail. (To be honest, our scaling is really lightweight.)
In particular, I do not fit grades to a curve. I rarely have so many students that such an approach would even approach statistical validity and, more interestingly, I tend to get an approximately normal distribution anyway, without any curve fitting. My position on this is more than mathematical, however: it doesn’t sit well with me to fail someone because someone else did better.
Because we try very hard to not have to shift marks, we have an implicit obligation to manage the courses so that a pass mark represents a sufficient level of effort in assimilating knowledge, completing assignments, participating in individual and group interactive activities and the exam. Based on that, if everyone has done enough work to pass, then I can pass everyone. The trickier bit is managing the difficulty of the course so that the following conditions hold:
- A pass mark represents a level of effort and demonstrated knowledge showing that a student has reached the required standard.
- A fail mark represents a level that, over a number of opportunities for improvement, indicates a combination of insufficient knowledge and/or effort.
- There is scope in every activity for students to demonstrate excellence. These excellence marks go ON TOP of the ‘core’ marks.
- The mark distribution is not bimodal around 0%/100% but has a range of possible values.
To avoid having to manually redistribute the buckets using curve grading, I have to build the course so that the final mark is built from assignments that each meet all of those criteria, and in such a way that the aggregate of these marks also meets them. This, of course, means that I advertise assignment weightings, combinations and criteria as early as possible to allow students to allocate their effort, and then I have to incur the marking burden of applying a marking scheme that, once again, gives me this range.
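To make the aggregation concrete, here is a minimal sketch of combining advertised assignment weightings into a final mark. The assessment names and weights are illustrative assumptions, not the actual course structure:

```python
# Advertised up front so students can allocate their effort.
# These names and weights are hypothetical; weights sum to 1.0.
ASSESSMENT_WEIGHTS = {
    "assignment_1": 0.15,
    "assignment_2": 0.15,
    "group_project": 0.20,
    "exam": 0.50,
}

def final_mark(marks: dict) -> float:
    """Combine per-assessment marks (each out of 100) into a final mark."""
    return sum(ASSESSMENT_WEIGHTS[name] * marks[name]
               for name in ASSESSMENT_WEIGHTS)

student = {"assignment_1": 70, "assignment_2": 60,
           "group_project": 80, "exam": 55}
print(final_mark(student))  # 0.15*70 + 0.15*60 + 0.20*80 + 0.50*55 = 63.0
```

Because each component already satisfies the pass/excellence criteria, the weighted sum inherits the same property without any after-the-fact redistribution.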
One of the reasons that I believe this is important is that we risk overdriving one of our key student characteristics if we create an artificial curve-based separation. My students have all been through a fairly rigorous selection system by the time they reach me – the numbers dwindle through to the final year of high school, and the number who go to Uni is less than a quarter of those who start school. The ‘range’ of these students is ‘not only passed, but made it to a Uni course’. This automatically bands them relatively closely. If 100 students sit a course and half get 60 and half get 65 then, assuming I’ve done my job correctly in the design, they all deserve to pass because they are quite close in incoming ability and they have achieved similar results. More importantly, the half who got 60 don’t deserve to fail because the other half got 65. If you overdrive noise then all you get is loud noise, not some sort of ‘better’ signal.
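The 60/65 example above can be run through the arithmetic directly. This sketch assumes a simple curve rule (fail anyone more than half a standard deviation below the mean) purely for illustration:

```python
# The 60/65 cohort from the text: 100 students, tightly banded results.
import statistics

marks = [60] * 50 + [65] * 50

mean = statistics.mean(marks)     # 62.5
stdev = statistics.pstdev(marks)  # 2.5 -- the cohort is tightly banded

# Curve grading (illustrative rule): fail anyone below mean - 0.5 SD.
curved_fails = [m for m in marks if (m - mean) / stdev < -0.5]
print(len(curved_fails))  # 50 -- half the class fails on noise

# Absolute grading: everyone above the designed pass mark (here 50) passes.
absolute_fails = [m for m in marks if m < 50]
print(len(absolute_fails))  # 0
```

A 5-mark gap between near-identical students becomes a pass/fail boundary under the curve; under an absolute standard, it stays what it is – noise.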
I’m not opposed to adaptation in teaching – in fact, I’m a huge fan of using challenge and extension questions to give people other opportunities to excel, to refine their knowledge or to get a chance to be more specific. However, I support it via an additive approach, where marks are added for success, rather than a subtractive approach, where failing to add more marks is treated as a mark-removal exercise whenever a sufficiently large group of other people manages to add them. This requires me to design courses carefully, give enough assignment opportunities for people to demonstrate their skills and provide a lot of feedback.
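The additive scheme can be sketched in a few lines. The split between core and extension marks below (80/20) is a hypothetical illustration, not the actual scheme:

```python
# Additive marking: extension marks sit ON TOP of core marks, and no
# student's mark depends on how anyone else performed.
# The 80/20 core/extension split here is an illustrative assumption.

def additive_mark(core: float, extension: float,
                  core_max: float = 80, extension_max: float = 20) -> float:
    """Core marks establish the pass; extension marks only ever add."""
    return min(core, core_max) + min(extension, extension_max)

# Two students with identical core mastery: the second's extension work
# raises their own mark without lowering the first student's.
print(additive_mark(core=65, extension=0))   # 65
print(additive_mark(core=65, extension=15))  # 80
```

The key design property is that `additive_mark` takes no argument describing the rest of the cohort: success elsewhere cannot subtract from anyone.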
I note that I use almost no standardised testing and, where I do use multiple choice questions, I either require an accompanying explanation or the component is worth a small number of marks. As a result, I have a lot of flexibility in my marking.
I am not saying that we need to dumb down our material – far from it. If we design our courses with ‘acceptance level = pass, extension achievement = distinction’ we can isolate the core material and then put the ‘next stages’ in as well. As I’ve said before, letting a student know that there’s somewhere else to go and something else to do can be a spur to higher achievement.
Coincidentally, I had a meeting today with a colleague who has done some very interesting work on identifying and assessing how much ‘core’ material a student gets right, as distinct from the ‘advanced’ material. From his early figures, there is very little variation in core-material achievement, as you would expect from the explanation I’ve put up here, but there was a vast range of achievement in the advanced material. More investigation required!