Ugliness 101: late punishments
Posted: January 13, 2016 Filed under: Education, Opinion | Tags: advocacy, aesthetics, authenticity, bandura, beauty, community, design, education, educational problem, educational research, ethics, higher education, in the student's head, instrumentality, lateness, learning, penalties, student perspective, teaching, teaching approaches, thinking, time banking, time management, tools

Before I lay out the program design I’m thinking of (and, beyond any discussion of competency, as a number of you have suggested, we are heading towards Bloom’s mastery learning as a frame with active learning elements), we need to address one of the most problematic areas of assessment.
Late penalties.
Well, let’s be accurate, penalties are, by definition, punishments imposed for breaking the rules, so these are punishments. This is the stick in the carrot-and-stick reward/punish approach to forcing people to do what you want.
Let’s throw the Greek trinity at this and see how it shapes up. A student produces an otherwise perfect piece of work for an assessment task. It’s her own work. She has spent time developing it. It’s really good. Insightful. Oh, but she handed it up a day late. So we’re now going to say that this knowledge is worth less because it wasn’t delivered on time. She’s working a day job to pay the bills? She should have organised herself better. No Internet at home? Why didn’t she work in the library? I’m sure the campus is totally safe after hours and, well, she should just be careful in getting to and from the library. After all, the most important thing in her life, without knowing anything about her, should be this one hundred line program to reinvent something that has been written over a million times by every other CS student in history.

Have an owl, while you think about that.
That’s not truth. That’s establishing a market value for knowledge with a temporal currency. To me, unless there’s a good reason for doing this, this is as bad as curve grading because it changes what the student has achieved for reasons outside of the assignment activity itself.
“Ah!” you say “Nick, we want to teach people to hand work in on time because that’s how the world works! Time is money, Jones!”
Rubbish. Yes, there are a (small) number of unmovable deadlines in the world. We certainly have some in education because we have to get grades in to achieve graduations and degrees. But most adults function in a world where they choose how to handle all of the commitments in their lives and then they schedule them accordingly. The more you do that, the more practice you get and you can learn how to do it well.
If you have ever given students a week, or even a day’s, extension because of something that has stopped you being able to accept or mark student work, no matter how good the reason, you have accepted that your submission points are arbitrary. (I feel strongly about this and have posted about it before.)
So what would be a good reason for sticking to these arbitrary deadlines? We’d want to see something really positive coming out of the research into this, right? Let’s look at some research on this, starting with Britton and Tesser, “Effects of Time-Management Practices on College Grades”, J Edu Psych, 1991, 83, 3. This reinforces what we already know from Bandura: students who feel in control and have high self-efficacy are going to do well. If a student sits down every day to work out what they’re going to do then they, unsurprisingly, can get things done. But this study doesn’t tell us about long-range time planning – the realm of instrumentality, the capability to link activity today with success in the future. (Here are some of my earlier thoughts on this, with references to Husman.) From Husman, we know that students value a task in terms of how important they think it is, how motivated they are, and how well they can link future success to the current task.
In another J Edu Psych paper (1990, 82, 4), Macan and Shahani reported that participants who felt that they had control over what they were doing did better, but the study also clearly indicated that ambiguity and stress influenced time management, both as perceived and in actuality. Perceived Control of Time (the authors’ caps) dominated everything, reducing the impact of ambiguity, reducing the impact of stress, and leading to greater satisfaction.
Students are rarely in control of their submission deadlines. Worse, we often do not take into account everything else in a student’s life (even other University courses) when we set our own deadlines. Our deadlines look arbitrary to students because, in the majority of cases, they are. There’s your truth. We choose deadlines that work for our ability to mark and to get grades in or, perhaps, based on whether we are in the country or off presenting research on the best way to get students to hand work in on time.
(Yes, the owl above is staring at me just as hard as he is staring at anyone else here.)
My own research clearly shows that fixed deadlines do not magically teach students the ability to manage their time and, when you examine it, why should they? (My ICER 2012 paper was part of a larger study that clearly demonstrated students continuing, and even extending, last-minute behaviour all the way into the fourth year of their studies.) Time management is a discipline that involves awareness of the tasks to be performed, a decomposition of those tasks into subtasks that can be performed when the hyperbolic time discounting triggers go off, and a well-developed sense of instrumentality. Telling someone to hand in their work by this date OR ELSE does not increase awareness, train decomposition, or develop any form of planning skill. Well, no wonder it doesn’t work, any more than shouting at people teaches them Maxwell’s Equations or caning children suddenly reveals the magic of the pluperfect in Latin grammar.
So, let’s summarise: students do well when they feel in control, and that sense of control helps with all of the other factors that could get in the way. So, in order to do almost exactly the opposite of helping with this essential support step, we impose frequently arbitrary deadlines and then act surprised when students fall prey to a lack of self-confidence and stress, or lose sight of what they’re trying to do. They panic, asking lots of (what appear to be) unnecessary questions, because they are desperately trying to reduce confusion and stress. Sound familiar?
I have written about this at length while exploring time banking, giving students agency and the ability to plan their own time, to address all of these points. But the new lens in my educational inspection loupe allows me to be very clear about what is most terribly wrong with late penalties.
They are not just wrong; they satisfy no one’s educational aesthetics. Because we don’t take a student’s real life into account, we are not being fair. Because we are not actually developing time management abilities, but treating them as something that will be auto-didactically generated, we are not being supportive. Because we downgrade work when it is still good, we are being intellectually dishonest. Because we vary deadlines to suit ourselves but may not do so for an individual student, we are being hypocritical. We are degrading the value of knowledge for procedural correctness. This is hideously “unbeautiful”.
That is not education. That’s bureaucracy. Just because most of us live within a bureaucracy doesn’t mean that we have to compromise our pedagogical principles. Even when we try to make things fit, as Rapaport did in squeezing his scheme into another grading scale, we end up warping and twisting our intent, even before we start thinking about lateness and other difficult areas. This cannot be good.
There is nothing to stop a teacher setting an exercise that is about time management and is constructed so that every step leads someone to develop better time management. Feedback or marks that reflect lateness, when lateness is the only measure of fitness, are totally reasonable. But to pretend that you can slap some penalties onto the side of an assessment and that it will magically self-scaffold is to deceive yourself, to your students’ detriment. It’s not true.
Do I have thoughts on how to balance marking resources with student feedback requirements, elastic time management, and real assessments while still recognising that there are some fixed deadlines?
Funny you should ask. We’ll come back to this, soon.
No numbers
Posted: January 11, 2016 Filed under: Education, Opinion | Tags: authenticity, beauty, brecht, cui bono, design, education, educational research, ethics, higher education, learning, rapaport, resources, scales, student perspective, teaching, teaching approaches, thinking, tools, triage

We know that grades are really quite arbitrary and that turning numbers into letters, while something we can do, is actually not that strongly coupled to evaluating learning or demonstrating mastery. Why? Because having the appropriate level of knowledge and being able to demonstrate it are not necessarily the same as being able to pass tests or produce solutions to assignments.
For example, if we look at Rapaport’s triage approach as a way to evaluate student interaction with assignments, we can then design our learning environment to provide multiple opportunities to construct and evaluate knowledge, on the understanding that we are seeking clear evidence that a student can not just perform tasks of this nature but, more importantly, can do so reliably. We can do this even if we use “Good, getting there, wrong and no submission” rather than numbers. The duality of grades (a symbol and its meaning) degenerates into something other than numbers anyway. Students at my University didn’t care about 84 versus 85 until we put a new letter grade in at 85 (High Distinction). But even these distinctions are arbitrary scales when it comes to evaluating actual learning.

A very arbitrary scale.
Why are numbers not important in this? Because they’re rarely important anyway. Have you ever asked your surgeon what her grades were in school? What about your accountant? Perhaps you’ve questioned the percentage that your favourite Master of Wine achieved in the tasting exams? Of course you haven’t. You’ve assumed that a certification (of some sort) indicates sufficient knowledge to practise. And what we have to face is that we are currently falling back onto numbers to give us false confidence that we are measuring learning. They don’t map. They’re not objective. They’re often mathematically nonsensical. No-one cares about them except to provide yet another way of sorting human beings and, goodness knows, we already have enough of those.
Ah, but “students like to know how they’re going”, right? Yes. Which is where critique and evaluation come in, as well as many other authentic and appropriate ways to recognise progress and encourage curiosity and further development. None of which require numbers.
Let me ask you a question:
Does every student who accumulates enough pass tokens to graduate from your program have a clearly demonstrated ability to perform tasks to the requisite level in all of the knowledge areas of your program?
If the answer is no, then numbers and grades didn’t help, did they? I suspect that, for you as for many others including me, you can probably think of students who managed to struggle through but, in reality, were probably never going to be much good in the field. Perhaps 50% doesn’t magically cover competency? If 50% doesn’t, then raising the bar to 75% won’t solve the problem either. For reasons already mentioned, many of the ways we combine numbers to get grades just don’t make any real sense and they certainly don’t provide much insight into how well the student actually learned what you were trying to teach.
If numbers and grades don’t have much solid foundation, don’t always reflect the ability to perform the task, and aren’t actually going to be used in the future, then they are neither good nor true. And they cannot be beautiful.
Thus, let me strip Rapaport back one notch and provide a three-tier grade-free system, commonly used in many places already, that is closer to what we probably want:
- Nothing submitted,
- Work in progress, resubmit if possible, and
- Work to competent standard.
I know that there are concerns about the word ‘competency’ but I think it’s something we’re going to have to think about moving on from. I teach engineers and computer scientists and they have to go out and perform tasks successfully if people are going to employ them or work with them. They have to be competent. Right now, I can tell you which of them have passed but, for a variety of grading reasons, I can’t tell you which one of them, from an academic transcript alone, will be able to sit down and solve your problem. I can see which ones pass exams but I don’t know if this is fixed knowledge or swotting. But what if you made it easy and said “ok, just point to the one who will build me the best bridge”? No. I can’t tell you that. (The most likely worst bridge is easier, as I can identify who does and doesn’t have Civil Engineering qualifications.)
The three-tier scale is simple. The feedback approach that the marker should take is pretty clear in each place and the result is clear to the student. If we build our learning environment correctly, then we can construct a pathway where a student has to achieve tier 3 for all key activities and, at that point, we can actually say “Yes, this student can perform this task or apply this knowledge to the required level”. If we do this enough times, we may even start to think that the student could perform at the level of the profession.
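To make the pathway mechanics concrete, here is a minimal sketch in Python. All of the names (the Tier labels, the sample activities and results) are my own hypothetical illustrations, not a prescribed implementation:

```python
from enum import Enum

class Tier(Enum):
    NOT_SUBMITTED = 1   # Nothing submitted
    IN_PROGRESS = 2     # Work in progress, resubmit if possible
    COMPETENT = 3       # Work to competent standard

def has_demonstrated_competency(results, key_activities):
    """Competency is claimed only when every key activity has
    reached the top tier -- no averaging is involved anywhere."""
    return all(results.get(activity) == Tier.COMPETENT
               for activity in key_activities)

# One unfinished key activity blocks the claim of competency:
results = {"recursion": Tier.COMPETENT,
           "data structures": Tier.COMPETENT,
           "concurrency": Tier.IN_PROGRESS}
print(has_demonstrated_competency(results, results.keys()))  # False
```

Note that nothing in the sketch is a mark: the only question the scale can answer is whether each activity has been demonstrated to the required level.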
Wait. Have we just re-invented competency-based assessment? There’s an immediate urge to say “but that’s not a University level thing” and I do understand that. CBA has a strong vocational focus, but anyone who works in an engineering faculty is already in that boat. We have industry-linked accreditation to allow our students to practise as engineers, and they have to demonstrate the achievement of a certified program, as well as work experience. That program is taught at University but, given that all you need is to get the degree, you can do it on raw passes and be ‘as accredited’ as the next person.
Now, I’d be the first person to say that not only are many aspects of the University not vocationally focussed but I’d go further and say that they shouldn’t be vocationally focussed. The University is a place that allows for the unfettered exploration of knowledge for knowledge’s sake and I wouldn’t want to change that. (And, yet, so often, we still grade such abstract ideals…) But let’s take competency away from the words job and vocational for a moment. I’m not suggesting we turn Universities into vocational study centres or shut down “non-Industry” programs and schools. (I’d like to see more but that’s another post.) Let’s look at focusing on clarity and simplicity of evaluation.
A student writes an essay on Brecht and submits it for assessment. All of the rich feedback on language use, referencing and analysis still exists without the need to grade it as A, B or C. The question is whether the work should be changed in response to the feedback (if possible) or whether it is, recognisably, an appropriate response to the question ‘write an essay on Brecht’ that will allow the student to develop their knowledge and skills. There is no job focus here but pulling back to separate feedback and identifying whether knowledge has been sufficiently demonstrated is, fundamentally, a competency argument.
The PhD, the pinnacle of the University system, is essentially not graded. You gain vast amounts of feedback over time, you write in response and then you either defend it to your prospective peers or have it blind-assessed by external markers. Yes, there are degrees of acceptance but, ultimately, what you end up with is “Fine as it is”, “Do some more work”, and “Oh, no. Just no.” If we can extend this level of acceptance of competency to our highest valued qualification, what is the consistent and sound reasoning that requires us to look at a student group and say “Hmm, 73. And this one is… yes, 74.”? If I may, cui bono? Who is benefitting here?
But what would such a program look like, you ask? (Hey, and didn’t Nick say he was going to talk about late penalties?) Yes, indeed. Come back tomorrow!
The Illusion of a Number
Posted: January 10, 2016 Filed under: Education, Opinion | Tags: authenticity, beauty, curve grading, design, education, educational problem, educational research, ethics, grading, higher education, in the student's head, learning, rapaport, reflection, resources, teaching, teaching approaches, thinking, tools, wittgenstein
Rabbit? Duck? Paging Wittgenstein!
I hope you’ve had a chance to read William Rapaport’s paper, which I referred to yesterday. He proposed a great, simple alternative to traditional grading that reduces confusion about what is signalled by ‘grade-type’ feedback, as well as making things easier for students and teachers. Being me, after saying how much I liked it, I then finished by saying “… but I think that there are problems.” His approach was that we could break all grading down into: did nothing, wrong answer, some way to go, pretty much there. And that, I think, is much better than a lot of the nonsense that we pretend we hand out as marks. But, yes, I have some problems.
I note that Rapaport’s exceedingly clear and honest account of what he is doing includes this statement: “Still, there are some subjective calls to make, and you might very well disagree with the way that I have made them.” Therefore, I have licence to accept the value of the overall scholarship and the frame of the approach, without having to accept all of the implementation details given in the paper. Onwards!
I think my biggest concern with the approach given is not in how it works for individual assessment elements. In that area, I think it shines, as it makes clear what has been achieved. A marker can quickly place the work into one of four boxes if there are clear guidelines as to what has to be achieved, without having to worry about one or two percentage points here or there. Because the grade bands are so distinct, as Rapaport notes, it is very hard for the student to make the ‘I only need one more point’ argument that is so clearly indicative of a focus on the grade rather than the learning. (I note that such an emphasis is often what we have trained students into; there is no pejorative intention here.) I agree this is consistent, fair and time-saving (after Walvoord and Anderson), and it avoids curve grading, which I loathe with a passion.
However, my problems start when we are combining a number of these triaged grades into a cumulative mark for an assignment or for a final letter grade, showing progress in the course. Sections 4.3 and 4.4 of the paper detail the implementation of assignments that have triage graded sub-tasks. Now, instead of receiving a “some way to go” for an assignment, we can start getting different scores for sub-tasks. Let’s look at an example from the paper, note 12, to describe programming projects in CS.
- Problem definition: 0, 1, 2, 3
- Top-down design: 0, 1, 2, 3
- Documented code
  - Code: 0, 1, 2, 3
  - Documentation: 0, 1, 2, 3
- Annotated output
  - Output: 0, 1, 2, 3
  - Annotations: 0, 1, 2, 3

Total possible points = 18
Remember my hypothetical situation from yesterday? I provided an example of two students who managed to score enough marks to pass by knowing the complement of each other’s course knowledge. Looking at the above example, it appears to be possible (although not easily) for this situation to occur and for both students to receive 9/18, yet for different aspects. But I have some more pressing questions:
- Should it be possible for a student to receive full marks for output, if there is no definition, design or code presented?
- Can a student receive full marks for everything else if they have no design?
The first question indicates what we already know about task dependencies: if we want to build them into numerical grading, we have to be pedantically specific and provide rules on top of the aggregation mathematics. But, more subtly, by aggregating these measures, we no longer have an ‘accurately triaged’ grade to indicate whether the assignment as a whole is acceptable or not. An assignment with no definition, design or code can hardly be considered a valid submission, yet good output, documentation and annotation (with no code) will still aggregate to a passable-looking total!
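A minimal sketch of that failure, in Python (the component names follow Rapaport’s note 12 above; the two score profiles are invented):

```python
# Each component is triage-graded 0..3 and aggregated by summation.
student_a = {"definition": 3, "design": 3, "code": 3,
             "documentation": 0, "output": 0, "annotations": 0}
student_b = {"definition": 0, "design": 0, "code": 0,
             "documentation": 3, "output": 3, "annotations": 3}

print(sum(student_a.values()))  # 9 out of 18
print(sum(student_b.values()))  # 9 out of 18
# Identical aggregates, complementary work. And with no definition,
# design or code, B's submission is hardly valid, yet the total
# neither flags this nor tells the two students apart.
```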
The second question is more for those of us who teach programming and it’s a question we all should ask. If a student can get a decent grade for an assignment without submitting a design, then what message are we sending? We are, implicitly, saying that although we talk a lot about design, it’s not something you have to do in order to be successful. Rapaport does go on to talk about weightings and how we can emphasis these issues but we are still faced with an ugly reality that, unless we weight our key aspects to be 50-60% of the final aggregate, students will be able to side-step them and still perform to a passing standard. Every assignment should be doing something useful, modelling the correct approaches, demonstrating correct techniques. How do we capture that?
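For illustration only, here is one hypothetical way such dependency rules could be layered on top of the aggregation; as I argue next, though, patching the mathematics like this may just be treating the symptom. The component names, rules and threshold are all my own assumptions:

```python
# A component's score counts only if every prerequisite component
# reached at least a minimum triage level.
DEPENDS_ON = {"code": ["definition", "design"],
              "documentation": ["code"],
              "output": ["code"],
              "annotations": ["output", "code"]}

def gated_total(scores, minimum=1):
    total = 0
    for component, score in scores.items():
        prerequisites = DEPENDS_ON.get(component, [])
        if all(scores.get(p, 0) >= minimum for p in prerequisites):
            total += score
    return total

# Student B from the sketch above now aggregates to 0, not 9:
student_b = {"definition": 0, "design": 0, "code": 0,
             "documentation": 3, "output": 3, "annotations": 3}
print(gated_total(student_b))  # 0
```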
Now, let me step back and say that I have no problem with identifying the sub-tasks and clearly indicating the level of performance using triage grading, but I disagree with using it for marks. For feedback it is absolutely invaluable: triage grading on sub-tasks will immediately tell you where the majority of students are having trouble, quickly. That then lets you know an area that is more challenging than you thought or one that your students were not prepared for, for some reason. (If every student in the class is struggling with something, the problem is more likely to lie with the teacher.) However, I see three major problems with sub-task aggregation and, thus, with final grade aggregation from assignments.
The first problem is that I think this is the wrong kind of scale to try to aggregate in this way. As Rapaport notes, agreement on clear, linear intervals in grading is never going to be achieved and is, very likely, not even possible. Recall that there are four fundamental types of scale: nominal, ordinal, interval and ratio. The scales in use for triage grading are not interval scales (the intervals aren’t predictable or equidistant) and thus we cannot expect to average them and get sensible results. What we have here are, to my eye, ordinal scales, with no objective distance but a clear ranking from best to worst. The clearest indicator of this is the construction of a B grade for final grading, where no such concept exists in the triage marks used for assessing assignment quality. We have created a “some way to go but sometimes nearly perfect” grade that shouldn’t really exist. Think of it like a runner: you win one race and you come third in another. You never actually came second in any race, so averaging your placings makes no sense.
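A trivial numeric illustration (the scores are invented): encode the triage levels ordinally and average them, and you manufacture a grade that no piece of work ever received:

```python
import statistics

# 0 = did nothing, 1 = wrong, 2 = some way to go, 3 = pretty much there
marks = [3, 3, 1, 1]  # two near-perfect tasks, two wrong answers

print(statistics.mean(marks))    # 2.0 -> "some way to go"?
print(statistics.median(marks))  # 2.0 -- equally fictitious here
# No individual task was ever graded 2; the average is exactly the
# "some way to go but sometimes nearly perfect" grade from above.
```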
The second problem is that aggregation masks the beauty of triage in terms of identifying if a task has been performed to the pre-determined level. In an ideal world, every area of knowledge that a student is exposed to should be an important contributor to their learning journey. We may have multiple assignments in one area but our assessment mechanism should provide clear opportunities to demonstrate that knowledge. Thus, their achievement of sufficient assignment work to demonstrate their competency in every relevant area of knowledge should be a necessary condition for graduating. When we take triage grading back to an assignment level, we can then look at our assignments grouped by knowledge area and quickly see if a student has some way to go or has achieved the goal. This is not anywhere near as clear when we start aggregating the marks because of the mathematical issues already raised.
Finally, the reduction of triage to mathematical approximation reduces our ability to specify which areas of an assessment are really valuable. Weighting is a reasonable approximation to this, but it is very hard to keep patching a mathematical formula with more and more ‘fudge factors’ (a term Rapaport himself uses); the whole construction is just a little too fragile.
To summarise, I really like the thrust of this paper. I think what is proposed is far better, even with all of the problems raised above, at giving a reasonable, fair and predictable grade to students. But I think that the clash with existing grading traditions and the implicit requirement to turn everything back into one number is causing problems that have to be addressed. These problems mean that this solution is not, yet, beautiful. But let’s see where we can go.
Tomorrow, I’ll suggest an even more cut-down version of grading and then work on an even trickier problem: late penalties and how they affect grades.
Assessment is (often) neither good nor true.
Posted: January 9, 2016 Filed under: Education, Opinion | Tags: advocacy, aesthetics, beauty, community, design, education, ethics, higher education, in the student's head, kohn, principles of design, rapaport, reflection, student perspective, teaching, teaching approaches, thinking, tools, universal principles of design, work/life balance, workload

If you’ve been reading my blog over the past years, you’ll know that I have a lot of time for thinking about assessment systems that encourage and develop students, with an emphasis on intrinsic motivation. I’m strongly influenced by the work of Alfie Kohn, unsurprisingly given that I’ve already shown my hand on Foucault! But there are many other writers who are… reassessing assessment: why we do it, why we think we are doing it, how we do it, what actually happens and what we achieve.

In my framing, I want assessment to be like all other aspects of education: aesthetically satisfying, leading to good outcomes, and clear about what it is and what it is not. Beautiful. Good. True. There are some better and worse assessment approaches out there and there are many papers discussing this. One of these that I have found really useful is Rapaport’s paper on a simplified assessment process for consistent, fair and efficient grading. Although I disagree with some aspects, I consider it to be both good, as it is designed to clearly address a certain problem to achieve good outcomes, and true, because it is very honest about providing guidance to the student as to how well they have met the challenge. It is also highly illustrative and honest in representing the struggle of the author in dealing with the collision of novel and traditional assessment systems. However, further discussion of Rapaport is for the near future. Let me start by demonstrating how broken things often are in assessment, by taking you through a hypothetical situation.
Thought Experiment 1
Two students, A and B, are taking the same course. There are a number of assignments in the course and two exams. A and B, by sheer luck, end up doing no overlapping work. They complete different assignments from each other, half each, and achieve the same (cumulative bare pass) overall marks. They then manage to score bare pass marks in both exams, but one answers only the even questions and the other only the odd. (And, yes, there are an even number of questions.) Because of the way the assessment was constructed, they have managed to avoid any common answers in the same area of course knowledge. Yet both end up scoring 50%, a passing grade in the Australian system.
Which of these students has the correct half of the knowledge?
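A minimal sketch of the arithmetic, with an invented ten-question exam at 10% per question:

```python
# Ten exam questions worth 10% each; a bare pass is 50%.
questions = set(range(1, 11))
answered_a = {q for q in questions if q % 2 == 0}  # evens only
answered_b = {q for q in questions if q % 2 == 1}  # odds only

def score(answered):
    return 10 * len(answered)

print(score(answered_a), score(answered_b))  # 50 50
print(answered_a & answered_b)               # set() -- no overlap
# Two identical passing grades, zero common demonstrated knowledge.
```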
I had planned to build up to Rapaport but, if you’re reading the blog comments, he’s already been mentioned, so I’ll summarise his 2011 paper before I get to my main point. In 2011, William J. Rapaport, SUNY Buffalo, published a paper entitled “A Triage Theory of Grading: The Good, The Bad and the Middling” in Teaching Philosophy. This paper summarised a number of thoughtful and important authors, among them Perry, Wolff, and Kohn. Rapaport starts by asking why we grade, moving through Wolff’s taxonomic classification of assessment into criticism, evaluation, and ranking. Students are trained, by our world and our education systems, to treat grades as a measure of progress and, in many ways, a proxy for knowledge. But this brings us into conflict with Perry’s developmental stages, where students start with a deep need for authority and the safety of a single right answer. It is only when students are capable of understanding that there are, in many cases, multiple right answers that we can expect them to understand that grades can have multiple meanings. As Rapaport notes, grades are inherently dual: a representative symbol attached to a quality measure and then, in his words, “ethical and aesthetic values are attached” (emphasis mine). In other words, a B is a measure of progress (not quite there) that also has a value of being… second-tier, if an A is our measure of excellence. A is not A, as it must be contextualised. Sorry, Ayn.
When we start to examine why we are grading, Kohn tells us that the carrot and stick is never as effective as the motivation that someone has intrinsically. So we look to Wolff: are we critiquing for feedback, are we evaluating learning, or are we providing handy value measures for sorting our product for some consumer or market? Returning to my thought experiment above: we cannot provide feedback on assignments that students don’t do, our evaluation of learning says that both students are acceptable despite holding complementary knowledge, and our students cannot be distinguished by their graded rank, despite the fact that they have nothing in common!
Yes, it’s an artificial example but, without attention to the design of our courses and in particular the design of our assessment, it is entirely possible to achieve this result to some degree. This is where I wish to refer to Rapaport as an example of thoughtful design, with a clear assessment goal in mind. To step away from measures that provide an (effectively) arbitrary distinction, Rapaport proposes a tiered system for grading that simplifies the overall system with an emphasis on identifying whether a piece of assessment work is demonstrating clear knowledge, a partial solution, an incorrect solution or no work at all.
This, for me, is an example of assessment that is pretty close to true. The difference between a 74 and a 75 is, in most cases, not very defensible (after Haladyna) unless you are applying some kind of ‘quality gate’ that really reduces a percentile scale to, at most, 13 different outcomes. Rapaport’s argument is that we can reduce this further, and that this will reduce grade clawing, identify clear levels of achievement and reduce the marking load on the assessor. That last point is important. A system that buries the marker under load is not sustainable. It cannot be beautiful.
There are issues in taking this approach and turning it back into the grades that our institutions generally require. Rapaport is very open about the difficulties that he has turning his triage system into an acceptable letter grade, and it’s worth reading the paper for that discussion alone, because it quite clearly shows what happens when a novel assessment scheme collides with traditional grading requirements.
Rapaport’s scheme clearly defines which of Wolff’s criteria he wishes his assessment to achieve. The scheme, for individual assessments, is no good for ranking (although we can fashion a ranking from it) but it is good for identifying weak areas of knowledge (as transmitted or received), for evaluating progress and for providing elementary critique. It says what it is and it pretty much does it. It sets out to achieve a clear goal.
The paper ends with a summary of the key points of Haladyna’s 1999 book “A Complete Guide to Student Grading”, which brings all of this together.
Haladyna says that “Before we assign a grade to any students, we need:

- an idea about what a grade means,
- an understanding of the purposes of grading,
- a set of personal beliefs and proven principles that we will use in teaching and grading,
- a set of criteria on which the grade is based, and, finally,
- a grading method, which is a set of procedures that we consistently follow in arriving at each student’s grade.” (Haladyna 1999: ix)
There is no doubt that Rapaport’s scheme meets all of these criteria and, yet, for me, we have not yet gone far enough in search of the most beautiful, most good and most true extent to which we can take this idea. Is point 3, which could be summarised as aesthetics, not enough for me? Apparently not.
Tomorrow I will return to Rapaport to discuss those aspects I disagree with and, later on, discuss both an even more trimmed-down model and some more controversial aspects.
Beauty Attack I: Assessment
Posted: January 8, 2016 Filed under: Education, Opinion | Tags: aesthetics, beauty, design, discipline, education, educational research, ethics, foucault, higher education, in the student's head, principles of design, punishment, reflection, resources, teaching, teaching approaches

For the next week, I’m going to be applying an aesthetic lens to assessment and, because I’m in Computer Science, I’ll be focusing on the assessment of Computer Science knowledge and practice.
How do we know if our students know something? In reality, the best way is to turn them loose, come back in 25 years and ask the people in their lives, their clients, their beneficiaries and (of course) their victims, the same question: “Did the student demonstrate knowledge of area X?”
This is not available to us as an option because my Dean, if not my Head of School, would probably peer at me curiously if I were to suggest that all measurement of my efficacy be moved a generation from now. Thus, I am forced to retreat to the conventions and traditions of assessment: it is now up to the student to demonstrate to me, within a fixed timeframe, that he or she has taken a firm grip of the knowledge.
We know that students who are prepared to learn and who are motivated to learn will probably learn, often regardless of what we do. We don’t have to read Vallerand et al to be convinced that self-motivated students will perform, as we can see it every day. (But it is an enjoyable paper to read!) Yet we measure these students in the same assessment frames as students who do not have the same advantages and, thus, may not yet have the luxury or capacity of self-motivation: students from disadvantaged backgrounds, students who are first-in-family and students who wouldn’t know auto-didacticism if it were to dance in front of them.
How, then, do we fairly determine what it means to pass, what it means to fail and, even more subtly, what it means to pass or fail well? I hesitate to invoke Foucault, especially when I speak of “Discipline and Punish” in an educational setting, but he is unavoidable when we gaze upon a system that is dedicated to awarding ranks, graduated in terms of punishment and reward. It is strange, really, that were many patients to die under the hand of a surgeon for a simple surgery, we would ask for an inquest, but many students failing under the same professor in a first-year course is merely an indicator of “bad students”. So many of our mechanisms tell us that students are failing but often too late to be helpful and not in a way that encourages improvement. This is punishment. And it is not good enough.

Foucault: thinking about something very complicated, apparently.
Our assessment mechanisms are not beautiful. They are barely functional. They exist to provide a rough measure to separate pass from fail, with a variety of other distinctions that owe more to previous experience and privilege in many cases than any higher pedagogical approach.
Over the next week, I shall conduct an attack upon the assessment mechanisms that are currently used in my field, including my own, in the hope of arriving at a mechanism of design, practice and validation that is pedagogically pleasing (the aesthetic argument again) and will lead to outcomes that are both good and true.
Getting it wrong
Posted: January 7, 2016 Filed under: Education, Opinion | Tags: advocacy, authenticity, design, education, educational problem, educational research, higher education, john c. dewey, learning, pragmatism, principles of design, reflection, teaching, teaching approaches, thinking, tools, william james

It’s fine to write all sorts of wonderful statements about theory and design, and we can achieve a lot in thinking about such things. But, let’s be honest, we face massive challenges in the 21st Century, and improved thinking and practice in education is one of the most important contributions we can make to future generations. Thus, if we want to change the world based upon our thinking, then all of our discussions are of no use if we can’t develop something that’s going to achieve our goals. Dewey’s work provides an experimental, even instrumental, approach to the American philosophical school of pragmatism. To briefly explain this term in its specific meaning, I turn to William James, American psychologist and philosopher.
Pragmatism asks its usual question. “Grant an idea or belief to be true,” it says, “what concrete difference will its being true make in anyone’s actual life? How will the truth be realized? What experiences will be different from those which would obtain if the belief were false? What, in short, is the truth’s cash-value in experiential terms?”
William James, Pragmatism (1907)
(James is far too complex to summarise in one paragraph and I am using only one of his ideas to illustrate my point. Even scholars of James disagree on how to interpret many of his writings. It’s worth reading him and Hegel at the same time, as they square off across the ring quite well.)

Portrait of William James by John La Farge, circa 1859
What will be different? How will we recognise or measure it? What do we gain by knowing if we are right or wrong? This is why all good education researchers depend so heavily on testing their hypotheses in the space where they will make an impact, and why there is usually an obligation to look at how things are working before and after any intervention. This places a further obligation upon us to evaluate what has occurred and then, if our goals haven’t been achieved, to change our approach further. It’s a simple breakdown of roles, but I often think of educational work as three heavily overlapping areas: practice, scholarship and research. Practice should be applying techniques that achieve our goals; scholarship involves the investigation, dissemination and comparison of these techniques; and research builds on scholarship to evaluate practice in ways that will validate and develop new techniques, or invalidate formerly accepted ones as knowledge improves. This leads me to my point: evaluating your own efforts to work out how to do better next time.
There are designers, architects, makers and engineers who are committed to the practice of impact design, where (and this is one definition):
“Impact design is rooted in the core belief that design can be used to create positive social, environmental and economic change, and focuses on actively measuring impact to inform and direct the design process.” Impact Design Hub, About.
Thus, evaluation of what works is essential for these practitioners. The same website recently shared some designers talking about things that went wrong and what they learned from the process.
If you read that link, you’ll see all sorts of lessons: don’t hand innovative control to someone who’s scared of risk, don’t ignore your community, don’t apply your cultural values to others unless you really know what you’re doing, and don’t forget the importance of communication.
Writing some pretty words every day is not going to achieve my goal and I need to be reminded of the risks that I face in trying to achieve something large – one of which is not actually working towards my own goals in a useful manner! One of the biggest risks is confusing writing a blog with actual work, unless I use this medium to do something. Over the coming weeks, I hope to show you what I am doing as I move towards my very ambitious goal of “beautiful education”. I hope you find the linked article as useful as I did.
Maximise beauty or minimise …?
Posted: January 4, 2016 Filed under: Education, Opinion | Tags: advocacy, authenticity, beauty, charlie brown, education, educational research, ethics, higher education, in the student's head, peanuts, peppermint patty, principles of design, reflection, teaching, teaching approaches

There is a Peanuts comic from April 16, 1972 where Peppermint Patty asks Charlie Brown what he thinks the secret of living is. Charlie Brown’s answer is “A convertible and a lake.” His reasoning is simple. When it’s sunny you can drive around in the convertible and be happy. When it’s raining you can think “oh well, the rain will fill up my lake.” Peppermint Patty asks Snoopy the same question and, committed sensualist that he is, he kisses her on the nose.

This is the Amphicar. In the 21st century, no philosophical construct will avoid being reified.
Charlie Brown, a character written to be constantly ground down by the world around him, is not seeking to maximise his happiness, he is seeking to minimise his unhappiness. Given his life, this is an understandable philosophy.
But what of beauty and, in this context, beauty in education? I’ve already introduced the term ‘ugly’ as the opposite of beauty, but it’s hard for me to wrap my head around the notion of ‘minimising ugliness’; ugly is such a strong term. It’s also hard to argue that any education, even when some aspects of it deserve that label, is ever totally ugly. Perhaps, in the educational framing, the absence of beauty is plainness. We end up with things that are ordinary, rather than extraordinary. I think that there is more than enough range between beauty and plainness for us to have a discussion on the movement between those states.
Is it enough for us to accept educational thinking that is acceptably plain? Is that a successful strategy? Many valid concerns about learning at scale focus on the innate homogeneity and lack of personalisation inherent in such an approach: plainness is the enemy. Yet there are many traditional and face-to-face approaches where plainness stares us in the face. Banality in education is, when identified, always rejected, yet it so often slips by without identification. We know that there is a hole in our slippers, yet we only seem to notice when that hole directly affects us or someone else points it out.
My thesis here is that a framing of beauty should lead us to a strategy of maximising beauty, rather than minimising plainness, as it is only in that pursuit that we model that key stage of falling in love with knowledge that we wish our students to emulate. If we say “meh is ok”, then that is what we will receive in return. We model, they follow as part of their learning. That’s what we’re trying to make happen, isn’t it?
What would Charlie Brown’s self-protective philosophy look like in a positive framing, maximising his joy rather than managing his grief? I’m not sure but I think it would look a lot like a dancing beagle who kisses people on the nose. We may need more than this for a sound foundation to reframe education!
Exploring beauty and aesthetics
Posted: January 3, 2016 Filed under: Education, Opinion | Tags: aesthetics, beauty, education, educational problem, educational research, hegel, higher education, Kant, learning, reflection, suits, teaching, teaching approaches, The Grasshopper, thinking, tools, wittgenstein
“Nothing great in the world has ever been accomplished without passion.” Hegel
- the ability to state the goal of any educational activity as separate from the activity,
- the awareness of evidence-based practice and its use in everyday teaching, and
- a willingness to accept that it is correct goal setting and using techniques that work, and can be shown to work, that will lead to better outcomes.
A Year of Beauty
Posted: January 1, 2016 Filed under: Education, Opinion | Tags: advocacy, authenticity, beauty, blogging, design, education, educational problem, educational research, good, higher education, plato, principles of design, reflection, socrates, teaching, teaching approaches, thinking, truth, vygotsky, workload
Plato: Unifying key cosmic values of Greek culture to a useful conceptual trinity.
Ever since education became something we discussed, teachers and learners alike have had strong opinions regarding the quality of education and how it can be improved. What is surprising, as you look at these discussions over time, is how often we seem to come back to the same ideas. We read Dewey and we hear echoes of Rousseau. So many echoes and so much careful thought, found as we built new modern frames with Vygotsky, Piaget, Montessori, Papert and so many more. But little of this should really be a surprise because we can go back to the writings of Marcus Fabius Quintilianus (Quintilian) and his twelve books of The Orator’s Education and we find discussion of small class sizes, constructive student-focused discussions, and the argument that more people were capable of thought and far-reaching intellectual pursuits than was popularly believed.
“… as birds are born for flying, horses for speed, beasts of prey for ferocity, so are [humans] for mental activity and resourcefulness.” Quintilian, Book I, page 65.
I used to say that it was stunning how contemporary education seems to be slow in moving in directions first suggested by Dewey a hundred years ago; then I discovered that Rousseau had said it 150 years before that. Now I find that Quintilian wrote things such as this nearly 2,000 years ago. And Marcus Aurelius, among other stoics, made much of approaches to thinking that, somehow, were put to one side as we industrialised education much as we had industrialised everything else.
This year I have accepted that we have had 2,000 years of thinking (and as much evidence when we are bold enough to experiment) and yet we just have not seen enough change. Dewey’s critique of the University is still valid. Rousseau’s lament on attaining true mastery of knowledge stands. Quintilian’s distrust of mere imitation would not be quieted when looking at much of repetitive modern examination practice.
What stops us from changing? We have more than enough evidence of discussion and thought, from some of the greatest philosophers we have seen. When we start looking at education, in varying forms, we wander across Plato, Hypatia, Hegel, Kant, Nietzsche, in addition to all of those I have already mentioned. But evidence, as it stands, does not appear to be enough, especially in the face of personal perception of achievement, contribution and outcomes, whether supported by facts or not.
Evidence of uncertainty is not enough. Evidence of the lack of efficacy of techniques, now that we can and do measure them, is not enough. Evidence that students who fail under one approach then, under other tutors or approaches, mysteriously flourish is not enough.
Authority, by itself, is not enough. We can be told to do more or to do things differently but the research we have suggests that an externally applied control mechanism just doesn’t work very well for areas where thinking is required. And thinking is, most definitely, required for education.
I have already commented elsewhere on Mark Guzdial’s post that attracted so much attention and, yet, all he was saying was what we have seen repeated throughout history and what is now supported in this ‘gilt age’ of measurement of efficacy. It still took local authority to stop people piling onto him (even under the rather shabby cloak of ‘scientific enquiry’ that masks so much negative activity). Mark is repeating the words of educators throughout the ages who have stepped back and asked “Is what we are doing the best thing we could be doing?” It is human to say “But, if I know that this is the evidence, why am I acting as if it were not true?” But it is quite clear that this is still challenging and, amazingly, heretical to an extent, despite these (apparently controversial) ideas pre-dating most of what we know as the trappings and establishments of education. Here is our evidence that evidence is not enough. This experience also shows that, while authority can halt a debate, authority cannot force people to alter such a deeply complex and cognitive practice in a useful manner. Nobody is necessarily agreeing with Mark; they’re just no longer arguing. That’s not helpful.
So, where to from here?
We should not throw out everything old simply because it is old; rejecting the old without evidence is meaningless, and it is as wrong as autocratically rejecting everything new because it is new.
The challenge is to find a way of explaining how things could change without forcing conflict between evidence and personal experience and without having to resort to an argument by authority, whether moral or experiential. And this is a massive challenge.
This year, I looked back to find other ways forward. I looked back to the three values of Ancient Greece, brought together as a trinity through Socrates and Plato.
These three values are: beauty, goodness and truth. Here, truth means seeing things as they are (non-concealment). Goodness denotes the excellence of something and often refers to a purpose or meaning for existence, in the sense of a good life. Beauty? Beauty is an aesthetic delight, pleasing to those senses that value certain criteria. It does not merely mean pretty, as there are many ways that something can be aesthetically pleasing. For Dewey, equality of access was an essential criterion of education; education could only be beautiful to Dewey if it was free and easily available. For Plato, the revelation of knowledge was good, and beauty could arouse a love for this knowledge that would lead to such a good. By revealing the good, reality, to ourselves and our world, we are ultimately seeking truth: seeing the world as it really is.
In the Platonic ideal, a beautiful education leads us to fall in love with learning and gives us momentum to strive for good, which will lead us to truth. Is there any better expression of what we all would really want to see in our classrooms?
I can speak of efficiencies of education, of retention rates and average grades. Or I can ask you if something is beautiful. We may not all agree on details of constructivist theory but if we can discuss those characteristics that we can maximise to lead towards a beautiful outcome, aesthetics, perhaps we can understand where we differ and, even more optimistically, move towards agreement. Towards beautiful educational practice. Towards a system and methodology that makes our students as excited about learning as we are about teaching. Let me illustrate.
A teacher stands in front of a class, delivering the same lecture that has been delivered for the last ten years. From the same book. The classroom is half-empty. There’s an assignment due tomorrow morning. Same assignment as the last three years. The teacher knows roughly how many people will ask for an extension an hour beforehand, how many will hand up and how many will cheat.
I can talk about evidence, about pedagogy, about political and class theory, about all forms of authority, or I can ask you, in the privacy of your head, to think about these questions.
- Is this beautiful? Which of the aesthetics of education are really being satisfied here?
- Is it good? Is this going to lead to the outcomes that you want for all of the students in the class?
- Is it true? Is this really the way that your students will be applying this knowledge, developing it, exploring it and taking it further, to hand on to other people?
- And now, having thought about yourself, what do you think your students would say? Would they think this was beautiful, once you explained what you meant?
Over the coming year, I will be writing a lot more on this. I know that this idea is not unique (Dewey wrote on this, to an extent, and, more recently, several books in the dramatic arts have taken up the case of beauty and education) but it is one that we do not often address in science and engineering.
My challenge, for 2016, is to try to provide a year of beautiful education. Succeed or fail, I will document it here.

