Equity is the principal educational aesthetic

Posted: January 26, 2016 | Filed under: Education
I’ve laid out some honest and effective approaches to the evaluation of student work that avoid late penalties and still provide high levels of feedback, genuine motivation and a scalable structure.
But these approaches have to fit into the realities of time that we have in our courses. This brings me to the discussion of mastery learning (Bloom). An early commenter noted how much my approach was heading towards mastery goals, where we use personalised feedback and targeted intervention to ensure that students have successfully mastered certain tiers of knowledge, before we move on to those that depend upon them.
A simple concept: pre-requisites must be mastered before moving on. It’s what much of our degree structure is based upon and is what determines the flow of students through courses, leading towards graduation. One passes 101 in order to go on to courses that assume such knowledge.
Within an individual course, we quickly realise that too many mastery goals start to leave us in a precarious position. As I noted in my earlier posts, having enough time to do your job as designer or evaluator requires you to plan what you’re doing and keep careful track of your commitments. The issue that arises with mastery goals is that, if a student can’t demonstrate mastery, we offer remedial work and re-training with an eye to another opportunity to demonstrate that knowledge.
This can immediately lead to a backlog of work that must be completed prior to the student being considered to have mastered an area, and thus being ready to move on. If student A has completed three mastery goals while B is struggling with the first, where do we pitch our teaching materials, in anything approximating a common class activity, to ensure that everyone is receiving both what they need and what they are prepared for? (Bergmann and Sams’ Flipped Mastery is one such approach, where flipping and time-shifting are placed in a mastery focus – in their book “Flip Your Classroom”)
But even if we can handle a multi-speed environment (and we have to be careful because we know that streaming is a self-fulfilling prophecy) how do we handle the situation where a student has barely completed any mastery goals and the end of semester is approaching?
Mastery learning is a sound approach. It’s both ethically and philosophically pitched to prevent the easy out for a teacher of saying “oh, I’m going to fit the students I have to an ideal normal curve” or, worse, “these are just bad students”. A mastery learning approach tends to produce good results, although it can be labour intensive as we’ve noted. To me, Bloom’s approach embodies one of my critical principles in teaching: because of variable levels of student preparation, prior experience and privilege, we have to adjust our effort and approach to ensure that all students can be brought to the same level wherever possible.
Equity is one of my principal educational aesthetics and I hope it’s one of yours. But now we have to mutter to ourselves that we have to think about limiting how many mastery goals there are because of administrative constraints. We cannot load up some poor student who is already struggling and pretend that we are doing anything other than delaying their ultimate failure to complete.
At the same time, we would be on shaky ground to construct a course where we could turn around at week 3 of 12 and say “You haven’t completed enough mastery goals and, because of the structure, this means that you have already failed. Stop trying.”
The core of a mastery-based approach is the ability to receive feedback, assimilate it, change your approach and then be reassessed. But, if this is to be honest, achieving a pre-requisite should carry a near guarantee of good preparation for all courses that come afterwards. I believe that we can all name pre-requisite and dependency patterns where this is not true, whether it is courses where the pre-requisite course is never really used, or dependencies where you really needed a good pass in the pre-req to advance.
Competency-based approaches focus on competency and part of this is the ability to use the skill or knowledge where it is required, whether today or tomorrow. Many of our current approaches to knowledge and skill are very short-term-focussed, encouraging cramming or cheating in order to tick a box and move on. Mastering a skill for a week is not the intent but, unless we keep requiring students to know or use that information, that’s the message we send. This is honesty: you must master this because we’re going to keep using it and build on it! But trying to combine mastery and grades raises unnecessary tension, to the student’s detriment.
As Bloom notes:
Mastery and recognition of mastery under the present relative grading system is unattainable for the majority of students – but this is the result of the way in which we have “rigged” the educational system.
Bloom, Learning for Mastery, UCLA CSEIP Evaluation Comment, 1, 2, 1968.
Mastery learning is part and parcel of any competency based approach but, without being honest about the time constraints that are warping it, even this good approach is diminished.
The upshot of this is that any beautiful model of education adhering to the equity aesthetic has to think in a frame that is longer than a semester and in a context greater than any one course. We often talk about doing this but detailed alignment frequently escapes us, unless it is to put up our University-required ‘graduate attributes’ to tell the world how good our product will be.
We have to accept that part of our job is asking a student to do something and then acknowledging that they have done it, while continuing to build systems where what they have done is useful, provides a foundation to further learning and, in key cases, is something that they could do again in the future to the approximate level of achievement.
We have to, again, ask not only why we grade but also why we grade in such strangely synchronous containers. Why is it that a degree in almost any subject is three to five years long? How is it that, despite there being nearly thirty years between the computing knowledge in the degree that I did and the one that I teach, they are still the same length? How can they be so similar when we know how much knowledge is changing?
A better model of education is not one that starts from the assumption of the structures that we have. We know a lot of things that work. Why are we constraining them so heavily?
It’s only five minutes late!

Posted: January 23, 2016 | Filed under: Education
I’ve been arguing that late penalties are not only unhelpful, they simply don’t work, yet I keep talking about getting work in on time and tying it to realistic resource allocation. Does this mean I’m really using late penalties after all?
No, but let me explain why, starting from the underlying principle of fairness that is an aesthetic pillar of good education. One part of this is that the actions of one student should not unduly affect the learning journey of another student. That includes evaluation (and associated marks).
This is the same principle that makes me reject curve grading. It makes no sense to me that one student’s work should be judged in the context of another’s, when we have so little real information with which to establish any form of equivalence of human experience and available capacity.
I don’t want to create a market economy for knowledge, where we devalue successful demonstrations of knowledge and skill for reasons that have nothing to do with learning. Curve grading devalues knowledge. Time penalties devalue knowledge.
I do have to deal with resource constraints, in that I often have (some) deadlines that are administrative necessities, such as degree awards and things like this. I have limited human resources, both personally and professionally.
Given that I do not have unconstrained resources, the fairness principle naturally extends to say that individual students should not consume resources to the detriment of others. I know that I have a limited amount of human evaluation time, therefore I have to treat this as a constrained resource. My E1 and E2 evaluation resources must be, to a degree at least, protected to ensure the best outcome for the most students. (We can factor equity into this, and should, but that stops this from being a simple linear equivalence and makes the terms more complex than they need to be for explanation, so I’ll continue this discussion as if we’re discussing equality.)
You’ve noticed that the E3 and E4 evaluation systems are pretty much always available to students. That’s deliberate. If we can automate something, we can scale it. No student is depriving another of timely evaluation and so there’s no limitation of access to E3 and E4, unless it’s too late for it to be of use.
If we ask students to get their work in at time X, it should be on the expectation that we are ready to leap into action at second X+(prep time), or that the students should be engaged in some other worthwhile activity from X+1, because otherwise we have made up a nonsense figure. In order to be fair, we should release all of our evaluations back at the same time, to avoid accidental advantages because of the order in which things were marked. (We may wish to vary this for time banking but we’ll come back to this later.) As many things are marked in surname or student number order, the only way to ensure that we don’t accidentally keep granting an advantage is to release everything at the same time.
Remember, our whole scheme is predicated on the assumption that we have designed and planned for how long it will take to go through the work and provide feedback in time for modification before another submission. When X+(prep time) comes, we should know, roughly to the hour or day, at worst, when this will be done.
If a student hands up fifteen minutes late, they have most likely missed the preparation phase. If we delay our process to include this student, then we will delay feedback to everyone. Here is a genuine motivation for students to submit on time: they will receive rich and detailed feedback as soon as it is ready. Students who hand up late will be assessed in the next round.
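To make the mechanism concrete, here’s a minimal sketch of round-based submission handling. The round times and every name in it are invented for illustration; nothing here describes a real system:

```python
from datetime import datetime

def assign_round(submitted, round_deadlines):
    """Return the index of the first feedback round whose deadline the
    submission met. Late work loses no marks; it simply waits for the
    next round of evaluation, after that round's feedback has gone out."""
    for i, deadline in enumerate(round_deadlines):
        if submitted <= deadline:
            return i
    # Past every round: handled by whatever end-of-semester process applies.
    return len(round_deadlines)

rounds = [datetime(2016, 3, 7, 17, 0), datetime(2016, 3, 21, 17, 0)]
on_time = assign_round(datetime(2016, 3, 7, 16, 45), rounds)       # round 0
fifteen_late = assign_round(datetime(2016, 3, 7, 17, 15), rounds)  # round 1
```

The point of the sketch is the absence of any penalty term: lateness changes when feedback arrives, never what the work is worth.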
That’s how the real world actually works. No-one gives you half marks for something that you do a day late. It’s either accepted or not and, often, you go to the back of the queue. When you miss the bus, you don’t get 50% of the bus. You just have to wait for the next opportunity and, most of the time, there is another bus. Being late once rarely leaves you stranded without apparent hope – unlucky Martian visitors aside.
But there’s more to this. When we have finished with the first group, we can immediately release detailed feedback on what we were expecting to see, providing the best results to students and, from that point on, anyone who submits would have the benefit of information that the first group didn’t have before their initial submission. Rather than make the first group think that they should have waited (and we know students do), we give them the best possible outcome for organising their time.
The next submission deadline is done by everyone with the knowledge gained from the first pass but people who didn’t contribute to it can’t immediately use it for their own benefit. So there’s no free-riding.
There is, of course, a tricky period between the submission deadline and the release, where we could say “Well, they didn’t see the feedback” and accept the work but that’s when we think about the message we want to send. We would prefer students to improve their time management and one part of this is to have genuine outcomes from necessary deadlines.
If we let students keep handing in later and later, we will eventually end up having these late submissions running into our requirement to give feedback. But, more importantly, we will say “You shouldn’t have bothered” to those students who did hand up on time. When you say something like this, students will learn and they will change their behaviour. We should never reinforce behaviour that is the opposite of what we consider to be valuable.
Fairness is a core aesthetic of education. Authentic time management needs to reflect the reality of lost opportunity, rather than diminished recognition of good work in some numerical reduction. Our beauty argument is clear: we can be firm on certain deadlines and remove certain tasks from consideration and it will be a better approach and be more likely to have positive outcomes than an arbitrary reduction scheme already in use.
First year course evaluation scheme

Posted: January 22, 2016 | Filed under: Education, Opinion
In my earlier post, I wrote:
Even where we are using mechanical or scripted human [evaluators], the hand of the designer is still firmly on the tiller and it is that control that allows us to take a less active role in direct evaluation, while still achieving our goals.
and I said I’d discuss how we could scale up the evaluation scheme to a large first year class. Finally, thank you for your patience, here it is.
The first thing we need to acknowledge is that most first-year/freshman classes are neither overly complex nor heavily abstract. We know that we want to work concrete to abstract, simple to complex, as we build knowledge, taking into account how students learn, their developmental stages and the mechanics of human cognition. We want to focus on difficult concepts that students struggle with, to ensure that they really understand something before we go on.
In many courses and disciplines, the skills and knowledge we wish to impart are fundamental and transformative, but really quite straightforward to evaluate. What this means, based on what I’ve already laid out, is that my role as a designer is going to be crucial in identifying how we teach and evaluate the learning of concepts, but the assessment or evaluation probably doesn’t require my depth of expert knowledge.
The model I put up previously now looks like this:
My role (as the notional E1) has moved entirely to design and oversight, which includes developing the E3 and E4 tests and training the next tier down, if they aren’t me.
As an example, I’ve put in two feedback points, suitable for some sort of worked output in response to an assignment. Remember that the E2 evaluation is scripted (or based on rubrics) yet provides human nuance and insight, with personalised feedback. That initial feedback point could be peer-based evaluation, group discussion and demonstration, or whatever you like. The key here is that the evaluation clearly indicates to the student how they are travelling; it’s never just “8/10 Good”. If this is a first year course then we can capture much of the required feedback with trained casuals and the underlying automated systems, or by training our students on exemplars to be able to evaluate each other’s work, at least to a degree.
The same pattern as before lies underneath: meaningful timing with real implications. To get access to human evaluation, that work has to go in by a certain date, to allow everyone involved to allow enough time to perform the task. Let’s say the first feedback is a peer-assessment. Students can be trained on exemplars, with immediate feedback through many on-line and electronic systems, and then look at each other’s submissions. But, at time X, they know exactly how much work they have to do and are not delayed because another student handed up late. After this pass, they rework and perhaps the next point is a trained casual tutor, looking over the work again to see how well they’ve handled the evaluation.
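The “at time X, they know exactly how much work they have to do” property can be sketched as a simple round-robin allocation that runs once the deadline closes. All the names here are hypothetical, a sketch of the idea rather than an implementation:

```python
def allocate(submissions, markers):
    """Deal the submitted work out round-robin, so that the moment the
    deadline passes every marker (peer or casual tutor) knows their exact
    load, and no-one is left waiting on work that hasn't arrived."""
    load = {m: [] for m in markers}
    for i, work in enumerate(submissions):
        load[markers[i % len(markers)]].append(work)
    return load

load = allocate(["s1", "s2", "s3", "s4", "s5"], ["tutorA", "tutorB"])
```

Because late work is deferred to the next pass rather than trickling in, this allocation is computed once and never needs revisiting.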
There could be more rework and review points. There could be fewer. The key here is that any submission deadline is only required because I need to allocate enough people to the task and keep the number of tasks to allocate, per person, at a sensible threshold.
Beautiful evaluation is symmetrically beautiful. I don’t overload the students or lie to them about the necessity of deadlines but, at the same time, I don’t overload my human evaluators by forcing them to do things when they don’t have enough time to do it properly.
As for them, so for us.
Throughout this process, the E1 (supervising evaluator) is seeing all of the information on what’s happening and can choose to intervene. At this scale, if the E1 were also involved in evaluation, intervention would likely be last-minute and only in dire emergency. Early intervention depends upon early identification of problems and sufficient resources to be able to act. Your best agent of intervention is probably the person who has the whole vision of the course, assisted by other human evaluators. This scheme gives the designer the freedom to have that vision and allows you to plan for how many other people you need to help you.
In terms of peer assessment, we know that we can build student communities and that students can appreciate each other’s value in a way that enhances their perceptions of the course and keeps them around for longer. This can be part of our design. For example, we can ask the E2 evaluators to carry out simple community-focused activities in classes as part of the overall learning preparation and, once students are talking, get them used to the idea of discussing ideas rather than having dualist confrontations. This then leads into support for peer evaluation, with the likelihood of better results.
Some of you will be saying “But this is unrealistic, I’ll never get those resources.” Then, in all likelihood, you are going to have to sacrifice something: number of evaluations, depth of feedback, overall design or speed of intervention.
You are a finite resource. Killing you with work is not beautiful. I’m writing all of this to speak to everyone in the community, to get them thinking about the ugliness of overwork, the evil nature of demanding someone have no other life, the inherent deceit in pretending that this is, in any way, a good system.
We start by changing our minds, then we change the world.
Most assessment’s ugly

Posted: January 21, 2016 | Filed under: Education, Opinion
Ed challenged me: distill my thinking! In three words? Ok, Ed, fine: most assessment’s ugly.
Why is that? (Three word answers. Yes, I’m cheating.)
- It’s not authentic.
- There’s little design.
- Wrong Bloom’s level.
- Weak links forward.
- Weak links backward.
- Testing not evaluating.
- Marks not feedback.
- Not learning focused.
- Deadlines are rubbish.
- Tradition dominates innovation.
How was that?
The hand of an expert is visible in design

Posted: January 20, 2016 | Filed under: Education
In yesterday’s post, I laid out an evaluation scheme that allocated the work of evaluation based on the way that we tend to teach and the availability, and expertise, of those who will be evaluating the work. My “top” (arbitrary word) tier of evaluators, the E1s, were the teaching staff who had the subject matter expertise and the pedagogical knowledge to create all of the other evaluation materials. Despite the production of all of these materials and designs already being time-consuming, in many cases we push all evaluation to this person as well. Teachers around the world know exactly what I’m talking about here.
Our problem is time. We move through it, tick after tick, in one direction and we can neither go backwards nor decrease the number of seconds it takes to perform what has to take a minute. If we ask educators to undertake good learning design, have engaging and interesting assignments, work on assessment levels well up in the taxonomies and we then ask them to spend day after day standing in front of a class and add marking on top?
Forget it. We know that we are going to sacrifice the number of tasks, the quality of the tasks or our own quality of life. (I’ve written a lot about time before, you can search my blog for time or read this, which is a good summary.) If our design was good, then sacrificing the number of tasks or their quality is going to compromise our design. If we stop getting sleep or seeing our families, our work is going to suffer and now our design is compromised by our inability to perform to our actual level of expertise!
Henry Ford refused to work his assembly-line workers beyond 40 hours because of the increased cost of mistakes in what were simple, mechanical tasks. Why, then, do we keep insisting that complex, delicate, fragile and overwhelmingly cognitive activities benefit from us being tired, caffeine-propped, short-tempered zombies?
We’re not being honest. And thus we are not meeting our requirement for truth. A design that gets mangled for operational reasons without good redesign won’t achieve our outcomes. That’s not going to achieve our results – so that’s not good. But what of beauty?
What are the aesthetics of good work? In Petts’ essay on the Arts and Crafts movement, he speaks of William Morris, Dewey and Marx (it’s a delightful essay) and ties the notion of good work to work that is authentic, where such work has aesthetic consequences (unsurprisingly given that we were aiming for beauty), and that good (beautiful) work can be the result of human design if not directly the human hand. Petts makes an interesting statement, which I’m not sure Morris would let pass un-challenged. (But, of course, I like it.)
It is not only the work of the human hand that is visible in art but of human design. In beautiful machine-made objects we still can see the work of the “abstract artist”: such an individual controls his labor and tools as much as the handicraftsman beloved of Ruskin.
Jeffrey Petts, Good Work and Aesthetic Education: William Morris, the Arts and Crafts Movement, and Beyond, The Journal of Aesthetic Education, Vol. 42, No. 1 (Spring, 2008), page 36
Petts notes that it is interesting that Dewey’s own reflection on art does not acknowledge Morris, especially when the Arts and Crafts focus on authenticity, necessary work and a dedication to vision seems to be a very suitable framework. As well, the Arts and Crafts movement focused on the rejection of the industrial and a return to traditional crafting techniques, including social reform, which should have resonated deeply with Dewey and his peers among the Pragmatists. However, Morris’ contribution as a Pragmatist aesthetic philosopher does not seem to be recognised and, to me, this speaks volumes about the unnecessary separation between cloister and loom, when theory can live in the pragmatic world and forms of practice can be well integrated into the notional abstract. (Through an Arts and Crafts lens, I would argue that there are large differences between industrialised education and the provision, support and development of education using the advantages of technology, but that is, very much, another long series of posts, involving both David Bowie and Gary Numan.)
But here is beauty. The educational designer who carries out good design and manages to hold on to enough of her time resources to execute the design well is more aesthetically pleasing in terms of any notion of creative good works. By going through a development process to stage evaluations, based on our assessment and learning environment plans, we have created “made objects” that reflect our intention and, if authentic, then they must be beautiful.
We now have a strong motivating factor to consider both the often over-looked design role of the educator as well as the (easier to perceive) roles of evaluation and intervention.
I’ve revisited the diagram from yesterday’s post to show the different roles during the execution of the course. Now you can clearly see that the course lecturer maintains involvement and, from our discussion above, is still actively contributing to the overall beauty of the course and, we would hope, its success as a learning activity. What I haven’t shown is the role of the E1 as designer prior to the course itself – but that’s another post.
Even where we are using mechanical or scripted human markers, the hand of the designer is still firmly on the tiller and it is that control that allows us to take a less active role in direct evaluation, while still achieving our goals.
Do I need to personally look at each of the many works all of my first years produce? In our biggest years, we had over 400 students! It is beyond the scale of one person and, much as I’d love to have 40 expert academics for that course, a surplus of E1 teaching staff is unlikely anytime soon. However, if I design the course correctly and I continue to monitor and evaluate the course, then the monster of scale that I have can be defeated, if I can make a successful argument that the E2 to E4 marker tiers are going to provide the levels of feedback, encouragement and detailed evaluation that are required at these large-scale years.
Tomorrow, we look at the details of this as it applies to a first-year programming course in the Processing language, using a media computation approach.
Four-tier assessment

Posted: January 19, 2016 | Filed under: Education, Opinion
We’ve looked at a classification of evaluators that matches our understanding of the complexity of the assessment tasks we could ask students to perform. If we want to look at this from an aesthetic framing then, as Dewey notes:
“By common consent, the Parthenon is a great work of art. Yet it has aesthetic standing only as the work becomes an experience for a human being.”
John Dewey, Art as Experience, Chapter 1, The Live Creature.
A classification of evaluators cannot be appreciated aesthetically unless we provide a way for it to be experienced. Our aesthetic framing demands an implementation that makes use of such an evaluator classification and applies it to a problem where we can apply a pedagogical lens; only then can we start to ask how aesthetically pleasing it is.
And this is what brings us to beauty.
A systematic allocation of tasks to these different evaluators should provide valid and reliable marking, assuming we’ve carried out our design phase correctly. But what about fairness, motivation or relevancy, the three points that we did not address previously? To be able to satisfy these aesthetic constraints, and to confirm the others, it now matters how we handle these evaluation phases because it’s not enough to be aware that some things are going to need different approaches, we have to create a learning environment to provide fairness, motivation and relevancy.
I’ve already argued that arbitrary deadlines are unfair, that extrinsic motivational factors are grossly inferior to those found within, and, in even earlier articles, that we too insist on the relevancy of the measurements that we have, rather than designing for relevancy and insisting on the measurements that we need.
To achieve all of this and to provide a framework that we can use to develop a sense of aesthetic satisfaction (and hence beauty), here is a brief description of a four-tier, penalty free, assessment.
Let’s say that, as part of our course design, we develop an assessment item, A1, that is one of the elements to provide evaluation coverage of one of the knowledge areas. (Thus, we can assume that A1 is not required to be achieved by itself to show mastery but I will come back to this in a later post.)
Recall that the marking groups are: E1, expert human markers; E2, trained or guided human markers; E3, complex automated marking; and E4, simple and mechanical automated marking.
A1 has four, inbuilt, course deadlines but rather than these being arbitrary reductions of mark, these reflect the availability of evaluation resource, a real limitation as we’ve already discussed. When the teacher sets these courses up, she develops an evaluation scheme for the most advanced aspects (E1, which is her in this case), an evaluation scheme that could be used by other markers or her (E2), an E3 acceptance test suite and some E4 tests for simplicity. She matches the aspects of the assignment to these evaluation groups, building from simple to complex, concrete to abstract, definite to ambiguous.
The overall assessment of work consists of the evaluation of four separate areas, associated with each of the evaluators. Individual components of the assessment build up towards the most complex; for example, a student should usually have completed at least some of the E4-evaluated work before attempting E3.
Here’s a diagram of the overall pattern for evaluation and assessment.
The first deadline for the assignment is where all evaluation is available. If students provide their work by this time, the E1 will look at the work, after executing the automated mechanisms, first E4 then E3, and applying the E2 rubrics. If the student has actually answered some E1-level items, then the “top tier” E1 evaluator will look at that work and evaluate it. Regardless of whether there is E1 work or not, human-written feedback from the lecturer on everything will be provided if students get their work in at that point. This includes things that would be of help for all other levels. This is the richest form of feedback, it is the most useful to the students and, if we are going to use measures of performance, this is the point at which the most opportunities to demonstrate performance can occur.
This feedback will be provided in enough time that the students can modify their work to meet the next deadline, which is when the E2 markers become available. Now TAs or casuals are marking instead, or the lecturer is doing easier evaluation from a simpler rubric. These human markers still start by running the automated scripts, E4 then E3, to make sure that there is something in E2 to mark. They also provide feedback on everything in E2 to E4, sent out in time for students to make changes for the next deadline.
Now note carefully what’s going on here. Students will get useful feedback, which is great, but because we have these staggered deadlines, we can pass on important messages as we identify problems. If the class is struggling with key complex or more abstract elements, harder to fix and requiring more thought, we know about it quickly because we have front-loaded our labour.
Once we move down to the fully automated systems, we’re losing opportunities for rich and human feedback to students who have not yet submitted. However, we have a list of students who haven’t submitted, which is where we can allocate human labour, and we can encourage them to get work in, in time for the E3 “complicated” script. This E3 marking script remains open for the rest of the semester, to encourage students to do the work sometime ahead of the exam. At this point, the discretionary allocation of labour for feedback is possible, because the lecturer has done most of the hard work in E1 and E2 and should, with any luck, have far fewer evaluation activities for this particular assignment. (Other things may intrude, including other assignments, but we have time bounds on this one, which is better than we often have!)
Finally, at the end of the teaching time (in our parlance, a semester's teaching ends and then we move to exams), we move the assessment to E4 marking only, giving students the ability (if required) to test their work against any "minimum performance" requirements you may have for their eligibility to sit the exam. Eventually, the requirement to enter a record of student performance in this course forces us to declare the assessment item closed.
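The staggered scheme above can be sketched as a simple lookup from submission time to available evaluation tiers. This is only an illustration of the idea, not the actual system described here: the dates and the `tiers_for` function are my own hypothetical choices.

```python
from datetime import datetime

# Hypothetical deadlines for one assignment. E4/E3 are automated scripts,
# E2 is human marking from a simple rubric, E1 is rich lecturer feedback.
DEADLINES = [
    (datetime(2016, 3, 7),  ["E4", "E3", "E2", "E1"]),  # full evaluation + richest feedback
    (datetime(2016, 3, 14), ["E4", "E3", "E2"]),        # TA/casual marking window
    (datetime(2016, 6, 30), ["E4", "E3"]),              # automated only, rest of semester
]
FINAL_TIER = ["E4"]  # after teaching ends: minimum-performance checks only

def tiers_for(submission_time):
    """Return the evaluation tiers still available at a given submission time."""
    for deadline, tiers in DEADLINES:
        if submission_time <= deadline:
            return tiers
    return FINAL_TIER

print(tiers_for(datetime(2016, 3, 5)))  # → ['E4', 'E3', 'E2', 'E1']
print(tiers_for(datetime(2016, 4, 1)))  # → ['E4', 'E3']
```

The point of the structure is visible in the table itself: the richest (most labour-intensive) feedback is attached to the earliest deadline, so labour is front-loaded exactly as described above.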
This is totally transparent and it is based on real resource limitations. The restrictions are in place to improve students' feedback opportunities and give them more guidance. We have also improved our own ability to predict our workload and to guide our resource requests, and we can reuse elements of the automated scripts between assignments without having to rewrite entire assignments. These deadlines are not arbitrary. They are not punitive. We have improved feedback and provided supportive approaches that encourage more work on assignments. We get better insight, in a timely fashion, into what our students are achieving against our design. We can now see fairness, intrinsic motivation and relevance.
I’m not saying this is beautiful yet (I think I have more to prove to you) but I think this is much closer than many solutions that we are currently using. It’s not hiding anything, so it’s true. It does many things we know are great for students so it looks pretty good.
Tomorrow, we’ll look at whether such a complicated system is necessary for early years and, spoilers, I’ll explain a system for first year that uses peer assessment to provide a similar, but easier to scale, solution.
At least they’re being honestPosted: January 13, 2016 Filed under: Education, Opinion | Tags: advocacy, assessment, authenticity, community, competency-based assessment, education, educational research, ethics, higher education, learning, reflection, student perspective, teaching, teaching approaches, thinking, time banking, time management, tools 2 Comments
I was inspired to write this by a comment about using late penalties but dealing slightly differently with students when they owned up to being late. I have used late penalties extensively (it’s school policy) and so I have a lot of experience with the many ways students try to get around them.
Like everyone, I have had students who have tried to use honesty where every other possible way of getting the assignment in on time (starting early, working on it before the day before, miraculous good luck) has failed. Sometimes students are puzzled that “Oh, I was doing another assignment from another lecturer” isn’t a good enough excuse. (Genuine reasons for interrupted work, medical or compassionate, are different and I’m talking about the ambit extension or ‘dog ate my homework’ level of bargaining.)
My reasoning is simple. In education, owning up to something that you did, knowing that it would have punitive consequences of some sort, should not immediately cause things to become magically better. Plea bargaining (and there is an interesting article on why that's not a good idea anywhere) is agreeing to your guilt in order to reduce your sentence. But this is, once again, horse-trading knowledge on the market. Suddenly, we don't just have a temporal currency, we have a conformal currency, where getting a better deal involves finding the 'kindest judge' in the group, the one who will hand down the 'lightest sentence'. Students optimise their behaviour to what works or, if they're lucky, they have a behaviour set that's enough to get them to a degree without changing much. The second group is mostly not who we're talking about, and I don't want to encourage the first group to become bargain-hunting mark-hagglers.
I believe that ‘finding Mr Nice Lecturer’ behaviour is why some students feel free to tell me that they thought someone else’s course was more important than mine, because I’m a pretty nice person and have a good rapport with my students, and many of my colleagues can be seen (fairly or not) as less approachable or less open.
We are not doing ourselves or our students any favours. At the very least, we risk accusations of unfairness if we extend benefits to one group who are bold enough to speak to us (and we know that impostor syndrome and lack of confidence are rife in under-represented groups). At worst, we turn our students into cynical mark shoppers, looking for the easiest touch and planning their work strategy based on what they think they can get away with instead of focusing back on the learning. The message is important and the message must be clearly communicated so that students try to do the work for when it’s required. (And I note that this may or may not coincide with any deadlines.)
We wouldn’t give credit to someone who wrote ‘True’ and then said ‘Oh, but I really meant False’. The work is important or it is not. The deadline is important or it is not. Consequences, in a learning sense, do not have to mean punishments and we do not need to construct a Star Chamber in our offices.
Yes, I do feel strongly about this. I completely understand why people do this and I have also done this before. But after thinking about it at length, I changed my practice so that being honest about something that shouldn’t have happened was appreciated but it didn’t change what occurred unless there was a specific procedural difference in handling. I am not a judge. I am not a jury. I want to change the system so that not only do I not have to be but I’m not tempted to be.
Ugliness 101: late punishmentsPosted: January 13, 2016 Filed under: Education, Opinion | Tags: advocacy, aesthetics, authenticity, bandura, beauty, community, design, education, educational problem, educational research, ethics, higher education, in the student's head, instrumentality, lateness, learning, penalties, student perspective, teaching, teaching approaches, thinking, time banking, time management, tools 4 Comments
Before I lay out the program design I’m thinking of (and, beyond any discussion of competency, as a number of you have suggested, we are heading towards Bloom’s mastery learning as a frame with active learning elements), we need to address one of the most problematic areas of assessment.
Well, let’s be accurate, penalties are, by definition, punishments imposed for breaking the rules, so these are punishments. This is the stick in the carrot-and-stick reward/punish approach to forcing people to do what you want.
Let’s throw the Greek trinity at this and see how it shapes up. A student produces an otherwise perfect piece of work for an assessment task. It’s her own work. She has spent time developing it. It’s really good. Insightful. Oh, but she handed it up a day late. So we’re now going to say that this knowledge is worth less because it wasn’t delivered on time. She’s working a day job to pay the bills? She should have organised herself better. No Internet at home? Why didn’t she work in the library? I’m sure the campus is totally safe after hours and, well, she should just be careful in getting to and from the library. After all, the most important thing in her life, without knowing anything about her, should be this one hundred line program to reinvent something that has been written over a million times by every other CS student in history.
That’s not truth. That’s establishing a market value for knowledge with a temporal currency. To me, unless there’s a good reason for doing this, this is as bad as curve grading because it changes what the student has achieved for reasons outside of the assignment activity itself.
“Ah!” you say “Nick, we want to teach people to hand work in on time because that’s how the world works! Time is money, Jones!”
Rubbish. Yes, there are a (small) number of unmovable deadlines in the world. We certainly have some in education because we have to get grades in to achieve graduations and degrees. But most adults function in a world where they choose how to handle all of the commitments in their lives and then they schedule them accordingly. The more you do that, the more practice you get and you can learn how to do it well.
If you have ever given students a week, or even a day’s, extension because of something that has stopped you being able to accept or mark student work, no matter how good the reason, you have accepted that your submission points are arbitrary. (I feel strongly about this and have posted about it before.)
So what would be a good reason for sticking to these arbitrary deadlines? We'd want to see something really positive coming out of the research, right? Let's look at some of that research, starting with Britton and Tesser, "Effects of Time-Management Practices on College Grades", J Edu Psych, 1991, 83(3). This reinforces what we already know from Bandura: students who feel in control and have high self-efficacy are going to do well. If a student sits down every day to work out what they're going to do then they, unsurprisingly, can get things done. But this study doesn't tell us about long-range time planning – the realm of instrumentality, the capability to link activity today with success in the future. (Here are some of my earlier thoughts on this, with references to Husman.) From Husman, we know that students value tasks in terms of how important they think each task is, how motivated they are, and how well they can link future success to the current task.
In another J Edu Psych paper (1990, 82(4)), Macan and Shahani reported that participants who felt they had control over what they were doing did better, but also clearly indicated that ambiguity and stress influenced time management, both as perceived and in actuality. Perceived Control of Time (the authors' capitalisation) dominated everything: it reduced the impact of ambiguity, reduced the impact of stress, and led to greater satisfaction.
Students are rarely in control of their submission deadlines. Worse, we often do not take into account everything else in a student’s life (even other University courses) when we set our own deadlines. Our deadlines look arbitrary to students because they are, in the majority of cases. There’s your truth. We choose deadlines that work for our ability to mark and to get grades in or, perhaps, based on whether we are in the country or off presenting research on the best way to get students to hand work in on-time.
(Yes, the owl above is staring at me just as hard as he is staring at anyone else here.)
My own research clearly shows that fixed deadlines do not magically teach students the ability to manage their time and, when you examine it, why should they? (My ICER 2012 paper was part of a larger study that clearly demonstrated students continuing, and even extending, last-minute behaviour all the way into the fourth year of their studies.) Time management is a discipline that involves awareness of the tasks to be performed, a decomposition of those tasks into subtasks that can be performed when the hyperbolic time-discounting triggers go off, and a well-developed sense of instrumentality. Telling someone to hand in their work by this date OR ELSE does not increase awareness, train decomposition, or develop any form of planning skill. No wonder it doesn't work, any more than shouting at people teaches them Maxwell's Equations or caning children suddenly reveals the magic of the pluperfect form in Latin grammar.
So, let’s summarise: students do well when they feel in control and it helps with all of the other factors that could get in the way. So, in order to do almost exactly the opposite of help with this essential support step, we impose frequently arbitrary time deadlines and then act surprised when students fall prey to lack of self-confidence, stress or lose sight of what they’re trying to do. They panic, asking lots of (what appear to be) unnecessary questions because they are desperately trying to reduce confusion and stress. Sound familiar?
I have written about this at length while exploring time banking, giving students agency and the ability to plan their own time, to address all of these points. But the new lens in my educational inspection loupe allows me to be very clear about what is most terribly wrong with late penalties.
They are not just wrong; they satisfy none of our educational aesthetics. Because we don't take a student's real life into account, we are not being fair. Because we are not actually developing time management abilities, but treating them as something that will be auto-didactically generated, we are not being supportive. Because we downgrade work when it is still good, we are being intellectually dishonest. Because we vary deadlines to suit ourselves but may not do so for an individual student, we are being hypocritical. We are degrading the value of knowledge for procedural correctness. This is hideously "unbeautiful".
That is not education. That's bureaucracy. Just because most of us live within a bureaucracy doesn't mean that we have to compromise our pedagogical principles. Even when we try to make things fit well, as Rapaport did in mapping his approach onto another scale, we end up warping and twisting our intent, even before we start thinking about lateness and other difficult areas. This cannot be good.
There is nothing to stop a teacher setting an exercise that is about time management, constructed so that every step leads the student to develop better time management. Feedback or marks that reflect lateness, when timeliness is the only measure of fitness, are totally reasonable. But to pretend that you can slap some penalties onto the side of an assessment and it will magically self-scaffold is to deceive yourself, to your students' detriment. It's not true.
Do I have thoughts on how to balance marking resources with student feedback requirements, elastic time management, and real assessments while still recognising that there are some fixed deadlines?
Funny you should ask. We’ll come back to this, soon.
ITiCSE 2014, Day 2, Session4A, Software Engineering, #ITiCSE2014 #ITiCSEPosted: June 24, 2014 Filed under: Education | Tags: collaboration, computer science education, education, educational problem, educational research, higher education, industry, ITiCSE, ITiCSE 2014, learning, mentoring, pair programming, pedagogy, principles of design, project based learning, small group learning, student perspective, studio based learning, teaching, teaching approaches, thinking, time, time factors, time management, universal principles of design, work-life balance Leave a comment
- People: a learning community of teachers and learners
- Process: creative and reflective
- Place: the physical space
- Product: a designed object – a single focus for the process
- Intra-Group Relations: Group 1 has lots of strong characters and appeared to be competent and performing well, with students in group learning about Scrum from each other. Group 2 was more introverted, with no dominant or strong characters, but learned as a group together. Both groups ended up being successful despite the different paths. Collaborative learning inside the group occurred well, although differently.
- Inter-Group Relations: There was good collaborative learning across and between groups after the middle of the semester, where initially the groups were isolated (and one group was strongly focused on winning a prize for best project). Groups learned good practices from observing each other.
Deadlines and Decisions – an Introduction to Time BankingPosted: May 9, 2012 Filed under: Education | Tags: education, higher education, learning, measurement, perry, reflection, resources, teaching, teaching approaches, time banking, time management, tools 3 Comments
I’m working on a new project, as part of my educational research, to change the way that students think about deadlines and time estimation. The concept’s called Time Banking and it’s pretty simple. Some schools already give students ‘slack time’: free extension time that students control, allowing them to manage their own deadlines. Stanford offers 2 days up front so, at any time in the course, you can claim some extra time and give yourself an extension.
The idea behind Time Banking is that you get extra hours if you hand up your work (to a certain standard) early. These hours can later be spent as free extensions on assignments, up to some maximum number of days. This makes deadlines flexible and personalised for each student.
Now I know that some of you already have your “Time is Money, Jones!” hats on and may even be waggling a finger. Here’s a picture of what that looks like, if you’re not a-waggling.
“Deadlines are fixed for a reason!”
“We use deadlines to teach professional conduct!”
“This is going to make marking impossible.”
“That’s not the right way to tie a bow tie!”
“It’s the end of civilisation as we know it!” (Sorry, that’s a little hyperbolic)
Of course, some deadlines are fixed. However, looking back over my own activities during the past quarter, I have far more negotiable and mutable deadlines than I do fixed ones. Knowing how to assess my own use of time in the face of a combination of fixed and mutable deadlines is a skill that I refine every year.
If I hand up late, telling me to hand up on time or to start earlier doesn’t really involve me in the process that’s required: deciding how I’m going to manage all of my commitments over time, rather than panicking when I run into a deadline.
I can’t help thinking that forcing students to treat every assignment deadline as fixed, whether it needs to be or not, doesn’t deal with the student in the way that we try to in every other sphere. It makes them depend upon the deadline from an authority, rather than forcing them to look at their assignment work across a whole semester and plan inside that larger context. How can we produce students who are able to work at the multiplicity or commitment level, sorry, Perry again, if we force them to be authority-dependent dualists in their time management?
Now, before you think I’ve gone mad, there are some guidelines for all of this, as well as the requirement to have a good basis in evidence.
1. We must be addressing an existing behavioural problem. (More on this later.)
2. Some deadlines are immutable. This includes weekly dependencies, assignments where the solutions are revealed post submission, and ‘end of semester’ close-off dates.
3. The assessment of ‘early and satisfactory’ must be low effort for the teacher. We don’t want to encourage handing up empty assignments a week ahead. We want to encourage meeting a certain standard, preferably automatically assessed, to bring student activity forward.
4. We have limits on the amount you can bank or spend, to keep assessment of the submitted materials inside the realm of possibility and, again, to reduce unnecessary load on the staff.
5. We don’t tolerate bad behaviour. Cheating or fiddling the system immediately removes the student from the scheme.
6. We provide up-front hours to give all students a baseline of extension.
7. We integrate this with our existing ‘system problem’ and ‘medical/compassionate problem’ extension systems.
Now, if students don’t have a problem, there’s nothing to fix. If our existing fixed-deadline system encouraged students to start their work at the right time and finish in a timely fashion, then by final year we wouldn’t need anything like this. However, my data from our web submission system clearly indicates the existence of persistently late students and, in fact, rather than getting better, some students actually get later in second year, third year and honours. So, while this isn’t conclusive, we’re not seeing the “nope, no problem here” behaviour that we’d like. That’s point 1 dealt with – it looks like we have a problem.
Most of the points are technical issues or components of an economic model, but points 6 and 7 address a more important issue: equity. Right now, if your online submission system crashes the day before the assignment is due, what happens? Everyone who handed in their work early has done the right thing but, because you have to grant a one-day extension, they effectively prioritised their work too early. Not a huge deal in many ways, because students who get their work in early probably march to a different drum anyway, but it makes a mockery of the whole fixed-deadline thing. Either the deadline is fixed or it isn’t – by allowing extensions on a broad scale for any reason, you’re admitting that your deadline was arbitrary.
We’re trying to make them think harder than that.
How about, instead, you hand out 24 hours of time in the bank? Now the students who handed up early have 24 hours to spend later on, and the students who didn’t get their work in before the crash have a fair chance to get it in on time. If a student gets sick, medical extensions are now just managed as time in the bank, reflecting the fact that the knock-on effects of illness can reach far beyond a single assignment.
But we don’t go crazy. My current thoughts are that we’d limit the students to only starting to count early about 2 days before the assignment is due, and allow a maximum of 3 days extension (greater for medical or compassionate). This keeps it in our marking boundary and also, assuming that you’ve placed your assignments in the context of the appropriate knowledge delivery, keeps the assignments roughly in the same location as the work – not doing the assignment at the beginning of the term and then forgetting the knowledge.
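As a rough sketch of those limits, the bank could be modelled like this. The constants and function names are my own illustrative choices drawn from the numbers above (2 days of countable early time, 3 days of maximum spend, 24 up-front hours), not a specification of the real system:

```python
# Hypothetical time-bank parameters, for illustration only.
EARLY_WINDOW_HOURS = 48   # early time only counts from 2 days before the deadline
MAX_SPEND_HOURS = 72      # at most 3 days of extension on any one assignment
STARTING_BALANCE = 24     # up-front hours granted to every student

def hours_earned(hours_before_deadline):
    """Hours banked for a satisfactory submission this many hours early."""
    return min(max(hours_before_deadline, 0), EARLY_WINDOW_HOURS)

def extension_available(balance):
    """Extension spendable on a single assignment, capped at the maximum."""
    return min(balance, MAX_SPEND_HOURS)

balance = STARTING_BALANCE + hours_earned(72)  # submitted 3 days early: only 48h count
print(balance)                                 # → 72
print(extension_available(balance + 24))       # → 72: spending is capped at 3 days
```

The two caps do the work: the earning window stops students handing up empty shells a week early, and the spending cap keeps each assignment roughly co-located with the teaching it belongs to.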
So, cards on the table: I’m writing a paper on this, identifying exactly what I need to look at in order to demonstrate whether this is a problem, the literature that supports my approach, and the objections and obstacles. I also have to spec the technical system that would support it and, yes, identify the range of assignments for which it would work. It won’t work for everyone, or for every assignment or course, but I suspect it might work very well for some areas.
Could we allow team banking? Course banking? Social sharing? Community involvement (donation to charity for so many hours in the bank at the end of the course)? What could we do by involving students in the elastic management of their own time?
There’s a lot more but I’d love to hear some thoughts on it. I look forward to the discussion!