The shortest interval
Posted: January 30, 2016 | Filed under: Education
If we want to give feedback, then the time it takes to give feedback is going to determine how often we can do it. If the core of our evaluation is feedback, rather than some low-Bloom’s quiz-based approach giving a score of some sort, then we have to set our timelines to allow us to:
- Get the work when we are ready to work on it
- Undertake evaluation to the required level
- Return that feedback
- Do this at such a time that our students can learn from it and potentially use it immediately, to reinforce the learning
A commenter asked me how I actually ran large-scale assessment. The largest class I’ve run detailed feedback/evaluation on was 360 students with a weekly submission of a free-text (and graphics) solution to a puzzle. The goal was to have the feedback back within a week – prior to the next lecture where the solution would be delivered.
I love a challenge.
This scale is, obviously, impossible for one person to achieve reliably (we estimated it as at least forty hours of work). Instead, we allocated a marking team to this task, coordinated by the lead educator. (The E1 and E2 model again.) There was no automated capacity for this at the time, although we added some later.
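To put those figures in perspective, here is a quick back-of-the-envelope sketch. The student count, total hours, and turnaround are from the post; the hours a marker can sustainably mark per day is my assumption.

```python
# Back-of-the-envelope marking load, using the post's figures.
students = 360            # weekly free-text submissions
total_hours = 40          # estimated marking effort per cycle
turnaround_days = 3       # turnaround actually achieved with a team
marker_hours_per_day = 4  # assumed sustainable marking stint per person (my guess)

minutes_per_script = total_hours * 60 / students
# Ceiling division: the smallest team that can absorb the load in the window.
team_size = -(-total_hours // (marker_hours_per_day * turnaround_days))

print(f"{minutes_per_script:.1f} minutes per script")  # 6.7 minutes per script
print(f"at least {team_size} markers for a {turnaround_days}-day turnaround")
```

Under those assumptions, a script gets under seven minutes of attention and the team needs at least four markers, which is why a lone educator cannot do this reliably.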
Coordinating a team takes time. Even when you start with a rubric, free-text answers can turn up answer themes that you didn’t anticipate, so we would often carry out simple checks to make sure that things were working. But, looking at the marking time I was billed for (a good measure), I could run an entire cycle of this in three days, including briefing, testing, marking, and oversight. That said, this is with a trained team that is big enough, good conceptual design, and a senior educator who’s happy to take a more executive role.
In this case, we didn’t give the students a chance to refactor their work but, if we had, we could have done this with a release 3 days after submission. To ensure that we then completed the work again by the ‘solution release’ deadline, we would have had to set the next submission deadline to only 24 hours after the feedback was released. This sounds short but, if we assume that some work has been done, then refactoring and reworking should take less time.
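Laid out against a calendar, the hypothetical two-evaluation week described above looks like this. The day numbers are derived from the timings in the post; the code itself is just an illustrative sketch.

```python
# A one-week, two-evaluation cycle (day 0 = first submission),
# using the 3-day marking turnaround and 24-hour rework window from the post.
schedule = {
    0: "first submission due",
    3: "feedback released (after a 3-day marking turnaround)",
    4: "reworked submission due (24 hours after feedback)",
    7: "second evaluation returned and solution released",
}

for day, event in sorted(schedule.items()):
    print(f"Day {day}: {event}")

# The squeeze: students get only one day to refactor,
# while the second marking pass gets its three days again.
rework_days = 4 - 3
second_marking_days = 7 - 4
```

The schedule only closes because the rework window is compressed to 24 hours; everything else in the week is marking time.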
But then we have to think about the cost. By running two evaluation cycles we are providing early feedback but we have doubled our cost for human markers (a real concern for just about everyone these days).
My solution was to divide the work into two components. The first was quiz-based and could be automatically and immediately assessed by the Learning Management System, delivering a mark at a fixed deadline. The second part was looked at by humans. Thus, students received immediate feedback on part of the problem (or a related problem) while they waited for the human markers.
But I’d be the first to admit that I hadn’t linked this properly, according to my new model. It does give us insight into a staged hybrid model, where we buffer our human feedback by using an automated assessment component, smart or dumb, to highlight key areas and, better still, we can bring these forward to help guide time management.
I’m not unhappy with that early attempt at large-scale human feedback as the students were receiving some excellent evaluation and feedback and it was timely and valuable. It also gave me a lot of valuable information about design and about what can work, as well as how to manage marking teams.
I also realised that some courses could never be assessed the way that they claimed: either they needed more people on task, or the feedback would only be delivered at a time when the result wasn’t usable anymore.
How much time should we give students to rework things? I’d suggest that allowing a couple of days takes into account the life beyond Uni that many students have. That means that we can do a cycle in a week if we can keep our human evaluation stages under 2 days. Then, without any automated marking, we get 2 days (E1 or E2) + 2 days (student) + 2 days (second evaluation, possibly E2) + 1 day (final readjustment) and then we should start to see some of the best work that our students can produce.
Assuming, of course, that all of us can drop everything to slot into this. For me, this motivates a cycle closer to two to three weeks, to allow for everything else that both groups are doing. But that then limits us to fewer than five big assessment items for a twelve-week course!
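The arithmetic in the last two paragraphs can be made concrete. All of the stage lengths and the term length are taken from the post; only the layout is mine.

```python
# Days for each stage of the full feedback cycle suggested above.
stages = {
    "first evaluation (E1 or E2)": 2,
    "student rework": 2,
    "second evaluation (possibly E2)": 2,
    "final readjustment": 1,
}

cycle_days = sum(stages.values())  # 7 days: one cycle just fits in a week
term_weeks = 12

# Idealised: everyone drops everything and we run one cycle per week.
cycle_weeks = cycle_days // 7
ideal_items = term_weeks // cycle_weeks

# Realistic: three weeks per cycle once everything else intrudes.
realistic_items = term_weeks // 3  # the four "refine and reinforce" items
```

The gap between the idealised twelve items and the realistic four is exactly the trade-off posed in the next paragraph.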
What’s better? Twelve assessment items that are “submit and done” or four that are “refine and reinforce to best practice”? Is this even a question we can ask? I know which one is aesthetically pleasing, in terms of all of the educational aesthetics we’ve discussed so far, but is this enough for an educator to be able to stand up to a superior and say “We’re not going to do X because it just doesn’t make any sense!”?
What do you think?
I tried using LMS quizzes for quick feedback in a course last year: along with the assignment submission, students had to take a “quiz” (ungraded as long as they took it) in the LMS that asked them about various design decisions (this was for programming-oriented assignments in a first-year object-oriented design course). Based on the students’ answers, I used the quiz feedback mechanisms to urge them to go to office hours ASAP to clear up confusion before the next assignment.
Note that this approach puts responsibility for seeking feedback on the student: we took on the responsibility of alerting students that they likely needed help, but then left it to them to request more in-depth feedback. Everyone still got some human-produced feedback roughly a week later, but those who took advantage of the call to go to office hours got deeper feedback with more time for it to affect work on the next assignment.
As far as I know, only a handful of students (out of 150) actually responded to our invitation/suggestion that they come in for help. But this raises a question: what role should we expect students to take in the feedback process?
Great question. My short answer is that if we expect students to reflect on their learning then we are already expecting them to provide a form of feedback to themselves. But do students know enough to know that they need help from someone else? Trickier. Threshold Concepts theory tells us that students spend time in the liminal phase on their path to learning certain challenging concepts. That liminal state can lead them to think that things are ok until it comes time to test their knowledge, which is when they realise that they don’t know it and when we have confirmation that they don’t know it.
Autonomy is important but a false sense of security is a bad thing to build autonomy upon. Getting people motivated comes from a sense of being in control (autonomy), enough skill or knowledge to carry out the task (mastery), and a reason for doing so (a purpose).
I set a large assignment in a third-year unit that ran over several weeks and, about seven days before the due date, I did a pass over the repositories and the submissions portal to see student activity. The students who had done nothing got an e-mail from me asking if there were any problems, how it was going, if they were still doing the course, and so on.
The response rate to this e-mail was over 80%, and the majority of those students got something reasonable in on time. Two students dropped the course after coming to see me; they weren’t keeping up with the activity and needed to focus their studies elsewhere.
I provided rich feedback automatically through the scheduled evaluations of the submitted work and, thus, students knew if they weren’t on track. However, the students I spoke to (and followed up with) had thought that they had plenty of time but, after the reminder, realised what the task involved and that the deadline was closer than they thought.
My goal was to make sure that the students thought about the assignment in time to have a reasonable chance of submitting something. Having provided this trigger, I then only tracked those students who were really struggling. But most of these students engaged with the community, and this provided much-needed guidance on activity and apprenticeship experiences from other students, as well as from me.
The ‘top’ 70% of my students don’t need my help with feedback in third year; about 30% do. My mantra is that I try to leave no student behind, unless they are really determined to be left. Hence, I intervene more. I suspect that the right level of feedback involvement is heavily dependent upon what you want to achieve, your cohort, the disparity within it, and probably many other factors I’ve forgotten.
Thanks again for a great question!