Small evaluation, big impact.
Posted: January 29, 2016

One of the problems with any model that builds in more feedback is that we incur both the time required to produce the feedback and an implicit requirement to allow students enough time to assimilate and make use of it. This second requirement remains even if there are no subsequent attempts at the work, because we want to build upon existing knowledge. The requirement for good feedback makes no sense without a requirement that it be useful.
But let me reiterate that pretty much all evaluation and feedback can be very valuable, no matter how small or quick, if we know what we are trying to achieve. (I’ll get to more complicated systems in later posts.)
Novice programmers often struggle, and this early stage of development tends to determine whether they start off believing that they can program. Given that automated evaluation only really provides useful feedback once the student has something working, novice programming classes are an ideal place to put human markers. If we can make students think “Yes, I can do this” early on, that is the emotion they will remember. We need to get to the big problems quickly, turn them into manageable issues that can be overcome, and then let motivation and curiosity do the rest.

That first computer experience can stay with you your whole life. (Mine was actually punch cards but they don’t blink.)
There’s an excellent summary paper on computer programming visualisation systems aimed at novice programmers, which discusses some of the key problems novices face on their path to mastery:
- Novices can see some concepts as static code rather than as the components of a dynamic process. For example, they might see objects simply as a way of containing things rather than as a way of modelling entities and their behaviours. These static perceptions prevent students from understanding that they are designing behaviours, not just writing magic formulas.
- There can be significant difficulty in understanding the computer itself: seeing the notional machine, the abstraction that forms a basis upon which knowledge of one language or platform can be applied elsewhere.
- Misconceptions about fundamental concepts are common and can easily entrench weak understanding, leaving students in a liminal state, unable to assimilate a threshold concept and move on.
- Students struggle to trace programs and work out what state the program should be in. In my own community, Raymond Lister, Donna Teague, Simon, and others have clearly shown that many students struggle to trace even simple programs (see the sketch after this list).
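To make the tracing difficulty concrete, here is the kind of three-line fragment (a sketch in Python; the research I mention used similar fragments across several languages) that a surprising number of novices cannot trace reliably:

```python
# What are the values of a, b and c after these assignments?
a = 3
b = 7
c = a  # c receives a copy of a's current value (3); they are not linked
a = b  # a is overwritten with 7; c still holds 3
b = c  # b becomes 3: a and b have swapped values

print(a, b, c)  # prints: 7 3 3
```

A student who cannot yet run the notional machine in their head will often predict that a and b are unchanged, or that reassigning a also changes c.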
If we have put human markers (E1 or E2) into a programming class and identified that these are the problems we’re looking for, we can provide immediate, targeted evaluation that is also immediate, constructive feedback: on the day, in response to actual issues, an authentic demonstration of a solution process that students can model. This is the tightest feedback and reward loop we can offer. How does this work?
- Program doesn’t work because of one of the key problem areas.
- Human evaluator intervenes with student and addresses the issue, encouraging discovery inside the problem area.
- Student tries to identify the problem and explain it to the evaluator in context, modelling the evaluator’s approach and building on existing knowledge.
- Evaluator provides more guidance and feedback.
- Student continues to work on problem.
- We hope that the student will come across the solution (or think towards it) but we may have to restart this loop.
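As an illustration of this loop, consider a hypothetical exchange around a classic off-by-one error (the code and the intervention below are invented for this sketch, not drawn from a real session):

```python
# Student's goal: add up the numbers from 1 to 10.
total = 0
for i in range(1, 10):  # Bug: range(1, 10) yields 1..9, so 10 is never added
    total += i

print(total)  # prints 45, not the expected 55
```

Rather than saying “change the 10 to an 11”, the evaluator might ask: “What is the last value of i your loop actually sees? Try printing i inside the loop and tracing the final pass.” The student runs the trace, discovers that the loop never reaches 10, and fixes the bound themselves, exercising exactly the tracing and notional-machine skills listed earlier.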
Note that we’re not necessarily giving the solution here but we can consider leading towards this if the student is getting visibly frustrated. I’d suggest never telling a student what to type as it doesn’t address any of the problems, it just makes the student dependent upon being told the answer. Not desirable. (There’s an argument here for rich development environments that I’ll expand on later.)
Evaluation like this is formative, immediate and rich. We can even streamline it with guidelines to help the evaluators, although much of this will amount to supporting students as they learn to read their own code and understand the key concepts. We should develop students from the simple to the complex and from the concrete to the abstract, so some problems with abstraction are to be expected, especially if we are playing near any threshold concepts.
But this is where learning designers have to be ready to say “this may cause trouble” and properly brief the evaluators who will be on the ground. If we want our evaluators to work efficiently and effectively, we have to brief them on what to expect, what to do, and how to follow up.
If you’ve missed it so far, one of our big responsibilities is training our evaluation team. It’s only by doing this that we can make sure that our evaluators aren’t getting bogged down in side issues or spending too much time with one student and doing the work for them. This training should include active scenario-based training to allow the evaluators to practise with the oversight of the educators and designers.
We have finite resources. If we want to support a room full of novices, we have to prepare for the possibility of all of them having problems at once, and the only way to support that at scale is to have an excellent design and train for it.