Workshop report: ALTC Workshop “Assessing student learning against the Engineering Accreditation Competency Standards: A practical approach”. Part 2.

Continuing from yesterday’s post, I was discussing the workshop that I went to and what I’d learned from it. I finished on the point that assessment of learning occurs when lecturers:

  • Use evidence of student learning
  • to make judgements on student achievement
  • against goals and standards

but we have so many other questions to ask at this stage. What were our initial learning objectives? What were we trying to achieve? Learning outcomes are effectively a contract between educator and student, so we plan to achieve them, but how do they fit in the context of our accreditation and overall requirements? One of the things stressed in the workshop was that we need a range of assessment tasks to achieve our objectives:

  • We need a wide variety
  • These should be open-entry, where students can begin the tasks from a range of previous learning levels and we cater for different learning preferences and interests
  • They should be open-ended, where we don’t railroad the students towards a looming and monolithic single right answer, and multiple pathways or products are possible
  • We should be developing students’ capabilities by building on the standards
  • Finally, we should provide space for student ownership and decision making.

Effectively, we need to be able to get to the solution in a variety of ways. If we straitjacket students into a fixed solution, we risk stifling their ability to actually learn and, as I’ve mentioned before, we risk enforcing compliance to a doctrine rather than developing knowledgeable, self-regulated learners. If we design these activities properly, then we should find that the result reduces student complaints about unfairness or about incorrect assumptions regarding their preparation. However, these sorts of changes take time, and here’s a point so important that I’ll give it its own line:

You can’t expect to change all of your assessment in one semester!

The advice from Wageeh and Jeff was to focus on an aspect, monitor it, make your change, assess it, reflect and then extend what you’ve learned to other aspects. I like this because, of course, it sounds a lot like a methodical scientific approach to me. Because it is. As to which assessment methods you should choose, the presenters recognised that working out how to make a positive change to your assessment can be hard, so they suggested generating a set of alternative approaches and then picking one. They then introduced Prus and Johnson’s 1994 paper “A Critical Review of Student Assessment Options”, which provides twelve different assessment methods with their advantages and drawbacks. One of the best things about this paper is that there is no ‘must’ or ‘right’; there is always ‘plus’ and ‘minus’.

Want to mine archival data to look at student performance? As I’ve discussed before, archival data gives you detailed knowledge, but at a time when it’s too late to do anything for that student or for a particular cohort in that class. Archival data analysis is, however, a fantastic tool for checking to see if your prerequisites are set correctly. Does a student’s grade in this course correlate with their grades in the prerequisites? Jeff mentioned a course where the students’ performance should have depended upon Physics and Maths but, while their Physics marks correlated with their final Statics marks, their Mathematics marks didn’t. (A study at Baldwin-Wallace presented at SIGCSE 2012 asked the more general question: what are the actual dependencies if we carry out a Bayesian Network Analysis? I’m still meaning to do this for our courses as well.)
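As a rough sketch of the kind of prerequisite check I mean (the CSV file and column names here are invented for illustration, and I’m assuming pandas is available), it doesn’t take much code to ask the correlation question of your own archival records:

```python
# Hypothetical sketch: do prerequisite marks correlate with the final mark
# in a later course? The file and column names are invented for illustration.
import pandas as pd

# Archival records: one row per student, marks as percentages.
marks = pd.read_csv("archival_marks.csv")  # columns: student_id, physics, maths, statics

# Pearson correlation between each prerequisite and the final Statics mark.
for prereq in ["physics", "maths"]:
    r = marks[prereq].corr(marks["statics"])
    print(f"{prereq} vs statics: r = {r:.2f}")
```

A pairwise correlation like this is only a starting point, of course; the Bayesian Network Analysis mentioned above models the dependencies jointly rather than pair by pair, which is what makes it attractive for untangling which prerequisites actually matter.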

Other approaches, such as surveys, are quick and immediate, but they are all perceptual. Asking a student how they did on a quiz should never be used as their actual mark! The availability of time will also change the methods you choose. If you have a really big group, then you can statistically sample to get an indication, but this starts to make your design and your tolerance for possible error very important.
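To illustrate why the design and error tolerance matter so much, here is a minimal sketch (my own illustration, not something from the workshop) of the standard sample-size calculation for estimating a proportion, with a finite population correction for a known class size:

```python
# Minimal sketch: how many students to sample to estimate a proportion
# (e.g. the fraction who met an outcome) within a given margin of error.
# My own illustration, not a method presented in the workshop.
from math import ceil
from statistics import NormalDist


def sample_size(population, margin=0.05, confidence=0.95, p=0.5):
    """Students to sample so the estimate is within `margin` of the true
    proportion at the given confidence level (p=0.5 is the worst case)."""
    z = NormalDist().inv_cdf(1 - (1 - confidence) / 2)
    n0 = (z ** 2) * p * (1 - p) / margin ** 2      # infinite-population sample size
    return ceil(n0 / (1 + (n0 - 1) / population))  # finite population correction


print(sample_size(400))               # 197 of a 400-student class for +/-5%
print(sample_size(400, margin=0.10))  # 78 if you can tolerate +/-10%
```

The point is the one made in the workshop: how much you need to sample depends entirely on how much error you are prepared to tolerate, so that design decision has to come first.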

Jeff stressed that, in all of this assessment, it was essential never to give students an opportunity to gain marks in areas that are not the core focus. (Regular readers know that this is one of my design and operational mantras, as doing otherwise encourages bad behaviour, by which I mean incorrect optimisation.)

There were so many other things covered in this workshop and, sadly, we only had three hours. I suggested that, the next time it is run, they allow more time, because I believe I could happily have spent a day going through this. And I would still have had questions.

We discussed the issue of subjectivity and objectivity, and the distinction between setting and assessment. Any way that I set a multiple-choice quiz is going to be subjective, because I will choose the questions based on my perception of the course and assessment requirements, but it is scored completely objectively.

We also discussed data collection, because there are so many options here. When will we collect the data? If we collect continuously, can we analyse and react continuously? What changes are we making in response? This is another important point:

If you collect data in order to determine which changes are to be made, tie your changes to your data-driven reasons!

There’s little point in saying “We collected all student submission data for three years and then we went to multiple choice questions” unless you can provide a reason from the data, which will both validate your effort in collection and give you a better basis for change. When do I need data to see if someone is clearing the bar? If they’re not, what needs to be fixed? What do I, as a lecturer, need to collect during the process to see what needs to be fixed, rather than the data we collect at the end to determine whether they’ve met the bar?

How do I, as a student, determine if I’m making progress along the way? Can I put all of the summative data onto one point? Can I evaluate everything on a two-hour final exam?

WHILE I’m teaching the course, are the students making progress? Do they need something else? How do I (and should I) collect data throughout the course? A lot of what we actually collect is driven by the mechanisms that we already have. We need to work out what we actually require, and this means that we may need to work beyond the systems that we have.

Again, a very enjoyable workshop! It’s always nice to be able to talk to people and get some really useful suggestions for improvement.