Workshop report: ALTC Workshop “Assessing student learning against the Engineering Accreditation Competency Standards: A practical approach”. Part 2.

This continues yesterday’s post, in which I was discussing the workshop that I attended and what I’d learned from it. I finished on the point that assessment of learning occurs when Lecturers:

  • Use evidence of student learning
  • to make judgements on student achievement
  • against goals and standards

but we have so many other questions to ask at this stage. What were our initial learning objectives? What were we trying to achieve? Learning outcomes are effectively a contract between educator and student, so we plan to achieve them, but how do they fit into the context of our accreditation and overall requirements? One of the things stressed in the workshop was that we need a range of assessment tasks to achieve our objectives:

  • We need a wide variety
  • These should be open-entry, where students can begin the tasks from a range of previous learning levels and where we cater for different learning preferences and interests
  • They should be open-ended, where we don’t railroad the students towards a looming and monolithic single right answer, and multiple pathways or products are possible
  • We should be building students’ capabilities by building on the standards
  • Finally, we should provide space for student ownership and decision making.

Effectively, we need to be able to get to the solution in a variety of ways. If we straitjacket students into a fixed solution we risk stifling their ability to actually learn and, as I’ve mentioned before, we risk enforcing compliance with a doctrine rather than developing knowledgeable, self-regulated learners. If we design these activities properly then we should find that the result reduces both student complaints about fairness and incorrect assumptions about their preparation. However, these sorts of changes take time, which leads to a point so important that I’ll give it its own line:

You can’t expect to change all of your assessment in one semester!

The advice from Wageeh and Jeff was to focus on an aspect, monitor it, make your change, assess it, reflect, and then extend what you’ve learned to other aspects. I like this because, of course, it sounds a lot like a methodical scientific approach to me. Because it is. As to which assessment methods you should choose, the presenters recognised that working out how to make a positive change to your assessment can be hard, so they suggested generating a set of alternative approaches and then picking one. They then introduced Prus and Johnson’s 1994 paper “A Critical Review of Student Assessment Options”, which provides twelve different assessment methods along with their drawbacks and advantages. One of the best things about this paper is that there is no ‘must’ or ‘right’; there is always ‘plus’ and ‘minus’.

Want to mine archival data to look at student performance? As I’ve discussed before, archival data gives you detailed knowledge, but at a time when it’s too late to do anything for that student or for a particular cohort in that class. Archival data analysis is, however, a fantastic tool for checking to see if your prerequisites are set correctly. Does the grade in this course correlate with grades in the prerequisites? Jeff mentioned a Statics course where performance should have depended upon both Physics and Maths but, while the students’ Physics marks correlated with their final Statics marks, their Mathematics marks didn’t. (A study at Baldwin-Wallace presented at SIGCSE 2012 asked the more general question: what are the actual dependencies if we carry out a Bayesian Network Analysis? I’m still meaning to do this for our courses as well.)
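As a rough illustration of that prerequisite check (my own addition, not something from the workshop), here’s a minimal sketch of how you might test the correlation yourself, assuming you have an anonymised CSV of archival grades with hypothetical column names like physics, maths and statics:

```python
# Minimal sketch: do prerequisite marks correlate with the final mark?
# Assumes an anonymised CSV of archival grades with hypothetical columns
# 'physics', 'maths' and 'statics'; adapt the names to your own records.
import pandas as pd
from scipy.stats import pearsonr

grades = pd.read_csv("archival_grades.csv").dropna(subset=["physics", "maths", "statics"])

for prereq in ["physics", "maths"]:
    r, p = pearsonr(grades[prereq], grades["statics"])
    print(f"{prereq} vs statics: r = {r:.2f} (p = {p:.3f})")
```

A strong correlation doesn’t prove the prerequisite is doing its job, of course, but a consistently weak one is a good prompt to ask why it’s there.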

Other approaches, such as Surveys, are quick and immediate but are entirely perceptual: asking a student how they did on a quiz should never be used as their actual mark! The availability of time will also change the methods you choose. If you have a really big group then you can statistically sample to get an indication, but this starts to make your sampling design and your tolerance for possible error very important.
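To give a feel for what ‘tolerance for possible error’ means in practice (again, my own back-of-the-envelope addition rather than workshop material), here’s a small sketch using the standard normal-approximation sample-size formula with a finite-population correction:

```python
# Minimal sketch: how many students you'd need to sample for a given margin
# of error, using the usual normal-approximation formula for a proportion.
# Assumes simple random sampling; p = 0.5 is the worst (most conservative) case.
import math

def sample_size(population, margin_of_error=0.05, z=1.96, p=0.5):
    n0 = (z ** 2) * p * (1 - p) / (margin_of_error ** 2)
    return math.ceil(n0 / (1 + (n0 - 1) / population))

print(sample_size(400))        # a 400-student class, +/- 5%: about 197 students
print(sample_size(400, 0.10))  # relax to +/- 10% and roughly 78 will do
```

The point isn’t the exact numbers; it’s that the error you’re prepared to tolerate drives how much data you have to collect.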

Jeff stressed that, in all of this assessment, it was essential never to give students an opportunity to gain marks in areas that are not the core focus. (Regular readers know that this is one of my design and operational mantras: offering marks outside the core focus encourages bad behaviour, by which I mean incorrect optimisation.)

There were so many other things covered in this workshop and, sadly, we only had three hours. I suggested that, the next time it is run, they allow more time, because I believe I could happily have spent a day going through this. And I would still have had questions.

We discussed the issue of subjectivity and objectivity and the distinction between setting and assessment. Any way that I set a multiple choice quiz is going to be subjective, because I will choose the questions based on my perception of the course and assessment requirements, but it is scored completely objectively.

We also discussed data collection, because there are so many options here. When will we collect the data? If we collect continuously, can we analyse and react continuously? What changes are we making in response? This is another important point:

If you collect data in order to determine which changes are to be made, tie your changes to your data-driven reasons!

There’s little point in saying “We collected all student submission data for three years and then we went to multiple choice questions” unless you can provide a reason from the data, which will both validate your effort in collection and give you a better basis for change. When do I need data to see if someone is clearing the bar? If they’re not, what needs to be fixed? What do I, as a lecturer, need to collect during the process to see what needs to be fixed, rather than the data we collect at the end to determine whether they’ve met the bar?

How do I, as a student, determine if I’m making progress along the way? Can I put all of the summative data onto one point? Can I evaluate everything on a two-hour final exam?

WHILE I’m teaching the course, are the students making progress? Do they need something else? How do I (and should I) collect data throughout the course? A lot of what we collect is driven by the mechanisms that we already have. We need to work out what we actually require, and this means that we may need to work beyond the systems that we have.

Again, a very enjoyable workshop! It’s always nice to be able to talk to people and get some really useful suggestions for improvement.


Workshop report: ALTC Workshop “Assessing student learning against the Engineering Accreditation Competency Standards: A practical approach”

I was fortunate to be able to attend a three-hour workshop today presented by Professor Wageeh Boles, Queensland University of Technology, and Professor Jeffrey (Jeff) Froyd, Texas A&M, on how we could assess student learning against the accreditation competency standards in Engineering. I’ve seen Wageeh present before in his capacity as an Australian Learning and Teaching Council (ALTC) National Teaching Fellow and greatly enjoyed it, so I was looking forward to today. (Note: the ALTC has been replaced by the Office for Learning and Teaching, OLT, but a number of schemes are still labelled under the old title. Fortunately, I speak acronym.)

Both Wageeh and Jeff spoke at length about why we were undertaking assessment and we started by looking at the big picture: University graduate capabilities and the Engineers Australia accreditation criteria. Like it or not, we live in a world where people expect our students to be able to achieve well-defined things and to demonstrate certain skills. To focus on the course, unit, teaching and learning objectives and assessment alone, without framing these in the national and University expectations, is to risk not producing the students that are expected or desired. Ultimately, if the high-level and local requirements aren’t linked then they should be, because otherwise we’re probably not pursuing the right objectives. (Is it too soon to mention pedagogical luck again?)

We then discussed three types of assessment:

  • Assessment FOR Learning: Which is for teachers and allows them to determine the next steps in advancing learning.
  • Assessment AS Learning: Which is for students and allows them to monitor and reflect upon their own progress (effectively formative).
  • Assessment OF Learning: Which is used to assess what the students have learned and is most often characterised as summative.

But, after being asked about the formative/summative approach, this was recast into a decision-making framework. We carry out assessment of all kinds to allow people to make better decisions and the people, in this situation, are Educators and Students. When we see the results of the summative assessment we, as teachers, can then ask “What decisions do we need to make for this class?” to improve the levels of knowledge demonstrated in the summative. When the students see the result of formative assessment, we then have the question “What decisions do students need to make?” to improve their own understanding. The final aspect, Assessment FOR Learning, covers those areas of assessment that help both educators and students to make better decisions by making changes to the overall course in response to what we’re seeing.

This is a powerful concept as it identifies assessment in terms of responsible groups: this assessment involves one group, the other, or both, and this is why you need to think about the results. (As an aside, this is why I strongly subscribe to the idea that formative assessment should never have an extrinsic motivating aspect, like empty or easy submission marks, because it stops the student focussing on the feedback, which would help their decisions, and makes it look summative, which suddenly makes it look like the educator’s problem.)

One point that came out repeatedly was that our assessment methods should be varied. If your entire assessment is based on a single exam, of one type of question, at the end of the semester then you really only have a single point of data. Anyone who has ever drawn a line on a graph knows that a single point tells you nothing about the shape of the line and, ultimately, the more points that you can plot accurately, the more you can work out what is actually happening. However, varying assessment methods doesn’t mean replicating or proxying the exam; it means providing different assessment types, varying questions and changing assessment over time. (Yes, this was stressed: changing assessment from offering to offering is important and is as much a part of varying assessment as any other component.)

All delightful music to my ears, which was just as well as we all worked very hard, talking, discussing and sharing ideas throughout the groups. We had a range of people who were mostly from within the Faculty and, while it was a small group and full of the usual faces, we all worked well, had an open discussion, and there were some first-timers who obviously learned a lot.

What I found great about this was that it was very strongly practical. We worked on our own courses, looked for points for improvement and I took away four points of improvement that I’m currently working on: a fantastic result for a three-hour investment. Our students don’t need to just have done assessment that makes it look like they know their stuff, they have to actually know their stuff and be confident with it. Job ready. Able to stand up and demonstrate their skills. Ready for reality.

As was discussed in the workshop, assessment of learning occurs when Lecturers:

  • Use evidence of student learning
  • to make judgements on student achievement
  • against goals and standards

And this identifies some of our key problems. We often gather all of the evidence, whether it’s final grades or Student Evaluations, at a point when the students have left, or are just about to leave, the course. How can we change this course for that student? We are always working one step in the past. Even if we do have the data, do we have the time and the knowledge to make the right judgement? If so, is it defensible, fair and meeting the standards that we should be meeting? We can’t apply standards from 20 years ago just because that’s what we’re used to. The future, in Australia, is death by educational acronyms (AQF, TEQSA, EA, ACS, OLT…) but these are the standards by which we are accredited and these are the yardsticks by which our students will be judged. If we want to change those then, sure, we can argue this at the Government level but, until then, these have to be taken into account, along with all of our discipline, faculty and University requirements.

I think that this will probably spill over into a second post but, in short, if you get a chance to see Wageeh and Jeff on the road with this workshop then, please, set aside the time to go and leave time for a chat afterwards. This is one of the most rewarding and useful activities that I’ve done this year – and I’ve had a very good year for thinking about CS Education.