The Confusing Message: Sourcing Student Feedback
Posted: May 26, 2012
Once, for a course which we shall label ‘an introduction to X and Y’, a single student, on the same feedback form and in adjacent text boxes, gave these answers:
What do you like most about this course: the X
What would you like to see happen to improve the course: less X, more Y!
Now, of course, this is not inherently contradictory but, honestly, it’s really hard to get the message here. You think that X is great but less useful than Y, although you like X more? You’re a secret masochist and you like to remove pleasure from your life?
As (almost) always, the problem here is that these two questions, asked in adjacent text boxes, are asking completely different things. Survey construction is an art, a dark and mysterious art, and a well-constructed survey will probably not ask a question once, in one way. It will ask the same question in multiple ways, sometimes in the negative, to see if the “X” and “not(not(X))” scores line up for each area of interest. This, of course, assumes that you have people who are willing to fill out long surveys and give you reliable answers. This is a big assumption. Most of the surveys that I work with have to fit into short time frames and are Likert-based with text boxes. Not quite yes/no tick/flick but not much more, and very little opportunity for mutually interacting questions.
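To make the “scores lining up” idea concrete, here is a minimal sketch of reverse-coding a negatively worded Likert item and checking agreement with its positively worded twin. The item wordings, response data and scale are all hypothetical, just for illustration:

```python
# Illustrative sketch: does a positively worded Likert item agree with its
# reverse-worded twin once the latter is reverse-coded?
# All item wordings, responses and scale bounds here are hypothetical.

def reverse_code(score, scale_min=1, scale_max=5):
    """Map a reverse-worded item's score back onto the positive scale."""
    return scale_max + scale_min - score

# Paired responses (1 = strongly disagree ... 5 = strongly agree):
#   positive item: "The X material was a valuable part of the course."
#   negative item: "The X material was not a good use of class time."
positive = [5, 4, 4, 2, 5]
negative = [1, 2, 2, 4, 1]

recoded = [reverse_code(s) for s in negative]

# Mean absolute gap between each pair of answers; a gap near 0 suggests
# the student answered both versions of the question consistently.
gap = sum(abs(p, - r) if False else abs(p - r) for p, r in zip(positive, recoded)) / len(positive)
print(f"recoded: {recoded}, mean gap: {gap:.2f}")
```

A large gap flags either careless responding or a question pair that students read as asking two different things, which is exactly the failure in the feedback example above.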
Our student experience surveys are about 10 questions long with two text boxes, which is about the length that we can fit into the end of a lecture and still have the majority of students fill out and return. From experience, if I construct larger surveys, or hold special ‘survey-only’ sessions, I get poor participation. (Hey, I might just be doing it wrong. Tips and help in the comments, please!)
Of course, being Mr Measurement, I often measure things as side effects of the main activity. Today, I held a quiz in class and while everyone was writing away, I was actually getting a count of attendees because they were about to hand up cards for marking. This gives me an indicator of attendance and, as it happens, two weeks away from the end of the course, we’re still getting good attendance. (So, I’m happy.) I can also see how the students are doing with fundamental concepts so I can monitor that too.
I’m fascinated by what students think about their experience, but I also need to know what they need, based on their performance, so that I can improve that performance without having to guess at what they mean. The original example gives me no real insight into what to do or how to improve, so I can’t really act on it with any certainty. If the student had said “I love X but I feel that we spent too much time on it and it could be just as good with a little less,” then I would know what to do.
I also sometimes just ask for direct feedback in assignments, or in class, because then I’ll get the things that are really bugging or exciting people. That also gives me the ability to adapt to what I hear and ask more directed questions.
Student opinion and feedback can be a vital indicator of our teaching efficacy, assuming that we can find out what people actually think rather than just collecting short, glib answers to questions that don’t probe in the right ways. Doing this requires us to form a relationship, to monitor, to show the value of feedback and to listen. Sadly, that takes a lot more work than throwing out a standard form once a semester, so it’s not surprising that it’s occasionally overlooked.