A (measurement) league of our own?
Posted: August 19, 2012 | Filed under: Education | Tags: advocacy, authenticity, education, educational problem, higher education, learning, measurement, reflection, teaching, teaching approaches, workload

As I've mentioned before, the number of ways in which we are being measured is on the rise, whether it's measures of our research output or 'quality', or the impact, benefits, quality or attractiveness of our learning and teaching. The fascination with research quality is not new but, given that we have had a "publish or perish" mentality where people would put out anything and be called 'research active', a shift from a quantity focus to a quality focus (which often entails far more preparation, greater depth of research and a longer time to publication) is not trivial. Worse, the lens through which we are assessed can often change far faster than we can change the aspects being assessed.
If you look at some of the rankings of Universities, you'll see that the overall metrics include things like the number of staff who are Nobel Laureates or have won the Fields Medal. Well, there are fewer than 52 Fields Medallists and only a few hundred Nobel Laureates and, as the ranking website itself notes, a number of those are in the Economics area. This is an inherently scarce resource, however you slice it, and, much like a gallery that prides itself on an excellent collection of precious art, you are more likely to acquire more of these pieces if you already have some. This measure of the research presence of your University is thus a bit of a feedback loop.
The same goes for measures like 'number of papers in the top 20% of publications'. This conveniently ignores some of the benefits of being at a better-funded institution: being part of an established community, being invited to lodge papers, and so on. Even where we have anonymous submission and evaluation, you don't have to be a rocket scientist to spot connections and groups and, of course, a well-funded group will have more time, more resources and more postdocs. Basically, funding should lead to better results, which lead to better measurements, which may lead to more funding.
Whether it's high-prestige personnel or a history of quality publication, neither of these metrics can be changed overnight. Certainly a campaign to attract prestigious staff might be fruitful in the short term but, let us be very frank here, if you can buy these staff with a combination of a desirable locale and money, then it is simply a matter of bidding as to which University they go to next. And trying to increase your "number of high-end publications in the last 5 years" is going to take 5 years, which is the kind of long-term thinking that we, as humans, appear to be very bad at.
Speaking of thinking in the long term, a number of the measures that would be most useful to us are not collected or used for assessment because they span large timescales and, as I'll discuss, may force us to realise that some things are intrinsically unmeasurable. Learning and teaching quality and impact are intrinsically hard to measure, mainly because we rarely seem to take the time to judge the impact of tertiary institutions over an appropriate timescale. Given the transition issues in going from high school to University, measuring drop-out and retention rates in a student's first semester leaves us wondering who is at fault. Are the schools not quite doing the job? Is it the University staff? The courses? The discipline identity? The student? Yes, we can measure retention and, with the right assessment, do a good job of tracking maturing depth and type of knowledge, but what about the core question?
How can we measure the real impact of undertaking studies in our field at our University?
After all, this is what these metrics are all about: determining the impact of a given set of academics at a given Uni so you can put them into a league table, hand out funding in some weighted scheme or tell students which Uni they should be going to. Realistically, we should come back in twenty years and find out how much of what we taught was used, where our students' studies took them, and whether they think it was valuable. How did our students use the tools we gave them to change the world? Of course, how do we then present a control to determine that it was us who caused that change? A professional linkage is something we can think of as correlated, obviously, but not every engineer is Brunel and, most certainly, you don't have to have gone to University to change the world.
This is most definitely not to say that shorter-term measures of learning and teaching quality aren't important, but we have to be very careful about what we're measuring, why we're measuring it, and the purpose to which we put the results. Measuring depth of knowledge, and the ability to apply that knowledge and practise professionally in a discipline? That's worth measuring, provided we do it in a way that encourages constructive improvement rather than punishment or negative feedback that doesn't show the way forward.
I don't mind being measured, as long as it's useful, but I'm getting a little tired of being ranked by mechanisms that I can't change unless I go back in time and publish 10 more papers over the last 5 years, or manage to heal an entire educational system just so my metrics for reducing first-year drop-out improve. (Hey, just so you know, I am working on increasing the number of ICT students at a national level – you do have to think on the large scale occasionally.)
Apart from anything else, I wouldn’t rank my own students this way – it’s intrinsically arbitrary and unfair. Food for thought.
Great article. Measurements and rankings… we all complain about them, but we all use them to promote our universities! It always brings us back to the famous quote attributed to Albert Einstein:
“Not everything that counts can be measured. Not everything that can be measured counts”