Our Influence: Prejudice As Predictor
Posted: September 14, 2012 Filed under: Education | Tags: advocacy, authenticity, community, education, educational problem, educational research, higher education, in the student's head, learning, measurement, principles of design, reflection, student perspective, teaching, teaching approaches, thinking Leave a comment

If you want to see Raymond Lister get upset, tell him that students fall into two categories: those who can program and those who can’t. If you’ve been reading much (anything) of what I’ve been writing recently, you’ll realise that I’ve been talking about things like cognitive development, self-regulation and dependence on authority, all of which have one thing in common: students can be at different stages in each of them when they reach us. There is no guarantee that students will be self-reliant, cognitively mature and completely capable of making reasoned decisions at the most independent level.
There was a question raised several times during the conference and it’s the antithesis of the infamous “double hump conjecture”, that students divide into two groups naturally and irrevocably because of some innate characteristic. The question is “Do our students demonstrate their proficiency because of what we do or in spite of what we do?” If the innate characteristic conjecture is correct, and this is a frequently raised folk pedagogy, then our role has no real bearing on whether a student will learn to program or not.
If we accept that students come to us at different stages in their development, and that these development stages will completely influence their ability to learn and form mental models, then the innate characteristic hypothesis withers and dies almost immediately. A student whose abilities are not yet ready to display can no more demonstrate the ability to program than a three-year-old child can write Shakespeare – they are not yet ready to learn, assemble, reassemble or demonstrate the requisite concepts and related skills.
However, a prejudicial perspective that students who cannot demonstrate the requisite ability are innately and permanently lacking that skill will, unpleasantly, viciously and unnecessarily, cause that particular future to lock in. Of course a derisive attitude to these ‘stupid’ or ‘slow’ students will make them withdraw or undermine their confidence! As I will note from the conference, confidence and support have a crucial impact on students. Undermining a student’s confidence is worse than not teaching them at all. Walking in with the mental model that separates the world into programmers and non-programmers forces that model into being.
Since I’ve entered the area of educational research, I’ve been exposed to things that I can separate into the following categories:
- Fascinating knowledge and new views of the world, based on solid research and valid experience.
- Nonsense
- Damned nonsense
- Rank stupidity
Most of the latter three come from other educators who react, out of fear or ignorance, to the lessons from educational research with disbelief, derision and resentment. “I don’t care what you say, or what that paper says, you’re wrong” says the voice of “experience”.
There is no doubt that genuine and thoughtful experience is, has been, and will always be a strong and necessary sibling to the educational and psychological theory that is the foundation of educational research. However, shallow experience can often be built up into something that it is not, when it is combined with fallacious thinking, cherry picking, confirmation bias and any other permutation of fear, resentment and inertia. The influence of folk pedagogies, lessons claimed from tea room mutterings and the projection of a comfortable non-reality that mysteriously never requires the proponent to ever expend any additional effort or change what they do, is a malign shadow over the illumination of good learning and teaching practice.
The best educators explain their successes with solid theory, strive to find a solution to the problems that lead to failure, and listen to all sources in order to construct a better practice and experience for their students. I hope, one day, to achieve this level – but I do know that doubting everything new is not the path forward for me.
I am pleased to say that the knowledge and joy of this (to me) new field far outstrips most of the other things that I have seen, but I cannot stress enough how important it is that we choose our perspectives carefully. We, as educators, have disproportionately high influence: large shadows and big feet. Reading further into this discipline illustrates that we must very carefully consider the way that we think, the way that our students think and the capability for reasoning and knowledge accumulation that our students actually have, before we make any rash or prejudicial statements about the innate capabilities of that most mythical of entities: the standard student.
Post 300 – 2012, the Year of the Plague
Posted: September 9, 2012 Filed under: Education, Opinion | Tags: 300, advocacy, authenticity, community, education, ethics, higher education, measurement, teaching approaches, vaccination Leave a comment

As it turns out, this is post 300 and I’m going to use it to make a far more opinionated point than usual. I’m currently in Auckland, New Zealand, and there is a warning up on the wall about a severe outbreak of measles. This is one of the most outrageously stupid signs to see on a wall, anywhere, given that we have had a solid vaccine since 1971 and, despite ill-informed and unscientific studies that try to contradict this, the overall impact of the MMR vaccine is overwhelmingly positive. There is no reasonable excuse for the outbreak of an infectious, dangerous disease 40 years after the development of a reliable (and overwhelmingly safe) vaccine.
My fear is that, rather than celebrating the elimination of measles and polio (under 200 cases this year so far according to the records I’ve seen) in the same way that we eradicated smallpox, we will be seeing more and more of these signs identifying outbreaks of eradicable and controllable diseases, because ignorance is holding sway.
Be in no doubt, if we keep going down this path, the risk increases rapidly that a disease will finish us off because we will not have the correct mental framing and scientific support to quickly respond to a lethal outbreak or mutation. The risk we take is that, one day, our cities lie empty with signs like this up all over the place, doors sealed with crosses on them, a quiet end to a considerable civilisation. All attributable to a rejection of solid scientific evidence and the triumph of ignorance. We have survived massive outbreaks before, even those with high lethality, but we have been, for want of a better word, lucky. We live in much denser environments and are far more connected than we were before. I can step around the world in a day and, with every step, a disease can follow my footsteps.
One of my students recently plotted 2009 flu cases relative to air routes. While disease used to rely upon true geographical contiguity, we now connect the world with the false adjacency of the air route. Outbreaks in isolated parts of the world map beautifully to the air hubs and their importance and utilisation: more people, more disease.
So, in short, it’s not just the way that we control the controllable diseases that is important, it is accepting that the lower risk of vaccination is justifiable in the light of the much greater risk of infection and pandemic. This fights the human tendency to completely misunderstand probability, our susceptibility to fallacious thinking, and our desperate desire to do no harm to our children. I get this but we have to be a little bit smarter or we are putting ourselves at a much higher risk – regrettably, this is a future risk so temporal discounting gets thrown into the mix to make it ever harder for people to make a good decision.
Here’s what the Smallpox Wikipedia page says: “Smallpox was an infectious disease unique to humans” (emphasis mine). This is one of the most amazing things that we have achieved. Let’s do it again!
I talk a lot about education, in terms of my thoughts on learning and teaching, but we must never forget why we educate. It’s to enlighten, to inform, to allow us to direct our considerable resources to solving the considerable problems that beset us. It’s helping people to make good decisions. It’s being aware of why people find it so hard to accept scientific evidence: because they’re scared, because someone lied to them, because no-one has gone to the trouble to actually try and explain it to them properly. Ignorance of a subject is the state that we occupy before we become informed and knowledgeable. It’s not a permanent state!
That sign made me angry. But it underlined the importance of what it is that we do.
Loading the Dice: Show and Tell
Posted: September 6, 2012 Filed under: Education | Tags: authenticity, curriculum, design, education, educational problem, higher education, in the student's head, learning, principles of design, resources, teaching, teaching approaches, thinking, tools Leave a comment

I’ve been using a set of four six-sided dice to generate random numbers for one of my classes this year, generally to establish a presentation order or things like that. We’ve had a number of students getting the same number and so we have to have roll-offs. Now in this case, the most common number rolled so far has been in the range of 17-19 but we have only generated about 18-20 rolls so, while that’s a little high, it’s not high enough to arouse suspicion.
Today we rolled again, and one student wasn’t quite there yet so I did it with the rest of the class. Once again, 18 showed up a bit. This time I asked the class about it. Did that seem suspicious? Then I asked them to look at the dice.
Oh.
Only two of the dice are actually standard dice. One has the number five on every face. One has three sixes and three twos. The students have seen these dice numerous times and have never actually examined them – of course, I didn’t leave them lying around for them to examine but, despite one or two starting to think “Hey, that’s a bit weird”, nobody ever twigged to the loading.

All of the dice in this picture are loaded through weight manipulation, rather than dot alteration. You can buy them for just about any purpose. Ah, Internet!
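For the curious, the arithmetic behind the suspiciously high totals is easy to check. Here is a minimal Python sketch using the face values described above (a real weight-loaded die skews roll probabilities rather than face values, but the effect on the average total is the same idea):

```python
import random

# Face values as described: two standard dice, one die with five on
# every face, and one with three sixes and three twos.
standard = [1, 2, 3, 4, 5, 6]
all_fives = [5] * 6
sixes_and_twos = [6, 6, 6, 2, 2, 2]

fair_set = [standard] * 4
loaded_set = [standard, standard, all_fives, sixes_and_twos]

def mean_total(dice, trials=100_000):
    """Estimate the mean sum of one roll of the given set of dice."""
    return sum(sum(random.choice(d) for d in dice)
               for _ in range(trials)) / trials

print(f"fair 4d6:   {mean_total(fair_set):.1f}")   # ~14.0
print(f"loaded set: {mean_total(loaded_set):.1f}") # ~16.0
```

Totals of 17-19 sit well above the fair-dice mean of 14 but close to the loaded mean of 16: high enough to look lucky, not so high as to scream “rigged”.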
Having exposed this trick, to some amusement, the last student knocked on the door and I picked up the dice. He was then asked to roll for his position, with the rest of the class staying quiet. (Well, smirking.) He rolled something in 17-19, I forget what, and I wrote that up on the board. Then I asked him if it seemed high to him. On reflection, he said that these numbers all seemed pretty high, especially as the theoretical maximum was 24. I then asked if he’d like to inspect the dice.
He then did so as I passed him the dice one at a time, storing the inspected dice in my other hand. (Of course, as he peered at each die to see if it was altered, I quickly swapped one of the ‘real’ dice back into position in my hand and, as the rest of the class watched and kept admirably quiet, I then forced a real die onto him. Magic is all about misdirection, after all.)
So, having inspected all of them, he was convinced that they were normal. I then plonked them down on the table and asked him to inspect them, to make sure. He lined them up, looked across the top face and, then, looked at the side. Light dawned. Loudly! What, of course, was so startling to him was that he had just inspected the dice and now they weren’t normal.
What was my point?
My students have just completed a project on data visualisation where they provided a static representation of a dataset. There is a main point to present, supported by static analysis and graphs, but the poster is fundamentally explanatory. The only room for exploration is provided by the poster producer and the reader is bound by the inherent limitations in what the producer has made available. Much as with our discussions of fallacies in argument from a recent tutorial, if information is presented poorly or you don’t get enough to go on, you can’t make a good decision.
Enter, the dice.
Because I deliberately kept the students away from them and never made a fuss about them, they assumed that they were normal dice. While the results were high, and suspicion was starting to creep in, I never gave them enough space to explore the dice and discern their true nature. Even today, while handing them to a student to inspect, I controlled the exploration and, by cherry picking and misdirection, managed to convey a false impression.
Now my students are moving into dynamic visualisation and they must prepare for sharing data in a way that can be explored by other people. While the students have a lot of control over how this exploration takes place, they must prepare for people’s inquisitiveness, their desire to assemble evidence and their tendency to want to try everything. They can’t rely upon hiding difficult pieces of data in their representation and they must be ready for users who want to keep exploring through the data in ways that weren’t originally foreseen. Now, in exploratory mode, they must prepare for people who want to try to collect enough evidence to determine if something is true or not, and to be able to interrogate the dataset accordingly.
Now I’m not saying that I believe that their static posters were produced badly, and I did require references to support statements, but the view presented was heavily controlled. They’ve now seen, in a simple analogue, how powerful that can be. Now, it’s time to break out of that mindset and create something that can be freely explored, letting their design guide the user to construct new things rather than to lead them down a particular path.
I can only hope that they’re excited by this because I certainly am!
(Reasonable) Argument, Evidence and (Good) Journalism: Is “Crimson” the Colour of Their Faces?
Posted: September 5, 2012 Filed under: Education | Tags: advocacy, authenticity, blogging, community, education, ethics, grand challenge, higher education, in the student's head, learning, reflection, student perspective, teaching, teaching approaches, thinking 2 Comments

I ran across a story on the Harvard Crimson about a surprisingly high level of suspected plagiarism in a course, Government 1310. The story opens up simply enough in the realms of fact, where the professor suspected plagiarism behaviour – against published guidelines – in 10-20 take-home exams, and the investigation has now expanded to roughly 125 suspicious final exams. There was a brief discussion of the assessment of the course and the steps taken so far by the faculty.
Then, the article takes a weird turn. Suddenly, we have a student account, an anonymous student who doesn’t wish their name to be associated with the plagiarism, who “suspected that Government 1310 was the course in question”. Hello? Why is this… ahhh. Here’s some more:
Though she said she followed the exam instructions and is not being investigated by the Ad Board, she said she thought the exam format lent itself to improper academic conduct.
“I can understand why it would be very easy to collaborate,” said the student.
Oh. Collaborate. Interesting. Next we get the Q Guide rating for the course and this course gets 2.54/5 versus the apparent average of 3.91. Then we get some reviews from the Q Guide that “spoke critically of the course’s organisation and the difficulty of the exam questions”.
Spotting a pattern yet?
Another student said that he/she had joined a group of 15 other students just before the submission date and that they had been up all night trying to understand one of the questions (worth 20%).
I submitted this to my students to read and then asked them how they felt about it. Understandably, by the end of the reading, while my students were still thinking about plagiarism, they were thinking that there may have been some… justification. Then we started pulling the article apart.
When we start to look at the article, it becomes apparent that the facts presented all have a rather definite theme – namely that, if cheating has occurred, it has a justification because of the terrible way the course was taught (low Q Guide rating! 16 students confused!).
Now, I cannot see the Q Guide data because, when I go to the page, I get this information (and I need a Harvard login to go further):
Q Guide
The Q Guide was an annually published guide that reported the results of each year’s course evaluations. Formerly called the CUE Guide, it was renamed the Q Guide in 2007 because the evaluations now include the GSAS and are no longer run solely by the Committee on Undergraduate Education (CUE). In 2009, in place of The Q Guide, Harvard College integrated Q data with the online course selection tool (at my.harvard.edu), providing a simple and easy way to access and compare course evaluation data while planning your course schedule.
So if the article, regarding an exam run in 2012, is referring to the Q Guide for Gov 1310, then it’s one of two things: using an old name for new data (admittedly, fairly likely) or referring to old data. The question does arise, however, whether the Q Guide rating refers to this offering or a previous offering. I can’t tell you which it is because I don’t know. It’s not publicly available and the article doesn’t tell me. (Although you’ll note that the Q Guide text refers to this year’s evaluations. There’s a part of me that strongly suspects that this is historical data but, of course, I’m speculating.)
However, the most insidious aspect is the presentation of 16 students who are confused about content in a way that overstates their significance. It’s a blatant example of emotive manipulation and encourages the reader to make a false generalisation. There were 279 students enrolled in Gov 1310. 16 is 5.7%. Would I be surprised if somewhere around 5% of my students weren’t capable of understanding all of the questions or thought that some material wasn’t in the course?
No, of course not. That’s roughly the percentage of my students who sometimes don’t know which Dr Falkner is teaching their class. (Hint: one is male and one is female. Noticeably so in both cases.)
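The arithmetic here is easy to verify. A quick Python sketch (the 5% base rate is my illustrative assumption, not a measured figure):

```python
from math import comb

enrolled = 279   # students enrolled in Gov 1310 (from the article)
confused = 16    # students in the all-night study group

proportion = confused / enrolled
print(f"{proportion:.1%}")  # 5.7%

# If we assume 5% of any cohort struggles badly with at least one
# question, the chance of seeing 16 or more such students out of 279
# is the upper tail of a Binomial(279, 0.05) distribution:
p = 0.05
tail = sum(comb(enrolled, k) * p ** k * (1 - p) ** (enrolled - k)
           for k in range(confused, enrolled + 1))
print(f"P(16 or more): {tail:.2f}")  # roughly 0.3 – entirely unremarkable
```

In other words, a group of 16 confused students is about what you would expect in a class this size, not evidence of a uniquely broken course.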
I presented this to my Grand Challenge students as part of our studies of philosophical and logical fallacies, discussing how arguments are made to mislead and misdirect. The terrible shame is that, with a detected rate of plagiarism that is this high, I would usually have a very detailed look at the learning and teaching strategies employed (how often are exams being rewritten, how is information being presented, how is teaching being carried out) because this is an amazingly high level of suspected plagiarism.
Despite the misleading journalism presented in the Crimson, the course and its teachers may have to shoulder some responsibility here. As always, just because someone’s argument is badly made, doesn’t mean that it is actually wrong. It’s just disappointing that such a cheap and emotive argument was raised in a way that further fogs an important issue.
As I said to my students today, one of the most interesting ways to try to understand a biased or miscast argument is to work out who the bias favours – cui bono? (Who benefits? I am somewhat terrified, on looking for images for this phrase, that it has been hijacked by extremists and conspiracy theorists. It’s a shame because it’s historically beautiful.)
So why would the Crimson run this? It’s pretty manipulative so, unless this is just bad journalism, cui bono?
Having looked up how disciplinary boards are constituted at Harvard, I found a reference that there are three appointed faculty members and:
There are three students appointed to the board as full voting members. Two of these will be assigned to specific cases on a case-by-case basis and will not be in the same division as the student facing disciplinary action.
In this case, the Crimson’s story suddenly looks a lot… darker. If, by publishing this article, they reach the right students and convince them that the actions of the suspected plagiarists were driven by academics who were not performing their duties – then we risk suddenly having a deadlocked board and a deleterious effect on what should have been an untainted process.
The Crimson has further distinguished itself with a follow-up article regarding the uncertainty students are feeling because of the process.
“It’s unfair to leave that uncertainty, given that we’re starting lives,” said the alumnus, who was granted anonymity by The Crimson because he said he feared repercussions from Harvard for discussing the case.
Oh, Harvard, you giant monster, unfairly delaying your decision on a plagiarism case because the lecturers were so very, very bad that students had to cheat. And, what’s worse, you are so evil that students are scared of you – they “fear the repercussions”!
Thank you, Crimson, for providing so much rich fodder for my discussion on how the words “logical argument”, “evidence” and “good journalism” can be so hard to fit into the same sentence.
The Precipice of “Everything’s Late”
Posted: September 3, 2012 Filed under: Education | Tags: advocacy, authenticity, education, feedback, higher education, learning, measurement, research, resources, student perspective, teaching, teaching approaches, thinking, time banking 7 Comments

I spent most of today working on the paper that I alluded to earlier where, after over a year of trying to work on it, I hadn’t made any progress. Having finally managed to dig myself out of the pit I was in, I had the mental and timeline capacity to sit down for the 6 hours it required and go through it all.
In thinking about procrastination, you have to take into account something important: the fact that most of us work in a hyperbolic model where we expend no effort until the deadline is right upon us and then we put everything in; this is temporal discounting. Essentially, we place less importance on things in the future than on the things that are important to us now. For complex, multi-stage tasks over some time this is an exceedingly bad strategy, especially if we focus on the deadline of delivery, rather than the starting point. If we underestimate the time it requires and we construct our ‘panic now’ strategy based on our proximity to the deadline, then we are at serious risk of missing the starting point because, when it arrives, it just won’t be that important.
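The hyperbolic model can be written down in a couple of lines. A sketch (the discount rate k is a made-up illustrative value; fitted values vary wildly between people):

```python
def hyperbolic_value(amount, delay_days, k=0.1):
    """Subjective present value under hyperbolic discounting:
    V = A / (1 + k * D), where D is the delay and k is a
    per-person discount rate (purely illustrative here)."""
    return amount / (1 + k * delay_days)

# The same 100-point assignment "feels" very different six weeks
# out versus the night before the deadline:
print(hyperbolic_value(100, 42))  # ≈ 19.2 – easy to ignore
print(hyperbolic_value(100, 1))   # ≈ 90.9 – panic now!
```

The steep rise as the delay shrinks is exactly the ‘panic now’ curve: nothing, nothing, nothing, everything.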
Now, let’s increase the difficulty of the whole thing and remember that the more things we have to think about in the present, the greater the risk that we’re going to exceed our capacity for cognitive load and hit the ‘helmet fire’ point – we will be unable to do anything because we’ve run out of the ability to choose what to do effectively. Of course, because we suffer from a hyperbolic discounting problem, we might do things now that are easy to do (because we can see both the beginning and end points inside our window of visibility) and this runs the risk that the things we leave to do later are far more complicated.
This is one of the nastiest implications of poor time management: you might actually not be procrastinating in terms of doing nothing, you might be working constantly but doing the wrong things. Combine this with the pressures of life, the influence of mood and mental state, and we have a pit that can open very wide – and you disappear into it wondering what happened because you thought you were doing so much!
This is a terrible problem for students because, let’s be honest, in your teens there are a lot of important things that are not quite assignments or studying for exams. (Hey, it’s true later too, we just have to pretend to be grownups.) Some of my students are absolutely flat out with activities, a lot of which are actually quite useful, but because they haven’t worked out which ones have to be done now, they do the ones that can be done now – the pit opens and looms.
One of the big advantages of reviewing large tasks to break them into components is that you start to see how many ‘time units’ have to be carried out in order to reach your goal. Putting it into any kind of tracking system (even if it’s as simple as an Excel spreadsheet), allows you to see it compared to other things: it reduces the effect of temporal discounting.
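Even the crudest version of this tracking helps. A toy sketch (task names and hours are invented for illustration):

```python
# Hypothetical breakdown of one large task into 'time units' (hours),
# compared against the hours actually free this week.
tasks = {
    "literature review": 8,
    "data analysis": 12,
    "drafting": 10,
    "revision": 6,
}
hours_free_this_week = 20

total = sum(tasks.values())
print(f"needed: {total}h, free: {hours_free_this_week}h")
if total > hours_free_this_week:
    deficit = total - hours_free_this_week
    print(f"overcommitted by {deficit}h – start earlier or drop something")
```

Seeing 36 hours of work against 20 free hours now, rather than discovering it at the deadline, is exactly what blunts the temporal discounting.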
When I first put in everything that I had to do as appointments in my calendar, I assumed that I had made a mistake because I had run out of time in the week and was, in some cases, triple booked, even after I spilled over to weekends. This wasn’t a mistake in assembling the calendar, this was an indication that I’d overcommitted and, over the past few months, I’ve been streamlining down so that my worst week still has a few hours free. (Yeah, yeah, not perfect, but there you go.) However, there was this little problem that anything that had been pushed into the late queue got later and later – the whole ‘deal with it soon’ became ‘deal with it now’ or ‘I should have dealt with that by now’.
Like students, my overcommitment wasn’t an obvious “Yes, I want to work too hard” commitment; it snuck in as bits and pieces. A commitment here, a commitment there, a ‘yes’, a ‘sure, I can do that’, and because you sometimes have to make decisions on the fly, you suddenly look around and think “What happened?” The last thing I want to do here is lecture; I want to understand how I can take my experience, learn from it, and pass something useful on. The basic message is that we all work very hard and sometimes don’t make the best decisions. For me, the challenge now, knowing this, is how can I construct something that tries to defeat this self-destructive behaviour in my students?
This week marks the time where I hope to have cleared everything on the ‘now/by now’ queue and finally be ahead. My friends know that I’ve said that a lot this year but it’s hard to read and think in the area of time management without learning something. (Some people might argue but I don’t write here to tell you that I have everything sorted, I write here to think and hopefully pass something on through the processes I’m going through.)
Let’s Transform Education! (MOOC Hijinks Hilarity! Jinkies!)
Posted: September 1, 2012 Filed under: Education | Tags: advocacy, authenticity, design, education, educational problem, ethics, grand challenge, higher education, reflection, teaching, teaching approaches, thinking, tools, universal principles of design, work/life balance, workload 4 Comments

I had one of those discussions yesterday that everyone in Higher Education educational research comes to dread: a discussion with someone who basically doesn’t believe the educational research and, with varying degrees of politeness, comes close to ignoring or denigrating everything that you’re trying to do. Yesterday’s high point was the use of the term “Mr Chips” to describe the (from the speaker’s perspective) incredibly low possibility of actually widening our entrance criteria and turning out “quality” graduates – his point was that more students would automatically mean much larger (70%) failure rates. My counter (and original point) is that, since there is such a low correlation between school marks and University GPA (roughly 40-45% and it’s very noisy), successful learning and teaching strategies could deal with an influx of supposedly ‘lower quality’ students, because the quality metric that we’re using (terminal high school grade or equivalent) is not a reliable indicator of performance. My fundamental belief is that good education is transformative. We start with the students that schools give us but good, well-constructed, education can, in the vast majority of cases, successfully educate students and transform them into functioning, self-regulating graduates. We have, as a community, carried out a lot of research that says that this works, provided that we are happy to accept that we (academics) are not by any stretch of the imagination the target demographic or majority experience in our classes, and that, please, let’s look at new teaching methods and approaches that actually work in developing the knowledge and characteristics that we’re after.
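One small arithmetic note on that correlation figure: reading the quoted 40-45% as a correlation coefficient of 0.40-0.45, the shared variance is only the square of that, so school marks leave most of university performance unaccounted for. A quick check:

```python
def shared_variance(r):
    """Proportion of GPA variance 'explained' by school marks at a
    correlation of r (the coefficient of determination, r squared)."""
    return r * r

for r in (0.40, 0.45):
    print(f"r = {r}: school marks explain {shared_variance(r):.0%} of GPA variance")
# r = 0.4: school marks explain 16% of GPA variance
# r = 0.45: school marks explain 20% of GPA variance
```

Roughly 80% of the variation in university performance lies outside the entry metric, which is the whole case against treating terminal school grades as destiny.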
The “Mr Chips” thing is a reference to a rather sentimental account of the transformative influence of a school master, the eponymous Chips. Using it in a discussion of the transformative power of education casts my comments on equality of access, linked with educational design and learning systems as transformative technologies, as naïve and (in a more personal reading) makes me uncomfortably aware that some people might think I’m talking about myself as the key catalyst of some sort. One of the nice things about being an academic is that you can have a discussion like this and not actually come to blows over it – we think and argue for a living, after all. But I find this dismissive and rude. If we’re not trying to educate people and transform them, then what the hell are we doing? Advocating inclusion and transformation shouldn’t be seen as grandstanding – it should be seen as our job. I don’t want to be the keystone, I want systems that work and survive individuals, but that individuals can work within to improve and develop – we know this is possible and it’s happening in a lot of places. There are, however, pockets of resistance: people who are using the same old approaches out of laziness, ignorance and a refusal to update for what appear to be philosophical reasons but have no evidence to support them.
Frankly, I’m getting a little irritated by people doubting the value of the volumes of educational research. If I was dealing with people who’d read the papers, I’d be happier, but I’m often dealing with people who won’t read the papers because they just don’t believe that there’s a need to change or they refuse to accept what is in there because of a perceived difficulty in making it work. (A colleague demanded a copy of one of our papers showing the impact of our new approaches on retention – I haven’t heard from him since he got it. This probably means that he’s chosen to ignore it and is going to pretend that he never asked.) Over coffee this morning, musing on this, it occurred to me that at the same time that we’re not getting the greatest amount of respect and love in the educational research community, we’re also worried about the trend towards MOOCs. Many of our concerns about MOOCs are founded in the lack of evidence that they are educationally effective. And I saw a confluence.
All of the educational researchers who are not able to sway people inside their institutions – let’s just ignore them and surge into the MOOCs. We can still teach inside our own places, of course, and since MOOCs are free there’s no commercial conflict – but let’s take all of the research and practice and build a brave new world out in MOOC space that is the best of what we know. We can even choose to connect our in-house teaching into that system if we want. (Yes, we still have the face-to-face issue for those without a bricks-and-mortar campus, but how far could we go to make things better in terms of what MOOCs can offer?) We’re transformers, builders and creators. What could we do with the infinite canvas of the Internet and a lot of very clever people, working with a lot of other very clever people who are also driven and entrepreneurial?
The MOOC community will probably have a lot to say about this, which is why we shouldn’t see this as a hijack or a take-over, and I think it’s helpful to think of this very much as a confluence – a flowing together. I am, not for a second, saying that this will legitimise MOOCs, because this implies that they are illegitimate, but rather than keep fighting battles with colleagues and systems that can defeat 40 years of knowledge by saying “Well, I don’t think so”, let’s work with people who have already shown that they are looking to the future. Perhaps, combining people who are building giant engines of change with the people who are being frustrated in trying to bring about change might make something magical happen? I know that this is already happening in some places – but what if it was an international movement across the whole sector?
Jinkies! (Sorry, the title ran to this and I get to use a picture of a t-shirt with Velma on it!)
The purpose of this is manifold:
- We get to build the systems that we want to, to deliver education to students in the best ways we know.
- We (potentially) help to improve MOOCs by providing strong theory to construct evidence gathering mechanisms that allow us to really get inside what MOOCs are doing.
- More students get educated. (Ok, maybe not in our host institutions, but what is our actual goal anyway?)
- We form a strong international community of educational researchers with common outputs and sharing that isn’t necessarily owned by one company (sorry, iTunesU).
- If we get it right, students vote with their feet and employers vote with their wallets. We make educational research important and impossible to ignore through visible success.
Now this is, of course, a pipe dream in many ways. Who will pay for it? How long will it take before even not-for-pay outside education becomes barred under new terms and conditions? Who will pay my mortgage if I get fired because I’m working on a deliberately external set of courses for students who are not paying to come to my institution?
But, the most important thing, for me, is that we should continue what has been proposed and work more and more closely with the MOOC community to develop exemplars of good practice that have strong, evidence-based outcomes that become impossible to ignore. Much as students use temporal discounting to procrastinate about their work, administrators tend to use a more traditional financial discounting when it comes to what they consider important. If it takes 12 papers and two years of study to justify spending $5,000 on a new tool or time spent on learning design – forget about it. If, however, MOOCs show strong evidence of improving student retention (*BING*), student attraction (*BING*), student engagement (*BING*) and employability – well, BINGO. People will pay money for that.
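To make the financial-discounting point concrete, here’s a minimal sketch (the figures and the 10% rate are mine, purely illustrative): a benefit that only arrives years from now is worth much less to a decision-maker today, which is why slow-burn educational evidence loses out against an immediate $5,000 cost.

```python
def present_value(future_payoff, annual_rate, years):
    """Discount a future payoff back to today's dollars."""
    return future_payoff / (1 + annual_rate) ** years

# A hypothetical $20,000 retention benefit arriving two years from now,
# discounted at 10% per year, shrinks noticeably in today's terms.
pv = present_value(20_000, 0.10, 2)
print(round(pv, 2))  # 16528.93
```

At a two-year horizon, a nominal $20,000 future benefit discounts to roughly $16,500 today – and the longer the evidence takes to accumulate, the worse that comparison gets for us.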
I’ve spoken before about how successful I had to be before I was tolerated in my pursuit of educational research and, while I don’t normally talk about it in detail because it smacks of hubris and I sincerely believe that I am not a role model of any kind, I hope that you will excuse me while I explain just how crazy it is that I had to be this successful in order to become tolerated – and still not really believed. To summarise, I’m in three research groups, I’ve brought in (as part of a group and individually) somewhere in the order of $0.5M in one non-ed research area, I’ve brought in something like $30-50K in educational research money, I’ve published in two A journals (one CS research, one CS ed), two A conferences (both ed) and one B conference (ed/CS), I have a faculty-level position as an Associate Dean and I have a national learning and teaching presence. All of the things on that list – that’s 2012. 2011 wasn’t quite as successful but it wasn’t bad by any stretch of the imagination. I think that’s an unreasonably high bar to pass in order to be allowed the luxury of asking questions about what it is that we’re doing with learning and teaching. But if I can leverage that to work with other colleagues who can then refer to what we’ve done in a way that makes administrators and managers accept the real value of an educational revolution – then my effort is shared over many more people and it suddenly looks like a much better investment of my time.
This is more musing than mission, I’m afraid, and I realise that any amount of this could be shot down but I look forward to some discussion!
Musing on scaffolding: Why Do We Keep Needing Deadlines?
Posted: August 29, 2012 Filed under: Education | Tags: authenticity, data visualisation, education, educational problem, educational research, ethics, higher education, in the student's head, measurement, research, teaching, teaching approaches, thinking, time banking 1 CommentOne of the things about being a Computer Science researcher who is on the way to becoming a Computer Science Education Researcher is the sheer volume of educational literature that you have to read up on. There’s nothing more embarrassing than having an “A-ha!” moment that turns out to have been covered 50 years ago – the equivalent of saying “Water – when it freezes – becomes this new solid form I call Falkneranium!”
Ahem. So my apologies to all who read my ravings and think “You know, X said that … and a little better, if truth be told.” However, a great way to pick up on other things is to read other people’s blogs because they reinforce and develop your knowledge, as well as giving you links to interesting papers. Even when you’ve seen a concept before, unsurprisingly, watching experts work with that concept can be highly informative.
I was reading Mark Guzdial’s blog some time ago and his post on the Khan Academy’s take on Computer Science appealed to me for a number of reasons, not least for his discussion of scaffolding; in this case, a tutor-guided exploration of a space with students that is based upon modelling, coaching and exploration. Importantly, however, this scaffolding fades over time as the student develops their own expertise and needs our help less. It’s like learning to ride a bike – start with trainer wheels, progress to a running-alongside parent, aspire to free wheeling! (But call a parent if you fall over or it’s too wet to ride home.)
One of my key areas of interest is self-regulation in students – producing students who no longer need me because they are self-aware, reflective, critical thinkers, conscious of how they fit into the discipline and (sufficiently) expert to be able to go out into the world. My thinking around Time Banking is one of the ways that students can become self-regulating – they manage their own time in a mature and aware fashion without me having to waggle a finger at them to get them to do something.
Today, R (postdoc in the Computer Science Education Research Group) and I were brainstorming ideas for upcoming papers over about a 2 hour period. I love a good brainstorm because, for some time afterwards, ideas and phrases come to me that allow me to really think about what I’m doing. Combining my reading of Mark’s blog and the associated links, especially about the deliberate reduction of scaffolding over time, with my thoughts on time management and pedagogy, I had this thought:
If imposed deadlines have any impact upon the development of student timeliness, why do we continue to need them into the final year of undergraduate and beyond? When do the trainer wheels come off?
Now, of course, the first response is that they are an administrative requirement, a necessary evil, so they are (somehow) exempt from a pedagogical critique. Hmm. For detailed reasons that will go into the paper I’m writing, I don’t really buy that. Yes, every course (and program) has a final administrative requirement. Yes, we need time to mark and return assignments (or to provide feedback on those assignments, depending on the nature of the assessment obviously). But all of the data I have says that not only do the majority of students hand up on the last day (if not later), but that they continue to do so into later years – getting later and later as they progress, rather than earlier and earlier. Our administrative requirement appears to have no pedagogical analogue.
So here is another reason to look at these deadlines, or at least at the way that we impose them in my institution. If an entry test didn’t correlate at all with performance, we’d change it. If a degree turned out students who couldn’t function in the world, industry consultation would pretty smartly suggest that we change it. Yet deadlines, which we accept with little comment most of the time, only appear to work when they are imposed but, over time, appear to show no development of the related skill that they supposedly practice – timeliness. Instead, we appear to enforce compliance and, as we would expect from behavioural training on external factors, we must continue to apply the external stimulus in order to elicit the appropriate compliance.
Scaffolding works. Is it possible to apply a deadline system that also fades out over time as our students become more expert in their own time management?
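As a thought experiment only – this is my toy sketch, not a worked proposal, and every number in it is arbitrary – a fading deadline scheme might deduct marks for lateness in first year and deliberately withdraw that external stimulus as students progress:

```python
def late_penalty(days_late, year_of_study, max_years=4):
    """Toy fading-scaffold penalty: a fixed per-day deduction (percent of
    the assignment mark) in first year that fades linearly to zero by the
    final year, shifting responsibility for timeliness from the rubric to
    the student."""
    if days_late <= 0:
        return 0.0
    # Scaffold strength: 1.0 in first year, 0.0 in the final year.
    scaffold = max(0.0, (max_years - year_of_study) / (max_years - 1))
    return min(100.0, 10.0 * days_late) * scaffold

print(late_penalty(3, 1))  # 30.0 – full penalty in first year
print(late_penalty(3, 4))  # 0.0 – trainer wheels off
```

First-year students feel the full penalty; by final year the trainer wheels are off and timeliness is entirely their own responsibility – and, ideally, their own habit by then.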
I have two days of paper writing on Thursday and Friday and I’m very much looking forward to the further exploration of these ideas, especially as I continue to delve into the deep literature pile that I’ve accumulated!
More Thoughts on Partnership: Teacher/Student
Posted: August 23, 2012 Filed under: Education, Opinion | Tags: authenticity, blogging, education, educational problem, educational research, feedback, Generation Why, grand challenge, higher education, in the student's head, measurement, teaching, teaching approaches, thinking, time banking, universal principles of design Leave a commentI’ve just received some feedback on an abstract for a piece that is going into a local educational research conference. I talked about the issues with arbitrary allocation of deadlines outside of the framing of sound educational design and about how it fundamentally undermines any notion of partnership between teacher and student. The responses were very positive although I’m always wary when people start using phrases like “should generate vigorous debate around expectations of academics” and “It may be controversial, but [probably] in a good way”. What interests me is how I got to the point of presenting something that might be considered heretical – I started by just looking at the data and, as I uncovered unexpected features, I started to ask ‘why’ and that’s how I got here.
When the data doesn’t fit your hypothesis, it’s time to look at your data collection, your analysis, your hypothesis and the body of evidence supporting your hypothesis. Fortunately, Bayes’ Theorem nicely sums it up for us: your belief in your hypothesis after you collect your evidence is proportional to how strongly your hypothesis was originally supported, modified by the chances of seeing what you did given the existing hypothesis. If your data cannot be supported under your hypothesis – something is wrong. We, of course, should never just ignore the evidence as it is in the exploration that we are truly scientists. Similarly, it is in the exploration of our learning and teaching, and thinking about and working on our relationship with our students, that I feel that we are truly teachers.
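For the concrete form of that summary: Bayes’ Theorem says P(H|E) = P(E|H)P(H) / P(E). A tiny sketch (with made-up numbers of my own) shows why even a strongly held hypothesis should collapse when the evidence is far more likely under the alternative:

```python
def posterior(prior, likelihood, likelihood_given_not_h):
    """Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E),
    with P(E) expanded by the law of total probability."""
    evidence = likelihood * prior + likelihood_given_not_h * (1 - prior)
    return likelihood * prior / evidence

# A 90% prior in H, but the observed data is much more likely
# if H is false (0.8) than if H is true (0.05).
print(round(posterior(prior=0.9, likelihood=0.05,
                      likelihood_given_not_h=0.8), 2))  # 0.36
```

A 90% prior drops to a 36% posterior once the observed data is sixteen times more likely under the alternative – exactly the “something is wrong” signal that should send you back to your data collection and your hypothesis.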
Once I accepted that I wasn’t in competition with my students and that my role was not to guard the world from them, but to prepare them for the world, my job got easier in many ways and infinitely more enjoyable. However, I am well aware that any decisions I make in terms of changing how I teach, what I teach or why I teach have to be based in sound evidence and not just any warm and fuzzy feelings about partnership. Partnership, of course, implies negotiation from both sides – if I want to turn out students who will be able to work without me, I have to teach them how and when to negotiate. When can we discuss terms and when do we just have to do things?
My concern with the phrase “everything is negotiable” is that it, to me, subsumes the notions that “everything is equivalent” and “every notion is of equal worth”, neither of which I hold to be true from a scientific or educational perspective. I believe that many things that we hold to be non-negotiable, for reasons of convenience, are actually negotiable but it’s an inaccurate slippery slope argument to assume that this means that we must immediately then devolve to an “everything is acceptable” mode.
Once again we return to authenticity. There’s no point in someone saying “we value your feedback” if it never shows up in final documents or isn’t recorded. There’s no point in me talking about partnership if what I mean is that you are a partner to me but I am a boss to you – this asymmetry immediately reveals the lack of depth in my commitment. And, be in no doubt, a partnership is a commitment, whether it’s 1:1 or 1:360. It requires effort, maintenance, mutual respect, understanding and a commitment from both sides. For me, it makes my life easier because my students are less likely to frame me in a way that gets in the way of the teaching process and, more importantly, allows them to believe that their role is not just as passive receivers of what I deign to transmit. This, I hope, will allow them to continue their transition to self-regulation more easily and will make them less dependent on just trying to make me happy – because I want them to focus on their own learning and development, not what pleases me!
One of the best definitions of science for me is that it doesn’t just explain, it predicts. Post-hoc explanation, with no predictive power, has questionable value as there is no requirement for an evidentiary standard or framing ontology to give us logical consistency. Seeing the data that set me on this course made me realise that I could come up with many explanations but I needed a solid framework for the discussion, one that would give me enough to be able to construct the next set of analyses or experiments that would start to give me a ‘why’ and, therefore, a ‘what will happen next’ aspect.
Short and Sweet
Posted: August 21, 2012 Filed under: Education | Tags: advocacy, authenticity, community, education, educational problem, educational research, ethics, Generation Why, grand challenge, higher education, in the student's head, learning, reflection, resources, teaching, teaching approaches, thinking, time banking, work/life balance, workload 1 CommentWell, it’s official. I’ve started to compromise my ability to work through insufficient rest. Despite reducing my additional work load, chewing through my backlog is keeping me working far too much and, as you can tell from the number and nature of the typos in these posts, it’s affecting me. I am currently reorganising tasks to see what I can continue to fit in without compromising quality, which means this week a lot of e-mail is being sent to sort out my priorities.
This weekend, I’m sitting down to brainstorm the rest of 2012 and work out what has to happen when – nothing is going to sneak up on me (again) this year.
In very good news, we have 18 students coming back for the pilot activity of “Our students, their words” where we ask students who love ICT an important question – “what do you like and why do you think someone else might like it?” We’re brainstorming with the students for all of Friday morning and passing their thoughts (as research) to a graphic designer to get some posters made. This is stage 1. Stage 2, the national campaign, is also moving – slowly but surely. This is why I really need to rest: I’m getting to the point where it’s important that I am at my best and brightest. Sleeping in and relaxing is probably the best thing I can do for the future of ICT! 🙂
Rather than be a hypocrite, I’m switching to ultra-short posts until I’m rested up enough to work properly again.
See you tomorrow!
A (measurement) league of our own?
Posted: August 19, 2012 Filed under: Education | Tags: advocacy, authenticity, education, educational problem, higher education, learning, measurement, reflection, teaching, teaching approaches, workload 1 CommentAs I’ve mentioned before, the number of ways that we are being measured is on the rise, whether it’s measures of our research output or ‘quality’, or the impact, benefits, quality or attractiveness of our learning and teaching. The fascination with research quality is not new but, given that we have had a “publish or perish” mentality where people would put out anything and be called ‘research active’, a move to a quality focus (which often entails far more preparation, depth of research and time to publication) from a quantity focus is not a trivial move. Worse, the lens through which we are assessed can often change far faster than we can change those aspects that are assessed.
If you look at some of the rankings of Universities, you’ll see that the overall metrics include things like the number of staff who are Nobel Laureates or have won the Fields Medal. Well, there are fewer than 52 Fields medallists and only a few hundred Nobel Laureates and, as the website itself distinguishes, a number of those are in the Economics area. This is an inherently scarce resource, however you slice it, and, much like a gallery that prides itself on having an excellent collection of precious art, you are more likely to be able to get more of these slices if you already have some. Thus, this measure of the research presence of your University is a bit of a feedback loop.
Similarly with the measurement of things like ‘number of papers in the top 20% of publications’. This conveniently ignores some of the benefits of being at better funded institutions: being part of an established community, being invited to lodge papers, and so on. Even where we have anonymous submission and evaluation, you don’t have to be a rocket scientist to spot connections and groups and, of course, let’s not forget that a well-funded group will have more time, more resources and more postdocs. Basically, funding should lead to better results, which leads to better measurement, which may lead to better funding.
Whether we’re talking about high-prestige personnel or a history of quality publication, neither of these metrics can be changed overnight. Certainly a campaign to attract prestigious staff might be fruitful in the short term but, and let us be very frank here, if you can buy these staff with a combination of desirable locale issues and money, then it is a matter of bidding as to which University they go to next. But trying to increase your “number of high end publications in the last 5 years” is going to take 5 years to improve, and this is the kind of long-term thinking that we, as humans, appear to be very bad at.
Speaking of thinking in the long term, a number of the measures that would be most useful to us are not collected or used for assessment because they span large timescales and, as I’ll discuss, may force us to realise that some things are intrinsically unmeasurable. Learning and teaching quality and impact is intrinsically hard to measure, mainly because we rarely seem to take the time to judge the impact of tertiary institutions over an appropriate timescale. Given the transition issues in going from high school to University, measuring drop-out and retention rates in a student’s first semester leaves us wondering who is at fault. Are the schools not quite doing the job? Is it the University staff? The courses? The discipline identity? The student? Yes, we can measure retention and, with the right assessment, do a good job of measuring maturing depth and type of knowledge – but what about the core question?
How can we measure the real impact of undertaking studies in our field at our University?
After all, this is what these metrics are all about – determining the impact of a given set of academics at a given Uni so you can put them into a league table, hand out funding in some weighted scheme or tell students which Uni they should be going to. Realistically, we should come back in twenty years and find out how much of what we taught was used, where their studies took them and whether they think it was valuable. How did our students use the tools we gave them to change the world? Of course, how do we then present a control to determine that it was us who caused that change? Oh, obviously a professional linkage is something we can think of as correlated – but not every engineer is Brunel and, most certainly, you don’t have to have gone to University to change the world.
This is most definitely not to say that shorter term measures of l&t quality aren’t important but we have to be very careful what we’re measuring and the reason that we’re measuring – and the purpose to which we put it. Measuring depth of knowledge, ability to apply that knowledge and professionally practice in a discipline? That’s worth measuring if we do it in a way that encourages constructive improvement rather than punishment or negative feedback that doesn’t show the way forward.
I don’t mind being measured, as long as it’s useful, but I’m getting a little tired of being ranked by mechanisms that I can’t change unless I go back in time and publish 10 more papers over the last 5 years, or I manage to heal an entire educational system just so my metrics improve for reducing first-year drop-out. (Hey, just so you know, I am working on increasing the number of ICT students on a national level – you do have to think on the large scale occasionally.)
Apart from anything else, I wouldn’t rank my own students this way – it’s intrinsically arbitrary and unfair. Food for thought.