A Puzzling Thought
Posted: September 19, 2012 Filed under: Education | Tags: advocacy, authenticity, card shouting, collaboration, curriculum, design, education, educational problem, educational research, ethics, feedback, higher education, in the student's head, measurement, principles of design, reflection, resources, student perspective, teaching, teaching approaches, thinking 4 Comments

Today I presented one of my favourite puzzles, the Monty Hall problem, to a group of Year 10 high school students. Probability is a very challenging area to teach because we humans seem to be so very, very bad at grasping it intuitively. I’ve written before about Card Shouting, where we appear to train cards to give us better results by yelling at them, and it becomes all too clear that many people have instinctive models of how the world works that are neither robust nor transferable. This wouldn’t be a problem except that:
- it makes it harder to understand science,
- the real models become hard to believe because they’re counter-intuitive, and
- casinos make a lot of money out of people who don’t understand probability.
Monty Hall is simple. There are three doors and behind one is a great prize. You pick a door but it doesn’t get opened. The host, who knows where the prize is, then opens one of the doors that you didn’t pick and, because the host acts in full knowledge, the opened door is always empty. You then have a choice: switch to the door that you didn’t pick and that hasn’t been opened, or stay with your original pick.
Now let’s fast-forward to the answer: you should always switch, because switching gives you a 2/3 chance of getting the prize (no, not 50/50), so it is the winning strategy. Going into today, what I expected was:
- Initially, most students would want to stay with their original choice, having decided that there was no benefit to switching or that it was a 50/50 deal so it didn’t make any sense.
- At least one student would actively reject the idea.
- With discussion and demonstration, I could get students thinking about this problem in the right way.
The correct mental framework for Monty Hall is essential. What are the chances, with 1 prize behind 3 doors, that you picked the right door initially? It’s 1/3, right? So the chances that you didn’t pick the correct door are 2/3. Now, if you just swapped randomly, there’d be no advantage, but this is where you have to understand the problem. There are 2 doors that you didn’t pick and, by elimination, these 2 doors contain the prize 2/3 of the time. The host knows where the prize is, so the host will never open a door and show you the prize; the host just removes a worthless door. Now you have two sets of doors: the one you picked (correct 1/3 of the time) and the remaining door from the unpicked pair (correct 2/3 of the time). So, given that there’s only one remaining door in the unpicked pair, by switching you increase your chances of winning from 1/3 to 2/3.
Don’t believe me? Here’s an on-line simulator that you can run. (Ignore what it says about Internet Explorer; it tends to run on most things.)
Still don’t believe me? Here’s some Processing code that you can run locally and see the rates converge to the expected results of 1/3 for staying and 2/3 for switching.
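For readers without Processing to hand, here is a minimal Python sketch of the same experiment (my own code, an assumed stand-in, not the original Processing sketch). The host’s behaviour collapses the whole game neatly: staying wins exactly when the first pick was right, and switching wins exactly when it was wrong.

```python
import random

def monty_hall(trials=100_000, doors=3):
    """Simulate the Monty Hall game; return (stay_rate, switch_rate)."""
    stay_wins = switch_wins = 0
    for _ in range(trials):
        prize = random.randrange(doors)
        pick = random.randrange(doors)
        # The host opens every empty, unpicked door but one, so:
        # staying wins only if the first pick was right...
        stay_wins += (pick == prize)
        # ...and switching wins whenever the first pick was wrong,
        # because the prize must be behind the one remaining closed door.
        switch_wins += (pick != prize)
    return stay_wins / trials, switch_wins / trials

stay, switch = monty_hall()
print(f"stay ~ {stay:.3f}, switch ~ {switch:.3f}")
```

Run it with more trials and the two rates settle ever closer to 1/3 and 2/3.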
This is a challenging and counter-intuitive result until you actually understand what’s happening, and it clearly illustrates one of those situations where students can plug numbers into probability equations but, when asked to reason mathematically, turn out not to have the correct mental models to explain what is going on. So how did I approach it?
Well, I used Peer Instruction techniques to get the class to think about the problem and then vote on it. As expected, about 60% of the class were stayers. Then I asked them to discuss this with a switcher and to try and convince each other of the rightness of their actions. Then I asked them to vote again.
No significant change. Dang.
So I wheeled out the on-line simulator to demonstrate it working and to ensure that everyone really understood the problem. Then I showed the Processing simulation showing the numbers converging as expected. Then I pulled out the big guns: the 100 door example. In this case, you select from 100 doors and Monty eliminates 98 (empty) doors that you didn’t choose.
Suddenly, when faced with the 100 doors, many students became switchers. (Not surprising.) I then pointed out that the two problems (3 doors and 100 doors) had reduced to the same problem, except that the remaining doors were the only door left standing from 2 and 99 doors respectively. And, suddenly, on the repeated vote, everyone’s a switcher. (I then ran the code on the 100 door example and had to apologise because the 99% ‘switch’ trace is so close to the top that it’s hard to see.)
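The 3-door and 100-door cases can also be checked exactly, with no randomness at all, by enumerating every prize/pick combination. This short Python sketch (mine, not the classroom code) makes the reduction explicit: once the host has cleared away the empty doors, switching wins precisely when the first pick was wrong.

```python
from fractions import Fraction

def exact_odds(doors):
    """Enumerate every (prize, pick) pair and return the exact
    probabilities of winning by staying and by switching."""
    total = doors * doors
    # Staying wins iff the initial pick was the prize door.
    stay = sum(1 for prize in range(doors)
                 for pick in range(doors) if pick == prize)
    # Switching wins in every remaining case, because the host has
    # removed all the other empty doors.
    return Fraction(stay, total), Fraction(total - stay, total)

print(exact_odds(3))    # stay 1/3, switch 2/3
print(exact_odds(100))  # stay 1/100, switch 99/100
```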
Why didn’t the discussion phase change people’s minds? I think it’s because of the group itself, a junior group with very little vocabulary of probability. It would have been hard for them to articulate their reasons for changing beyond a ‘gut feeling’, despite the obvious mathematical ability present. So, expecting this, I confirmed that they understood the problem correctly through demonstration and extended simulation, which provided evidence that conflicted with their previously held belief. Getting people to think about the 100-door model, a quite deliberate exploitation of the fact that 1/100 vs 99/100 is a far more convincing decision factor than 1/3 vs 2/3, allowed them to identify a situation where switching makes sense, validating what I had presented in the demonstrations.
In these cases, I like to mull for a while to work out what I have and haven’t learned from this. I believe that the students had a lot of fun in the puzzle section and that most of them got what happened in Monty Hall, but I’d really like to come back to them in a year or two and see what they actually took away from today’s example.
More on Computer Science Education as a fundamentally challenging topic.
Posted: September 18, 2012 Filed under: Education | Tags: advocacy, authenticity, blogging, community, curriculum, education, educational problem, educational research, ethics, feedback, higher education, reflection, teaching, teaching approaches, thinking, threshold concepts, universal principles of design, vygotsky, workload Leave a comment

“Homo sum, humani nihil a me alienum puto (I am a [human], nothing human is foreign to me)”, Terence, 163 BC
While this is a majestic sentiment, we are constantly confronted by how many foreign ideas and concepts there are in our lives. In the educational field, Meyer and Land have identified threshold concepts: concepts that are transformative once understood but troublesome and alien before they are comprehended. The existence of these often counter-intuitive concepts gives the lie to Terence’s quote, as it appears that certain concepts will be extremely foreign and hard to communicate or comprehend until we understand them. (I’ve discussed this before in my write-up of the ICER Keynote.)
Reading across the fields of education, educational psychology and Computer Science education research, it rapidly becomes apparent that some ideas have been described repeatedly over decades, but have gained little traction. Dewey’s disgust at the prison-like school classroom was recorded in 1938, yet you can walk onto any campus in the world and find the same “cells”, arrayed in ranks. The lecture is still the dominant communication form in many institutions, despite research support for the far greater efficacy of different approaches. For example, the benefits of social constructivism, including the zone of proximal development, are well known and extensively studied, yet even where group work is employed, it is not necessarily designed or facilitated to provide the most effective outcomes. The majority of course design and implementation shows little influence of any of the research conducted in the last 20 years, let alone the cognitive development stages of Piaget, the reliance upon authority found in Perry or even the existence of threshold concepts themselves. Why?
From a personal perspective, I was almost completely ignorant of the theoretical underpinnings of educational practice until very recently and I still rate myself as a rank novice in the area. I write here to be informed, not to be seen as an expert, and I learn from thinking and writing about what I’m doing. I am also now heavily involved in a research group that focuses on this so I have the peer support and time to start learning in the fascinating area of Computer Science Education. Many people, however, do not, and it is easy to see why one would not confront or even question the orthodoxy when one is unaware of any other truth.
Of course, as we all know, it is far harder to see that anything needs fixing when, instead of considering that our approach may be wrong, we identify our students as the weak link in the chain. It’s easy to do and, because we are often not scrupulously scientific in our recollection of events (because we are human), our anecdotal evidence dominates our experience. “Good” students pass, “bad” students fail. If we then define a bad student as “someone who fails”, we have a neat (if circular) definition that shields us from any thoughts on changing what we do.
When I found out how much I had to learn, I initially felt very guilty about some of the crimes that I had perpetrated against my students in my ignorance. I had bribed them with marks, punished them for minor transgressions with no real basis, talked at them for 50 minutes and assumed that any who did not recall my words just weren’t paying attention. At the same time, I carried out my own tasks with no bribery, negotiated my own deadlines and conditions, and checked my mail whenever possible in any meetings in which I felt bored. The realisation that, even through ignorance and human frailty, you have let your students down is not a good feeling, especially when you realise that you have been a hypocrite.
I lament the active procrastinator, who does everything except the right work and thus fails anyway with a confused look on their face, and I feel a great sympathy for the caring educator who, through lack of exposure or training, has no idea that what they are doing is not the best thing for their students. This is especially true when the educators have been heavily acculturated by their elders and superiors, at a vulnerable developmental time, and now not only have to question their orthodoxy, they must challenge their mentors and friends.
Scholarship in Computer Science learning and teaching illuminates one’s teaching practice. Discovering tools, theories and methodologies that can explain the actions of our students is of great importance to the lecturer and transforms the way that one thinks about learning and teaching. But transformative and highly illuminative mechanisms often come at a substantial cost in terms of the learning curve, and we believe that this explains why there is a great deal of resistance from those members of the community who have not yet embraced the scholarship of learning and teaching. Combine this with a culture where you may be telling esteemed and valued colleagues that they have been practising poorly for decades and the resistance becomes even more understandable. We must address the fact that resistance in the field may stem from the very effects that we would carefully address in our students (ongoing problems with threshold concepts), yet we expect our colleagues to simply accept these alien, challenging and unsettling ideas merely because we are right.
The burden of proof does not, I believe, lie with us. We have 70 years of studies in education and over 100 years of study in work practices to establish the rightness of our view. However, I wonder how we can approach our colleagues who continue to question these strange, counter-intuitive and frightening new ideas and help them to understand and eventually adopt these new concepts?
Howdy, Partner
Posted: September 17, 2012 Filed under: Education | Tags: advocacy, authenticity, collaboration, community, curriculum, design, education, educational problem, educational research, ethics, feedback, Generation Why, higher education, in the student's head, measurement, principles of design, student perspective, teaching, teaching approaches, thinking, time banking Leave a comment

I am giving a talk on Friday about the partnership relationship between teacher and student and, in my opinion, why we often accidentally attack this through a less-than-optimal approach to assessment and deadlines. I’ve spoken before about how an arbitrary deadline that is convenient for administrative reasons is effectively pedagogically and ethically indefensible. For all that we disparage our students, if we do, for focusing on marks and sometimes resorting to cheating rather than focusing on educational goals, we leave ourselves open to valid accusations of hypocrisy if we have the same ‘ends justify the means’ approach to setting deadlines.
Consistency and authenticity are vital if we are going to build solid relationships, but let me go further. We’re not just building a relationship, we’re building an expectation of continuity over time. If students know that their interests are being considered, that what we are teaching is necessary and that we will always try to deal with them fairly, they are far more likely to invest the effort that we wish them to invest and develop the knowledge. More importantly, a good relationship is resilient, in that the occasional hiccup doesn’t destroy the whole thing. If we have been consistent and fair, and forces beyond our control affect something that we’ve tried to do, my experience is that students tolerate it quite well. If, however, you have been arbitrary, unprepared, inconsistent and indifferent, then you will (fairly or not) be blamed for anything else that goes wrong.
We cannot apply one rule to ourselves and a different one to our students and expect them to take us seriously. If you accept no work that is even one second late, yet keep showing up to lectures late and unprepared, then your students have every right to roll their eyes and not take you seriously. This doesn’t excuse them if they cheat, but you have certainly not laid the groundwork for a solid partnership. Why partnership? Because the students in higher education should graduate as your professional peers, even if they are not yet your peers in academia. I do not teach in the school system and I do not have to deal with developmental stages of the child (although I’m up to my armpits in neo-Piagetian development in the knowledge areas, of course).
We return to the scaffolding argument again. Much as I should be able to remove the supports for their coding and writing development over their degree, I should also be able to remove the supports for their professional skills, team-based activities and deadlines because, in a few short months, they will be out in the work force and they will need these skills! If I take a strictly hierarchical approach where a student is innately subordinate to me, I do not prepare them for a number of their work experiences and I risk limiting their development. If I combine my expertise and my oversight requirements with a notion of partnership, then I can work with the student for some things and prepare the student for a realistic workplace. Yes, there are rules and genuine deadlines but the majority experience in the professional workplace relies upon autonomy and self-regulation, if we are to get useful and creative output from these new graduates.
If I demand compliance, I may achieve it, but we are more than well aware that extrinsic motivating factors stifle creativity and it is only at those jobs where almost no cognitive function is required that the carrot and the stick show any impact. Partnership requires me to explain what I want and why I need it – why it’s useful. This, in turn, requires me to actually know this and to have designed a course where I can give a genuine answer that illustrates these points!
“Because I said so,” is the last resort of the tired parent and it shouldn’t be the backbone of an entire deadline methodology. Yes, there are deadlines and they are important but this does not mean that every single requirement falls into the same category or should be treated in the same way. By being honest about this, by allowing for exchange at the peer-level where possible and appropriate, and by trying to be consistent about the application of necessary rules to both parties, rather than applying them arbitrarily, we actually are making our students work harder but for a more personal benefit. It is easy to react to blind authority and be resentful, to excuse bad behaviour because you’re attending a ‘bad course’. It is much harder for the student to come up with comfortable false rationalisations when they have a more equal say, when they are informed in advance as to what is and what is not important, and when the deadlines are set by necessity rather than fiat.
I think a lot of people miss one of the key aspects of fixing assessment: we’re not trying to give students an easier ride, we’re trying to get them to do better work. Better work usually requires more effort but this additional effort is now directed along the lines that should develop better knowledge. Partnership is not some way for students to negotiate their way out of submissions, it’s a way that, among other things, allows me to get students to recognise how much work they actually have to do in order to achieve useful things.
If I can’t answer the question “Why do my students have to do this?” when I ask it of myself, I should immediately revisit the activity and learning design to fix things so that I either have an answer or I have a brand new piece of work for them to do.
De Profundis – or de-profounding?
Posted: September 16, 2012 Filed under: Education | Tags: advocacy, authenticity, collaboration, community, curriculum, design, education, educational problem, ethics, feedback, Generation Why, higher education, in the student's head, oscar wilde, principles of design, reflection, resources, student perspective, teaching, teaching approaches, thinking Leave a comment

“It is common to assume that we are dealing with a highly intelligent book when we cease to understand it.” (de Botton, The Consolations of Philosophy, p157)
The notion of a lack of comprehension being a fundamental and innate fault of the reader, rather than the writer, is a mistake made, in many different and yet equally irritating ways, throughout the higher educational sector. A high pass rate may be seen as indicative of an easy course or a weak marker. A high failure rate may be attributed to the innate difficulty of the work or the inferior stuff of which the students are made. As I have written before, under such a presumption, I could fail all of my students and strut around, the smartest man in my University, for none have been able to understand the depths and subtlety of my area of knowledge.
Yet, if the real reason is that I have brought my students to a point where their abilities fail them and, either through ignorance or design, I do not strive to address this honestly and openly, then it doesn’t matter how many of them ultimately pass – I will be the biggest failure in the class. I know a great number of very interesting and intelligent educators but, were you to ask me if any of them could teach, I would have to answer that I did not know, unless I had actually seen them do so. For all of our pressure on students to possess the innate ability to persevere, to understand our discipline or to be (sorry, Ray) natural programmers, the notion that teaching itself might not be something that everyone is capable of is sometimes regarded as a great heresy. (The notion or insistence that developing as a teacher may require scholarship and, help us all, practice, is apostasy – our heresy leading us into exile.) Teaching revolves around imparting knowledge efficiently and effectively so that students may learn. The cornerstone of this activity is successful and continuing communication. Wisdom may be wisdom but it rapidly becomes hard to locate or learn from when it is swaddled in enough unnecessary baggage.
I have been, mostly thanks to the re-issue of cheap Penguins, undertaking a great deal of reading recently and I have revisited Marcus Aurelius, Seneca, de Botton and Wilde. The books that are the most influential upon me remain those books that, while profound, maintain their accessibility. Let me illustrate this with an example. For those who do not know what De Profundis means, it is a biblical reference to Psalm 130, appropriated by the ever humble Oscar Wilde as the title of his autobiographical letter to his former lover, from the prison in which he was housed because of that love.
But what it means is “From the depths”. In the original psalm, the first line is:
De profundis clamavi ad te, Domine;
From the depths, I have cried out to you, O Lord;
And in this reading, we see the measure of Wilde’s despair. Having been sentenced to hard labour, and having had his ability to write confiscated, his ability to read curtailed, and his reputation in tatters, he cries out from the depths to his Bosie, Lord Douglas.
De profundis [clamavi ad te, Bosie;]
If you have the context for this, then this immediately prepares you for the letter but, as it is, the number of people who are reading Wilde is shrinking, let alone the number of people who are reading a Latin Bible. Does this title still assist in the framing of the work, through its heavy dependence upon the anguish captured in Psalm 130, or is it time to retitle it “From the depths, I have cried out to you!” to capture both the translation and the sense? The message, the emotion and the hard-earned wisdom contained in the letter are still valuable but are we hurting the ability of people to discover and enjoy it by continuing to use a form of expression that may harm understanding?

Les Très Riches Heures du duc de Berry, Folio 70r – De Profundis, the Musée Condé, Chantilly. (Another form of expression of this Psalm.)
Now, don’t worry, I’m not planning to rewrite Wilde but this raises a point in terms of the occasionally unhappy union of the language of profundity and the wisdom that it seeks to impart. You will note the irony that I am using a heavily structured, formal English, to write this and that there is very little use of slang here. This is deliberate because I am trying to be precise while still being evocative and, at the same time, illustrating that accurate use of more ornate language can obscure one’s point. (Let me rephrase that. The unnecessary use of long words and complex grammar gets in the way of understanding.)
When Her Majesty the Queen told the Commonwealth of her terrible year, her words were:
“1992 is not a year on which I shall look back with undiluted pleasure. In the words of one of my more sympathetic correspondents, it has turned out to be an Annus Horribilis.”
and I have difficulty thinking of a more complicated way of saying “1992 was a bad year” than to combine a complicated grammatical construction with a Latin term that is not going to be on the lips of the people who are listening to the speech. Let me try: “Looking back on 1992, it has been, in the words of one of my friends, a terrible year.” Same content. Same level of imparted knowledge. Much less getting in the way. (The professional tip here is to never use the letters “a”, “n”, “s” and “u” in one short word unless you are absolutely sure of your audience. “What did she say about… nahhh” is not the response you want from your loyal subjects.) [And there goes the Knighthood.]
I love language. I love reading. I am very lucky that, having had a very broad and classically based education, I can read just about anything and not be intimidated or confused by the language forms – providing that the author is writing in one of the languages that I read, of course! To assume that everyone is like me or, worse, to judge people on their ability because they find long and unfamiliar words confusing, or have never had the opportunity to use these skills before, is to leap towards the same problem outlined in the quote at the top. If we seek to label people unintelligent when they have not yet been exposed to something that is familiar to us, then this is just as bad as lauding someone’s intelligence because you don’t understand what they’re talking about.
If my students need to know something then I have to either ensure that they already do so, by clearly stating my need and being aware of the educational preparation in my locale, or I have to teach it to them in forms that they can understand and that will allow them to succeed. I may love language, classical works and big words, but I am paid to teach the students of 2012 to become the graduates, achievers and academics of the future. I have to understand, respect and incorporate their context, while also meeting the pedagogical and knowledge requirements of the courses that I teach.
No-one said it was going to be easy!
ICER 2012 Day Research Paper Session 4
Posted: September 15, 2012 Filed under: Education | Tags: education, educational research, feedback, Generation Why, higher education, icer, icer 2012, icer2012, in the student's head, reflection, student perspective, teaching, teaching approaches, thinking, tools Leave a comment

This session kicked off with “Ability to ‘Explain in Plain English’ Linked to Proficiency in Computer-based Programming”, (Laurie Murphy, Sue Fitzgerald, Raymond Lister and Renee McCauley (presenting)). I had seen a presentation along these lines at SIGCSE and, if you look at the authors list, this is an excellent example of international collaboration. Does the ability to explain code in plain English correlate with the ability to solve programming problems? The correlation appears to be there, whether or not we train students in Explaining in Plain English, but is this causation?
This raises a core question, addressed in the talk: Do we need to learn to read (trace) code before we learn to write code or vice versa? The early reaction of the Leeds group was that reading code didn’t amount to testing whether students could actually write code. Is there some unknown factor that must be achieved before either or both of these? This is a vexing question as it raises the spectre of whether we need to factor in some measure of general intelligence, which has not been used as a moderating factor.
Worse, we now return to that dreadful hypothesis of “programming as an innate characteristic”, where you were either born to program or not. Ray (unsurprisingly) believes that all of the skills in this area (EIPE/programming) are not innate and can be taught. This then raises the question of what the most pedagogically efficient way is to do this!
How Do Students Solve Parsons Programming Problems? — An Analysis of Interaction Traces
Juha Helminen (presenting), Petri Ihantola, Ville Karavirta and Lauri Malmi
This presentation was of particular interest to me because I am currently tearing apart my 1,900 student data corpus to try and determine the point at which students will give up on an activity, in terms of mark benefit, time expended and some other factors. This talk, which looked at how students solved problems, also recorded the steps and efforts that they took in order to try and solve them, which gave me some very interesting insights.
A Parsons problem is one where, given a set of code fragments, a student selects, arranges and composes them into a program in response to a question. Not all of the code fragments present will be required in the final solution. Adding to the difficulty, the fragments require different indentation to assert their execution order as part of the block structure. For those whose eyes just glazed over, this means that it’s more than selecting a line to go somewhere: you have to associate it explicitly with other lines as a group. Juha presented a graph-based representation of the students’ traversals of the possible solutions for their Parsons problem. Students could ask for feedback immediately to find out how their programs were working and, unsurprisingly, some opted for a lot of “Am I there yet?” querying. Some students queried feedback as many as 62 times for only 7 features, with very short inter-query intervals, which is indicative of permutation programming. (Are we there yet? No. Are we there yet? No. Are we there yet? No. Are we there yet? No.)
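To make this concrete, here is a toy Parsons checker in Python. The fragments, the (fragment, indent) solution format and the feedback counter are all my own illustrative assumptions, not the instrumentation the authors used; the point is simply that a solution pairs each chosen fragment with an indentation level, and that every feedback request can be counted.

```python
# A toy Parsons problem: the student must pick the needed fragments,
# order them, and indent them correctly.
FRAGMENTS = {
    "a": "for i in range(3):",
    "b": "print(i)",
    "c": "print('done')",
    "d": "i = i + 1",  # distractor: not part of the solution
}

# A solution is an ordered list of (fragment_id, indent_level) pairs.
SOLUTION = [("a", 0), ("b", 1), ("c", 0)]

class ParsonsChecker:
    """Checks attempts and counts feedback requests, mimicking the
    'Am I there yet?' querying behaviour described above."""
    def __init__(self, solution):
        self.solution = solution
        self.queries = 0

    def check(self, attempt):
        self.queries += 1
        # A line matches only if both the fragment and its indent agree.
        matched = sum(g == w for g, w in zip(attempt, self.solution))
        return {"solved": attempt == self.solution, "matched": matched}

checker = ParsonsChecker(SOLUTION)
print(checker.check([("a", 0), ("b", 0), ("c", 0)]))  # wrong indent on "b"
print(checker.check([("a", 0), ("b", 1), ("c", 0)]))  # solved
print(checker.queries)  # two feedback requests so far
```

A permutation programmer would drive `queries` up very quickly with near-identical attempts, which is exactly the interaction-trace signal the paper examines.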
The primary pattern of development was linear, with block structures forming the first development stages, but there were a lot of variations. Cycles (returning to the same point) also occurred in the development process but it was hard to tell if this was a deliberate reset pattern or one where permutation programming had accidentally returned the programmer to the same state. (Asking the students WHY this had occurred would be an interesting survey question.)
There were some good comments from the audience, including the suggestion of correlating good and bad states with good and bad outcomes, using Markov chain analysis to look for patterns. Another improvement suggested was recording the time taken for the first move, to record the impact (possible impact) of cognition on the process. Were students starting from a ‘trial and error’ approach or only after things went wrong?
Tracking Program State: A Key Challenge in Learning to Program
Colleen Lewis (presenting, although you probably could have guessed that)
This paper won the Chairs’ Award for the best paper at the Conference and it was easy to see why. Colleen presented a beautifully explored case study of an 11-year-old boy working on a problem in the Scratch programming language and trying to work out why he couldn’t draw a wall of bricks. By capturing Kevin’s actions (in code) and his thoughts (from his spoken comments), we are exposed to the thought processes of a high-achieving young man who cannot fathom why something isn’t working.
I cannot do justice to this talk by writing about something that was primarily visual, but Colleen’s hypothesis was that Kevin’s attention to the state (variables and environmental settings over which the program acts) within the problem is the determining factor in the debugging process. Once Kevin’s attention was focused on the correct problem, he solved it very quickly because the problem was easy to solve. Locating the correct problem required him to work through and determine which part of the state was at fault.
Kevin has a pile of ideas in his head but, as put by diSessa and Sherin (1998), learning is about reliably using the right ideas in the correct context. Which of Kevin’s ideas are being used correctly at any one time? The discussion that followed covered many of the problems that students have with computers, in that many students do not see computers as actually being deterministic. Many students, on encountering a problem, will try exactly the same thing again to see if the error occurs again – this requires a mental model in which we expect a different set of outcomes from the same inputs and process, which is a loose definition of either insanity or nondeterminism. (Possibly both.)
I greatly enjoyed this session but the final exemplar, taking apart a short but incredibly semantically rich sequence and presenting it with a very good eye for detail, made it unsurprising that this paper won the award. Congratulations again, Colleen!
Our Influence: Prejudice As Predictor
Posted: September 14, 2012 Filed under: Education | Tags: advocacy, authenticity, community, education, educational problem, educational research, higher education, in the student's head, learning, measurement, principles of design, reflection, student perspective, teaching, teaching approaches, thinking Leave a comment

If you want to see Raymond Lister get upset, tell him that students fall into two categories: those who can program and those who can’t. If you’ve been reading much (anything) of what I’ve been writing recently, you’ll realise that I’ve been talking about things like cognitive development, self-regulation and dependence on authority, all of which have one thing in common: students can be at different stages when they reach us. There is no guarantee that students will be self-reliant, cognitively mature and completely capable of making reasoned decisions at the most independent level.
There was a question raised several times during the conference and it’s the antithesis of the infamous “double hump conjecture”, that students divide into two groups naturally and irrevocably because of some innate characteristic. The question is “Do our students demonstrate their proficiency because of what we do or in spite of what we do?” If the innate characteristic conjecture is correct, and this is a frequently raised folk pedagogy, then our role has no real bearing on whether a student will learn to program or not.
If we accept that students come to us at different stages in their development, and that these development stages will completely influence their ability to learn and form mental models, then the innate characteristic hypothesis withers and dies almost immediately. A student who does not have their abilities ready to display can no more demonstrate their ability to program than a three-year old child can write Shakespeare – they are not yet ready to be able to learn, assemble, reassemble or demonstrate the requisite concepts and related skills.
However, a prejudicial perspective that students who cannot demonstrate the requisite ability are innately and permanently lacking that skill will, unpleasantly, viciously and unnecessarily, cause that particular future to lock in. Of course a derisive attitude towards these 'stupid' or 'slow' students will make them withdraw or undermine their confidence! As I noted from the conference, confidence and support have a crucial impact on students. Undermining a student's confidence is worse than not teaching them at all. Walking in with a mental model that separates the world into programmers and non-programmers forces that model into being.
Since I’ve entered the area of educational research, I’ve been exposed to things that I can separate into the following categories:
- Fascinating knowledge and new views of the world, based on solid research and valid experience.
- Nonsense
- Damned nonsense
- Rank stupidity
Most of the latter three come from other educators who react, out of fear or ignorance, to the lessons from educational research with disbelief, derision and resentment. "I don't care what you say, or what that paper says, you're wrong," says the voice of "experience".
There is no doubt that genuine and thoughtful experience is, has been, and will always be a strong and necessary sibling to the educational and psychological theory that is the foundation of educational research. However, shallow experience can often be built up into something that it is not, when it is combined with fallacious thinking, cherry picking, confirmation bias and any other permutation of fear, resentment and inertia. The influence of folk pedagogies, lessons claimed from tea room mutterings and the projection of a comfortable non-reality that mysteriously never requires the proponent to ever expend any additional effort or change what they do, is a malign shadow over the illumination of good learning and teaching practice.
The best educators explain their successes with solid theory, strive to find a solution to the problems that lead to failure, and listen to all sources in order to construct a better practice and experience for their students. I hope, one day, to achieve this level, but I do know that doubting everything new is not the path forward for me.
I am pleased to say that the knowledge and joy of this (to me) new field far outstrips most of the other things that I have seen, but I cannot stress enough how important it is that we choose our perspectives carefully. We, as educators, have disproportionately high influence: large shadows and big feet. Reading further into this discipline illustrates that we must very carefully consider the way that we think, the way that our students think, and the capacity for reasoning and knowledge accumulation that our students actually have, before we make any rash or prejudicial statements about the innate capabilities of that most mythical of entities: the standard student.
ICER 2012 Research Paper Session 1
Posted: September 13, 2012 Filed under: Education | Tags: curriculum, education, educational research, higher education, icer, icer2012, in the student's head, measurement, teaching, teaching approaches, thinking, tools Leave a comment

It would not be over-stating the situation to say that every paper presented at ICER led to some interesting discussion and, in some cases, some more… directed discussion than others. This session started off with a paper entitled "Threshold Concepts and Threshold Skills in Computing" (Kate Sanders, Jonas Boustedt, Anna Eckerdal, Robert McCartney, Jan Erik Moström, Lynda Thomas and Carol Zander), on whether threshold skills, as distinct from threshold concepts, existed and, if they did, what their characteristics would be. Threshold skills were described as transformative, integrative, troublesome knowledge, semi-irreversible (in that they're never really lost), and requiring practice to keep current. The discussion that followed raised a lot of questions, including whether you could learn a skill by talking about it or asking someone – skill transfer questions versus environment. The consensus, as I judged it from the discussion, was that threshold skills didn't follow from threshold concepts, but there was a very rapid and high-level discussion that I didn't quite follow, so any of the participants should feel free to leap in here!
The next talk was “On the reliability of Classifying Programming Tasks Using a Neo-Piagetian Theory of Cognitive Development” (Richard Gluga, Raymond Lister, Judy Kay, Sabina Kleitman and Donna Teague), where Ray raised and extended a number of the points that he had originally shared with us in the workshop on Sunday. Ray described the talk as being a bit “Neo-Piagetian theory for dummies” (for which I am eternally grateful) and was seeking to address the question as to where students are actually operating when we ask them to undertake tasks that require a reasonable to high level of intellectual development.
Ray raised the three bad programming habits he’d discussed earlier:
- Permutation programming (where students just try small things randomly and iteratively in the hope that they will finally get the right solution – this is incredibly troublesome if the many small changes take you further away from the solution)
- Shotgun debugging (where a bug causes the student to put things in with no systematic approach and potentially fixing things by accident)
- Voodoo coding/Cargo cult coding (where code is added by ritual rather than by understanding)
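A contrived Python sketch of that last habit may help (the function and its 'ritual' lines are my invention, not an example from Ray's talk): the voodoo version adds incantations that cannot possibly affect the bug, while the fixed version comes from actually understanding the failure case.

```python
import time

def average(values):
    # The 'voodoo' version: the bug is a crash on an empty list, but the
    # added lines are ritual, not reasoning -- neither one changes anything.
    time.sleep(0)          # ritual: "maybe the timing is off"
    values = list(values)  # ritual: "maybe copying it helps"
    total = 0
    for v in values:
        total += v
    return total / len(values)   # still crashes on an empty list

def average_fixed(values):
    # The systematic fix: identify the failure case and handle it.
    values = list(values)
    if not values:
        return 0.0
    return sum(values) / len(values)
```

The voodoo additions even survive testing, as long as nobody tests the one input that actually triggers the bug.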
These approaches show one very important thing: the student doesn’t understand what they’re doing. Why is this? Using a Neo-Piagetian framework we consider the student as moving through the same cognitive development stages that they did as a child (Piagetian) but that this transitional approach applies to new and significant knowledge frameworks, such as learning to program. Until they reach the concrete operational stage of their development, they will be applying poor or inconsistent models – logically inadequate models to use the terminology of the area (assuming that they’ve reached the pre-operational stage). Once a student has made the next step in their development, they will reach the concrete operational stage, characterised (among other things, but these were the ones that Ray mentioned) by:
- Transitivity: being able to recognise how things are organised if you can impose an order upon them.
- Reversibility: that we can reverse changes that we can impose.
- Conservation: realising that the numbers of things stay the same no matter how we organise them.
In coding terms, these can be interpreted in several ways, but the conservation idea is crucial to programming because understanding it frees the student from having to write the same code for the same algorithm every time. Grasping that conservation exists, and understanding it, means that you can alter the code without changing the algorithm that it implements – while achieving some other desirable result such as speeding the code up or moving to a different paradigm.
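A minimal Python sketch of conservation in this sense (my own example, not one from the paper): the surface code changes completely, but the algorithm, and therefore the result, is conserved.

```python
def sum_squares_loop(numbers):
    # Summing squares, written out as an explicit loop.
    total = 0
    for n in numbers:
        total += n * n
    return total

def sum_squares_expr(numbers):
    # The same algorithm in a different surface form: a generator
    # expression fed to the built-in sum().
    return sum(n * n for n in numbers)

print(sum_squares_loop([1, 2, 3]))  # 14
print(sum_squares_expr([1, 2, 3]))  # 14
```

A pre-operational student tends to see these as two unrelated pieces of code; a concrete operational student sees one algorithm in two costumes.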
Ray’s paper discussed the fact that a vast number of our students are still pre-operational for most of first and second year, which changes the way that we actually try to teach coding. If a student can’t understand what we’re talking about or has to resort to magical thinking to solve problem, then we’ve not really achieved our goals. If we do start classifying the programming tasks that we ask students to achieve by the developmental stages that we’re expecting, we may be able to match task to ability, making everyone happy(er).
The final paper in the session was "Social Sensitivity Correlations with the Effectiveness of Team Process Performance: An Empirical Study" (Lisa Bender (presenting), Gursimran Walia, Krishna Kambhampaty, Travis Nygard and Kendall Nygard), which discussed the impact of socially sensitive team members in programming teams. (Social sensitivity is the ability to correctly understand the feelings and the viewpoints of other people.)
The "soft skills" are essential to the teamwork process, and a successful team enhances learning outcomes. Bad teams hinder team formation and progress, and things go downhill from there. In Woolley et al.'s study of nearly 700 participants, the collective intelligence of a team stemmed from how well the team worked together rather than from the individual intelligence of the participants. Groups whose members were more socially sensitive had a higher group intelligence.
Just to emphasise that point: a team of smart people may not be as effective as a team of people who can understand the feelings and perspectives of each other. (This may explain a lot!)
Social sensitivity is a good predictor of team performance and the effectiveness of team-oriented processes, as well as the satisfaction of the team members. However, it is also apparent that we in Science, Technology, Engineering and Mathematics (STEM) have lower social sensitivity readings (supporting Baron-Cohen's assertion – no, not that one) than some other areas. Future work in this area is looking at the impact of a single high or low socially sensitive person in a group, a study that will be of great interest to anyone who is running teams made up of randomly assigned students. How can we construct these groups for the best results for the students?
ICER 2012 Day 1 Keynote: How Are We Thinking?
Posted: September 10, 2012 Filed under: Education | Tags: community, curriculum, education, educational problem, educational research, higher education, icer, icer 2012, in the student's head, reflection, teaching, teaching approaches, thinking, threshold concepts, tools, workload 3 Comments

We started off today with a keynote address from Ed Meyer, from the University of Queensland, on the Threshold Concepts Framework (also pedagogy, and student learning). I am, regrettably, not as conversant with threshold concepts as I should be, so I'll try not to embarrass myself too badly. Threshold concepts are central to the mastery of a given subject and are characterised by some key features (Meyer and Land):
- Grasping a threshold concept is transformative because it changes the way that we think about something. These concepts become part of who we are.
- Once you’ve learned the concept, you are very unlikely to forget it – it is irreversible.
- This new concept allows you to make new connections and allows you to link together things that you previously didn’t realise were linked.
- This new concept has boundaries – an area over which it applies. You need to be able to question within that area to work out where it applies. (Ultimately, this may identify the areas between schools of thought in a field.)
- Threshold concepts are ‘troublesome knowledge’. This knowledge can be counter-intuitive, even alien and will make no sense to people until they grasp the new concept. This is one of the key problems with discussing these concepts with people – they will wish to apply their intuitive understanding and fighting this tendency may take some considerable effort.
Meyer then discussed how we see with new eyes after we integrate these concepts. It can be argued that concepts such as these give us a new way of seeing that, because of inter-individual differences, students will experience in varying degrees as transformative, integrative, and (look out) provocative and troublesome. For this final one, a student experiences this in many ways: the world doesn’t work as I think it should! I feel lost! Helpless! Angry! Why are you doing this to me?
How do you introduce a student to one of these troublesome concepts and, more importantly, how can you describe what you are going to talk about when the concept itself is alien: what do you put in the course description given that you know that the student is not yet ready to assimilate the concept?
Meyer raised a really good point: how do we get someone to think inside the discipline? Do they understand the concept? Yes. Does this mean that they think along the right lines? Maybe, maybe not. If I don’t think like a Computer Scientist, I may not understand why a CS person sees a certain issue as a problem. We have plenty of evidence that people who haven’t dealt with the threshold concepts in CS Education find it alien to contemplate that the lecture is not the be-all and end-all of teaching – their resistance and reliance upon folk pedagogies is evidence of this wrestling with troublesome knowledge.
A great deal to think about from this talk, especially in dealing with key aspects of CS Ed as the threshold concept that is causing many of our non-educational research oriented colleagues so much trouble, as well as our students.
Loading the Dice: Show and Tell
Posted: September 6, 2012 Filed under: Education | Tags: authenticity, curriculum, design, education, educational problem, higher education, in the student's head, learning, principles of design, resources, teaching, teaching approaches, thinking, tools Leave a comment

I've been using a set of four six-sided dice to generate random numbers for one of my classes this year, generally to establish a presentation order or things like that. We've had a number of students getting the same number and so we have to have roll-offs. Now in this case, the most common number rolled so far has been in the range of 17-19 but we have only generated about 18-20 rolls so, while that's a little high, it's not high enough to arouse suspicion.
Today we rolled again, and one student wasn’t quite there yet so I did it with the rest of the class. Once again, 18 showed up a bit. This time I asked the class about it. Did that seem suspicious? Then I asked them to look at the dice.
Oh.
Only two of the dice are actually standard dice. One has the number five on every face. One has three sixes and three twos. The students have seen these dice numerous times and have never actually examined them – of course, I didn’t leave them lying around for them to examine but, despite one or two starting to think “Hey, that’s a bit weird”, nobody ever twigged to the loading.

All of the dice in this picture are loaded through weight manipulation, rather than dot alteration. You can buy them for just about any purpose. Ah, Internet!
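Taking my description of the set at face value (two fair dice, one die that is all fives, one with three sixes and three twos), a quick enumeration in Python shows exactly why the totals skew high – the sketch below assumes each die's faces are equally likely, which is a simplification for the weight-loaded dice:

```python
from itertools import product
from collections import Counter

# The four dice as described: two fair d6, one all-fives,
# and one with three sixes and three twos.
dice = [
    [1, 2, 3, 4, 5, 6],
    [1, 2, 3, 4, 5, 6],
    [5, 5, 5, 5, 5, 5],
    [6, 6, 6, 2, 2, 2],
]

# Enumerate all 6**4 = 1296 equally likely outcomes and tally the totals.
totals = Counter(sum(faces) for faces in product(*dice))
n = sum(totals.values())

expected = sum(t * c for t, c in totals.items()) / n
print(expected)  # 16.0 -- well above the 14.0 mean of four fair dice
```

An average of 16 against a fair-dice average of 14 is enough to push results into the 17-19 range regularly without any single roll looking outrageous, which is rather the point of the trick.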
Having exposed this trick, to some amusement, the last student knocked on the door and I picked up the dice. He was then asked to roll for his position, with the rest of the class staying quiet. (Well, smirking.) He rolled something in 17-19, I forget what, and I wrote that up on the board. Then I asked him if it seemed high to him? On reflection, he said that these numbers all seemed pretty high, especially as the theoretical maximum was 24. I then asked if he’d like to inspect the dice.
He did so as I passed him the dice one at a time, storing each inspected die in my other hand. (Of course, as he peered at each die to see if it was altered, I quickly swapped one of the 'real' dice back into position in my hand and, as the rest of the class watched and kept admirably quiet, I then forced a real die onto him. Magic is all about misdirection, after all.)
So, having inspected all of them, he was convinced that they were normal. I then plonked them down on the table and asked him to inspect them, to make sure. He lined them up, looked across the top face and, then, looked at the side. Light dawned. Loudly! What, of course, was so startling to him was that he had just inspected the dice and now they weren’t normal.
What was my point?
My students have just completed a project on data visualisation where they provided a static representation of a dataset. There is a main point to present, supported by statistical analysis and graphs, but the poster is fundamentally explanatory. The only room for exploration is provided by the poster producer and the reader is bound by the inherent limitations in what the producer has made available. Much as with our discussions of fallacies in argument from a recent tutorial, if information is presented poorly or you don't get enough to go on, you can't make a good decision.
Enter, the dice.
Because I deliberately kept the students away from them and never made a fuss about them, they assumed that they were normal dice. While the results were high, and suspicion was starting to creep in, I never gave them enough space to explore the dice and discern their true nature. Even today, while handing them to a student to inspect, I controlled the exploration and, by cherry picking and misdirection, managed to convey a false impression.
Now my students are moving into dynamic visualisation and they must prepare for sharing data in a way that can be explored by other people. While the students have a lot of control over how this exploration takes place, they must prepare for people's inquisitiveness, their desire to assemble evidence and their tendency to want to try everything. They can't rely upon hiding difficult pieces of data in their representation and they must be ready for users who want to keep exploring through the data in ways that weren't originally foreseen. Now, in exploratory mode, they must prepare for people who want to try to collect enough evidence to determine if something is true or not, and to be able to interrogate the dataset accordingly.
Now I’m not saying that I believe that their static posters were produced badly, and I did require references to support statements, but the view presented was heavily controlled. They’ve now seen, in a simple analogue, how powerful that can be. Now, it’s time to break out of that mindset and create something that can be freely explored, letting their design guide the user to construct new things rather than to lead them down a particular path.
I can only hope that they’re exceed by this because I certainly am!!
(Reasonable) Argument, Evidence and (Good) Journalism: Is “Crimson” the Colour of Their Faces?
Posted: September 5, 2012 Filed under: Education | Tags: advocacy, authenticity, blogging, community, education, ethics, grand challenge, higher education, in the student's head, learning, reflection, student perspective, teaching, teaching approaches, thinking 2 Comments

I ran across a story in the Harvard Crimson about a surprisingly high level of suspected plagiarism in a course, Government 1310. The story opens up simply enough in the realms of fact, where the professor suspected plagiarism in 10-20 take-home exams, which was against published guidelines, and this has now expanded to roughly 125 suspicious final exams. There was a brief discussion of the assessment of the course and the steps taken so far by the faculty.
Then, the article takes a weird turn. Suddenly, we have a student account, an anonymous student who doesn’t wish their name to be associated with the plagiarism, who “suspected that Government 1310 was the course in question”. Hello? Why is this… ahhh. Here’s some more:
Though she said she followed the exam instructions and is not being investigated by the Ad Board, she said she thought the exam format lent itself to improper academic conduct.
“I can understand why it would be very easy to collaborate,” said the student
Oh. Collaborate. Interesting. Next we get the Q Guide rating for the course and this course gets 2.54/5 versus the apparent average of 3.91. Then we get some reviews from the Q Guide that “spoke critically of the course’s organisation and the difficulty of the exam questions”.
Spotting a pattern yet?
Another student said that he/she had joined a group of 15 other students just before the submission date and that they had been up all night trying to understand one of the questions (worth 20%).
I submitted this to my students to read and then asked them how they felt about it. Understandably, by the end of the reading, while my students were still thinking about plagiarism, they were thinking that there may have been some… justification. Then we started pulling the article apart.
When we start to look at the article, it becomes apparent that the facts presented all have a rather definite theme – namely that, if cheating has occurred, it has a justification because of the terrible way the course was taught (low Q Guide rating! 16 students confused!).
Now, I cannot see the Q Guide data because, when I go to the page, I get this information (and I need a Harvard login to go further):
Q Guide
The Q Guide was an annually published guide that reported the results of each year’s course evaluations. Formerly called the CUE Guide, it was renamed the Q Guide in 2007 because the evaluations now include the GSAS and are no longer run solely by the Committee on Undergraduate Education (CUE). In 2009, in place of The Q Guide, Harvard College integrated Q data with the online course selection tool (at my.harvard.edu), providing a simple and easy way to access and compare course evaluation data while planning your course schedule.
So if the article, regarding an exam run in 2012, is referring to the Q Guide for Gov 1310, then it’s one of two things: using an old name for new data (admittedly, fairly likely) or referring to old data. The question does arise, however, whether the Q Guide rating refers to this offering or a previous offering. I can’t tell you which it is because I don’t know. It’s not publicly available and the article doesn’t tell me. (Although you’ll note that the Q Guide text refers to this year‘s evaluations. There’s a part of me that strongly suspects that this is historical data but, of course, I’m speculating.)
However, the most insidious aspect is the presentation of 16 students who are confused about content in a way that overstates their significance. It's a blatant example of emotive manipulation and encourages the reader to make a false generalisation. There were 279 students enrolled in Gov 1310. 16 is 5.7%. Would I be surprised if somewhere around 5% of my students weren't capable of understanding all of the questions, or thought that some material wasn't in the course?
No, of course not. That’s roughly the percentage of my students who sometimes don’t know which Dr Falkner is teaching their class. (Hint: one is male and one is female. Noticeably so in both cases.)
I presented this to my Grand Challenge students as part of our studies of philosophical and logical fallacies, discussing how arguments are made to mislead and misdirect. The terrible shame is that, with a detected rate of plagiarism that is this high, I would usually have a very detailed look at the learning and teaching strategies employed (how often are exams being rewritten, how is information being presented, how is teaching being carried out) because this is an amazingly high level of suspected plagiarism.
Despite the misleading journalism presented in the Crimson, the course and its teachers may have to shoulder some responsibility here. As always, just because someone’s argument is badly made, doesn’t mean that it is actually wrong. It’s just disappointing that such a cheap and emotive argument was raised in a way that further fogs an important issue.
As I said to my students today, one of the most interesting ways to try to understand a biased or miscast argument is to work out who the bias favours – cui bono? (Who benefits? I am somewhat terrified, on looking for images for this phrase, to find that it has been hijacked by extremists and conspiracy theorists. It's a shame, because it's historically beautiful.)
So why would the Crimson run this? It’s pretty manipulative so, unless this is just bad journalism, cui bono?
Having looked up how disciplinary boards are constituted at Harvard, I found a reference that there are three appointed faculty members and:
There are three students appointed to the board as full voting members. Two of these will be assigned to specific cases on a case-by-case basis and will not be in the same division as the student facing disciplinary action.
In this case, the Crimson’s story suddenly looks a lot… darker. If, by publishing this article, they reach the right students and convince them the action of the suspected plagiarists may have been overly influenced by academics who are not performing their duties – then we risk suddenly having a deadlocked board and a deleterious effect on what should have been an untainted process.
The Crimson has further distinguished itself with a follow-up article regarding the uncertainty students are feeling because of the process.
“It’s unfair to leave that uncertainty, given that we’re starting lives,” said the alumnus, who was granted anonymity by The Crimson because he said he feared repercussions from Harvard for discussing the case.
Oh, Harvard, you giant monster, unfairly delaying your decision on a plagiarism case because the lecturers were so very, very bad that students had to cheat. And, what’s worse, you are so evil that students are scared of you – they “fear the repercussions”!
Thank you, Crimson, for providing so much rich fodder for my discussion on how the words “logical argument”, “evidence” and “good journalism” can be so hard to fit into the same sentence.