A Puzzling Thought

Today I presented one of my favourite puzzles, the Monty Hall problem, to a group of Year 10 high school students. Probability is a very challenging area to teach because we humans seem to be so very, very bad at grasping it intuitively. I’ve written before about Card Shouting, where we appear to train cards to give us better results by yelling at them, and it becomes all too clear that many people have instinctive models of how the world works that are neither robust nor transferable. This wouldn’t be a problem except that:

  1. it makes it harder to understand science,
  2. the real models become hard to believe because they’re counter-intuitive, and
  3. casinos make a lot of money out of people who don’t understand probability.

Monty Hall is simple. There are three doors and behind one is a great prize. You pick a door but it doesn’t get opened. The host, who knows where the prize is, then opens one of the doors that you didn’t pick, and the door that the host opens is always empty. You then have a choice: switch to the remaining unopened door that you didn’t pick, or stay with your original pick.

The puzzle is based on a game show; Monty Hall was the name of its presenter.

Now let’s fast forward to the fact that you should always switch, because you have a 2/3 chance of getting the prize if you do (no, not 50/50), so switching is the winning strategy. Going into today, what I expected was:

  • Initially, most students would want to stay with their original choice, having decided that there was no benefit to switching or that it was a 50/50 deal so it didn’t make any sense.
  • At least one student would actively reject the idea.
  • With discussion and demonstration, I could get students thinking about this problem in the right way.

The correct mental framework for Monty Hall is essential. What are the chances, with 1 prize behind 3 doors, that you picked the right door initially? It’s 1/3, right? So the chances that you didn’t pick the correct door are 2/3. Now, if you just swapped randomly, there’d be no advantage, but this is where you have to understand the problem. There are 2 doors that you didn’t pick and, by elimination, these 2 doors contain the prize 2/3 of the time. The host knows where the prize is, so the host will never open a door and show you the prize; the host just removes a worthless door. Now you have two sets of doors – the one you picked (correct 1/3 of the time) and the remaining door from the unpicked pair (correct 2/3 of the time). So, given that there’s only one remaining door to pick in the unpicked pair, by switching you increase your chances of winning from 1/3 to 2/3.

Don’t believe me? Here’s an on-line simulator that you can run. (Ignore what it says about Internet Explorer; it tends to run on most things.)

Still don’t believe me? Here’s some Processing code that you can run locally and see the rates converge to the expected results of 1/3 for staying and 2/3 for switching.
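The original Processing sketch isn’t reproduced here, but a minimal Python equivalent (my own rough sketch, with my own function and variable names) shows the same convergence:

```python
import random

def monty_hall(trials, switch):
    """Simulate the 3-door Monty Hall game; return the fraction of wins."""
    wins = 0
    for _ in range(trials):
        prize = random.randrange(3)   # door hiding the prize
        pick = random.randrange(3)    # contestant's first pick
        # The host opens an empty door that the contestant didn't pick
        # (the first valid one; the choice doesn't affect the odds).
        opened = next(d for d in range(3) if d != pick and d != prize)
        if switch:
            # Switch to the one door that is neither picked nor opened.
            pick = next(d for d in range(3) if d != pick and d != opened)
        if pick == prize:
            wins += 1
    return wins / trials

print("stay:  ", monty_hall(100_000, switch=False))  # converges to ~1/3
print("switch:", monty_hall(100_000, switch=True))   # converges to ~2/3
```

Run it with a large trial count and the stay/switch rates settle very close to 1/3 and 2/3 respectively.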

This is a challenging and counter-intuitive result until you actually understand what’s happening. It clearly illustrates one of those situations where you can ask students to plug numbers into equations for probability but, when you actually ask them to reason mathematically, you suddenly discover that they don’t have the correct mental models to explain what is going on. So how did I approach it?

Well, I used Peer Instruction techniques to get the class to think about the problem and then vote on it. As expected, about 60% of the class were stayers. Then I asked them to discuss this with a switcher and to try and convince each other of the rightness of their actions. Then I asked them to vote again.

No significant change. Dang.

So I wheeled out the on-line simulator to demonstrate it working and to ensure that everyone really understood the problem. Then I showed the Processing simulation showing the numbers converging as expected. Then I pulled out the big guns: the 100 door example. In this case, you select from 100 doors and Monty eliminates 98 (empty) doors that you didn’t choose.

Suddenly, when faced with the 100 doors, many students became switchers. (Not surprising.) I then pointed out that the two problems (3 doors and 100 doors) had reduced to the same problem, except that the remaining doors were the only door left standing from 2 and 99 doors respectively. And, suddenly, on the repeated vote, everyone’s a switcher. (I then ran the code on the 100 door example and had to apologise because the 99% ‘switch’ trace is so close to the top that it’s hard to see.)
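For anyone who wants to reproduce that 100-door run, a parameterised Python sketch (again my own stand-in for the Processing code, with my own names) behaves the same way and makes the 1/100 vs 99/100 gap obvious:

```python
import random

def n_door_monty(n_doors, trials, switch):
    """Monty opens every empty unpicked door but one; return the win rate."""
    wins = 0
    for _ in range(trials):
        prize = random.randrange(n_doors)
        pick = random.randrange(n_doors)
        if switch:
            # The host removes all empty unpicked doors, so a switcher ends
            # up on the prize door unless the first pick was already right.
            pick = prize if pick != prize else next(
                d for d in range(n_doors) if d != pick)
        if pick == prize:
            wins += 1
    return wins / trials

print(n_door_monty(100, 100_000, switch=False))  # converges to ~0.01
print(n_door_monty(100, 100_000, switch=True))   # converges to ~0.99
```

With 100 doors the ‘switch’ trace sits so close to 99% that, as in class, it almost disappears against the top of the plot.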

Why didn’t the discussion phase change people’s minds? I think it’s because of the group itself: a junior group with very little vocabulary of probability. It would have been hard for them to articulate the reasons for change beyond ‘gut feeling’, despite the obvious mathematical ability present. So, expecting this, I confirmed that they were understanding the correct problem through demonstration and extended simulation, which provided conflicting evidence to their previously held belief. Getting people to think about the 100 door model, which is a quite deliberate manipulation of the fact that 1/100 vs 99/100 is a far more convincing decision factor than 1/3 vs 2/3, allowed them to identify a situation where switching makes sense, validating what I presented in the demonstrations.

In these cases, I like to mull for a while to work out what I have and haven’t learned from this. I believe that the students had a lot of fun in the puzzle section and that most of them got what happened in Monty Hall, but I’d really like to come back to them in a year or two and see what they actually took away from today’s example.



Howdy, Partner

I am giving a talk on Friday about the partnership relationship between teacher and student and, in my opinion, why we often accidentally attack this through a less-than-optimal approach to assessment and deadlines. I’ve spoken before about how an arbitrary deadline that is convenient for administrative reasons is, in effect, pedagogically and ethically indefensible. For all that we disparage our students, if we do, for focusing on marks and sometimes resorting to cheating rather than focusing on educational goals, we leave ourselves open to valid accusations of hypocrisy if we take the same ‘ends justify the means’ approach to setting deadlines.

Consistency and authenticity are vital if we are going to build solid relationships, but let me go further. We’re not just building a relationship, we’re building an expectation of continuity over time. If students know that their interests are being considered, that what we are teaching is necessary and that we will always try to deal with them fairly, they are far more likely to invest the effort that we wish them to invest and to develop the knowledge. More importantly, a good relationship is resilient, in that the occasional hiccup doesn’t destroy the whole thing. If we have been consistent and fair, and forces beyond our control affect something that we’ve tried to do, my experience is that students tolerate it quite well. If, however, you have been arbitrary, unprepared, inconsistent and indifferent, then you will (fairly or not) be blamed for anything else that goes wrong.

We cannot apply one rule to ourselves and a different one to our students and expect them to take us seriously. If you accept no work that is even one second late, yet keep showing up to lectures late and unprepared, then your students have every right to roll their eyes and not take you seriously. This doesn’t excuse them if they cheat, but you have certainly not laid the groundwork for a solid partnership. Why partnership? Because the students in higher education should graduate as your professional peers, even if they are not yet your peers in academia. I do not teach in the school system and I do not have to deal with developmental stages of the child (although I’m up to my armpits in neo-Piagetian development in the knowledge areas, of course).

We return to the scaffolding argument again. Much as I should be able to remove the supports for their coding and writing development over their degree, I should also be able to remove the supports for their professional skills, team-based activities and deadlines because, in a few short months, they will be out in the work force and they will need these skills! If I take a strictly hierarchical approach where a student is innately subordinate to me, I do not prepare them for a number of their work experiences and I risk limiting their development. If I combine my expertise and my oversight requirements with a notion of partnership, then I can work with the student for some things and prepare the student for a realistic workplace. Yes, there are rules and genuine deadlines but the majority experience in the professional workplace relies upon autonomy and self-regulation, if we are to get useful and creative output from these new graduates.

If I demand compliance, I may achieve it, but we are more than well aware that extrinsic motivating factors stifle creativity and it is only at those jobs where almost no cognitive function is required that the carrot and the stick show any impact. Partnership requires me to explain what I want and why I need it – why it’s useful. This, in turn, requires me to actually know this and to have designed a course where I can give a genuine answer that illustrates these points!

“Because I said so,” is the last resort of the tired parent and it shouldn’t be the backbone of an entire deadline methodology. Yes, there are deadlines and they are important but this does not mean that every single requirement falls into the same category or should be treated in the same way. By being honest about this, by allowing for exchange at the peer-level where possible and appropriate, and by trying to be consistent about the application of necessary rules to both parties, rather than applying them arbitrarily, we actually are making our students work harder but for a more personal benefit. It is easy to react to blind authority and be resentful, to excuse bad behaviour because you’re attending a ‘bad course’. It is much harder for the student to come up with comfortable false rationalisations when they have a more equal say, when they are informed in advance as to what is and what is not important, and when the deadlines are set by necessity rather than fiat.

I think a lot of people miss one of the key aspects of fixing assessment: we’re not trying to give students an easier ride, we’re trying to get them to do better work. Better work usually requires more effort but this additional effort is now directed along the lines that should develop better knowledge. Partnership is not some way for students to negotiate their way out of submissions, it’s a way that, among other things, allows me to get students to recognise how much work they actually have to do in order to achieve useful things.

If I can’t answer the question “Why do my students have to do this?” when I ask it of myself, I should immediately revisit the activity and learning design to fix things so that I either have an answer or I have a brand new piece of work for them to do.


De Profundis – or de-profounding?

“It is common to assume that we are dealing with a highly intelligent book when we cease to understand it.” (de Botton, The Consolations of Philosophy, p157)

The notion of a lack of comprehension being a fundamental and innate fault of the reader, rather than the writer, is a mistake made, in many different and yet equally irritating ways, throughout the higher educational sector. A high pass rate may be seen as indicative of an easy course or a weak marker. A high failure rate may be attributed to the innate difficulty of the work or the inferior stuff of which the students are made. As I have written before, under such a presumption, I could fail all of my students and strut around, the smartest man in my University, for none have been able to understand the depths and subtlety of my area of knowledge.

Yet, if the real reason is that I have brought my students to a point where their abilities fail them and, either through ignorance or design, I do not strive to address this honestly and openly, then it doesn’t matter how many of them ultimately pass – I will be the biggest failure in the class. I know a great number of very interesting and intelligent educators but, were you to ask me if any of them could teach, I would have to answer that I did not know, unless I had actually seen them do so. For all of our pressure on students to possess the innate ability to persevere, to understand our discipline or to be (sorry, Ray) natural programmers, the notion that teaching itself might not be something that everyone is capable of is sometimes regarded as a great heresy. (The notion or insistence that developing as a teacher may require scholarship and, help us all, practice, is apostasy – our heresy leading us into exile.) Teaching revolves around imparting knowledge efficiently and effectively so that students may learn. The cornerstone of this activity is successful and continuing communication. Wisdom may be wisdom but it rapidly becomes hard to locate or learn from when it is swaddled in enough unnecessary baggage.

I have been, mostly thanks to the re-issue of cheap Penguins, undertaking a great deal of reading recently and I have revisited Marcus Aurelius, Seneca, de Botton and Wilde. The books that are the most influential upon me remain those books that, while profound, maintain their accessibility. Let me illustrate this with an example. For those who do not know what De Profundis means, it is a biblical reference to Psalm 130, appropriated by the ever humble Oscar Wilde as the title of his autobiographical letter to his former lover, from the prison in which he was housed because of that love.

But what it means is “From the depths”. In the original psalm, the first line is:

De profundis clamavi ad te, Domine;
From the depths, I have cried out to you, O Lord;

And in this reading, we see the measure of Wilde’s despair. Having been sentenced to hard labour, and having had his ability to write confiscated, his ability to read curtailed, and his reputation in tatters, he cries out from the depths to his Bosie, Lord Douglas.

De profundis [clamavi ad te, Bosie;]

If you have the context for this, then this immediately prepares you for the letter but, as it is, the number of people who are reading Wilde is shrinking, let alone the number of people who are reading a Latin Bible. Does this title still assist in the framing of the work, through its heavy dependence upon the anguish captured in Psalm 130, or is it time to retitle it “From the depths, I have cried out to you!” to capture both the translation and the sense? The message, the emotion and the hard-earned wisdom contained in the letter are still valuable, but are we hurting the ability of people to discover and enjoy it by continuing to use a form of expression that may harm understanding?

Les Très Riches Heures du duc de Berry, Folio 70r – De Profundis, the Musée Condé, Chantilly. (Another form of expression of this Psalm.)

Now, don’t worry, I’m not planning to rewrite Wilde but this raises a point in terms of the occasionally unhappy union of the language of profundity and the wisdom that it seeks to impart. You will note the irony that I am using a heavily structured, formal English, to write this and that there is very little use of slang here. This is deliberate because I am trying to be precise while still being evocative and, at the same time, illustrating that accurate use of more ornate language can obscure one’s point. (Let me rephrase that. The unnecessary use of long words and complex grammar gets in the way of understanding.)

When Her Majesty the Queen told the Commonwealth of her terrible year, her words were:

“1992 is not a year on which I shall look back with undiluted pleasure. In the words of one of my more sympathetic correspondents, it has turned out to be an Annus Horribilis.”

and I have difficulty thinking of a more complicated way of saying “1992 was a bad year” than to combine a complicated grammatical construction with a Latin term that is not going to be on the lips of the people who are listening to the speech. Let me try: “Looking back on 1992, it has been, in the words of one of my friends, a terrible year.” Same content. Same level of imparted knowledge. Much less getting in the way. (The professional tip here is to never use the letters “a”, “n”, “s” and “u” in one short word unless you are absolutely sure of your audience. “What did she say about… nahhh” is not the response you want from your loyal subjects.) [And there goes the Knighthood.]

I love language. I love reading. I am very lucky that, having had a very broad and classically based education, I can read just about anything and not be intimidated or confused by the language forms – providing that the author is writing in one of the languages that I read, of course! To assume that everyone is like me or, worse, to judge people on their ability because they find long and unfamiliar words confusing, or have never had the opportunity to use these skills before, is to leap towards the same problem outlined in the quote at the top. If we seek to label people unintelligent when they have not yet been exposed to something that is familiar to us, then this is just as bad as lauding someone’s intelligence because you don’t understand what they’re talking about.

If my students need to know something then I have to either ensure that they already do so, by clearly stating my need and being aware of the educational preparation in my locale, or I have to teach it to them in forms that they can understand and that will allow them to succeed. I may love language, classical works and big words, but I am paid to teach the students of 2012 to become the graduates, achievers and academics of the future. I have to understand, respect and incorporate their context, while also meeting the pedagogical and knowledge requirements of the courses that I teach.

No-one said it was going to be easy!


ICER 2012 Day 3 Research Paper Session 5

The last of the research paper sessions and, dear reader, I am sure that you are as glad as I that we are here. Reading about an interesting conference that you didn’t attend is a bit like receiving a message from a friend talking about how he kissed the person that you always loved from afar. Thanks for the information but I would rather have been there myself.

This session opened with “Toward a Validated Computing Attitudes Survey” (Allison Elliott Tew, Brian Dorn and Oliver Schneider), where negative perceptions of the field, hostile classroom environments and people’s belief that they would be no good at CS conspire to prevent students coming into, or selecting, our discipline. The Computing Attitudes Survey was built, with major modification, from the Colorado Learning Attitudes about Science Survey (CLASS, pronounced C-LASS). To adapt the original survey, some material was copied across with a word change (computer science replacing physics), some terminology was changed (algorithm for formula) and some discipline-specific statements were added. Having established an expert opinion basis for the discipline-specific content, students can now see how much they agree with the experts.

There are, as always, contentious issues. “You have to know maths to be able to program” produced a three-way split within the expert group between those who agreed, disagreed or were neutral. What was interesting, and what I’ll be looking at in future, is the evidence of self-defeating thought in many answers (no, not questions: the questions weren’t self-defeating but the answers often were). What was also interesting is that attitudes seem to get worse in the CLASS instrument after you take the course!

Confidence, as simple as “I think I can do this”, plays a fundamental part in determining how students will act. Given the incredibly difficult decisions that a student faces when selecting their degree or concentration, it is no surprise that anyone who thinks “Computing is too hard for me” or “Computing is no use to me” will choose to do something else.

The authors are looking for volunteers where they can run these trials again so, after you’ve read their paper, if you’re interested, you should probably e-mail them.

“A Statewide Survey on Computing Education Pathways and Influences: Factors in Broadening Participation in Computing” (Mark Guzdial, Barbara Ericson, Tom McKlin and Shelly Engelman)

The final research paper in the conference dealt with the final evaluation of the Georgia Computes! initiative, which had run from October 2006 to August of this year. This multi-year project cannot be contained in my nervous babbling but I can talk about the instrument that was presented. Having run summer camps, weekend workshops, competitions, teacher workshops, a teachers’ lending library, first-year engagement and seeded first-year summer camps (whew!), the question was: what had been the impact of Georgia Computes!? What factors influence undergraduate enrolment into intro CS courses?

There were many questions and results presented but I’d like to focus on the top four reasons given, from the survey, as to why students weren’t going to undertake a CS Major or Minor:

  1. I don’t want to do the type of work
  2. Little interest in the subject matter
  3. Don’t enjoy Computing Courses
  4. Don’t have confidence that I would succeed.

Looking at those points after a state-wide and highly successful six-year campaign has finished is very, very sobering for me. What these students are saying is that they cannot see the field as attractive, interesting or enjoyable, or see themselves as capable. But these are all aspects that we can work on, although some of them will require a lot of work.

Two further things that Barb said really struck me. Firstly, if you take into account encouragement and ability, men will tend to be satisfied and continue on if they receive either or both – the factors are not separable for men – but women and minorities need encouragement in order to feel satisfied and to convince them to keep going. Secondly, male professors are just as effective as female professors at giving encouragement to women.

As a male lecturer who is very, very clearly aware of the demographic disgrace that is the under-representation of women in CS, the first fact gives me a partial strategy to increase retention (and reinforces a belief I have held anecdotally for some time), while the second fact gives me the agency to assist in this process, as well as greater hope for a steadily increasing female cohort over time.

Overall, a very positive note on which to finish the session papers!


ICER 2012 Day Research Paper Session 4

This session kicked off with “Ability to ‘Explain in Plain English’ Linked to Proficiency in Computer-based Programming” (Laurie Murphy, Sue Fitzgerald, Raymond Lister and Renee McCauley (presenting)). I had seen a presentation along these lines at SIGCSE and, if you look at the authors list, this is an excellent example of international collaboration. Does the ability to explain code in plain English correlate with the ability to solve programming problems? The correlation appears to be there, whether or not we train students in Explaining in Plain English, but is this causation?

This raises a core question, addressed in the talk: Do we need to learn to read (trace) code before we learn to write code or vice versa? The early reaction of the Leeds group was that reading code didn’t amount to testing whether students could actually write code. Is there some unknown factor that must be achieved before either or both of these? This is a vexing question as it raises the spectre of whether we need to factor in some measure of general intelligence, which has not been used as a moderating factor.

Worse, we now return to that dreadful hypothesis of “programming as an innate characteristic”, where you were either born to program or not. Ray (unsurprisingly) believes that all of the skills in this area (EIPE/programming) are not innate and can be taught. This then raises the question of what the most pedagogically efficient way is to do this!

How Do Students Solve Parsons Programming Problems? — An Analysis of Interaction Traces
Juha Helminen (presenting), Petri Ihantola, Ville Karavirta and Lauri Malmi

This presentation was of particular interest to me because I am currently tearing apart my 1,900 student data corpus to try and determine the point at which students will give up on an activity, in terms of mark benefit, time expended and some other factors. This talk, which looked at how students solved problems, also recorded the steps and efforts that they took in order to try and solve them, which gave me some very interesting insights.

A Parsons problem is one where, given a set of code fragments, a student selects, arranges and composes them into a program in response to a question. Not all of the code fragments present will be required in the final solution. Adding to the difficulty, the fragments require different indentation to assert their execution order as part of block structure. For those whose eyes just glazed over, this means that it’s more than selecting a line to go somewhere: you have to associate it explicitly with other lines as a group. Juha presented a graph-based representation of the students’ traversals of the possible solutions for their Parsons problem. Students could ask for feedback immediately to find out how their programs were working and, unsurprisingly, some opted for a lot of “Am I there yet?” querying. Some students queried feedback as many as 62 times for only 7 features, with very short inter-query intervals – indicative of permutation programming. (Are we there yet? No. Are we there yet? No. Are we there yet? No. Are we there yet? No.)

The primary code pattern of development was linear, with block structures forming the first development stages, but there were a lot of variations. Cycles (returning to the same point) also occurred in the development process but it was hard to tell if this was a deliberate reset pattern or one where permutation programming had accidentally returned the programmer to the same state. (Asking the students why this had occurred would be an interesting survey question.)

There were some good comments from the audience, including the suggestion of correlating good and bad states with good and bad outcomes, using Markov chain analysis to look for patterns. Another improvement suggested was recording the time taken for the first move, to record the impact (possible impact) of cognition on the process. Were students starting from a ‘trial and error’ approach or only after things went wrong?

Tracking Program State: A Key Challenge in Learning to Program
Colleen Lewis (presenting, although you probably could have guessed that)

This paper won the Chairs’ Award for the best paper at the conference and it was easy to see why. Colleen presented a beautifully explored case study of an 11-year-old boy working on a problem in the Scratch programming language and trying to work out why he couldn’t draw a wall of bricks. By capturing Kevin’s actions, in code, and his thoughts, from his spoken comments, we are exposed to the thought processes of a high-achieving young man who cannot fathom why something isn’t working.

I cannot do justice to this talk by writing about something that was primarily visual, but Colleen’s hypothesis was that Kevin’s attention to the state (variables and environmental settings over which the program acts) within the problem is the determining factor in the debugging process. Once Kevin’s attention was focused on the correct problem, he solved it very quickly because the problem was easy to solve. Locating the correct problem required him to work through and determine which part of the state was at fault.

Kevin has a pile of ideas in his head but, as put by diSessa and Sherin (1998), learning is about reliably using the right ideas in the correct context. Which of Kevin’s ideas are being used correctly at any one time? The discussion that followed covered many of the problems that students have with computers, in that many students do not see computers as actually being deterministic. Many students, on encountering a problem, will try exactly the same thing again to see if the error occurs again – this requires a mental model in which we expect a different set of outcomes from the same inputs and process, which is a loose definition of either insanity or nondeterminism. (Possibly both.)

I greatly enjoyed this session but the final exemplar, taking apart a short but incredibly semantically rich sequence and presenting it with a very good eye for detail, made it unsurprising that this paper won the award. Congratulations again, Colleen!


ICER 2012 Day 2 Research Session 3

The session kicked off with “The Abstraction Transition Taxonomy: Developing Desired Learning Outcomes through the Lens of Situated Cognition” (Quintin Cutts (presenting), Sarah Esper, Marlena Fecho, Stephen Foster and Beth Simon) and the initial question: “Do our learning outcomes for programming classes match what we actually do as computational thinkers and programmers?” To answer this question, we looked at Eric Mazur’s Peer Instruction, an analysis of PI questions as applied to a CS Principles pilot course, and then applied the Abstraction Transition Taxonomy (ATT) to published exams, with a wrap-up of observations and ‘where to from here’.

Physicists noticed, some time ago, that their students can plug numbers into equations (turn the handle, so to speak) but couldn’t necessarily demonstrate that they understood things: they couldn’t demonstrate that they thought as physicists should. (The Force Concept Inventory was mentioned here and, if you’re not familiar, it’s a very interesting thing to look up.) To try and get students who thought as physicists, Mazur developed Peer Instruction (PI), which has pre-class prep work and in-class questions, followed by voting, discussion and re-voting, with an instructor leading class-wide discussion. These activities prime the students to engage with the correct explanations – that is, the way that physicists think about and explain problems.

Looking at Computer Science, many CS people use the delivery of a working program as a measure of the correct understanding and appropriate use of programming techniques.

Yet generating a program is no guarantee of understanding, which is sad but true given the existence of the internet, other students and books. We could try to force a situation where students are isolated from these support factors, but this then leads us back to permutation programming, voodoo code and shotgun debugging unless the students actually understand the task and how to solve it using our tools. In other words, unless they think as Computer Scientists.

UCSD had a CS Principles Pilot course that used programming to foster computational thinking that was aimed at acculturation into the CS ‘way’ rather than trying to create programmers. The full PI implementation asked students to reason about their programs, through exploratory homework and a PI classroom, with some limited time traditional labs as well. While this showed a very positive response, the fear was that this may have been an effect of the lecturers themselves so analysis was required!

By analysing the PI questions, a taxonomy was developed that identified abstraction levels and the programming concepts within them. The abstraction levels were “English”, “Computer Science Speak” and “Code”. The taxonomy was extended with the transitions between these levels (turning an English question into code, for example, is a 1-3 transition, if English is abstraction level 1 and Code is level 3; similarly, explaining code in English is a 3-1 transition). Finally, they considered mechanism (how does something work?) and rationale (why did we do it this way?).

Analysing the assignment and assessment questions to determine what was being asked, in terms of abstraction level and transitions, and whether it was mechanism or rationale, revealed that 21% of the in-class multiple choice questions were ‘Why?’ questions but there were actually very few ‘Why?’ questions in the exam. Unsurprisingly, almost every question asked in the PI framework is a ‘Why?’ question, so there should be room for improvement in the corresponding examinations. PI emphasises the culture of the discipline through the ‘Why?’ framing because it requires acculturation and contextualisation to get yourself into the mental space where a Rationale becomes logical.

The next paper, “Subgoal-Labeled Instructional Material Improves Performance and Transfer in Learning to Develop Mobile Applications” by Lauren Margulieux, Mark Guzdial and Richard Catrambone, dealt with mental models and how the cognitive representation of an action affects both the problem state and how well we make predictions. Students have so much to think about – how do they choose?

The problem with just waiting for a student to figure it out is high cognitive load, which I’ve referred to before as helmet fire. If students become overwhelmed they learn nothing, so we can explicitly tell students and/or provide worked examples. If we clearly label the subgoals in a worked example, students remember the subgoals and the transition from one to another. The example given here was an Android App Inventor worked example, one example of which had no labels, the other of which had subgoal labels added as overlay callouts to the movie as the only alteration. The subgoal points were identified by task analysis – so this was a very precise attempt to get students to identify the important steps required to understand and complete the task.

(As an aside, I found this discussion very useful. It’s a bit like telling a student that they need comments and so every line has things like “x=3; //x is set to 3” whereas this structured and deliberate approach to subgoal definition shows students the key steps.)
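To make that contrast concrete, here is a sketch of my own (the task and the labels are invented, not taken from the paper): the first version has the “x=3; //x is set to 3” style of noise, while the second marks only the subgoals identified by task analysis.

```python
# Invented example: the same averaging task, commented two ways.

def average_noisy(scores):
    total = 0                       # set total to 0
    for s in scores:                # loop over scores
        total += s                  # add s to total
    return total / len(scores)      # divide total by length

def average_subgoals(scores):
    # Subgoal 1: accumulate the running total of all scores.
    total = 0
    for s in scores:
        total += s
    # Subgoal 2: convert the total into an average.
    return total / len(scores)

print(average_subgoals([72, 85, 90]))
```

The behaviour is identical; only the second version gives a learner the chunked steps (“accumulate”, then “convert”) to remember and transfer.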

In the first experiment that was run, the students with the subgoals (and recall that this was the ONLY difference in the material) had attempted more, achieved more and done it in less time. A week later, they still got things right more often. In the second experiment, a talk-aloud experiment, the students with the subgoals discussed the subgoals more, tried random solution strategies less and wasted less effort than the other group. This is an interesting point. App Inventor allows you to manipulate blocks of code and the subgoal group were less likely to drag out a useless block to solve the problem. The question, of course, is why. Was it the video? Was it the written aspects? Was it both?

Students appear to be remembering and using the subgoals and, as was presented, if performance is improving, perhaps the exact detail of why it’s happening is something that we wish to pursue but, in the short term, we can still use the approach. However, we do have to be careful with how many labels we use as overloading visual cues can lead to confusion, thwarting any benefit.

The final paper in the session was “Using collaboration to overcome disparities in Java experience”, Colleen Lewis (presenting), Nathaniel Titterton and Michael Clancy. This presented the transformation of a standard course of 3 lectures, 2 hours of lab and 1 discussion hour into a 1 x 1-hour lecture with 2 x 3-hour labs, with the labs now holding the core of the pedagogy. Students are given feedback through targeted tutoring, using on-line multiple-choice questions for the students to give feedback and assist the TAs. Pair programming gives you someone to talk to before you talk to the TA, and the TA can monitor the MCQ responses to see if everyone is struggling with a particular question.

This was addressing a problem in a dual-speed entry course, where some students had AP CS and some didn’t, so the second-year course was either a review for those students who had Java (from AP CS) or brand new. Collaboration and targeted support were aimed at reducing the differences between the cohorts and eliminating disadvantage.

Now, the paper has a lot of detail on the different cohorts, by intake, by gender, by retention pattern, but the upshot is that the introduction of the new program reduced the differences between those students who did and did not have previous Java experience. In other words, whether you started at UCB in CS 1 (with no AP CS) or CS 1.5 (with AP CS), the gap between your cohorts shrank – which is an excellent result. Once this high level of collaboration was introduced, the only factor that retained any significant difference was the first exam, but this effect disappeared throughout the course as students received more exposure to collaboration.

I strongly recommend reading all three of these papers!


The Narrative Hunger: Stories That Meet a Need

I have been involved in on-line communities for over 20 years now and, apparently, people are rarely surprised when they meet me. “Oh, you talk just like you type.” is the effective statement and I’m quite happy with this. While some people adopt completely different personae on-line, for a range of reasons, I seem to be the same. It then comes as little surprise that I am as much of a storyteller in person as I am online. I love facts, revel in truth, but I greatly enjoy putting them together into a narrative that conveys the information in a way that is neither dry nor dull. (This is not to say that the absence of a story guarantees that things must be dry and dull but, without a focus on those elements of narrative that appeal to common human experience, we always risk this outcome.)

One of Katrina’s recent posts referred to the use of story telling in education. As she says, this can be contentious because:

stories can be used to entertain students, to have them enjoy your lectures, but are not necessarily educational.

The shibboleth of questionable educational research is often a vaguely assembled study, supported by the conjecture that the “students loved it”, and it is very easy to see how story telling could fall into this. However, we as humans are fascinated by stories. We understand the common forms even where we have not read Greek drama or “The Hero With a Thousand Faces”. We know when stories ring true and when they fall flat. Searching the mental engines of our species for the sweet spots that resonate across all of us is one way to convey knowledge in a more effective and memorable way. Starting from this focus, we must then observe our due diligence in making sure that our story framework contains a worthy payload.

Not all stories are of the same value.

I love story telling and I try to weave together a narrative in most of my lectures, even down to leaving in sections where deliberate and tangential diversion becomes part of the teaching, to allow me to contrast a point or illuminate it further by stripping it of its formal context and placing it elsewhere. After all, an elephant next to elephants is hardly memorable but an elephant in a green suit, as King of a country, tends to stick in the mind.

The power of the narrative is that it involves the reader or listener in the story. A well-constructed narrative leads the reader to wonder about what is going to happen next and this is model formation. Very few of us read in a way where the story unfolds with us completely distant from it – in fact, maintaining distance from a story is a sign of a poor narrative. When the right story is told, or the right person is telling it, you are on the edge of your seat, hungry to know more. When it is told poorly, then you stifle a yawn and smile politely, discreetly peering at your watch as you attempt to work out the time at which you can escape.

Of course, this highlights the value of narrative for us in teaching but it also reinforces the requirement that it be more than an assemblage of rambling anecdotes: it must be a constructed narration that weaves through points in a recognisable way and gives us the ability to conjecture on its direction. O. Henry endings, the classic twist endings, make no sense unless you have constructed a mental model that can be shaken by the revelations of the last paragraphs. Harry Potter book 7 makes even less sense unless one has a model of the world in which the events of the book can be situated.

As always, this stresses the importance of educational design, where each story, each fact, each activity, is woven into the greater whole with a defined purpose and in full knowledge of how it will be used. There is nothing more distracting than someone who rambles during a lecture about things that not only seem irrelevant, but are irrelevant. A musing on something that, on first glance, appears irrelevant can, by contrast, lead to exploration of the narrative by students. Suddenly, they are within a Choose Your Own Adventure book and trying to work out where each step will take them.

Stories are an excellent way to link knowledge and problems. They excite, engage and educate, when used correctly. We are all hungry for stories: we are players within our own stories, observers of those of the people around us and, eventually, will form part of the greater narrative by the deeds for which we are written up in the records to come. It makes sense to use this deep and very human aspect of our intellect to try and assist with the transfer of knowledge.


Our Influence: Prejudice As Predictor

If you want to see Raymond Lister get upset, tell him that students fall into two categories: those who can program and those who can’t. If you’ve been reading much (anything) of what I’ve been writing recently, you’ll realise that I’ve been talking about things like cognitive development, self-regulation and dependence on authority, all of which have one thing in common in that students can be at different stages when they reach us. There is no guarantee that students will be self-reliant, cognitively mature and completely capable of making reasoned decisions at the most independent level.

There was a question raised several times during the conference and it’s the antithesis of the infamous “double hump conjecture”, that students divide into two groups naturally and irrevocably because of some innate characteristic. The question is “Do our students demonstrate their proficiency because of what we do or in spite of what we do?” If the innate characteristic conjecture is correct, and this is a frequently raised folk pedagogy, then our role has no real bearing on whether a student will learn to program or not.

If we accept that students come to us at different stages in their development, and that these development stages will completely influence their ability to learn and form mental models, then the innate characteristic hypothesis withers and dies almost immediately. A student who does not have their abilities ready to display can no more demonstrate their ability to program than a three-year old child can write Shakespeare – they are not yet ready to be able to learn, assemble, reassemble or demonstrate the requisite concepts and related skills.

However, a prejudicial perspective that students who cannot demonstrate the requisite ability are innately and permanently lacking that skill will, unpleasantly, viciously and unnecessarily, cause that particular future to lock in. Of course a derisive attitude to these ‘stupid’ or ‘slow’ students will make them withdraw or undermine their confidence! As I will note from the conference, confidence and support have a crucial impact on students. Undermining a student’s confidence is worse than not teaching them at all. Walking in with the mental model that separates the world into programmers and non-programmers forces that model into being.

Since I’ve entered the area of educational research, I’ve been exposed to things that I can separate into the following categories:

  • Fascinating knowledge and new views of the world, based on solid research and valid experience.
  • Nonsense
  • Damned nonsense
  • Rank stupidity

Most of the latter come from other educators who react, out of fear or ignorance, to the lessons from educational research with disbelief, derision and resentment. “I don’t care what you say, or what that paper says, you’re wrong” says the voice of “experience”.

There is no doubt that genuine and thoughtful experience is, has been, and will always be a strong and necessary sibling to the educational and psychological theory that is the foundation of educational research. However, shallow experience can often be built up into something that it is not, when it is combined with fallacious thinking, cherry picking, confirmation bias and any other permutation of fear, resentment and inertia. The influence of folk pedagogies, lessons claimed from tea room mutterings and the projection of a comfortable non-reality that mysteriously never requires the proponent to ever expend any additional effort or change what they do, is a malign shadow over the illumination of good learning and teaching practice.

The best educators explain their successes with solid theory, strive to find a solution to the problems that lead to failure, and listen to all sources in order to construct a better practice and experience for their students. I hope, one day, to achieve this level, but I do know that doubting everything new is not the path forward for me.

I am pleased to say that the knowledge and joy of this (to me) new field far outstrips most of the other things that I have seen but I cannot stress enough how important it is that we choose our perspectives carefully. We, as educators, have disproportionately high influence: large shadows and big feet. Reading further into this discipline illustrates that we must very carefully consider the way that we think, the way that our students think and the capability for reasoning and knowledge accumulation that our students actually have, before we make any rash or prejudicial statements about the innate capabilities of that most mythical of entities: the standard student.


ICER 2012 Research Paper Session 1

It would not be over-stating the situation to say that every paper presented at ICER led to some interesting discussion and, in some cases, some more… directed discussion than others. This session started off with a paper entitled “Threshold Concepts and Threshold Skills in Computing” (Kate Sanders, Jonas Boustedt, Anna Eckerdal, Robert McCartney, Jan Erik Moström, Lynda Thomas and Carol Zander), on whether threshold skills, as distinct from threshold concepts, existed and, if they did, what their characteristics would be. Threshold skills were described as transformative, integrative, troublesome, semi-irreversible (in that they’re never really lost) and requiring practice to keep current. The discussion that followed raised a lot of questions, including whether you could learn a skill by talking about it or asking someone – skill transfer questions versus environment. The consensus, as I judged it from the discussion, was that threshold skills didn’t follow from threshold concepts but there was a very rapid and high-level discussion that I didn’t quite follow, so any of the participants should feel free to leap in here!

The next talk was “On the reliability of Classifying Programming Tasks Using a Neo-Piagetian Theory of Cognitive Development” (Richard Gluga, Raymond Lister, Judy Kay, Sabina Kleitman and Donna Teague), where Ray raised and extended a number of the points that he had originally shared with us in the workshop on Sunday. Ray described the talk as being a bit “Neo-Piagetian theory for dummies” (for which I am eternally grateful) and was seeking to address the question as to where students are actually operating when we ask them to undertake tasks that require a reasonable to high level of intellectual development.

Ray raised the three bad programming habits he’d discussed earlier:

  1. Permutation programming (where students just try small things randomly and iteratively in the hope that they will finally get the right solution – this is incredibly troublesome if the many small changes take you further away from the solution)
  2. Shotgun debugging (where a bug causes the student to put things in with no systematic approach and potentially fixing things by accident)
  3. Voodoo coding/Cargo cult coding (where code is added by ritual rather than by understanding)

These approaches show one very important thing: the student doesn’t understand what they’re doing. Why is this? Using a Neo-Piagetian framework we consider the student as moving through the same cognitive development stages that they did as a child (Piagetian) but that this transitional approach applies to new and significant knowledge frameworks, such as learning to program. Until they reach the concrete operational stage of their development, they will be applying poor or inconsistent models – logically inadequate models to use the terminology of the area (assuming that they’ve reached the pre-operational stage). Once a student has made the next step in their development, they will reach the concrete operational stage, characterised (among other things, but these were the ones that Ray mentioned) by:

  1. Transitivity: being able to recognise how things are organised if you can impose an order upon them.
  2. Reversibility: that we can reverse changes that we can impose.
  3. Conservation: realising that the numbers of things stay the same no matter how we organise them.

In coding terms, these can be interpreted in several ways, but the conservation idea is crucial to programming because understanding it frees the student from having to write the same code for the same algorithm every time. Grasping that conservation exists, and understanding it, means that you can alter the code without changing the algorithm that it implements – while achieving some other desirable result such as speeding the code up or moving to a different paradigm.
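A small sketch of what conservation looks like in code (my own example, not one from the paper): three surface forms of the same left-to-right accumulation algorithm. A student who has grasped conservation sees one algorithm here; a pre-operational student may see three unrelated pieces of code.

```python
# Same summation algorithm, three surface forms. Only the notation changes.
from functools import reduce

def total_while(xs):
    # Explicit index-driven loop.
    i, acc = 0, 0
    while i < len(xs):
        acc += xs[i]
        i += 1
    return acc

def total_for(xs):
    # Iterator-driven loop: same accumulation, no explicit index.
    acc = 0
    for x in xs:
        acc += x
    return acc

def total_reduce(xs):
    # Functional form: the accumulation step named as a function.
    return reduce(lambda acc, x: acc + x, xs, 0)

data = [3, 1, 4, 1, 5]
assert total_while(data) == total_for(data) == total_reduce(data) == 14
```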

Ray’s paper discussed the fact that a vast number of our students are still pre-operational for most of first and second year, which changes the way that we should actually try to teach coding. If a student can’t understand what we’re talking about or has to resort to magical thinking to solve a problem, then we’ve not really achieved our goals. If we do start classifying the programming tasks that we ask students to achieve by the developmental stages that we’re expecting, we may be able to match task to ability, making everyone happy(er).

The final paper in the session was “Social Sensitivity Correlations with the Effectiveness of Team Process Performance: An Empirical Study” (Luisa Bender (presenting), Gursimran Walia, Krishna Kambhampaty, Travis Nygard and Kendall Nygard), which discussed the impact of socially sensitive team members in programming teams. (Social sensitivity is the ability to correctly understand the feelings and the viewpoints of other people.)

The “soft skills” are essential to the teamwork process and a successful team enhances learning outcomes. Bad teams hinder team formation and progress, and things go downhill from there. In Woolley et al.’s study of nearly 700 participants, the collective intelligence of a team stemmed from how well the team worked together rather than from the individual intelligence of the participants. The groups whose members were more socially sensitive had a higher group intelligence.

Just to emphasise that point: a team of smart people may not be as effective as a team of people who can understand the feelings and perspectives of each other. (This may explain a lot!)

Social sensitivity is a good predictor of team performance and the effectiveness of team-oriented processes, as well as the satisfaction of the team members. However, it is also apparent that we in Science, Technology, Engineering and Mathematics (STEM) have lower social sensitivity readings (supporting Baron-Cohen’s assertion – no, not that one) than some other areas. Future work in this area is looking at the impact of a single high or low socially sensitive person in a group, a study that will be of great interest to anyone who is running teams made up of randomly assigned students. How can we construct these groups for the best results for the students?


ICER 2012 Day 1 Keynote: How Are We Thinking?

We started off today with a keynote address from Ed Meyer, from University of Queensland, on the Threshold Concepts Framework (Also Pedagogy, and Student Learning). I am, regrettably, not as conversant with threshold concepts as I should be, so I’ll try not to embarrass myself too badly. Threshold concepts are central to the mastery of a given subject and are characterised by some key features (Meyer and Land):

  1. Grasping a threshold concept is transformative because it changes the way that we think about something. These concepts become part of who we are.
  2. Once you’ve learned the concept, you are very unlikely to forget it – it is irreversible.
  3. This new concept allows you to make new connections and allows you to link together things that you previously didn’t realise were linked.
  4. This new concept has boundaries: it has an area over which it applies. You need to be able to question within that area to work out where it applies. (Ultimately, this may identify the borders between schools of thought in an area.)
  5. Threshold concepts are ‘troublesome knowledge’. This knowledge can be counter-intuitive, even alien and will make no sense to people until they grasp the new concept. This is one of the key problems with discussing these concepts with people – they will wish to apply their intuitive understanding and fighting this tendency may take some considerable effort.

Meyer then discussed how we see with new eyes after we integrate these concepts. It can be argued that concepts such as these give us a new way of seeing that, because of inter-individual differences, students will experience in varying degrees as transformative, integrative, and (look out) provocative and troublesome. For this final one, a student experiences this in many ways: the world doesn’t work as I think it should! I feel lost! Helpless! Angry! Why are you doing this to me?

How do you introduce a student to one of these troublesome concepts and, more importantly, how can you describe what you are going to talk about when the concept itself is alien: what do you put in the course description given that you know that the student is not yet ready to assimilate the concept?

Meyer raised a really good point: how do we get someone to think inside the discipline? Do they understand the concept? Yes. Does this mean that they think along the right lines? Maybe, maybe not. If I don’t think like a Computer Scientist, I may not understand why a CS person sees a certain issue as a problem. We have plenty of evidence that people who haven’t dealt with the threshold concepts in CS Education find it alien to contemplate that the lecture is not the be-all and end-all of teaching – their resistance and reliance upon folk pedagogies is evidence of this wrestling with troublesome knowledge.

A great deal to think about from this talk, especially in dealing with key aspects of CS Ed as the threshold concept that is causing many of our non-educational research oriented colleagues so much trouble, as well as our students.