When Does Failing Turn You Into a Failure?

The threat of failure is very different from the threat of being a failure. At the Creative Innovations conference I was just at, one of the strongest messages was that we learn more from failure than we do from success, and that failure is inevitable if you are actually trying to be innovative. If you learn from your failures, and your failure is the genuine result of something that didn’t work rather than of sitting around watching it burn, then it is just something that happens (that was the message from CI), and any other culture makes us overly cautious and risk-averse. As most of us know, however, we are more strongly encouraged to cover up our failures than to celebrate them – and, in certain circumstances, we are frequently better off not trying than failing.

At the recent Adelaide Conventicle, which I promise to write up very, very soon, Dr Raymond Lister presented an excellent talk on applying Neo-Piagetian concepts and framing to the challenges students face in learning programming. This is a great talk (which I’ve had the good fortune to see twice, and it’s a mark of the work that I enjoyed it as much the second time) because it allows us to talk about failure to comprehend, or failure to put into practice, in terms of a lack of the underlying mechanism required to comprehend – at this point in the student’s development. As part of the steps of development, we would expect students to have these head-scratching moments where they are currently incapable of making any progress; framing this within developmental stages allows us to talk about moving students to the next stage, getting them out of the current failure mode and into something where they will achieve more. Once again, failure in this case is inevitable for most people until we and they manage to reach the level of conceptual understanding on which we can build and develop. More importantly, if we track how they fail, then we start to get an insight into which developmental stage they’re at.

One thing that struck me with Raymond’s talk was that he starts off talking about “what ruined Raymond”, discussing the dire outcomes promised to him if he watched too much television, as they were promised to me for playing too many games, and as they are to our children for whatever high-tech diversion is the current ‘finger wagging’ harbinger of doom. In this case, ruination is quite clearly the threat of becoming a failure. However, this puts us in a strange position, because if failure is almost inevitable, but highly valuable if managed properly and understood, what is it about being a failure that is so terrible? It’s like threatening someone that they’ll become too enthusiastic and unrestrained in their innovation!

I am, quelle surprise, playing with words here, because to be a failure is to be classed as someone for whom success is no longer an option. If we were being precise, we would class someone as a perpetual failure or, more simply, unsuccessful. This is, usually, the point at which it is acceptable to give up on someone – after all, goes the reasoning, we’re just throwing good money after bad, wasting our time, possibly even rearranging the deck chairs on the Titanic, and all those other expressions that allow us to draw that good old categorical line between us and others, and to put our failures into the “Hey, I was trying something new” basket and their failures into the “Well, he’s just so dumb he’d try something like that.” The only problem with this is that I’m really not sure that a lifetime of failure is a guaranteed predictor of future failure. Likely? Yeah, probably. So likely we can gamble someone’s life on it? No, I don’t believe so.

When I was failing courses in my first degree, it took me a surprisingly long time to work out how to fix it, most of which was down to the fact that (a) I had no idea how to study and (b) no-one around me was remotely interested in the fact that I was failing. I was well on my way to becoming a perpetual failure, someone who had no chance of holding down a job let alone having a career, and it was a kind and fortuitous intervention that helped me. Now, with a degree of experience and knowledge, I can look back at my own patterns and see pretty much what was wrong with me – although, boy, would I have been a difficult cuss to work with. However, failing, which I have done since then and will (no doubt) do again, does not appear to have turned me into a failure. I have more failings than I care to count, but my wife still loves me, my friends are happy to be seen with me and no-one sticks threats on my door at work, so these are obviously in the manageable range. However, managing failure has been a challenging thing for me and I was pondering this recently – how people deal with being told that they’re wrong is very important to how they deal with failing to achieve something.

I’m reading a rather interesting, challenging and confronting article on, and I cannot believe there’s a phrase for this, rage murders in American schools and workplaces; it is an interview with Mark Ames, the author of “Going Postal” (2005), and it claims that these horrifying acts are, effectively, failed revolts. Ames seems to believe that everything stems from Ronald Reagan (and I offer no opinion either way, I hasten to add), but he identifies repeated humiliation, bullying and inhumane conditions as taking ordinary people, who would not usually have committed such actions, and turning them into monstrous killing machines. Ames’ thesis is that this is not the rise of psychopathy but a rebellion against the breaking of spirits and the metaphorical enslavement of much of the working and middle class that leads to such a dire outcome. If the dominant fable of life is that success is all, failure is bad, and you are entitled to success, then it should be, as Ames says in the article, exactly those people who are most invested in these cultural fables who are the most likely to break when the lies become untenable. In the language that I used earlier, this is the most awful way to handle the failure of the fabric of your world: a cold and rational journey that looks like madness but is far worse for being a premeditated attempt to destroy the things that lied to you. However, this is only one type of person who commits these acts. The Monash University gunman, for example, was obviously delusional and, while he carried out a rational set of steps to eliminate his main rival, his thinking as to why this needed to happen makes very little sense. The truth is, as always, difficult and muddy, and my first impression is that Ames may be oversimplifying in order to advance a relatively narrow and politicised view. But his language strikes me: the notion of “repeated humiliation, bullying and inhumane conditions” appears to be a common thread among the older, workplace-focused, and otherwise apparently sane humans who carry out such terrible acts.

One of the complaints made against the radio network at the heart of the recent Royal Hoax, 2DayFM, is that they are serial humiliators of human beings and show no regard for the general well-being of the people involved in their pranks – humiliation, inhumanity and bullying. Sound familiar? Here I am, as an educator, knowing that failure is going to happen for my students and working out how to bring them up into success and achievement when, on one hand, I have a possible set of triggers where beating down people leads to apparent madness, and, on the other, at least part of our entertainment culture appears to delight in finding the lowest bar and crawling through the filth underneath it. Is telling someone that they’re a failure, and rubbing it in for public enjoyment, of any vague benefit to anyone, or is it really, as I firmly believe, the best way to start someone down a genuinely dark path to ruination and resentment?

Returning to my point at the start of this (rather long) piece, I have met Raymond several times and he doesn’t appear even vaguely ruined to me, despite all of the radio, television and Neo-Piagetian contextual framing he employs. The message from Raymond and CI paints failure as something to be monitored and something that is often just a part of life – a stepping stone to future success – but this is most definitely not the message that generally comes down from our society and, for some people, it’s becoming increasingly obvious that their inability to handle the crushing burden of permanent classification as a failure is something that can have catastrophic results. I think we need to get better at genuinely accepting failure as part of trying, and to really, seriously, try to lose the classification of people as failures just because they haven’t yet succeeded at some arbitrary thing that we’ve defined to be important.


ICER 2012 Day 3 Research Paper Session 5

The last of the research paper sessions and, dear reader, I am sure that you are as glad as I that we are here. Reading about an interesting conference that you didn’t attend is a bit like receiving a message from a friend talking about how he kissed the person that you always loved from afar. Thanks for the information but I would rather have been there myself.

This session opened with “Toward a Validated Computing Attitudes Survey” (Allison Elliott Tew, Brian Dorn and Oliver Schneider), where negative perceptions of the field and hostile classroom environments, combined with people thinking that they would be no good at CS, conspire to prevent students coming into, or selecting, our discipline. The Computing Attitudes Survey was built, with major modification, from the Colorado Learning Attitudes about Science Survey (CLASS, pronounced C-LASS). To adapt the original survey, some material was just copied across with a word change (computer science replacing physics), some terminology was changed (algorithm for formula) and some discipline-specific statements were added. Having established an expert opinion basis for the discipline-specific content, students can now see how much they agree with the experts.
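
If you’re curious how “agreement with the experts” might be scored, here is a minimal sketch of the idea – the statements, the agree/disagree coding and the scoring rule are all my own invention for illustration, not the actual Computing Attitudes Survey:

```python
# A hypothetical expert-agreement scorer (not the authors' instrument).
# Each statement has an expert consensus; a student's score is the
# fraction of statements on which they match that consensus.

expert_consensus = {
    "Tracing my code helps me find errors": "agree",
    "If I can't solve a problem quickly, I never will": "disagree",
}

def agreement_score(student_responses, consensus):
    """Fraction of shared statements where the student matches the experts."""
    shared = [s for s in student_responses if s in consensus]
    if not shared:
        return 0.0
    matches = sum(1 for s in shared if student_responses[s] == consensus[s])
    return matches / len(shared)

student = {
    "Tracing my code helps me find errors": "agree",
    "If I can't solve a problem quickly, I never will": "agree",  # self-defeating
}
print(agreement_score(student, expert_consensus))  # 0.5
```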

There is, as always, the risk of contentious issues. “You have to know maths to be able to program” produced a three-way split within the expert group between those who agreed, disagreed or were neutral. What was interesting, and what I’ll be looking at in future, is the evidence of self-defeating thought in many answers (no, not questions: the questions weren’t self-defeatist, but the answers often were). What was also interesting is that attitudes seem to get worse in the CLASS instrument after you take the course!

Confidence, as simple as “I think I can do this”, plays a fundamental part in determining how students will act. Given the incredibly difficult decisions that a student faces when selecting their degree or concentration, it is no surprise that anyone who thinks “Computing is too hard for me” or “Computing is no use to me” will choose to do something else.

The authors are looking for volunteers with whom they can run these trials again so, after you’ve read their paper, if you’re interested, you should probably e-mail them.

“A Statewide Survey on Computing Education Pathways and Influences: Factors in Broadening Participation in Computing” (Mark Guzdial, Barbara Ericson, Tom McKlin and Shelly Engelman)

The final research paper in the conference dealt with the final evaluation of the Georgia Computes! initiative, which ran from October 2006 to August of this year. This multi-year project cannot be contained in my nervous babbling, but I can talk about the instrument that was presented. Having run summer camps, weekend workshops, competitions, teacher workshops, a teachers’ lending library, first-year engagement and seeded first-year summer camps (whew!), the question was: what had been the impact of Georgia Computes!? What factors influence undergraduate enrolment into intro CS courses?

There were many questions and results presented but I’d like to focus on the top four reasons given, from survey, as to why students weren’t going to undertake a CS Major or Minor:

  1. I don’t want to do the type of work
  2. Little interest in the subject matter
  3. Don’t enjoy Computing Courses
  4. Don’t have confidence that I would succeed.

Looking at those points, after a state-wide and highly successful six-year campaign has finished, is very, very sobering for me. What these students are saying is that they cannot see the field as attractive, interesting or enjoyable, or see themselves as capable. But these are all aspects that we can work on, although some of them will require a lot of work.

Two further things that Barb said really struck me. Firstly, if you take encouragement and ability into account, men will tend to be satisfied and continue on if they receive either or both – the factors are not separable for men – but women and minorities need encouragement in order to feel satisfied and to convince them to keep going. Secondly, male professors are just as effective as female professors at giving encouragement to women.

As a male lecturer who is very, very clearly aware of the demographic disgrace that is the under-representation of women in CS, the first fact gives me a partial strategy to increase retention (and reinforces a belief I have held anecdotally for some time), while the second gives me the agency to assist in this process, as well as greater hope for a steadily increasing female cohort over time.

Overall, a very positive note on which to finish the session papers!


ICER 2012 Day 3 Research Paper Session 4

This session kicked off with “Ability to ‘Explain in Plain English’ Linked to Proficiency in Computer-based Programming” (Laurie Murphy, Sue Fitzgerald, Raymond Lister and Renee McCauley (presenting)). I had seen a presentation along these lines at SIGCSE and, if you look at the authors list, this is an excellent example of international collaboration. Does the ability to explain code in plain English correlate with the ability to solve programming problems? The correlation appears to be there, whether or not we train students in Explaining in Plain English, but is this causation?

This raises a core question, addressed in the talk: Do we need to learn to read (trace) code before we learn to write code or vice versa? The early reaction of the Leeds group was that reading code didn’t amount to testing whether students could actually write code. Is there some unknown factor that must be achieved before either or both of these? This is a vexing question as it raises the spectre of whether we need to factor in some measure of general intelligence, which has not been used as a moderating factor.

Worse, we now return to that dreadful hypothesis of programming as an innate characteristic, where you were either born to program or not. Ray (unsurprisingly) believes that all of the skills in this area (EIPE/programming) are not innate and can be taught. This then raises the question of what the most pedagogically efficient way is to do this!

How Do Students Solve Parsons Programming Problems? — An Analysis of Interaction Traces
Juha Helminen (presenting), Petri Ihantola, Ville Karavirta and Lauri Malmi

This presentation was of particular interest to me because I am currently tearing apart my 1,900-student data corpus to try and determine the point at which students will give up on an activity, in terms of mark benefit, time expended and some other factors. This talk, which looked at how students solved problems, also recorded the steps and efforts that they took in trying to solve them, which gave me some very interesting insights.

A Parsons problem is one where, given a set of code fragments, a student selects, arranges and composes a program in response to a question. Not all of the fragments presented will be required in the final solution. Adding to the difficulty, the fragments require different indentation to assert their execution order as part of the block structure. For those whose eyes just glazed over, this means that it’s more than selecting a line to go somewhere: you have to associate it explicitly with other lines as a group. Juha presented a graph-based representation of the students’ traversals of the possible solutions for their Parsons problem. Students could ask for feedback immediately to find out how their programs were working and, unsurprisingly, some opted for a lot of “Am I there yet?” querying. Some students queried feedback as many as 62 times for only 7 features, with very short inter-query intervals – indicative of permutation programming. (Are we there yet? No. Are we there yet? No. Are we there yet? No. Are we there yet? No.)
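
For the concrete-minded, here is what a toy Parsons problem might look like – my own example, not one from the talk – in Python, where the indentation the student chooses carries the block structure:

```python
# A hypothetical Parsons problem. Students receive shuffled fragments,
# including a distractor, and must select, order and indent them.
#
# Fragments provided (one is not needed):
#   total = 0
#   for x in numbers:
#   total += x          <- must be indented under the for loop
#   total = total * 2   <- distractor: not part of any correct solution
#   print(total)
#
# One correct assembly, where indentation asserts execution order:
numbers = [3, 1, 4]
total = 0
for x in numbers:
    total += x
print(total)  # 8
```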

The primary pattern of code development was linear, with block structures forming the first development stages, but there were a lot of variations. Cycles (returning to the same point) also occurred, but it was hard to tell whether a cycle was a deliberate reset pattern or one where permutation programming had accidentally returned the programmer to the same state. (Asking the students WHY this had occurred would be an interesting survey question.)

There were some good comments from the audience, including the suggestion of correlating good and bad states with good and bad outcomes, using Markov chain analysis to look for patterns. Another suggested improvement was recording the time taken for the first move, to capture the possible impact of cognition on the process: were students starting with a ‘trial and error’ approach, or only falling back on it after things went wrong?
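
For the curious, here is roughly what that Markov chain suggestion might look like in practice – a minimal sketch, assuming each snapshot of a student’s solution is treated as a state; the traces, state names and numbers below are invented for illustration:

```python
from collections import Counter, defaultdict

def transition_probabilities(traces):
    """Estimate P(next state | current state) from observed state sequences."""
    counts = defaultdict(Counter)
    for trace in traces:
        for current, nxt in zip(trace, trace[1:]):
            counts[current][nxt] += 1
    return {
        state: {nxt: n / sum(nexts.values()) for nxt, n in nexts.items()}
        for state, nexts in counts.items()
    }

# Hypothetical traces: "S0" is the empty canvas, "GOAL" the correct solution.
traces = [
    ["S0", "S1", "S2", "GOAL"],
    ["S0", "S1", "S1", "S2", "S1", "S2", "GOAL"],  # note the S2 -> S1 cycle
]
probs = transition_probabilities(traces)
print(probs["S2"])  # e.g. {'GOAL': 0.67, 'S1': 0.33}
```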

Tracking Program State: A Key Challenge in Learning to Program
Colleen Lewis (presenting, although you probably could have guessed that)

This paper won the Chairs’ Award for the best paper at the conference, and it was easy to see why. Colleen presented a beautifully explored case study of an 11-year-old boy, Kevin, working on a problem in the Scratch programming language and trying to work out why he couldn’t draw a wall of bricks. By capturing Kevin’s actions (in code) and his thoughts (from his spoken comments), we are exposed to the thought processes of a high-achieving young man who cannot fathom why something isn’t working.

I cannot do justice to this talk by writing about something that was primarily visual, but Colleen’s hypothesis was that Kevin’s attention to the state (variables and environmental settings over which the program acts) within the problem is the determining factor in the debugging process. Once Kevin’s attention was focused on the correct problem, he solved it very quickly because the problem was easy to solve. Locating the correct problem required him to work through and determine which part of the state was at fault.

Kevin has a pile of ideas in his head but, as put by diSessa and Sherin (1998), learning is about reliably using the right ideas in the correct context. Which of Kevin’s ideas are being used correctly at any one time? The discussion that followed covered a lot of the problems that students have with computers, in that many students do not see computers as actually being deterministic. Many students, on encountering a problem, will try exactly the same thing again to see if the error occurs again – this requires a mental model in which we expect a different set of outcomes from the same inputs and process, which is a loose definition of either insanity or nondeterminism. (Possibly both.)

I greatly enjoyed this session but the final exemplar, taking apart a short but incredibly semantically rich sequence and presenting it with a very good eye for detail, made it unsurprising that this paper won the award. Congratulations again, Colleen!


ICER 2012 Day 2 Discussion Papers Session 2

This is a brief note on this session, as these papers are presented to the community with the intention of sparking discussion and, in this case, one of the most interesting issues that arose was the citation of the first of a pair of papers, where the first paper asserted a finding and the second retracted it. This is not to say that the papers presented themselves weren’t interesting (far from it), but you can read about them in the proceedings, and this particular session raised yet another reason to come to ICER: this is where a lot of the authors are.

In this case, one of the authors of the retraction paper very politely identified himself and then pointed out that the paper the presenting authors were referring to had been followed up by a paper illustrating some of the problems in the original work. (I am trying quite hard to avoid potentially embarrassing anyone, so please excuse how circumspect I am being.)

The reception of the actual discussion paper was, unsurprisingly, framed by the revelation that a supporting paper had been undermined, and the questions revolved around issues with metrics and how the authors had addressed possible Hawthorne effect issues.

But this is exactly what these kinds of paper sessions are for. This is a place to present ideas to the community, and now the authors can go back, rework their approach on firmer ground and come back with something stronger. Yes, there is no doubt that they would much rather not have built upon that paper, but imagine how much worse it would have been had this made it (undetected) to the journal stage!



ICER 2012 Day 2 Research Session 3

The session kicked off with “The Abstraction Transition Taxonomy: Developing Desired Learning Outcomes through the Lens of Situated Cognition” (Quintin Cutts (presenting), Sarah Esper, Marlena Fecho, Stephen Foster and Beth Simon) and the initial question: “Do our learning outcomes for programming classes match what we actually do as computational thinkers and programmers?” To answer this question, the talk looked at Eric Mazur’s Peer Instruction (PI), an analysis of PI questions as applied to a CS Principles pilot course, and then applied the Abstraction Transition Taxonomy (ATT) to published exams, with a wrap-up of observations and ‘where to from here’.

Physicists noticed, some time ago, that their students could plug numbers into equations (turn the handle, so to speak) but couldn’t necessarily demonstrate that they understood things: they couldn’t demonstrate that they thought as physicists should. (The Force Concept Inventory was mentioned here and, if you’re not familiar with it, it’s a very interesting thing to look up.) To try to get students who thought as physicists, Mazur developed Peer Instruction (PI), which has pre-class prep work and in-class questions, followed by voting, discussion and re-voting, with an instructor leading class-wide discussion. These activities prime the students to engage with the correct explanations – that is, the way that physicists think about and explain problems.

Looking at Computer Science, many CS people use the delivery of a working program as a measure of the correct understanding and appropriate use of programming techniques.

Generating a program is, however, no guarantee of understanding – sad but true, given the existence of the internet, other students and books. We could try to force a situation where students are isolated from these support factors, but this then leads us back to permutation programming, voodoo code and shotgun debugging unless the students actually understand the task and how to solve it using our tools. In other words, unless they think as Computer Scientists.

UCSD had a CS Principles pilot course that used programming to foster computational thinking, aimed at acculturation into the CS ‘way’ rather than trying to create programmers. The full PI implementation asked students to reason about their programs, through exploratory homework and a PI classroom, with some limited traditional lab time as well. While this showed a very positive response, the fear was that this may have been an effect of the lecturers themselves, so analysis was required!

By analysing the PI questions, a taxonomy was developed that identified abstraction levels and the programming concepts within them. The abstraction levels were “English”, “Computer Science Speak” and “Code”. The taxonomy was extended with the transitions between these levels (turning an English question into code, for example, is a 1-3 transition, if English is abstraction level 1 and Code is level 3; similarly, explaining code in English is 3-1). Finally, they considered mechanism (how does something work?) and rationale (why did we do it this way?).
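
To make that concrete, here is a small sketch of how ATT-style classifications might be tallied over a set of questions – my own encoding for illustration, not the authors’ tooling; the questions and tags are invented:

```python
from collections import Counter

# Levels: 1 = English, 2 = CS Speak, 3 = Code. Each question gets a
# transition (from-level, to-level) and a kind: mechanism ("how") or
# rationale ("why").
questions = [
    {"text": "Write code for this English spec", "transition": (1, 3), "kind": "how"},
    {"text": "Explain what this code does in English", "transition": (3, 1), "kind": "how"},
    {"text": "Why is a loop the right choice here?", "transition": (3, 1), "kind": "why"},
]

kinds = Counter(q["kind"] for q in questions)
total = len(questions)
print(f"'Why?' questions: {100 * kinds['why'] / total:.0f}% of {total}")
# 'Why?' questions: 33% of 3
```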

Analysing the assignment and assessment questions to determine what was being asked, in terms of abstraction level and transitions, and whether it was mechanism or rationale, revealed that 21% of the in-class multiple choice questions were ‘Why?’ questions but there were actually very few ‘Why?’ questions in the exam. Unsurprisingly, almost every question asked in the PI framework is a ‘Why?’ question, so there should be room for improvement in the corresponding examinations. PI emphasises the culture of the discipline through the ‘Why?’ framing because it requires acculturation and contextualisation to get yourself into the mental space where a rationale becomes logical.

The next paper, “Subgoal-Labeled Instructional Material Improves Performance and Transfer in Learning to Develop Mobile Applications” (Lauren Margulieux, Mark Guzdial and Richard Catrambone), dealt with mental models and how the cognitive representation of an action will affect both the problem state and how well we make predictions. Students have so much to think about – how do they choose?

The problem with just waiting for a student to figure it out is high cognitive load, which I’ve referred to before as helmet fire. If students become overwhelmed they learn nothing, so we can explicitly tell students and/or provide worked examples. If we clearly label the subgoals in a worked example, students remember the subgoals and the transition from one to another. The example given here was a pair of Android App Inventor worked-example videos: one had no labels, while the other had subgoal labels added as overlay callouts to the movie as the only alteration. The subgoal points were identified by task analysis, so this was a very precise attempt to get students to identify the important steps required to understand and complete the task.

(As an aside, I found this discussion very useful. It’s a bit like telling a student that they need comments, and so every line acquires things like “x=3; //x is set to 3”, whereas this structured and deliberate approach to subgoal definition shows students the key steps.)
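
To illustrate that contrast – my own toy example in Python, not material from the study – compare restating every line with naming the subgoals:

```python
# Line-by-line comments restate each statement without adding structure:
x = 3        # x is set to 3
y = x * 2    # y is set to x times 2

# Subgoal labels instead name the steps the worked example is teaching.
# Hypothetical task: report the average of a set of scores.

# Subgoal 1: get the raw data into a usable structure.
raw = ["12.0", "15.5", "9.5"]          # stand-in for lines read from a file
scores = [float(line) for line in raw]

# Subgoal 2: compute the summary statistic.
average = sum(scores) / len(scores)

# Subgoal 3: present the result to the user.
print(f"Average score: {average:.1f}")  # Average score: 12.3
```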

In the first experiment that was run, the students with the subgoals (and recall that this was the ONLY difference in the material) had attempted more, achieved more and done it in less time. A week later, they still got things right more often. In the second experiment, a talk-aloud experiment, the students with the subgoals discussed the subgoals more, tried random solution strategies less and wasted less effort than the other group. This is an interesting point. App Inventor allows you to manipulate blocks of code and the subgoal group were less likely to drag out a useless block to solve the problem. The question, of course, is why. Was it the video? Was it the written aspects? Was it both?

Students appear to be remembering and using the subgoals and, as was presented, if performance is improving, perhaps the exact detail of why it’s happening is something that we wish to pursue but, in the short term, we can still use the approach. However, we do have to be careful with how many labels we use as overloading visual cues can lead to confusion, thwarting any benefit.

The final paper in the session was “Using collaboration to overcome disparities in Java experience” (Colleen Lewis (presenting), Nathaniel Titterton and Michael Clancy). This presented the transformation of a standard course of 3 lectures, 2 hours of lab and 1 discussion hour into a 1 x 1-hour lecture with 2 x 3-hour labs, with the labs now holding the core of the pedagogy. Students are provided feedback through targeted tutoring, using online multiple-choice questions for the students to give feedback and assist the TAs. Pair programming gives you someone to talk to before you talk to the TA, but the TA can monitor the MCQ space and see if everyone is having trouble with a particular problem.

This was addressing a problem in a dual-speed entry course, where some students had AP CS and some didn’t, so the second-year course was either a review for those students who had Java (from AP CS) or brand new. Collaboration and targeted support were aimed at reducing the differences between the cohorts and eliminating disadvantage.

Now, the paper has a lot of detail on the different cohorts – by intake, by gender, by retention pattern – but the upshot is that the introduction of the new program reduced the differences between those students who did and did not have previous Java experience. In other words, whether you started at UCB in CS 1 (with no AP CS) or CS 1.5 (with AP CS), the gap between the cohorts shrank, which is an excellent result. Once this high level of collaboration was introduced, the only factor that retained any significant difference was the first exam, and even this effect disappeared as students received more exposure to collaboration throughout the course.

I strongly recommend reading all three of these papers!


ICER 2012: More posts over the weekend

I still have a lot to post on ICER but I have to actually do work things today. I hope to catch up with all of this over the weekend.

As always, if you are an author and I get something wrong, please let me know and I’ll fix it. If you have a strong opinion about what I’ve posted, because of a conflict over the content or my interpretation, I look forward to your comments.

If you are wearing a hat that is covered in chocolate bars, then I hope that you are still enjoying your glorious victory without falling into a diabetic coma.

When you gaze into the honeycomb, the honeycomb gazes into you.


ICER 2012 Research Paper Session 2

Ok, true confession time. My (and Katrina’s) paper was in this session and I’ll write it up separately. So this session consisted of “Adapting Disciplinary Commons Model: Lessons and Results from Georgia” (Briana Morrison, Lijun Ni and Mark Guzdial) and… another paper. 🙂

The goals of the original disciplinary commons were:
  • To document and share knowledge about student learning in CS classrooms
  • To establish practices for the scholarship of teaching by making it public, peer-reviewed and amenable for public use (the portfolio model)
While the first goal was achieved, the second wasn’t: although portfolios were produced, people just wanted to keep them private. However, the participants did develop a strong and vibrant community, with an associated change of practice as a result of participation. The next stage was a Disciplinary Commons for Computing Educators (DCCE, for Georgia), with the adaptation that it apply to both High School teachers AND university-level educators.

The new goals were:
  1. Creating community
  2. Sharing resources and knowledge of how things are taught in other contexts.
  3. Supporting student recruitment within the high school environment.

I was interested to learn that there is no Computer Science teaching certificate in Georgia, hence a teacher must be certified in another discipline, with Mathematics, Science and Business being the most likely. (I believe this is what was said although, on reviewing my notes, I find this a little confusing; I’m assuming that this is due to the transition into the Georgia teaching framework.)

The community results were very interesting, as the initial community formed with one person as the hub – a network, but not a robust one! After working on this in year 3, a much more evenly distributed group was formed that could survive a few people dropping out. Given that many of the participants had no (or very few) peers in their home institution, these networks were crucial to giving them important information. Teachers who work in isolation need supporting networks: you can see what else someone does, and ask how they do it.

I love these community-building projects, and the network example gave one of those fantastic insights into why regular progress and impact checks can make the difference between an OK project and a highly successful one. Identifying that a network based on one (hub) person is unstable, and altering your practices to make the network graph more heavily meshed, is an excellent adaptation that reinforces the key focus of this project: creating community.
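
As a toy illustration of why that matters – a minimal sketch using the networkx library, with an invented seven-person community, not the project’s actual data – removing the central person disconnects a hub-based network but not a meshed one:

```python
import networkx as nx

hub = nx.star_graph(6)       # one person (node 0) connected to six others
meshed = nx.cycle_graph(7)   # everyone linked to two peers...
meshed.add_edges_from([(0, 3), (1, 4), (2, 5)])  # ...plus some cross-links

for name, g in [("hub", hub), ("meshed", meshed)]:
    g = g.copy()
    g.remove_node(0)         # the same member (the hub, in the star) drops out
    print(name, "still connected:", nx.is_connected(g))
# hub still connected: False
# meshed still connected: True
```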

I read a design magazine called Desktop, much to the amusement of my more design-oriented friends, and one of its smaller regular features covers the desks and working environments of professional designers. As I try to learn more about this area, this helps give me some insight into the community and, because of this, it accelerates my development. The community project described in this paper connects people who are already trying to hold a presence in a tricky and evolving area with people who have similar issues, and joins all of them to a shared repository of knowledge and experience. It would be great to see more programs like this.

ICER 2012 Research Paper Session 1

It would not be over-stating the situation to say that every paper presented at ICER led to some interesting discussion and, in some cases, some more… directed discussion than others. This session started off with a paper entitled “Threshold Concepts and Threshold Skills in Computing” (Kate Sanders, Jonas Boustedt, Anna Eckerdal, Robert McCartney, Jan Erik Moström, Lynda Thomas and Carol Zander), on whether threshold skills, as distinct from threshold concepts, exist and, if they do, what their characteristics would be. Threshold skills were described as transformative, integrative, troublesome, semi-irreversible (in that they’re never really lost), and requiring practice to keep current. The discussion that followed raised a lot of questions, including whether you could learn a skill by talking about it or asking someone – skill transfer questions versus environment. The consensus, as I judged it from the discussion, was that threshold skills don’t follow automatically from threshold concepts, but there was a very rapid and high-level discussion that I didn’t quite follow, so any of the participants should feel free to leap in here!

The next talk was “On the Reliability of Classifying Programming Tasks Using a Neo-Piagetian Theory of Cognitive Development” (Richard Gluga, Raymond Lister, Judy Kay, Sabina Kleitman and Donna Teague), where Ray raised and extended a number of the points that he had originally shared with us in the workshop on Sunday. Ray described the talk as being a bit “Neo-Piagetian theory for dummies” (for which I am eternally grateful) and was seeking to address the question of where students are actually operating when we ask them to undertake tasks that require a reasonable to high level of intellectual development.

Ray raised the three bad programming habits he’d discussed earlier:

  1. Permutation programming (where students just try small things randomly and iteratively in the hope that they will finally get the right solution – this is incredibly troublesome if the many small changes take you further away from the solution)
  2. Shotgun debugging (where a bug causes the student to put things in with no systematic approach and potentially fixing things by accident)
  3. Voodoo coding/Cargo cult coding (where code is added by ritual rather than by understanding)

These approaches show one very important thing: the student doesn’t understand what they’re doing. Why is this? Using a Neo-Piagetian framework, we consider the student as moving through the same cognitive development stages that they did as a child (Piagetian), but with this transitional approach applied to new and significant knowledge frameworks, such as learning to program. Until they reach the concrete operational stage of their development, they will be applying poor or inconsistent models – logically inadequate models, to use the terminology of the area (assuming that they’ve reached the pre-operational stage). Once a student has made the next step in their development, they will reach the concrete operational stage, characterised (among other things, but these were the ones that Ray mentioned) by:

  1. Transitivity: being able to recognise how things are organised if you can impose an order upon them.
  2. Reversibility: that we can reverse changes that we can impose.
  3. Conservation: realising that the number of things stays the same no matter how we organise them.

In coding terms, these can be interpreted in several ways, but the conservation idea is crucial to programming because understanding it frees the student from having to write the same code for the same algorithm every time. Grasping that conservation exists, and understanding it, means that you can alter the code without changing the algorithm that it implements – while achieving some other desirable result such as speeding the code up or moving to a different paradigm.
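
A small illustration of conservation in coding terms (mine, not Ray’s): the two functions below are different code but the same underlying algorithm, and a student who has grasped conservation can see that rewriting one as the other changes nothing about what is computed.

```python
def sum_loop(numbers):
    """Sum a list iteratively."""
    total = 0
    for n in numbers:
        total += n
    return total

def sum_recursive(numbers):
    """Sum a list recursively: different code, same algorithm in essence."""
    if not numbers:
        return 0
    return numbers[0] + sum_recursive(numbers[1:])

data = [1, 2, 3, 4]
assert sum_loop(data) == sum_recursive(data) == sum(data) == 10
```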

Ray’s paper discussed the fact that a vast number of our students are still pre-operational for most of first and second year, which changes the way that we should actually try to teach coding. If a student can’t understand what we’re talking about, or has to resort to magical thinking to solve a problem, then we’ve not really achieved our goals. If we do start classifying the programming tasks that we ask students to achieve by the developmental stages that we’re expecting, we may be able to match task to ability, making everyone happy(er).

The final paper in the session was “Social Sensitivity Correlations with the Effectiveness of Team Process Performance: An Empirical Study” (Luisa Bender (presenting), Gursimran Walia, Krishna Kambhampaty, Travis Nygard and Kendall Nygard), which discussed the impact of socially sensitive team members in programming teams. (Social sensitivity is the ability to correctly understand the feelings and the viewpoints of other people.)

The “soft skills” are essential to the teamwork process, and a successful team enhances learning outcomes. Bad teams hinder team formation and progress, and things go downhill from there. From Woolley et al.’s study of nearly 700 participants, the collective intelligence of a team stems from how well the team works together rather than from the individual intelligence of the participants. The groups whose members were more socially sensitive had a higher group intelligence.

Just to emphasise that point: a team of smart people may not be as effective as a team of people who can understand the feelings and perspectives of each other. (This may explain a lot!)

Social sensitivity is a good predictor of team performance and the effectiveness of team-oriented processes, as well as the satisfaction of the team members. However, it is also apparent that we in Science, Technology, Engineering and Mathematics (STEM) have lower social sensitivity readings (supporting Baron-Cohen’s assertion – no, not that one) than some other areas. Future work in this area is looking at the impact of a single high or low socially sensitive person in a group, a study that will be of great interest to anyone who is running teams made up of randomly assigned students. How can we construct these groups for the best results for the students?


ICER 2013 – San Diego!

We’re in the closing phases of ICER 2012 and we’re just learning about the location of ICER 2013, which is San Diego. Currently, we’re hearing about Vernor Vinge, one of my favourite authors, whose book “Rainbows End” is set on the UCSD campus.

So, I guess I’ll try to get to ICER 2013, end of July/beginning of August, in San Diego. See some of you there!


ICER 2012 Day 1: Discussion Papers Session 1

ICER contains a variety of sessions: research papers, discussion papers, lightning talks and elevator pitches. The discussion papers allow people to present ideas and early work in order to get the feedback of the community. This is a very vocal community, so opening yourself up to discussion is going to be a bit like drinking from the firehose: sometimes you quench your thirst for knowledge and sometimes you’re being water-cannoned.

Web-scale Data Gathering with BlueJ
Ian Utting, Neil Brown, Michael Kölling, Davin McCall and Philip Stevens

BlueJ is a very long-lived and widely used Java development environment, designed to assist with the learning and teaching of object-oriented programming, as well as Java itself. The BlueJ project is now adding automated instrumentation to every single BlueJ installation, and students can opt in to a data reporting mechanism that will allow the collection and formation of a giant data repository: Project Blackbox. (As a note, that’s a bit of a supervillain name, guys.)

BlueJ has 1-2 million new users per year, typically using it for around 90 days, and all of these users will be able to opt in (and can opt out later), although the mechanism can be disabled in the configuration. To protect user identity, a locally generated anonymous UUID is linked to each user+installation pair (so home and lab use won’t correlate). On the technical side, the stored data will include time-stamps, tool invocations, source code snapshots and coarse-grained location. You can also connect (locally available) personal data about your own students and link it to the UUID data. Groups can be tagged and queries restricted to that tag (and that includes taxonomic data if you’re looking into the murky world of assessment taxonomy).

In terms of making this work, ethical approval has been obtained from the hosting organisation. Access will be for verified academic researchers, initially via SQL queries on a multi-terabyte repository, but the data will not be fully public (this will be one of the largest repositories of assignment solutions in the world).

Timescale: private beta by the end of 2012, with a full-scale roll-out next Spring, AY 2013. Very usefully, you can still get access to the data even if you don’t contribute.
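
As I understand the anonymisation idea, it amounts to something like the sketch below – a minimal illustration of the concept, not the actual BlueJ implementation; the file name and storage format are my own invention. Because each installation gets its own random ID, a second installation at home produces a different ID, so the two cannot be correlated.

```python
import json
import uuid
from pathlib import Path

ID_FILE = Path(".blackbox_id")  # hypothetical per-installation file

def installation_id():
    """Return this installation's anonymous ID, creating it on first use."""
    if ID_FILE.exists():
        return json.loads(ID_FILE.read_text())["id"]
    new_id = str(uuid.uuid4())  # random, reveals nothing about the user
    ID_FILE.write_text(json.dumps({"id": new_id}))
    return new_id

print(installation_id())  # stable across runs on this machine only
```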

There was a lot of discussion on this: we’re all hungry for the data. One question that struck me was from Sally Fincher: given that we will have web-scale data gathering, do we have web-scale questions? We can all think of things to do, but this level of data is now open to entirely new analyses. How will we use this? What else do we need to do?

Evaluating an Early Software Engineering Course with Projects and Tools from Open Source Software
Robert McCartney, Swapna Gokhale and Therese Smith

We tend to give Software Engineering students a project that requires them to undertake design and then, as a group, produce a large software artefact from scratch. In this talk, Robert discussed using existing projects that exercise a range of skills directly relevant to one of the most common activities our students will carry out in industry: maintenance and evolution.

Under a model of developing new features in an open-source system, the instructors provide a pre-selected set of projects and then the two-person team:

  1. picks a project
  2. learns to comprehend code
  3. proposes enhancements
  4. describes and documents
  5. implements and presents

The evaluation seeks to understand how the students’ understanding of issues has changed, especially regarding the importance of maintenance and evolution, the value of documentation, the importance of tools and how reverse engineering can aid comprehension. This approach has been trialled and early student response is positive, but the students thought that 10,000 Lines of Code (LOC) projects were too small, hence the project size has increased to 100,000 LOC.

A Case Study of Environmental Factors Influencing Teaching Assistant Job Satisfaction
Elizabeth Patitsas

Elizabeth presented some interesting work on the impact of physical teaching spaces on what our TAs do. If the layout is hard to work with then, unsurprisingly, the TAs are less inclined to walk around and more inclined to disengage, sitting down the front checking e-mail. When we say ‘less inclined’, we mean that in closed lab layouts TAs spend 40% of their time interacting with students, versus 76% in an open layout. However, these effects are also seen in windowless spaces: make a space unpleasant and you reduce the time that people spend answering questions and engaging.

The value of a pair of TAs was stressed: a pair gives you a backup but doesn’t lead to decision problems when coming to consensus. However, the importance of training was also stressed, as already clearly identified in the literature.

Education and Research: Evidence of a Dual Life
Joe Miró Julià, David López and Ricardo Alberich

Joe provided a fascinating collaboration network analysis of the paper-writing groups at ICER and more generally. In CS education, we tend to work in smaller groups than other CS research areas, and newcomers tend to come alone to conferences. The ICER collaboration network graph has a very well-defined giant component that centres around Robert (see above) but, across the board, roughly 50% of conference authors are newcomers. One of the most common ways for people to enter the traditional CS research community is through what can be described as a mentoring process: we extend the group through an existing connection and then these people join the giant component. There is, however, no significant evidence of mentoring in the education community.

Unsurprisingly, different countries and borders hinder the growth of the giant component.

There was a lot of discussion on this as well, as we tried to understand what was going on and, outside of the talk, I raised with Joe my suggestion that hemispherical separation is a factor worth considering, because of the different timetables that we work to. Right now, I am at a conference in the middle of teaching, while the Northern Hemisphere has only just gone back to school.