Unearthing the Community: A Surprisingly Rapid Result

I am thinking of a number between 1 and 100…

Next Monday I am co-hosting the first Adelaide Computing Education Conventicle, an offshoot of the very successful program in the Eastern states which encourages the presentation of work that has gone to conferences, or is about to go, and provides a forum for conversations and panel discussions on Computing Education. The term ‘conventicle’ refers to “A secret or unlawful religious meeting, typically of people with nonconformist views” and stems from the initial discussions in Melbourne and Sydney, back when Computing Education was not perhaps as accepted as it is now. The name is retained for gentle amusement and a linkage to previous events. To quote my own web page on this:

The Conventicle is a one-day conference about all aspects of teaching computing in higher education, in its practical and theoretical aspects, which includes computer science, information systems, information technology, and branches of both mathematics and statistics. The Conventicle is free and open to all who wish to attend. The format will consist of presentations, discussion forums and opportunities to network over lunch, and morning and afternoon tea.

The Conventicles have a long history in other states, providing a discussion forum for how we teach, why we teach and what we can do better, and giving us an opportunity to share our knowledge at a local level without having to travel to conferences or subscribe to an ever-growing set of journals.

One of my ALTA colleagues set his goal as restarting the conventicles where they had stopped and starting them where they had never been and, combining this with my goal of spreading the word on CSE, we decided to work together and host the informal one-day event. The Australian gravity well is deep and powerful: few of my colleagues get to go to the larger educational conferences and being able to re-present some key papers, especially when the original presenters can be there, is fantastic. We’re very lucky to have two interstate visitors. Simon, my ALTA colleague, is presenting some of his most recent work, and Raymond Lister, from UTS, is presenting a very interesting paper that I saw him present at ICER. When he mentioned that he might be able to come, I didn’t waste much time in encouraging him… and asking him if he’d mind presenting a paper. It appears that I’m learning how to run a conference.

The other good news is that we have a full program! It turns out that many people are itching to talk about their latest projects, their successes, recent papers and about the things that challenge so many of us. I still have space for a lot more people to attend and, with any luck, by this time tomorrow I’ll have the program nailed down. If you’re in the neighbourhood, please check out the web page and let me know if you can come.

I hope to see at least some of the following come out of the First Adelaide Computing Education Conventicle:

  1. Raised awareness of Computing Education across my faculty and University.
  2. Raised awareness of how many people are already doing research in this!
  3. An opportunity for the local community to get together and make connections.
  4. Some good discussion with no actual blows being landed. 🙂

In the longer term, I’d love to see joint papers, grant applications and all those good things that help us to tick our various boxes. Of course, being me, I also want to learn more, to help other people to learn more (even if it’s just by hosting) and get some benefit for all of our students.

There’s enough time to get it all organised, which is great, but I’ll have a busy Monday next week!

 


ICER 2012 Day 3 Research Paper Session 5

The last of the research paper sessions and, dear reader, I am sure that you are as glad as I that we are here. Reading about an interesting conference that you didn’t attend is a bit like receiving a message from a friend talking about how he kissed the person that you always loved from afar. Thanks for the information but I would rather have been there myself.

This session opened with “Toward a Validated Computing Attitudes Survey” (Allison Elliott Tew, Brian Dorn and Oliver Schneider), where the problems of negative perceptions of the field and hostile classroom environments, combined with people thinking that they would be no good at CS, conspire to prevent students from entering, or selecting, our discipline. The Computing Attitudes Survey was built, with major modification, from the Colorado Learning Attitudes about Science Survey (CLASS, pronounced C-LASS). To adapt the original survey, some material was just copied across with a word change (computer science replacing physics), some terminology was changed (algorithm for formula) and some discipline-specific statements were added. Having established an expert opinion basis for the discipline-specific content, students can now see how much they agree with the experts.

There is, as always, a rip current of contentious issues. “You have to know maths to be able to program” produced a three-way split within the expert group between those who agreed, disagreed or were neutral. What was interesting, and what I’ll be looking at in future, is the evidence of self-defeating thought in many answers (not the questions: the questions weren’t self-defeating, but the answers often were). What was also interesting is that attitudes seem to get worse on the CLASS instrument after you take the course!

Confidence, as simple as “I think I can do this”, plays a fundamental part in determining how students will act. Given the incredibly difficult decisions that a student faces when selecting their degree or concentration, it is no surprise that anyone who thinks “Computing is too hard for me” or “Computing is no use to me” will choose to do something else.

The authors are looking for volunteers with whom they can run these trials again so, after you’ve read their paper, if you’re interested, you should probably e-mail them.

“A Statewide Survey on Computing Education Pathways and Influences: Factors in Broadening Participation in Computing” (Mark Guzdial, Barbara Ericson, Tom McKlin and Shelly Engelman)

The final research paper in the conference dealt with the final evaluation of the Georgia Computes! initiative, which had run from October 2006 to August of this year. This multi-year project cannot be contained in my nervous babbling but I can talk about the instrument that was presented. Having run summer camps, weekend workshops, competitions, teacher workshops, a teachers’ lending library, first year engagement and seeded first-year summer camps (whew!), the question was: What had been the impact of Georgia Computes!? What factors influence undergrad enrolment into intro CS courses?

There were many questions and results presented but I’d like to focus on the top four reasons given, from the survey, as to why students weren’t going to undertake a CS Major or Minor:

  1. I don’t want to do the type of work
  2. Little interest in the subject matter
  3. Don’t enjoy Computing Courses
  4. Don’t have confidence that I would succeed.

Looking at those points, after a state-wide and highly successful six-year campaign has finished, is very, very sobering for me. What these students are saying is that they cannot see the field as attractive, interesting or enjoyable, or see themselves as capable. But these are all aspects that we can work on, although some of them will require a lot of work.

Two further things that Barb said really struck me. Firstly, if you take encouragement and ability into account, men will tend to be satisfied and continue on if they receive either or both – the factors are not separable for men – but women and minorities need encouragement in order to feel satisfied and to keep going. Secondly, when it comes to giving encouragement to women, male professors are just as effective as female professors.

As a male lecturer, who is very, very clearly aware of the demographic disgrace that is the under-representation of women in CS, the first fact gives me a partial strategy to increase retention (and reinforces a belief I have held anecdotally for some time), while the second gives me the agency to assist in this process, as well as greater hope for a steadily increasing female cohort over time.

Overall, a very positive note on which to finish the session papers!


ICER 2012 Day 3 Research Paper Session 4

This session kicked off with “Ability to ‘Explain in Plain English’ Linked to Proficiency in Computer-based Programming” (Laurie Murphy, Sue Fitzgerald, Raymond Lister and Renee McCauley (presenting)). I had seen a presentation along these lines at SIGCSE and, if you look at the authors list, this is an excellent example of international collaboration. Does the ability to explain code in plain English correlate with the ability to solve programming problems? The correlation appears to be there whether or not we train students in Explaining in Plain English, but is this causation?

This raises a core question, addressed in the talk: Do we need to learn to read (trace) code before we learn to write code or vice versa? The early reaction of the Leeds group was that reading code didn’t amount to testing whether students could actually write code. Is there some unknown factor that must be achieved before either or both of these? This is a vexing question as it raises the spectre of whether we need to factor in some measure of general intelligence, which has not been used as a moderating factor.

Worse, we now return to that dreadful hypothesis of “programming as an innate characteristic”, where you were either born to program or not. Ray (unsurprisingly) believes that all of the skills in this area (EIPE/programming) are not innate and can be taught. This then raises the question of what the most pedagogically efficient way is to do this!

How Do Students Solve Parsons Programming Problems? — An Analysis of Interaction Traces
Juha Helminen (presenting), Petri Ihantola, Ville Karavirta and Lauri Malmi

This presentation was of particular interest to me because I am currently tearing apart my 1,900-student data corpus to try to determine the point at which students will give up on an activity, in terms of mark benefit, time expended and some other factors. This talk looked at how students solved problems and also recorded the steps and efforts that they took in trying to solve them, which gave me some very interesting insights.

A Parsons problem is one where, given a set of code fragments, a student selects, arranges and composes a program in response to a question. Not all of the fragments presented will be required in the final solution. Adding to the difficulty, the fragments require different indentation to assert their execution order as part of block structure. For those whose eyes just glazed over, this means that it’s more than selecting a line to go somewhere: you have to associate it explicitly with other lines as a group. Juha presented a graph-based representation of the students’ traversals of the possible solutions for their Parsons problem. Students could ask for feedback immediately to find out how their programs were working and, unsurprisingly, some opted for a lot of “Am I there yet?” querying. Some students queried feedback as many as 62 times for only 7 features, indicative of permutation programming, with very short inter-query intervals. (Are we there yet? No. Are we there yet? No. Are we there yet? No. Are we there yet? No.)
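
To make that description concrete, here is a tiny, hypothetical Parsons problem of my own invention, sketched in Python (it is not one of the problems from the paper). Notice that indentation, not just fragment order, decides what the program does.

```python
# A tiny, hypothetical Parsons problem (my illustration, not from the paper).
# The student is given these fragments, shuffled; one is a distractor:
#
#     print(i)
#     for i in range(1, 6):
#     print("done")
#     count = 0            <- distractor: not needed in the solution
#
# One correct arrangement. Indentation, not just order, decides whether
# print(i) executes once per loop iteration or once in total:

for i in range(1, 6):
    print(i)        # indented: runs inside the loop, printing 1 to 5
print("done")       # not indented: runs once, after the loop finishes
```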

The primary pattern of code development was linear, with block structures forming the first development stages, but there were a lot of variations. Cycles (returning to the same point) also occurred during development, but it was hard to tell whether this was a deliberate reset pattern or one where permutation programming had accidentally returned the programmer to the same state. (Asking the students “WHY” this had occurred would be an interesting survey question.)

There were some good comments from the audience, including the suggestion of correlating good and bad states with good and bad outcomes, using Markov chain analysis to look for patterns. Another suggested improvement was recording the time taken for the first move, to capture the possible impact of cognition on the process. Were students starting with a ‘trial and error’ approach, or only resorting to it after things went wrong?
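
As a sketch of what that Markov chain suggestion might look like (my own illustration with invented state names, not anything from the paper or the talk), you could count first-order transitions between solution states across the interaction traces and then inspect the transition probabilities for patterns:

```python
from collections import Counter, defaultdict

# Hypothetical interaction traces: each is one student's sequence of
# solution states (the state names are invented for illustration).
traces = [
    ["empty", "loop", "loop+body", "solved"],
    ["empty", "loop", "empty", "loop", "loop+body", "solved"],
    ["empty", "distractor", "empty", "loop", "loop+body", "solved"],
]

# Count first-order (Markov) transitions across all traces.
counts = defaultdict(Counter)
for trace in traces:
    for src, dst in zip(trace, trace[1:]):
        counts[src][dst] += 1

# Normalise each row to get transition probabilities P(dst | src).
for src, dsts in sorted(counts.items()):
    total = sum(dsts.values())
    for dst, n in sorted(dsts.items()):
        print(f"P({dst} | {src}) = {n / total:.2f}")
```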

Tracking Program State: A Key Challenge in Learning to Program
Colleen Lewis (presenting, although you probably could have guessed that)

This paper won the Chairs’ Award for the best paper at the conference and it was easy to see why. Colleen presented a beautifully explored case study of an 11-year-old boy, Kevin, working on a problem in the Scratch programming language and trying to work out why he couldn’t draw a wall of bricks. By capturing Kevin’s actions (in code) and his thoughts (from his spoken comments), we are exposed to the thought processes of a high-achieving young man who cannot fathom why something isn’t working.

I cannot do justice to this talk by writing about something that was primarily visual, but Colleen’s hypothesis was that Kevin’s attention to the state (variables and environmental settings over which the program acts) within the problem is the determining factor in the debugging process. Once Kevin’s attention was focused on the correct problem, he solved it very quickly because the problem was easy to solve. Locating the correct problem required him to work through and determine which part of the state was at fault.

Kevin has a pile of ideas in his head but, as put by diSessa and Sherin (1998), learning is about reliably using the right ideas in the correct context. Which of Kevin’s ideas are being used correctly at any one time? The discussion that followed covered a lot of the problems that students have with computers, in that many students do not see computers as actually being deterministic. Many students, on encountering a problem, will try exactly the same thing again to see if the error occurs again – this requires a mental model in which we expect a different set of outcomes from the same inputs and process, which is a loose definition of either insanity or nondeterminism. (Possibly both.)

I greatly enjoyed this session, and the final exemplar – taking apart a short but incredibly semantically rich sequence and presenting it with a very good eye for detail – made it unsurprising that this paper won the award. Congratulations again, Colleen!


ICER 2012 Day 2 Discussion Papers Session 2

This is a brief note on this session, as these papers are presented to the community with the intention of sparking discussion. In this case, one of the most interesting issues that arose was the use, for reference, of the first of a pair of papers, where the first paper asserted a finding and the second then retracted it. This is not to say that the papers presented themselves weren’t interesting (far from it), but you can read about them in the proceedings, and this particular session raised yet another reason to come to ICER: this is where a lot of the authors are.

In this case, one of the authors of the retraction paper very politely identified himself and then pointed out that the paper the presenting authors were referring to had since been followed up by a paper illustrating some of the problems in the original work. (I am trying quite hard to avoid potentially embarrassing anyone, so please excuse how circumspect I am being.)

The reception of the actual discussion paper was, unsurprisingly, framed by the revelation that a supporting paper had been undermined, and the questions revolved around issues with metrics and how the authors had addressed possible Hawthorne effect issues.

But this is exactly what this kind of paper session is for. This is a place to present ideas to the community, and now the authors can go back, rebuild their approach on firmer ground and come back with something stronger. Yes, there is no doubt that they would much rather not have built upon that paper, but imagine how much worse it would have been had this made it (undetected) to the journal stage!

 


ICER 2012 Day 2 Research Session 3

The session kicked off with “The Abstraction Transition Taxonomy: Developing Desired Learning Outcomes through the Lens of Situated Cognition” (Quintin Cutts (presenting), Sarah Esper, Marlena Fecho, Stephen Foster and Beth Simon) and the initial question: “Do our learning outcomes for programming classes match what we actually do as computational thinkers and programmers?” To answer this question, we looked at Eric Mazur’s Peer Instruction and an analysis of PI questions as applied to a CS Principles pilot course, and then at the application of the Abstraction Transition Taxonomy (ATT) to published exams, with a wrap-up of observations and ‘where to from here’.

Physicists noticed, some time ago, that their students could plug numbers into equations (turn the handle, so to speak) but couldn’t necessarily demonstrate that they understood things: they couldn’t demonstrate that they thought as physicists should. (The Force Concept Inventory was mentioned here and, if you’re not familiar with it, it’s a very interesting thing to look up.) To try to get students who thought as physicists, Mazur developed Peer Instruction (PI), which has pre-class prep work and in-class questions, followed by voting, discussion and re-voting, with an instructor leading class-wide discussion. These activities prime the students to engage with the correct explanations – that is, the way that physicists think about and explain problems.

Looking at Computer Science, many CS people use the delivery of a working program as a measure of the correct understanding and appropriate use of programming techniques.

Generating a program is no guarantee of understanding, which is sad but true given the existence of the internet, other students and books. We could try to force a situation where students are isolated from these support factors, but this leads us back to permutation programming, voodoo code and shotgun debugging unless the students actually understand the task and how to solve it using our tools. In other words, unless they think as Computer Scientists.

UCSD had a CS Principles pilot course that used programming to foster computational thinking, aimed at acculturation into the CS ‘way’ rather than trying to create programmers. The full PI implementation asked students to reason about their programs, through exploratory homework and a PI classroom, with some limited-time traditional labs as well. While this showed a very positive response, the fear was that this may have been an effect of the lecturers themselves, so analysis was required!

By analysing the PI questions, a taxonomy was developed that identified abstraction levels and the programming concepts within them. The abstraction levels were “English”, “Computer Science Speak” and “Code”. The taxonomy was extended with the transitions between these levels: turning an English question into code, for example, is a 1-3 transition if English is abstraction level 1 and Code is level 3; similarly, explaining code in English is a 3-1 transition. Finally, they considered mechanism (how does something work?) and rationale (why did we do it this way?).
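
As a hypothetical illustration of these transitions (my own example, not one of the analysed questions), consider the snippet below: a 3-1 question presents the code and asks for plain English, the matching 1-3 question presents only the English and asks for code, and a rationale variant asks “Why?” rather than “How?”.

```python
# Hypothetical 3-1 (Code -> English) question, my own example:
# "Explain, in plain English, what this function does."
def mystery(values):
    count = 0
    for v in values:
        if v > 0:
            count += 1
    return count
# Expected 3-1 answer: "it counts the positive numbers in the list".
#
# The matching 1-3 (English -> Code) question presents only the sentence
# "count the positive numbers in a list" and asks for code like the above.
# A rationale variant asks "Why use a loop here rather than writing one
# test per element?" instead of asking how the code works.
```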

Analysing the assignment and assessment questions to determine what was being asked, in terms of abstraction level and transitions, and whether it was mechanism or rationale, revealed that 21% of the in-class multiple choice questions were ‘Why?’ questions but there were actually very few ‘Why?’ questions in the exam. Unsurprisingly, almost every question asked in the PI framework is a ‘Why?’ question, so there should be room for improvement in the corresponding examinations. PI emphasises the culture of the discipline through the ‘Why?’ framing because it requires acculturation and contextualisation to get yourself into the mental space where a rationale becomes logical.

The next paper, “Subgoal-Labeled Instructional Material Improves Performance and Transfer in Learning to Develop Mobile Applications” (Lauren Margulieux, Mark Guzdial and Richard Catrambone), dealt with mental models and how the cognitive representation of an action will affect both the problem state and how well we make predictions. Students have so much to think about – how do they choose?

The problem with just waiting for a student to figure it out is high cognitive load, which I’ve referred to before as helmet fire. If students become overwhelmed they learn nothing, so we can explicitly tell students and/or provide worked examples. If we clearly label the subgoals in a worked example, students remember the subgoals and the transition from one to another. The example given here was an Android App Inventor worked example in two versions: one had no labels, while the other had subgoal labels added as overlay callouts to the video, and this was the only alteration. The subgoal points were identified by task analysis, so this was a very precise attempt to get students to identify the important steps required to understand and complete the task.

(As an aside, I found this discussion very useful. It’s the difference between telling a student that they need comments, so that every line gets something like “x=3; //x is set to 3”, and this structured and deliberate approach to subgoal definition, which shows students the key steps.)
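
As a sketch of that difference, here is my own example in Python (the study itself used App Inventor, and these particular labels are invented): the comments mark subgoals rather than restating individual lines.

```python
# A sketch of subgoal labelling, in Python rather than App Inventor
# (the study's actual environment); the subgoal labels are my invention.
def average_positive(values):
    # Subgoal 1: set up the accumulators the computation needs.
    total = 0
    count = 0

    # Subgoal 2: examine each item and keep only the relevant ones.
    for v in values:
        if v > 0:
            total += v
            count += 1

    # Subgoal 3: handle the empty case, then compute the result.
    if count == 0:
        return 0.0
    return total / count
```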

In the first experiment that was run, the students with the subgoals (and recall that this was the ONLY difference in the material) had attempted more, achieved more and done it in less time. A week later, they still got things right more often. In the second experiment, a talk-aloud experiment, the students with the subgoals discussed the subgoals more, tried random solution strategies less and wasted less effort than the other group. This is an interesting point. App Inventor allows you to manipulate blocks of code and the subgoal group were less likely to drag out a useless block to solve the problem. The question, of course, is why. Was it the video? Was it the written aspects? Was it both?

Students appear to be remembering and using the subgoals and, as was presented, if performance is improving then the exact detail of why it’s happening is something we may wish to pursue, but in the short term we can still use the approach. However, we do have to be careful with how many labels we use, as overloading visual cues can lead to confusion, thwarting any benefit.

The final paper in the session was “Using collaboration to overcome disparities in Java experience” (Colleen Lewis (presenting), Nathaniel Titterton and Michael Clancy). This presented the transformation of a standard course (3 hours of lectures, 2 hours of lab and 1 discussion hour) into a 1-hour lecture with two 3-hour labs, with the labs now holding the core of the pedagogy. Students are provided with feedback through targeted tutoring, using online multiple-choice questions for the students to give feedback and assist the TAs. Pair programming gives you someone to talk to before you talk to the TA, but the TA can monitor the MCQ space and see if everyone is struggling with a particular question.

This was addressing a problem in a dual-speed entry course, where some students had AP CS and some didn’t, so the second-year course was either a review (for those students who had Java from AP CS) or brand new. Collaboration and targeted support were aimed at reducing the differences between the cohorts and eliminating disadvantage.

Now, the paper has a lot of detail on the different cohorts, by intake, by gender, by retention pattern, but the upshot is that the introduction of the new program reduced the differences between those students who did and did not have previous Java experience. In other words, whether you started at UCB in CS 1 (with no AP CS) or CS 1.5 (with AP CS), the gap between your cohorts shrank – which is an excellent result. Once this high level of collaboration was introduced, the only factor that retained any significant difference was the first exam, but this effect disappeared throughout the course as students received more exposure to collaboration.

I strongly recommend reading all three of these papers!


ICER 2012: More posts over the weekend

I still have a lot to post on ICER but I have to actually do work things today. I hope to catch up with all of this over the weekend.

As always, if you are an author and I get something wrong, please let me know and I’ll fix it. If you have a strong opinion about what I’ve posted, because of a conflict over the content or my interpretation, I look forward to your comments.

If you are wearing a hat that is covered in chocolate bars, then I hope that you are still enjoying your glorious victory without falling into a diabetic coma.

When you gaze into the honeycomb, the honeycomb gazes into you.


ICER 2012 Research Paper Session 2

Ok, true confession time. My (and Katrina’s) paper was in this session and I’ll write this up separately. So this session consisted of “Adapting Disciplinary Commons Model: Lessons and Results from Georgia” (Brianna Morrison, Lijun Ni and Mark Guzdial) and… another paper. 🙂

The goals of the original disciplinary commons were:

  • To document and share knowledge about student learning in CS classrooms.
  • To establish practices for the scholarship of teaching by making it public, peer-reviewed and amenable for public use (the portfolio model).

While the first goal was achieved, the second wasn’t: although portfolios were produced, people just wanted to keep them private. However, the participants did develop a strong and vibrant community, with an associated change of practice as a result of participation. The next stage was a Disciplinary Commons for Computing Educators (DCCE, for Georgia), with the adaptation that this would apply to both High School teachers AND university-level educators.

The new goals were:

  1. Creating community.
  2. Sharing resources and knowledge of how things are taught in other contexts.
  3. Supporting student recruitment within the high school environment.

I was interested to learn that there is no Computer Science teaching certificate in Georgia, hence a teacher must be certified in another discipline, with Mathematics, Science and Business being the most likely. (I believe this is what was said although, on reviewing my notes, I find this a little confusing. I’m assuming that this is due to the transition into the Georgia teaching framework.)

The community results were very interesting, as the initial community formed where one person was the hub – a network, but not a robust one! After working on this in year 3, a much more evenly distributed group was formed that could survive a few people dropping out. Given that many of the participants in the program had no (or very few) peers at their home institution, these networks were crucial to giving them important information. Teachers who work in isolation need supporting networks – you can see what else someone does, and ask how they do it.

I love these community-building projects, and the network example gave one of the fantastic insights into why regular progress and impact checks can make the difference between an OK project and a highly successful one. Identifying that a network based on one (hub) person is unstable, and altering your practices to make the network graph more heavily meshed, is an excellent adaptation that reinforces the key focus of this project: creating community.
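
As a toy illustration of that hub-versus-mesh point (my own sketch using the NetworkX library; the graphs are invented and are not the project’s data): remove one person from a star-shaped community and it fragments, while a meshed community survives the same loss.

```python
import networkx as nx

# A hub-based community: everyone connects only through person 0.
star = nx.star_graph(6)

# A meshed community: a ring of seven people with extra cross-links.
mesh = nx.cycle_graph(7)
mesh.add_edges_from([(0, 3), (1, 4), (2, 5)])

for name, graph in [("star", star), ("mesh", mesh)]:
    survivor = graph.copy()
    survivor.remove_node(0)  # the best-connected person leaves
    print(name, "still connected:", nx.is_connected(survivor))
# star -> False (the community fragments); mesh -> True (it survives)
```
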
I read a design magazine called Desktop, much to the amusement of my more design-oriented friends, and one of its smaller regular features covers the desks and working environments of professional designers. As I try to learn more about this area, this helps give me some insight into the community and, because of this, it accelerates my development. The community project described in this paper helps people who are already trying to hold a presence in a tricky and evolving area by connecting them with people who have similar issues and joining all of them to a shared repository of knowledge and experience. It would be great to see more programs like this.