SIGCSE Day 3, “What We Say, What They Do”, Saturday, 9-10:15am, (#SIGCSE2014)

The first paper was “Metaphors we teach by”, presented by Ben Shapiro from Tufts. What are the types of metaphors that CS1 instructors use, and what are the wrinkles in these metaphors? What do we mean by metaphors? Ben’s talking about conceptual metaphors: linguistic devices that allow us to understand one idea in terms of another idea that we already know. Example: love is a journey – twists and turns, no guaranteed good ending. The structure of a metaphor is that you have a thing we’re trying to explain (the target) in terms of something we already know (the source). Conceptual metaphors are explanatory devices to assist us in understanding new things.

Metaphors are widely used in teaching CS – pointers, stacks and loops are all metaphorical aspects of computer science – but that’s not the focus of this study. How do people teach with metaphor? The authors couldn’t find any studies on general metaphor use in CS and its implications for student learning. An example from a birds-of-a-feather session held at this conference: a variable is like a box. A box can hold many different things, but it holds things. (This has been the subject of a specific study.) Ben also introduced the “too much milk” metaphor, which is laid out as follows. Jane comes home from work and goes to get milk from the fridge, but her roommate has already drunk it (bad roommate!). Jane goes out to get more milk. While she’s out, her roommate comes back with milk, then Jane comes back with milk. Now they have too much milk! This could be used to explain race conditions in CS. Another example is the use of bus lockers mapping to virtual memory.
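The “too much milk” story maps directly onto a check-then-act race condition. A minimal sketch in Python (my own illustration, not from the talk):

```python
import threading
import time

milk = 0  # shared fridge state

def roommate(name):
    global milk
    if milk == 0:          # check: "no milk in the fridge"
        time.sleep(0.01)   # trip to the shop (the window for the race)
        milk += 1          # act: buy milk
        print(f"{name} bought milk")

# Both roommates check the fridge before either returns from the shop,
# so both buy milk: a classic check-then-act race condition.
t1 = threading.Thread(target=roommate, args=("Jane",))
t2 = threading.Thread(target=roommate, args=("Roommate",))
t1.start(); t2.start()
t1.join(); t2.join()
print(f"Cartons in fridge: {milk}")  # usually 2 -- too much milk!
```

The fix, of course, is to make the check and the purchase atomic (a lock), which is exactly the conversation the metaphor is designed to open.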
Ben returned to boxes again. One of the problems is that boxes can hold many things but a variable can only hold one thing – which appears to be a confusing point for learners who know how boxes work. Is this a common problem? Metaphors have some benefits but come with this kind of baggage. Metaphors are partial mappings – they don’t match every aspect of the target to the source. (If it were a complete mapping, they’d be the same thing.)
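The “one thing at a time” wrinkle is easy to demonstrate in code (my example, not Ben’s): assignment replaces a variable’s contents rather than adding to the box.

```python
x = "apple"
x = "banana"   # the old value is replaced, not stored alongside
print(x)       # a box could hold both; a variable cannot

# A list is the container that actually behaves like the box metaphor:
box = []
box.append("apple")
box.append("banana")
print(box)     # both items are still in the "box"
```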
The research questions that the group considered were:
  • What metaphors do CS1 instructors use for teaching?
  • What are they trying to explain?
  • What are the sources that they use?
Learners don’t know where the mappings start and stop – where do the metaphors break down for students? What mistakes do they make because of these misunderstandings? Why does this matter? We all have knowledge on how to explain but we don’t have good published collections of the kind of metaphors that we use to teach CS, which would be handy for new teachers. We could study these and work out which are more effective. What are the most enduring and universal metaphors?
The study was interview-based: the authors interviewed university-level CS1 instructors, ending up with 10 people with an average of 13 years of teaching experience. The interview questions given to these instructors were (paraphrased):
  • Levels taught and number of years
  • Tell me about a metaphor
  • Target to source mapping
  • Common questions students have
  • Where the metaphor breaks down
  • How to handle the breakdown in teaching.
Ben then presented the results. (We had a brief discussion of similes versus metaphors but I’ll leave that to you.) An instructor discussed using the simile of a portkey from Harry Potter to explain return statements in functions, because students had trouble with return exiting immediately. The group of 10 people provided 18 different CS Concepts (Targets) and 19 Metaphorical Explanations (Sources).
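The portkey simile is about return ending execution on the spot. A small illustration (my own hypothetical function, not from the talk):

```python
def find_first_even(numbers):
    """Return the first even number, or None if there isn't one."""
    for n in numbers:
        if n % 2 == 0:
            return n   # like a portkey: execution leaves the function
                       # here, immediately -- the rest of the loop never runs
    return None

print(find_first_even([3, 7, 8, 10]))  # 8, not 10: we left at the first hit
```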
What’s the target for “Card Catalogs”? Memory addressing and pointers. The results were interesting – there’s a wide range of ways to explain things! (The paper contains a table of a number of targets and sources.)
Out-of-date cultural references were identified as a problem, and you have to be aware of the students’ cultural context. (Card catalogs and phone booths are nowhere near as widely used as they used to be.) Where do students make inferences beyond the metaphor? None of the 10 participants could give a single example of this happening! (This is surprising – Ben called it weird.) Two hypotheses: either our metaphors are special and don’t get overextended (very unlikely) OR CS1 instructors poorly understand student thinking (more likely).
The following experimental studies may shed some light on this:
  • Which metaphors work better?
  • Cognitive clinical interviews, exploring how students think with metaphors and where incorrect inferences are drawn.
There was also a brief explanation of PCK (teachers’ pedagogical content knowledge) but I don’t have enough knowledge to fully flesh this out. Ben, if you’re reading this, feel free to add a beautifully explanatory comment. 🙂
The next talk was “‘Explain in Plain English’ Questions Revisited: Data Structures Problems”, presented by Sue Fitzgerald and Laurie. This session opened with a poll to find out what the participants wanted, and we all wanted to find out how to get students to use plain English. An Explain in Plain English (EiPE) question asks you to describe what a chunk of code does, but not as a line-by-line discussion. A student’s ability to explain what a chunk of code does correlates with the student’s ability to write and read code. The study wanted to investigate whether this was just a novice phenomenon or whether it persisted as experience and expertise grew. This study looked at 120 undergraduates in a CS2 course on data structures and algorithms using C++, with much more difficult questions than in earlier studies: linked lists, recursive calls and so on.
The students were given two questions in an exam, with some preamble to describe the underlying class structure, a short example and a diagram. The students then had to look at a piece of code and determine what would happen, in order to answer the question with a plain English response. (There’s always a problem where you throw to an interactive response system and the question isn’t repeated – perhaps we need two screens.)
The SOLO taxonomy was used to analyse the problems (more Neo-Piagetian goodness!). Four of the SOLO categories were used: relational (summarises the code), multistructural (line-by-line explanation of the code), unistructural (only describes one portion rather than the whole idea), and prestructural (misses it completely, gibberish). I was interested to see the examples presented, with pointers and mutual function calling, because it quickly became apparent that the room I was in (which had a lot of CS people in it) was having to think relatively hard about the answer to the second example. One of the things about working memory is that it’s not very deep, and none of us were quite ready to work in a session 🙂 but a lot of good discussion ensued. The students would have had ready access to the preamble code, but I do wonder how much obfuscation is really required here. The speaker made a parenthetical comment that experts usually doodle – but where were our pen and paper! (As someone else said, reinforcing the point that we didn’t come prepared to work, nobody told us we had to bring paper. 🙂 )

We then got to classify a student response that was quite “student-y”. (A question came up as to whether an answer can be relational if it’s wrong – the opinion appears to be that a concise, complete and incorrect answer could be considered relational. A point for later discussion.) The answer we saw was multistructural because it was a line-by-line answer – it wasn’t clear, concise and abstract. We then saw another response that was much more terse but far less accurate. The group tossed up between unistructural and prestructural. (The group couldn’t see the original code or the question, so this uncertainty makes sense. Again, a problem with trying to have an engaging on-line response system and a presentation on the same screen. The presenters did a great job of trying to make it work but it’s not ideal.)
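To make the SOLO categories concrete, here’s a hypothetical EiPE-style chunk (my own example, not one of the study’s actual questions) with sample answers at each level:

```python
def mystery(values):
    result = values[0]
    for v in values[1:]:
        if v > result:
            result = v
    return result

# Relational answer (summarises the whole):
#   "It returns the largest value in the list."
# Multistructural answer (line by line, no synthesis):
#   "It sets result to the first element, then loops over the rest,
#    and if an element is bigger it stores it, then returns result."
# Unistructural answer (one fragment only):
#   "It compares v with result."

print(mystery([3, 9, 4]))  # 9
```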
What about correlations? For the first question, students who gave relational and multistructural answers generally passed, with an average grade of 58%. Those who answered at the unistructural or prestructural level generally failed, with an average grade of 38%. For the second question, the relational and multistructural group generally passed with a grade of 61.2%; the unistructural and prestructural group generally failed with an average grade of 42%.
So these correlations hold for non-novice programmers. A mix of explaining, writing and reading code is an effective way to develop good programming skills, and EiPE questions give students good practice in the valuable skill of explaining code. Instructors can overestimate how well students understand presented code – asking them to explain it back is very useful for student self-assessment. The authors’ speculation is that explaining code to peers is probably part of the success of peer instruction and pair programming.
The final talk was “A Formative Study of Influences on Student Testing Behaviours”, presented by Kevin Buffardi, from VT. In their introductory CS1 and CS2 courses they use Test-Driven Development (TDD) – code a little, test a little – for incremental development. It’s popular in industry, so students come out with relevant experience, and some previous studies found improvement in student work when students closely adhered to the TDD philosophy. BUT a lot of students didn’t follow it at all! So the authors were looking for ways to encourage students to follow it, especially when they were on their own and programming by themselves. Because it’s a process, you can’t tell what happened just by looking at the final program, but they use WebCAT and so can track the developmental stages of the program as students submit their work for partial grading. These snapshots provide clear views of what the students are doing over time. (I really have to look at what we could do with WebCAT. Our existing automarker is getting a bit creaky.) Students also received hints back when they submitted their work, at both general and instructor level.
The first time students achieved something with any type of testing, they would get “Good Start” feedback and be entitled to a free hint. If you kept up with your testing, you would ‘buy’ more hints. If your test coverage was good, you got more hints; if your coverage was poor, you got general feedback. (Prior to this, WebCAT only gave 3 hints. Now there are no free hints but you can buy an unlimited number.) This is an adaptive feedback mechanism, designed to encourage testing with hints as incentives. The study compared reinforcement treatments:
  • Constant – every time a goal was achieved, you got a hint (consistently rewards the target behaviour)
  • Delayed – hints when earned, at most one hint per hour (less incentive for hammering the system)
  • Random – 50% chance of a hint when a goal is met (should reduce dependency on extrinsic rewards)
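The three schedules can be sketched as simple reward policies (a minimal sketch in Python; the function names and the hourly gap are my reading of the talk, not WebCAT’s actual code):

```python
import random
import time

def constant(goal_met, state):
    # Consistently rewards the target behaviour.
    return goal_met

def delayed(goal_met, state, min_gap=3600):
    # At most one hint per hour, to discourage hammering the system.
    now = time.time()
    if goal_met and now - state.get("last", 0) >= min_gap:
        state["last"] = now
        return True
    return False

def randomised(goal_met, state):
    # 50% chance per goal met; should reduce dependency on the reward.
    return goal_met and random.random() < 0.5
```

Each policy takes the same inputs (did the student just meet a testing goal, plus per-student state), so the treatments can be swapped without touching the rest of the grading pipeline.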
Should you show them the goal or not? This was an additional factor – the goals were either visual (a concrete goal) or obscured (suggesting improvement without a specified target). These were a paired treatment.
What was the impact? There were no differences in the number of lines written, but the visual goal led to students getting better test coverage than the obscured goal. There didn’t appear to be a long-term effect, but there is an upcoming ITiCSE talk that will discuss this further. There were some changes from one submission to another but this wasn’t covered in detail.
The authors held formative group interviews where the students explained their development process and interaction with WebCAT. They said that they valued several types of evaluation, they paid attention to RED progress bars (visualisation and dashboarding – I’d argue that this is more about awareness than motivation), and they noticed when they earned a hint but didn’t get it. The students drew their individual development process as a diagram and, while everyone had a unique approach, there were two general patterns. A test-last approach showed up: write a solution, submit it to WebCAT, take a break, do some testing, then submit to WebCAT again. A periodic-testing approach was the other pattern, where students wrote a solution, submitted to WebCAT, wrote tests, submitted to WebCAT, then revised solution and tests, and iterated.
Going forward, the automated evaluation became part of the students’ development strategy. There were conflicting interests: the correctness reports from WebCAT were actually reducing the need for students to write their own tests, because they were getting an indication of how well the code was working. This is an important point for me because, from the examples I saw, I really couldn’t see what I would call test-driven development, especially for test-last, so the framework is not encouraging the right behaviour. Kevin handled my question on this well, because it’s a complicated issue, and I’m really looking forward to seeing the ITiCSE paper follow-up! Behavioural change is difficult and, as Kevin rightly noted, it’s optimistic to think that we can achieve it in the short term.
Everyone wants to get students doing the right thing but it’s a very complicated issue. Much food for thought and a great session!

SIGCSE Day 2, “Focus on K-12: Informal Education, Curriculum and Robots”, Paper 1, 3:45-5:00, (#SIGCSE2014)

The first paper is “They can’t find us: The Search for Informal CS Education” by Betsy DiSalvo, Cecili Reid and Parisa Khanipour Roshan, all from Georgia Tech. (Mark wrote this paper up recently.) There are lots of resources around: MOOCs, on-line systems and tools, Khan Academy and Codecademy and, of course, the aggregators. If all of this is here, why aren’t we getting the equalisation effects we expect?

Well, the wealthy and the resource-aware actually know how to search for and access these, and are more aware of them, so the inequality persists. The marketing strategies are also pointed at this group, rather than targeting those needing educational equity. The cultural values of the audiences vary. (People think Scratch is a toy, rather than a useful and pragmatic real-world tool.) There’s also access – access to technical resources, social support for doing this, and knowledge of the search terms. We can address these issues by researching mechanisms to reach the ignored community.

Children’s access to informal learning is through their parents, so how their parents search makes a big difference. How do they search? The authors set up a booth and asked 16 parents in the group how they would do it. 3 were disqualified for literacy or disability reasons (which is another issue). Only one person found a site that was relevant to CS education. Building from that, what are the search terms that they are using for computer learning, and why aren’t they coming up with good results? The terms that parents use supported this, but the authors also used Google Insights to see what other people were using: the most popular terms for the topic, the environment and the audience. Note: if you search for kids in computer learning you get fewer results than if you search for children in computer learning. The three terms that came up as being best were:

  • kids computer camp
  • kids computer classes
  • kids computer learning

The authors reviewed results across some cities to see if there was variation by location for these search terms. What was the quality of these? 191 out of 840 search results were unique and relevant, an average of 4.5 per search.

(As a note, MAN, does Betsy talk and present quickly. Completely comprehensible and great but really hard to transcribe!)

Results included: camps, after-school programs, camp/after-school combinations, higher education, online activities, online classes/learning, directory results (often worse than Google), and news, videos or social networks (where, again, the quality was lower). Computer camps dominated what you could find in these search results – but camps are not an option for low-income parents at $500/week, so that’s not a really useful resource for them. Some results came up for after-school and higher-ed offerings in the large and midsize cities, but very little in the smaller cities. Unsurprisingly, smaller cities and lower socio-economic groups are not going to be able to find what they need to find; hence the inequality continues. There are many fine tools but NONE of them showed up in the 800+ results.

Without a background in CS or IT, you don’t know that these things exist and hence you can’t find it for your kids. Thus, these open educational resources are less accessible to these people, because they are only accessible through a mechanism that needs extra knowledge. (As a note, the authors only looked at the first two pages because “no-one looks past that”. 🙂 ) Other searches for things like kids maths learning, kids animal learning or kids physics learning turned up 48 out of 80 results (average of 16 unique results per search term), where 31 results were online, 101 had classes at uni – a big difference.

(These studies were carried out before code.org. Running the search again for kids computer learning does turn up code.org. Hooray, there is progress! If the study was run again, how much better would it be?)

We need to take a top down approach to provide standards for keywords and search terms, partnering with formal education and community programs. The MOOCs should talk to the Educational programming community, both could talk to the tutorial community and then we can throw in the Aggregators as well. Distant islands that don’t talk are just making this problem worse.

The bottom-up approach is getting an understanding of low-SES parenting, building communities, and finding out how people search and making sure that we can handle it. Wow! Great talk but I think my head is going to explode!

During question time, someone asked why people aren’t more creative with their searches. This, sadly, misses the point that, sitting in this community, we are empowered and skilled in searching. The whole point is that people outside of our community aren’t guaranteed to be able to find a way to be creative. I guess the first step is the same as for good teaching: putting ourselves in the heads of someone who is a true novice and helping to bring them to a more educated state.


SIGCSE Day 2, “Software Engineering: Courses”, Thursday, 1:45-3:00pm, (#SIGCSE2014)

Regrettably, despite best efforts, I was a bit late getting back from the lunch and I missed the opening session, so my apologies to Andres Neyem, Jose Benedetto and Andres Chacon, the authors of the first paper. From the discussion I heard, their course sounds interesting so I have to read their paper!

The next paper was “Selecting Open Source Software Projects to Teach Software Engineering”, presented by Robert McCartney from the University of Connecticut. The overview covered why we would do this, the characteristics of the students, the projects and the course, finding good projects, what they found, how well it worked and what the conclusions were.

In terms of motivation, most of their SE course is project work, and the current project approach emphasises generative aspects. However, most SE effort involves maintenance and evolution. (Industry SEs regularly tweak and tune, rather than building from the ground up.) The authors wanted to change the focus to software maintenance and evolution, with the students working on an existing system: understanding it, adding enhancements, and implementing, testing and documenting their changes. But if you’re going to do this, where do you get the code from?

There are a lot of open source projects, available online, in a variety of domains and languages and at different stages of development. There should* be a project that fits every group. (*“should” not necessarily valid in this universe.) The students are not actually being embedded in the open source community: the team forks the code and doesn’t plan to reintegrate it. The students themselves are in 2nd and 3rd year, with courses in OO and data structures in Java, and some experience with UML diagrams and Eclipse.

For each team of students, they pick a project from a set, try to understand the code, propose enhancements, describe and document all of their plans, build their enhancements and present the results back. This happens over about 14 weeks. The language is Java and the code size has to be challenging but not impossible (so about 10K lines). The build time had to fit into a day or two of reasonable effort (which seems a little low to me – NF). Ideally, it should be a team-based project, where multiple developers could work in parallel. An initial look at the open source repositories with these criteria revealed a lot of issues: there are not many Java programs around 10K lines, but SourceForge showed promise. Interestingly, there were very few multi-developer projects around 10K lines. Choosing candidate projects located about 1000 candidates, of which 200 actually met the initial size criterion. Having selected some, they added more criteria: projects had to be cool, recent, well documented, modular and buildable (no missing jar files, which turned out to be a big problem). Final number of projects: 19, with a size range of 5.2–11K lines.
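The selection process reads like a filter pipeline. A sketch of the kind of criteria check the authors describe (field names are hypothetical; real repository metadata would need scraping or an API):

```python
def suitable(project):
    """Apply the paper's selection criteria to one project's metadata."""
    return (
        project["language"] == "Java"
        and 5_000 <= project["lines"] <= 12_000   # challenging, not impossible
        and project["recently_active"]            # recent
        and project["has_docs"]                   # well documented
        and project["builds_cleanly"]             # no missing jar files
    )

# Hypothetical candidate metadata, for illustration only:
candidates = [
    {"name": "toy-editor", "language": "Java", "lines": 9_800,
     "recently_active": True, "has_docs": True, "builds_cleanly": True},
    {"name": "huge-app", "language": "Java", "lines": 250_000,
     "recently_active": True, "has_docs": True, "builds_cleanly": True},
]
selected = [p["name"] for p in candidates if suitable(p)]
print(selected)  # only the right-sized, buildable project survives
```

With a ~2% final yield, the expensive part is clearly not this filter but gathering trustworthy metadata (does it actually build?) in the first place.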

That’s not a great figure. The takeaway? If you’re going to try to find projects for students, it’s going to take a while and the final yield is about 2%. Woo. The class ended up picking 16 projects and were able to comprehend the code (with staff help). Most of the enhancements, interestingly, involved GUIs. (That’s not so great, in my opinion; I’d always prefer to see functional additions first and shiny second.)

In concluding, Robert said that it’s possible to find OSS projects but it’s a lot of work. A search capability for OSS repositories would be really nice. Oh – now he’s talking about something else. Here it comes!

Small projects are not built and set up to the same standard as larger projects. They are harder to build, less structured, and have lower-quality documentation, most likely because it’s one person building them and they don’t notice the omissions. The second observation is that running more projects is harder for the staff: the lab supervisor ends up getting hammered. The response in later offerings was to offer fewer but larger projects (better designed and well documented), so the lab supervisor can get away with learning fewer projects. In the next offering, they increased the project size (40–100K lines) and gave the students the build information that was required (it’s frustrating without being amazingly educational). Overall, even with the same projects, teams produced different enhancements but with a lot less stress on the lab instructor.

Rather unfortunately, I had to duck out so I didn’t see Claudia’s final talk! I’ll write it up as a separate post later. (Claudia, you should probably re-present it at home. 🙂 )


SIGCSE, day 2, Le déjeuner des internationaux – the internationals’ lunch (#SIGCSE2014)

We had a lunch for the international contingent at SIGCSE, organised by Annemieke Craig from Deakin and Catherine Lang from Latrobe (late of Swinburne). There are apparently about 80 internationals here and we had about 24 at the lunch. Australians were over-represented but there were a lot of familiar faces and that’s always nice in a group of 1300 people.

Lots of fun and just one more benefit of a good conference. The group toasted Claudia Szabo’s success with the Best Paper award, again. We’re still having a lot of fun with that.


SIGCSE Day 2, Assessment and Evaluation Session, Friday 10:45-12:00, (#SIGCSE2014)

The session opened with a talk on the “Importance of Early Performance in CS1: Two Conflicting Assessment Stories” by Leo Porter and Daniel Zingaro. Frequent readers will know that I published a paper in ICER 2012 on the impact of early assignment submission behaviour on later activity so I was looking forward to seeing what the conflict was. This was, apparently, supposed to be a single story but, like much research, it suddenly turned out that there were two different stories.

In early-term performance, do you notice students falling into a small set of performance groups? Does it feel like you can predict the results? (Shout out to Ahadi and Lister’s “Geek genes, prior knowledge, stumbling points and learning edge momentum: parts of the one elephant?” from ICER 2013!) Is there a truly bimodal distribution of ability? The results don’t match a neat bell curve. (I’m sure a number of readers want to wait and see where this goes.)

Why? Well, the Geek Gene theory is that there is innate and immutable talent that you either have or you don’t. (The author didn’t agree with this, and the research supports him, by the way.) The next possibility is a stumbling block, where you misunderstand something critical. The final possibility is learning edge momentum (LEM), where you build knowledge incrementally and early mistakes cascade.

In evaluating these theories, the current approach is to look over a limited number of assessments, but it’s hard to know what happened in between. We need more data! Leo uses Peer Instruction (PI) a lot, so he has a lot of clicker question data to draw on. (Leo gave a quick background on PI but you can look that up. 🙂 ) The authors have some studies on the correlation between individual votes and group votes.

The study was run over a CS1 course in Python with 126 students, with 34 PI sessions over 12 weeks and 8 prac lab sessions. The instructor was experienced in PI and the material. Components for analysis included the standard assessments (midterm and finals), in-class PI for the last two weeks, and the PI results per student, averaged bi-weekly to reduce noise (because students might be absent and are graded on participation).

(I was slightly surprised to see that more than 20% of the students had scored 100% on the midterm!) The final was harder, but it was hard to see the modalities in the histograms. Comparing this with the PI results from the last two weeks of the course: this isn’t bimodal either and looks very different. The next step was to use the weekly assessments to see how they would predict the last two weeks, and that requires a correlation. The Geek Gene theory should show a strong correlation early and no change thereafter. Stumbling block should see a strong correlation somewhat early and then no change. Lastly, for LEM, a strong correlation somewhat early, then no change – again. These predictions are not really that easy to distinguish.

The results were interesting. Weeks 1–2 don’t correlate much at all, but from weeks 3–4 onwards the correlation is roughly 40% and it doesn’t get better. Looking at the final exam’s correlation with the week 11/12 PI scores, the correlation is over 60% (growing steadily from weeks 3–4). Let’s look at the exam content (analyse the test) – where did the content fall? 54% of the questions target the first weeks, 46% target the latter half. Buuuuuuut the later questions were more conceptually rich – and this revealed a strong bias towards the first half of the class (87%) and only 13% for the later material. The early test indicators were valid because the exam is mostly testing the early section! The PI in weeks 11 and 12 was actually 50/50 first half and second half, so no wonder that correlated!
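For readers who want the mechanics: the analysis hinges on Pearson correlation between per-student biweekly PI averages and exam scores. A sketch with made-up numbers (not the study’s data):

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical per-student values, for illustration only:
pi_weeks_3_4 = [0.55, 0.70, 0.40, 0.90, 0.60]   # biweekly PI averages
final_exam   = [52,   68,   45,   88,   61]     # exam percentages

r = pearson(pi_weeks_3_4, final_exam)
print(round(r, 2))  # strongly positive for this toy data
```

The paper’s point is that a correlation like this is only meaningful once you also analyse *what the exam tests*: if the exam content leans heavily on early-term material, an early predictor will correlate well almost by construction.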

Threats to validity? Well, the data was noisy and participation was variable. The PI questions are concept tests, focused on a single concept, and may not actually reflect writing code. There were different forms of assessment. The PI itself may actually change student performance, because students generally do better in PI courses. So what does all this mean?

Well, the final exam correlation supports stumbling block and LEM, but the week 11 and 12 results are different! The final exam story isn’t ideal, but the week 11/12 improvements are promising. We’re addicted to this kind of assessment, and student performance early in term will predict assessment based on that material, but PI is more generally useful.

It’s interesting to know that there were no actual MCQs on the final exam.

The next talk was “Reinventing homework as a cooperative, formative assessment” by Don Blaheta. There are a couple of problems in teaching: the students need practice and the students need feedback. In reinventing homework, the big problems are that grading is a lot of work and matching comments to grades and rubrics is hard, there’s a delay for feedback, it’s not group work (and solitary work isn’t the best for all students), and a lot of the students don’t read the comments anyway. (My ears pricked up – this is very similar to the work I was presenting on.)

There’s existing work on automation: off-the-shelf programming and testing systems and online suites, with immediate feedback. But some things just can’t be auto-graded and we have to come back to manual marking. Diagrams can’t be automarked.

To deal with this, the author tried “work together, write alone”, but there is confusion about what is and isn’t acceptable as collaboration – and the lecturer ends up grading the same thing three times. What about revising previous work? It’s great for learning, but students may not have budgeted any time for it and some will be happy with a lower mark. There’s the issue of apathy, and it increases the workload.

How can we package these ideas together to get them to work better? We can make the homework group work. The next idea is a revision cycle, where an early (ungraded) version is handed back with comments – a limited-scale response of correct, substantial understanding, or little or no understanding. (Then homework is relatively low stakes.) Other mechanisms include comments, no grades; grades, no comments; and the limited scale. (Comments with no grades should make them look at the comments – with any luck.) Don’t forget that revision increases workload where everything else theoretically decreases it! Comments identify higher-order problems and marks are not handed back to students. The limited scale now reduces marking overhead and can mark improvement rather than absolutes. (And the author referred to my talk from yesterday, which startled me quite a lot, but it’s nice to see! Thanks, Don!)

It’s possible to manage the group, which is self-policing. Very interestingly, the “free rider” problem rears its ugly head: some groups did divide the task but moved to a full-group model after initially splitting up the work. Grades could swing and students might not respond positively.

In the outcomes, while the n is small, he doesn’t see a high homework mark correlated with a low exam average, which would be the expected indicator of the “free rider” or “plagiarist” effect. So, nothing significant, but an indication that things are on the right track. Looking at class participation, students are working in different ways, but overall the effect is positive. (The students liked it, but you know my thoughts on that. 🙂 ) Increased cooperation is a great outcome, as is making revisions to existing code.

The final talk was on “Evaluating an Inverted CS1”, presented by Jennifer Campbell from the University of Toronto. Their CS1 is a 12-week course with 3 lecture hours and a 2-hour lab per week, taught in Python with an objects-early, classes-late approach. Lecture size is 130–150 students, mostly 1st years with some higher years and some non-CS. Typical lab sizes are 30 students with one TA.

The inverted classroom is also known as the flipped classroom: resources are made available and materials are completed before the students show up, and the face-to-face time is used for activities. Before the lecture, students watch videos featuring two instructors, with screencasts and some embedded quizzes (about 15 questions), worth 0.5% per week. In class, the students work on exercises on paper, solo or in pairs; the exercises were not handed in or for credit; staffing was the instructor plus 1 TA per 100 enrolled students. (There was an early indicator of possible poor attendance in class, because the ratio in reality was higher than that.) Most weeks, the number of lecture hours was reduced from three to two.

In coursework, there were nine two-hour labs, some lecture prep, some auto-graded programming assignments, two larger TA-graded programming assignments, one 50-minute midterm and a three-hour final exam.

How did it go? Pre- and post-course surveys on paper, covering demographics, interest in pursuing a CS program, interest in CS1, enthusiasm, difficulty, time spent and more. (Part of me thinks that these things are better tracked by looking at later enrolments in the course or degree transfers.) Weekly lecture attendance counts and enrolment were tracked, along with the standard university course evaluation.

There was a traditional environment available for comparison, from a previous offering, so they had collected all of that data. (If you’re going to make a change, establish a baseline first.) Sadly, the baselines were different for the different terms so comparison wasn’t as easy.

The results? Across their population, 76% of students were not intending to pursue CS, 62% had no prior programming experience, and 53% were women! I was slightly surprised that traditional lecture attendance was overall higher, with a much steeper decline early on in the inverted offering. For students who completed the course, the average mark for prep work was 81%, so the students were preparing the material but then not attending the lecture. Hmm. This came out again in the ‘helpfulness’ graphs, where the online materials outscored the in-lecture activities. But the traditional lecture still outscored both – which makes me think this is a hearts-and-minds problem combined with some possible problems in the face-to-face activities. (Getting f2f right for flipped classes is hard and I sympathise entirely if this is a start-up issue.)

For those people who responded to both the pre and post survey, enthusiasm increased – but the survey was done on paper and we already know that there was a drop in attendance, so this has a bias; the online university surveys also backed it up, though. In terms of perceptions of difficulty and time, women found it harder and more time-consuming. What was more surprising is that prior programming experience did not correlate with difficulty or time spent.

Outcomes? The drop rate was comparable to past offerings, with 25% of students dropping the course. The pass rates were also comparable, at 86%, and there was no significant difference in performance on three “standard” exam questions. The students who were still attending at the end wanted more of these types of course, not really surprisingly.

Lessons learned – there was a lot learnt! In terms of resources, video preparation took ~600 hours and development of in-class exercises took ~130 hours. The extra TA support cost money and, despite trying to make the load easier, two lecture hours per week were too few. (They’ve now reverted to three hours; most weekly two-hour labs are replaced with online exercises and a TA drop-in help centre, which allows them to use the same TA resources as a traditional offering.) In terms of lecture delivery, the in-class exercises on paper were valuable test preparation. There was no review of the lecture material that had been pre-delivered (which is always our approach, by the way) so occasionally students had difficulty getting started. However, they do now start each lecture with a short worked example to prime the students on the material that they have seen before. (It’s really nice to see this because we’re doing almost exactly the same thing in our new Object Oriented Programming course!) They’ve now introduced a weekly online exercise to let students gauge whether they should be coming to class, but lecture attendance is still lower than for the traditional course.

The take-away is that the initial resource cost is pretty big but you then get to re-use it on more than one occasion, a pretty common result. They’re on their third offering, having made ongoing changes. A follow-up study of the second offering will be presented as Horton et al., “Comparing Outcomes in Inverted and Traditional CS1”, which will appear in ITiCSE 2014.

They haven’t had the chance to ask the students why they’re not coming to the lectures but that would be very interesting to find out. A good talk to finish on!


SIGCSE Day 2, Keynote 2, “Transforming US Education with Computer Science”, (#SIGCSE2014)

Today’s keynote, “Transforming US Education with Computer Science”, is being given by Hadi Partovi from Code.org. (Claudia and I already have our Code.org swag stickers.)

There are 1257 registered attendees so far, which gives you some idea of the scale of SIGCSE. This room is pretty full and it’s got a great vibe. (Yeah, yeah, I know, ‘vibe’. If that’s the worst phrase I use today, consider yourself lucky, D00dz.) The introductory talk included a discussion of the SIGCSE Special Projects small grant program (up to US$5,000). They have two rounds a year so go to SIGCSE’s website and follow the links to see more. (Someone remind me that it’s daylight saving time on Sunday morning, the dreaded Spring forward, so that I don’t miss my flight!)

SIGCSE 2015 is going to be in Kansas City, by the way, and I’ve heard great things about KC BBQ – and they have a replica of the Arc de Triomphe so… yes. (For those who don’t know, Kansas City is in Missouri. It’s named after the river which flows through it, which is named after the local Kansa tribe. Or that’s what this page says. I say it’s just contrariness.) I’ve never been to Missouri, or Kansas for that matter, so I could tick off two states in the one trip… of course, then I’d have to go to Topeka, well, just because, but you know that I love driving.

We started the actual keynote with the Hour of Code advertising movie. I did some of the Hour of Code stuff from the iOS app and found it interesting (I’m probably being a little over-critical in that half-hearted endorsement. It’s a great idea. Chill out, Nick!)

Hadi started off referring to last year’s keynote, which questioned the value of code.org, which started as a hobby. He decided to build a larger organisation to try and realise the potential of transforming the untapped resource into a large crop of new computer scientists.

Who or what is Code.org?

  • A marketing organisation to make videos with celebrities?
  • A coalition of tech companies looking for employees?
  • A political advocacy group of educators and technologists?
  • Hour of code organisers?
  • An SE house that makes tutorials?
  • Curriculum organisers?
  • PD organisation?
  • Grass roots movement?

It’s all of the above. Their vision is that every school should teach CS to every student, or at least give them the opportunity. Why CS? Three reasons: the job gap, under-represented students, and the fact that CS is foundational for every student in the 21st Century. Every job uses it.

Some common myths about code.org:

  • It’s all hype and Hour of Code – actually, there are many employees and 15 of them are here today.
  • They want to go it alone – they have about 100 partners who are working with them.
  • They are only about coding and learning to code – (well, the name doesn’t help) they’re actually about teaching fundamentals of Computer Science
  • This is about the software industry coming in to tell schools how to do their jobs – no, software firms fund it but they don’t run the org, which is focused on education, down to the pre-school level

Hmm, the word “disrupt” has now been used. I don’t regard myself as a disruptive innovator, I’m more of a seductive innovator – make something awesome and you’ll seduce people across to it, without having to set fire to anything. (That’s just me, though.)

Principle goals of Code.org start with “Educate K-12 students in CS throughout the US”. That’s their biggest job. (No surprise!) Next one is to Advocate to remove legislative barriers and the final pillar is to Celebrate CS and change perceptions.

Summary of the first year – Hour of Code: 28 million students in 35,000 classrooms with 48% girls (applause from the audience), in 30 languages over 170 countries. 97% positive ratings of the teacher experience versus 0.2% negative. In their 20-hour K-8 Intro Course, 800,000 students in 13,000 classrooms, 40% girls. In school district partnerships they have 23 districts with PD workshops for about 500 teachers for K-12. In their state advocacy role, they’ve changed policy in 5 states. Their team is still pretty lean, with only 20 people, but they’re working pretty hard with partnerships across industry, nonprofit and government. Hadi also greatly appreciated the efforts of the teachers who had put in the extra work to make this all happen in the classroom.

They’re working on a full curriculum with 20-hour modules all the way up to middle school, aligned with the Common Core. From high school up, they go into semester courses. These courses are Computer Science or leverage CS to teach other things, like maths. (Obviously, my ears pricked up because of our project with the Digital Technologies National Curriculum project in Australia.)

The models of growth include an online model, direct to teachers, students and parents (crucial), fuelled by viral marketing, word-of-mouth, volunteers, some A/B testing, best fit for elementary school and cost effectiveness. (On the A/B testing side, there was a huge difference in responses between a button labelled “Start” and a button labelled “Get started”. Start is much more successful! Who knew?) Attacking the problem earlier, it’s easy to get more stuff into the earlier years because they are less constrained in requirements to teach specific content.
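On that A/B testing aside: deciding whether a label difference like “Start” versus “Get started” is real is a standard two-proportion z-test. Here's a hedged sketch; the click counts are entirely made up, since Code.org's actual numbers weren't given.

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-test: is variant A's conversion rate different from B's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided, via normal tail
    return z, p_value

# Invented counts: "Start" clicked 620/1000 times, "Get started" 540/1000
z, p = two_proportion_z(620, 1000, 540, 1000)
```

With a gap that size the test comes back significant, which is the kind of evidence behind “Start is much more successful”.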

The second model of growth is in district partnerships, where the district provides teachers, classrooms and computers. Code.org provides stipends, curriculum and marketing. Managing costs for scale requires them to aim for US$5-10K per high school, which isn’t 5c but is manageable.

The final option for growth is about certification exams, incentives, scholarships and schools of Ed.

Hadi went on to discuss the curriculum, based on Blockly, modified and extended. His thoughts on blended learning were that they achieved making learning feel like a game (the ability to code Angry Birds is one of the extensions they developed for Blockly). On-line and blended learning also make a positive difference to teachers. On-line resources most definitely don’t have to remove teachers; instead, done properly, they support teachers in their ongoing job. Another good thing is to make everything web-based and cross-browser, which reduces the local IT hassle for CS teachers. Rather than having to install everything locally, you can just run it over the web. (Anyone who has ever had to run a lab knows the problem I’m talking about. If you don’t know, go and hug your sys admin.) But they still have a lot to learn about bridging game design and traditional curriculum; however, they have a lot of collaborations going on. Evaluation is, as always, tricky and may combine traditional evaluation and large-scale web analytics. But there are amazing new opportunities because of the wealth of data and the usage patterns available.

He then showed three demos, which are available on-line: “Building New Tutorial Levels”, new tutorials that show you how to create puzzles rather than just levels through the addition of event handling (with Flappy Bird as the example), and a final tutorial on giving hints to students. (Shout-outs to all of the clear labelling of subgoals and step achievement…) That last point is great because you can say “You’re using all the pieces but in the wrong way” with enough detail to guide a student, adding a hint for a specific error. There are about 11,000,000 submissions for providing feedback on code – 2,000,000 correct, 9,000,000 erroneous. (Code.org/hints)
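At its simplest, hint-for-a-specific-error matching could look like the sketch below. The patterns and messages are my own invention, purely to show the shape of the idea; the real system matches against a database built from those millions of submissions.

```python
# Hypothetical map from a known erroneous block sequence to a targeted hint.
HINTS = {
    "moveForward(); moveForward(); turnLeft();":
        "You're using all the right blocks, but check the order of your turns.",
    "turnLeft(); turnLeft();":
        "Two left turns make a U-turn. Did you mean to turn right?",
}

def hint_for(submission: str) -> str:
    """Return a targeted hint if the submission contains a known error pattern."""
    for pattern, hint in HINTS.items():
        if pattern in submission:
            return hint
    return "No specific hint available. Keep experimenting!"
```

The point from the talk is that a hint tied to a specific mistake beats a generic “try again”.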

So how can you help Code.org?

If you’re in a Uni, bring a CS Principles course to the Uni and partner with your school of Ed to bring more CS into the Ed program (ideally a teaching methods course). Finally, help code.org scale by offering K-5 workshops for them. You can e-mail univ@code.org if you’re interested. (Don’t know if this applies in Australia. Will check.) This idea is about 5 weeks old so write in but don’t expect immediate action, they’re still working it out.

If you’re just anyone, Uni or not? Convince your school district to teach CS. Code.org will move to your region if 30+ high schools are on board. Plus you can leap in and give feedback on the curriculum or add hints to their database. There are roughly a million students a week doing Hour of Code stuff so there’s a big resource out there.

Hadi moved on to the Advocate pillar. Their overall vision is that CS is foundational – a core offering in every school rather than a vocational specialisation for a small community. The broad approach is to change state policy. (A colleague near me muttered “Be careful what you wish for” because that kind of widespread success would swamp us if we weren’t prepared. Always prepare for outrageous success!)

At the national level, there is a CS Education Act with bi-partisan sponsors in both houses, to support STEM funding being used for CS, currently before the House. In the NCAA, there’s a new policy published from an idea spawned at SIGCSE, apparently by Mark! CS can now count towards an NCAA scholarship, which is great progress. At the state level, the aim is to allow CS to satisfy existing high school math/science graduation requirements, but this has to be finalised with a matching requirement for universities to allow CS to meet their math/science entry requirements as well! In states where CS counts, CS enrolment is 50% higher (Calc numbers are unchanged), with 37% more minority representation. The number of states with recent policy changes is small but growing. Basically, you can help. Contact Code.org if your state or district has issues recognising CS. There’s also a petition on the code.org site which is state-specific for the US, which you can check out if you want to help. (The petition is to seek recognition that everyone in the US should have the opportunity to learn Computer Science.)

Finally, on the Celebrate pillar, they’ve come a long way from one cool video to the Hour of Code. Tumblr took 3.5 years to reach 15,000,000 users, Facebook took 3 years; the Hour of Code took 5 days, which is very rapid adoption. More girls participated in CS in US schools in one week than in the previous 70 years. (Hooray!) And they’re doing it again in CSEd Week from December 8-14. Their goal is to get 100 million students to try the Hour of Code. See if you can get it on the calendar now – and advertise with swag. 🙂

In closing, Hadi believes that CS is at an incredible inflection point, with lots of opportunities, so now is the time to try stuff or, if it didn’t work before, to try it again, because there’s a lot of momentum and it’s a lot easier to do now. We have large and growing numbers. When we work together towards a shared goal, anything is possible.

Great talk, thanks, Hadi!


SIGCSE 2014 Day 2, About to start, (#SIGCSE2014)

The hall’s starting to fill up as everyone gets ready for the second keynote. Good to see so many people are still here, although how many people will be here for my workshop on Saturday afternoon is probably another matter!


SIGCSE 2014: Collecting and Analysing Student Data 1, Paper 3, Thursday 3:15 – 5:00pm (#SIGCSE2014)

Ok, this is the last paper, with any luck I can summarise it in four words so you don’t die of work poisoning. The final talk was “Using CodeBrowser to Seek Difference Between Novice Programmers” by Kenny Heinonen, Kasper Hirvikoski, Matti Luukkainen, and Arto Vihavainen, from the University of Helsinki. I regret to say that due to some battery issues, this blog is probably going to be cut short. My apologies to the speakers!

The takeaway from the talk is that CodeBrowser is a fancy tool for identifying challenges that students face as they are learning to program. It uses your snapshot data and, if you have lots of students, course outcomes and other measures should be used to find a small number of students to analyse first. (Oh, and penguins are cool.)

Helsinki has hundreds of local students and thousands of MOOC participants learning to program, with student progress recorded as they learn. The system is built on top of NetBeans and provides scaffolding for students as they learn to program. Ok, so we’re recording the students’ progress, but so what? Well, we have snapshots with time and source, and we can use these to identify students at risk of dropping CS1 and a parallel maths course. (Retention and early drop-out? Subjects close to my heart!) It can also be used to seek insight into the problems that students are facing. There are not a great many systems that allow you to analyse and visualise code snapshots, apparently.
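As a toy illustration of how snapshot timestamps might feed an early-warning flag – my own sketch, not CodeBrowser's actual method, and the thresholds are invented:

```python
from datetime import datetime, timedelta

def at_risk(snapshots, now, min_weekly=5, stale_days=7):
    """Crudely flag a student whose snapshot activity has stalled.

    snapshots: list of datetime stamps for one student's saved snapshots.
    """
    week_ago = now - timedelta(days=7)
    recent = [t for t in snapshots if t >= week_ago]      # activity this week
    last = max(snapshots) if snapshots else None
    stalled = last is None or (now - last).days >= stale_days
    return len(recent) < min_weekly or stalled
```

Real at-risk modelling would combine this with course outcomes and other measures, as the talk suggested, rather than activity counts alone.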

Looks interesting, I’ll have to go and check it out!

Sorry, battery is going, committing before this all goes away!


SIGCSE 2014: Collecting and Analysing Student Data 1, Paper 2, Thursday 3:15 – 5:00pm (#SIGCSE2014)

Whoo! I nearly burnt out a digit writing up the first talk but it’s a subject close to my heart. I’ll try to be a little more terse for these next two talks.

The second talk in this session was “Blackbox: A Large Scale Repository of Novice Programmers’ Activity” by the amazing Blackbox team at Kent, Neil Brown, Michael Kölling, Davin McCall, and Ian Utting. The Blackbox data is the anonymised student data from students coding into the BlueJ Java programming environment. It’s a rich source of information on how students code and Mark and I have been scheming to do something with the Blackbox data for some time. With Ian and Neil here, it’s a good opportunity to steal their brains. I tried to get Ian to agree to doing all the work but it turns out that he’s been in the game long enough to not say “yes” when someone asks him to without context. (Maybe it’s just me.)

Michael was presenting, with some help from Neil, and reviewed the relationship between Blackbox and BlueJ. BlueJ is an educational programming environment for CS education using Java, dating back to the original Blue in 1996. (For those who don’t know, that’s old for this kind of thing. We should throw it a party.) BlueJ is a graphically operated development environment so novice programmers can drag things out to build programs. It’s a well-established and widely used environment.

(Hey, that means BlueJ is 18. Someone buy BlueJ a beer.)

BlueJ had about 2,000,000 users in 2013, who use it for about three months and then move on (it’s not a production tool, it’s a learning environment). The idea of Blackbox came out of SIGCSE sessions about three years ago where some research questions were raised: nice set-ups, good designs, but really small student groups. One of our common problems is having enough students to actually do a big study and, frankly, all of us are curious about how students code. (It’s really hard to tell this from the final program, trust me.) So BlueJ has lots of users – can we look at their data and then share this with people?

Of course, the first question is “what do we collect?” Normally, we’d collect what we need to answer a research question but this data was going to be used to support lots of different (and currently unasked) research questions. The community was consulted at SIGCSE in 2012 but there has been an evolution of this over time. There are a lot of things collected – go and look at them in the paper because Michael flicked past that slide! 🙂

From an ethical standpoint, participation is an explicit decision made by the student to have their data collected or not. (This does raise the spectre of bias, especially as all the students must be over 16 for legal reasons.) So it’s opt in and THEN anonymised just to make it totally tasty from an ethical perspective.

Session data is collected for each session: start time, end time, project, path and userID (centrally anonymised for tracking).
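To make that concrete, here's a guess at what one such session record might look like as a data structure. The field names are my own, based only on the fields mentioned in the talk, not Blackbox's actual schema.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Session:
    """One Blackbox-style session record (hypothetical field names)."""
    user_id: str        # centrally anonymised identifier, for tracking
    project: str
    path: str           # stripped/anonymised where possible
    start_time: datetime
    end_time: datetime

    @property
    def duration_minutes(self) -> float:
        return (self.end_time - self.start_time).total_seconds() / 60
```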

So much for keeping it short, hey? Here’s a quick picture to give you a break.

[Image: dog-ate-my-homework]

Other things that can be captured are object creation and invocation among many other useful measures.  For me, the fact that you can see how and when students are testing is fascinating, as it allows us to evaluate the whole expectation, observation and reflection scientific cycle in action.

The Blackbox project has already been running for 9 months. The opt-in rate is 40% (higher than I or anyone else expected). This means that there’s data from 250,000 users, recording roughly 11 events per second, over more than 1,000,000 projects and 20,000,000 compilations. What a fantastic resource! Michael then handed over to Neil to talk about the challenges.

Neil talked about tracking users, starting from the problem that one machine profile does not necessarily correspond to one user. Another problem is anonymisation: stripping project paths and the code where possible. You can’t guarantee anonymisation, because people sometimes use their own names as variable or class names, but they do what they can. It’s made clear on opt-in what’s going to happen. Data integrity is another challenge. Is it complete? No – it’s client-side and there’s no guarantee of completeness, or even connectivity. But the data that you do have for each session is consistent. So the data is consistent but not complete. If you want, locally, you can tie your local data to the Blackbox data that they hold on your students, but then ethics becomes your problem. This can be done with Experiment and Participant Identifiers as part of the set-up, so your students can be grouped. More example mini-analyses are in the paper.

Looking at error frequency, Neil talked about certain errors and how their frequency changed over the weeks of 2013 (“semicolon expected”, “unknown variable”). Over time, the syntax errors decreased (suggesting a learning effect) but others stayed more constant.
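The per-week analysis Neil described might look something like this sketch. The data and numbers are invented; only the shape of the analysis – each error's share of the week's total – comes from the talk.

```python
from collections import Counter, defaultdict

def weekly_error_shares(events):
    """events: iterable of (week_number, error_name) pairs.

    Returns {week: {error_name: share_of_that_week's_errors}}.
    """
    by_week = defaultdict(Counter)
    for week, error in events:
        by_week[week][error] += 1
    return {week: {err: n / sum(counts.values()) for err, n in counts.items()}
            for week, counts in by_week.items()}

# Invented toy data: "semicolon expected" falls from 60% of errors in
# week 1 to a third by week 10 - the learning-effect pattern from the talk.
events = ([(1, "semicolon expected")] * 60 + [(1, "unknown variable")] * 40
          + [(10, "semicolon expected")] * 20 + [(10, "unknown variable")] * 40)
shares = weekly_error_shares(events)
```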

The data is not completely open: you need to request access as a researcher and sign a privacy and access restriction agreement. Students need not apply! There’s a SIGCSE workshop on this on Saturday but I can’t go as my Puzzle Based Learning workshop is on at the same time. Great resource, go and check it out!

The final talk was “Using CodeBrowser to Seek Difference Between Novice Programmers” by Kenny Heinonen, Kasper Hirvikoski, Matti Luukkainen, and Arto Vihavainen, University of Helsinki.


SIGCSE 2014: Collecting and Analysing Student Data 1, “AP CS Data”, Thursday 3:15 – 5:00pm (#SIGCSE2014)

The first paper, “Measuring Demographics and Performance in Computer Science Education at a Nationwide Scale using AP CS Data”, from Barbara Ericson and Mark Guzdial, has been mentioned in these hallowed pages before, as well as on Mark’s blog (understandably). Barb’s media commitments have (fortunately) slowed down but it was great to see so many people and the media taking the issue of under-representation in AP CS seriously for a change. Mark presented and introduced the Advanced Placement CS program, which is the only nationwide measure of CS education in the US. This allows us to use AP CS to compare with other AP exams, and to find out who is taking the AP CS exams and how well they perform. Looking longitudinally, how has this changed and what influences exam-taking? (There’s been an injection of funds into some states – did this work?)

The APs are exams you can take while in secondary school that give you college credit or placement in college (similar to the A levels, as Mark put it). There’s an audit process of the materials before a school can get accreditation. The AP exam is scored 1-5, where 3 is passing. The overall stats are a bit worrying, with Black, Hispanic and female students grossly under-represented. Look at AP Calculus and this really isn’t as true for female students, and there is better representation for Black and Hispanic students. (CS has 4% Black and 7.7% Hispanic students when America’s population is 13.1% Black and 16.9% Hispanic.) The pass rates for AP Calculus are about the same as for AP CS, so what’s happening?

Looking at a (very cool) diagram, you see that AP overall is female-heavy – CS is a teeny, tiny dot, the most male-dominated area, and 1/10th the size of Calculus. Comparing AP CS to the others, there has been steady growth since 1997 in the Calculus, Biology, Stats, Physics, Chem and Env Science AP exams – but CS is a flat, sunken pancake that hasn’t grown much at all. Mark then analysed the data by state, counting the number of states in each category along features such as schools passing the audit per 10K population, exams per population and % passing exams. Mark then moved onto diversity data: female, Black and Hispanic test takers. It’s worth noting that Jill Pala made the difference to the entire state she taught in, raising the number of women. Go, Jill! (And she asked a really good question in my talk, thanks again, Jill!)

How has this changed over time? California and Maryland have seen really rapid growth in exam takers over the last 6 years, with NSF involvement. But Michigan and Indiana have seen much less improvement. In Georgia, there’s overall improvement, mostly from women and Hispanic students, but not as much for Black students. The NSF funding appears to have paid off: GA and MA have improved over the last 6 years, but female test takers have still not exceeded 25% in that time.

Why? What influences exam taking?

  1. The wealth in the state influences the number of schools passing the audit.
  2. Most of the variance in the states comes from under-representation in certain groups.

It’s hard to add wealth, but if you want more exam takers, increase the representation of your under-represented groups! That’s the difference between the states.

Conclusions? It’s hard to compare things most of the time and the AP CS is the best national pulse we have right now. Efforts to improve are having an effect but wealth matters, as in the rest of education.

All delivered at a VERY high speed but completely comprehensible – I think Mark was trying to see how fast I can blog!