SIGCSE 2013: The Revolution Will Be Televised, Perspectives on MOOC Education

Long time between posts, I realise, but I got really, really unwell in Colorado and am still recovering. I attended a lot of interesting sessions at SIGCSE 2013, and hopefully gave at least one of them, but the first I wanted to comment on was a panel with Mehran Sahami, Nick Parlante, Fred Martin and Mark Guzdial, entitled “The Revolution Will Be Televised, Perspectives on MOOC Education”. This is, obviously, a very open area for debate and the panellists provided a range of views and a lot of information.

Mehran started by reminding the audience that we’ve had on-line and correspondence courses for some time, with MIT’s OpenCourseWare (OCW) streaming video from the 1990s and Stanford Engineering Everywhere (SEE) starting in 2008. The SEE lectures were interesting because viewership follows a power-law relationship: the final lecture has only 5-10% of the views of the first lecture. These video lectures were being used well beyond Stanford, augmenting AP courses in the US and providing entire lecture series in other countries. The videos also increased engagement, and the requests that came in weren’t just about the course but were more general – having a face and a name on the screen gave people someone to interact with. From Mehran’s perspective, the challenges were: certification and credit, increasing the richness of automated evaluation, validated peer evaluation, and personalisation (or, as he put it, in reality mass customisation).

Nick Parlante spoke next, as an unashamed MOOC optimist, who argued that all the best world-changing inventions are cheap, like the printing press, Arabic numerals and high-quality digital music. These great ideas spread and change the world. However, he did state that he considered artisanal and MOOC education to be very different: artisanal education is bespoke, high quality and high cost, where MOOCs are interesting for their massive scale and, while they could never replace artisanal education, they could provide education to those who could not get access to it.

It was at this point that I started to twitch, because I have heard and seen this argument before – the notion that a MOOC is better than nothing, if you can’t get artisanal education. The subtext that I, fairly or not, hear at this point is the implicit statement that we will never be able to give high-quality education to everybody. By having a MOOC, we no longer have to say “you will not be educated”; we can say “you will receive some form of education”. What I rarely hear at this point is a well-structured and quantified argument on exactly how much quality slippage we’re tolerating here – how educational is the alternative education?

Nick also raised the well-known problems of cheating (which is rampant in MOOCs even before large-scale fee-paying has been introduced) and credentialling. His section of the talk was long on optimism and positivity but rather light on statistics, completion rates, and the kind of evidence that we’re all waiting to see. Nick was quite optimistic about our future employment prospects but I suspect he was speaking on behalf of those of us in “high-end” old-school schools.

I had a lot of issues with what Nick said but a fair bit of it stemmed from his examples: the printing press and digital music. The printing press is an amazing piece of technology for replicating a written text and, as replication and distribution go, there’s no doubt that it changed the world – but does it guarantee quality? No. The top 10 books sold in 2012 were either Twilight-derived sadomasochism (Fifty Shades of Unnecessary) or related to The Hunger Games. Most of the work the printing presses were doing in 2012 was not for Thoreau, Atwood, Byatt, Dickens, Borges or even Cormac McCarthy. No, the amazing distribution mechanism was turning out copy after copy of what could, generously, be called popular fiction. But even that’s not my point. Even if the printing presses turned out only “the great writers”, it would be no guarantee of an increase in the ability to write quality works in the reading populace, because reading and writing are different things. You don’t have to read much into constructivism to realise how much difference it makes when someone puts things together for themselves, actively, rather than passively sitting through a non-interactive presentation. Some of us can learn purely from books but, obviously, not all of us and, more importantly, most of us don’t find it trivial. So, not only does the printing press not guarantee that everything that gets printed is good, but even where something good does get printed, it does not intrinsically demonstrate how you can take that goodness and apply it to your own works. (Why else would there be books on how to write?) If we could do that, reliably and spontaneously, then a library of great writers would be all you needed to replace every English writing course and editor in the world.

A similar argument applies to the digital reproduction of music. Yes, it’s cheap and, yes, it’s easy. However, listening to music does not teach you how to write music or perform on a given instrument, unless you happen to be one of the few people who can pick up music and instrumentation with little guidance. There are so few of the latter that we call them prodigies – it’s not a stable model for even the majority of our gifted students, let alone the main body.

Fred Martin spoke next and reminded us all that weaker learners just don’t do well in the less-scaffolded MOOC environment. He had used MOOCs in a flipped classroom, with small class sizes, supervision and lots of individual discussion. As part of this blended experience, it worked. Fred really wanted some honest figures on who was starting and completing MOOCs and was really keen that, if we were to do this, we strive for the same quality, rather than accepting that MOOCs weren’t as good and that it was OK to offer this second-tier solution to certain groups.

Mark Guzdial then rounded out the panel and stressed the role of MOOCs as part of a diverse set of resources, but, if we are going to use them that way, then we have to measure and report on how things have gone. MOOC results, right now, are interesting but fundamentally anecdotal and unverified. Therefore, it is too soon to jump into MOOCs because we don’t yet know if they will work. Mark also noted that MOOCs are not supporting diversity yet and, from any number of sources, we know that many-to-one (the MOOC model) is just not as good as one-to-one. We’re really not clear if and how MOOCs are working, given how many of the people who do complete them already hold degrees and, even then, actual participation in on-line discussion is so low that these experienced learners aren’t even talking to each other very much.

It was an interesting discussion, conducted with a great deal of mutual respect and humour, but I couldn’t agree more with Fred and Mark – we haven’t measured things enough and, despite Nick’s optimism, there are too many unanswered questions to leap in, especially if we’re going to make hard-to-reverse changes to staffing and infrastructure. It takes 20 years to train a Professor and, if you have one that can teach, they can be expensive and hard to maintain (with tongue firmly lodged in cheek, here). Getting rid of one because we have a promising but untested new technology may save us money in the short term but, if we haven’t validated the educational value or confirmed that we have set the right level of quality, a few years from now we might discover that we got rid of the wrong people at the wrong time. What happens then? I can turn off a MOOC with a few keystrokes but I can’t bring back all of my seasoned teachers in a timeframe of less than years, if not decades.

I’m with Mark – the resource promise of MOOCs is enormous and they are part of our future. Are they actually full educational resources or courses yet? Will they be able to bring education to people that is a first-tier, high quality experience or are we trapped in the same old educational class divisions with a new name for an old separation? I think it’s too soon to tell but I’m watching all of the new studies with a great deal of interest. I, too, am an optimist but let’s call me a cautious one!


Heading to SIGCSE!

Snowed under – get it?

I’m pretty snowed under for the rest of the week and, while I dig myself out of a giant pile of papers on teaching first year programmers (apparently it’s harder than throwing Cay’s book at them and yelling “LEARN!”), I thought I’d talk about some of the things that are going on in our Computer Science Education Research Group. The first thing to mention is, of course, the group is still pretty new – it’s not quite “new car smell” territory but we are certainly still finding out exactly which direction we’re going to take and, while that’s exciting, it also makes for bitten fingernails at paper acceptance notification time.

We submitted a number of papers to SIGCSE, along with a special session on Contributing Student Pedagogy and collaboration, following up on our multi-year study in this area and our Computer Science Education journal paper. One of the papers and the special session have been accepted, which is fantastic news for the group. Two other papers weren’t accepted. While one was a slightly unfortunate near-miss (but very well done, lead author who shall remain nameless [LAWSRN]), the other was a crowd splitter. The feedback on both was excellent and it’s given me a lot to think about, as I was lead author on the paper that really didn’t meet the bar. As always, it’s a juggling act to work out what to put into a paper in order to support the argument to someone outside the group and, in hindsight quite rightly, the reviewers thought that I’d missed the mark and needed to try a different tack. However, with one exception, the reviewers thought that there was something there worth pursuing and that is, really, such an important piece of knowledge that it justifies the price of admission.

Yes, I’d have preferred to get it right first time but the argument is crucial here and I know that I’m proposing something that is a little unorthodox. The messenger has to be able to deliver the message. Marathons are not about messengers who run three steps and drop dead before doing anything useful!

The acceptances are great news for the group and will help to shape what we do for the next 12-18 months. We also now have some papers that, with some improvement, can be sent to another appropriate conference. I always tell my students that academic writing is almost never wasted because if it’s not used here, or published there, the least that you can learn is not to write like that or not about that topic. Usually, however, rewriting and reevaluation makes work stronger and more likely to find a place where you can share it with the world.

We’re already planning follow-up studies in November on some of the work that will be published at SIGCSE, and the nature of our investigations is to try to turn our findings into practically applicable steps that any teacher can take to improve participation and knowledge transfer. These are just some of the useful ideas that we hope to have ready for March but we’ll see how much we get done. As always, we’re coming up to the busy end of semester with final marking, exams and all of that, as well as the descent into admin madness as we lose the excuse of “hey, I’d love to do that but I’m teaching.” I have to make sure that I wrestle enough research time into my calendar to pursue some of the exciting work that we have planned.

I look forward to seeing some of you in Colorado in March to talk about how it went!

Things to do in Denver when you’re Ed?


Students and Programming: A stroll through the archives in the contemplation of self-regulation.

I’ve been digging back into the foundations of Computer Science Education to develop some more breadth in the area and to fill in some of the reading holes that have developed as I’ve chased certain ideas forward. I’ve been looking at Mayer’s “The Psychology of How Novices Learn Computer Programming” from 1981, following it forward to a number of papers including McCracken et al.’s “A multi-national, multi-institutional study of assessment of programming skills of first-year CS students”. Among the many interesting items presented in this paper was a measure of Degree of Closeness (DoC): a quantification of how close the student had come to providing a correct solution, assessed on their source code. The DoC is rated on a five-point scale, with 1 being the furthest from a correct solution. These “DoC 1” students are of a great deal of interest to me because they include those students who submitted nothing – possible evidence of disengagement, or of the student simply being overwhelmed. In fact, the DoC 1 students were classified into three types:

  • Type 1: The student handed up an empty file.
  • Type 2: The student’s work showed no evidence of a plan.
  • Type 3: The student appeared to have a plan but didn’t carry it out.

Why did the students do something without a plan? The authors hypothesise that the student may have been following a heuristic approach, doing what they could until they could go no further. Type 3 was further subdivided into 3a (the student had a good plan or structure) and 3b (the student had a poor plan or structure). All of these, however, have one thing in common: they can indicate a lack of resource organisation, which may be identified as a shortfall in metacognition. On reflection, however, many of these students blamed external factors for their problems. The Type 1 students blamed the time that they had to undertake the task, the lab machines, and their lack of familiarity with the language. The DoC 5 students (from the same school) described their difficulties in terms of the process of creating a solution. Other comments from DoC 1 and 2 students included insufficient time, students “not being good” at whatever the question was asking and, in one case, “Too cold environment, problem was too hard.” The most frequent complaint among the low-performing students was that they had not had enough time, the presumption being that, had enough time been available, a solution was possible. Combine this with the students who handed up nothing or had no plan and we must start to question this assertion. (It is worth noting that some low-performing students had taken this test as their first ever solo lab-based examination, so we cannot just dismiss all of these comments!)
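For my own notes, the sub-classification of DoC 1 above can be sketched as a tiny decision procedure. To be clear, the feature names here (is_empty, shows_plan, plan_quality) are my own illustrative inventions, not part of the paper’s coding scheme, which was applied by human assessors reading the source code.

```python
# Hypothetical sketch of the DoC 1 sub-classification described above.
# Feature names are my own invention, not from McCracken et al.

def classify_doc1(is_empty, shows_plan, plan_quality=None):
    """Return the DoC 1 subtype for a submission."""
    if is_empty:
        return "Type 1"   # an empty file was handed up
    if not shows_plan:
        return "Type 2"   # no evidence of a plan
    # a plan was apparent but was not carried out
    return "Type 3a" if plan_quality == "good" else "Type 3b"

print(classify_doc1(is_empty=True, shows_plan=False))                       # Type 1
print(classify_doc1(is_empty=False, shows_plan=True, plan_quality="good"))  # Type 3a
```

It’s a trivial function, but it does make plain that the hard part of the coding scheme is nothing to do with the decision tree – it’s the human judgement of “evidence of a plan” and “good plan”.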

The paper discusses a lot more and is rather critical of its own procedure (perhaps the time pressure was too high, the specifications a little cluttered, highly procedural rather than OO) and I would not argue with the authors on any of this but, from my perspective, I am zooming in on the issue of time because, if you’ve read any of my stuff before, you’ll know that I work on self-regulation and time management. I look at the Types of DoC 1 students and I can see exactly what I saw in my own student timeliness data and reflection reports: a lack of ability to organise resources. This is now, apparently, combined with a persistent belief that fixing it was beyond the student’s control. It’s unsurprising that handing up nothing suddenly became a valid option.

The null submission could be a clear indicator of a lack of organisational ability, where the student can’t muster any kind of solution to the problem at all. Not one line of code or approximate solution. What is puzzling about this is that the activity was, in fact, heavily scheduled. Students sat in a lab and undertook it. There was no other task for them to perform except to write this code in either 1 or 1.5 hours. To not do anything at all may be a reaction to time pressure (as the authors raised) or it could be complete ignorance of how to solve the problem. There’s too much uncertainty here for me to say much more about this.

The “no plan” solution can likely be explained by the heuristic focus and I’ve certainly seen evidence of it. One of the most unforgiving aspects of the heuristic solution is that, without a design, it is easy to end up in a place where you are running out of time and have no idea of where to go to solve unforeseen problems that have arisen. These students are the ones who I would expect to start the last day that something is due and throw together a solution, working later and panicking more as they realised that their code wasn’t working. Having done a bit here and a piece there, they may cobble something together and hand it up but it is unlikely to work and is never robust.

The “I planned it but I couldn’t do it” group fall heavily into the problem space of self-regulation, because they had managed to organise their resources – so why didn’t anything come out? Did they procrastinate? Was their meta-planning process deficient, in that they spent most of their time perfecting a plan and not leaving enough time to make it happen? I have a number of students who have a tendency to go down the rabbit hole when chasing design issues and I sometimes have to reach down, grab them by the ears and haul them out. The reality of time constraints is that you have to work out what you can do and then do as much as you can with that time.

This is fascinating because I’m really trying to work out at which point students will give up, and DoC 1 basically amounts to an “I didn’t manage it” mark in my local system. I have data that shows the marks students get from automated marking (immediate assessment), so I can look to see how long people will keep trying to get above what would, effectively, be DoC 1, and probably up around DoC 3. (The paper defines DoC 3 as “In reading the source code, the outline of a viable solution was apparent, including meaningful comments, stub code, or a good start on the code.” This would be enough to meet our assessment requirements, although the mark wouldn’t be great.) DoC 1 would, I suspect, amount to “no submission” in many cases, so my DoC 1 students are those who stayed enrolled (and sat the exam) but never created a repository or submission. (There are so many degrees of disengagement!)

I, of course, now have to move further along this paper trail and will hopefully intersect with my ‘contemporary’ reading into student programming activity. I will be reading pretty solidly on all of this over the upcoming months as we try to refine the time management and self-regulation strategies that we’ll be employing next year.


Authenticity and Challenge: Software Engineering Projects Where Failure is an Option

It’s nearly the end of semester and that means that a lot of projects are coming to fruition – or, in a few cases, are still on fire as people run around desperately trying to put them out. I wrote a while ago about seeing Fred Brooks at a conference (SIGCSE) and his keynote on building student projects that work. The first four of his eleven basic guidelines were:

  1. Have real projects for real clients.
  2. Groups of 3-5.
  3. Have lots of project choices.
  4. Groups must be allowed to fail.

We’ve done this for some time in our fourth year Software Engineering option but, as part of a “Dammit, we’re Computer Science, people should be coming to ask about getting CS projects done” initiative, we’ve now changed our third year SE Group Project offering from a parallel version of an existing project to real projects for real clients, although I must confess that I have acted as a proxy in some of them. However, the client need is real, the brief is real, there are a lot of projects on the go and the projects are so large and complex that:

  1. Failure is an option.
  2. Groups have to work out which part they will be able to achieve in the 12 weeks that they have.

For the most part, this approach has been a resounding success. The groups have developed their team maturity faster, they have delivered useful and evolving prototypes, they have started to develop entire tool suites and solve quite complex side problems because they’ve run across areas that no-one else is working in and, most of all, the pride that they are taking in their work is evident. We have lit the blue touch paper and some of these students are skyrocketing upwards. However, let me not lose sight of one of our biggest objectives: that we be confident that these students will be able to work with clients. In the vast majority of cases, I am very happy to say that I am confident that these students can make a useful, practical and informed contribution to a software engineering project – and they still have another year of projects and development to go.

The freedom that comes with being open with a client about the possibility of failure cannot be overvalued. This gives both you and the client a clear understanding of what is involved – we do not need to shield the students, nor does the client have to worry about how their satisfaction with the software will influence things. We scaffold carefully but we have to allow for the full range of outcomes. We, of course, expect the vast majority of projects to succeed but this experience will not be authentic unless we start to pull away the scaffolding over time and see how the students stand by themselves. We are not, by any stretch, leaving these students in the wilderness. I’m fulfilling several roles here: proxying for some clients, sharing systems knowledge, giving advice, mentoring and, every so often, giving a well-needed hairy eyeball to a bad idea or practice. There is also the main project manager and supervisor, who is working a very busy week to keep track of all of these groups and provide all of what I am and much, much more. But, despite this, sometimes we just have to leave the students to themselves and it will, almost always, dawn on them that problem solving requires them to solve the problem.

I’m really pleased to see this actually working because it started as a brainstorm of my “Why aren’t we being asked to get involved in more local software projects” question and bouncing it off the main project supervisor, who was desperate for more authentic and diverse software projects. Here is a distillation of our experience so far:

  1. The students are taking more ownership of the projects.
  2. The students are producing a lot of high quality work, using aggressive prototyping and regular consultation, staged across the whole development time.
  3. The students are responsive and open to criticism.
  4. The students have a better understanding of Software Engineering as a discipline and a practice.
  5. The students are proud of what they have achieved.

None of this should come as much of a surprise but, in a 25,000+ person University, there are a lot of little software projects on the 3-person-team, 12-month scale, which are perfect for two half-year project slots because students have to design for the whole and then decide which parts to implement. We hope to give these projects back to them (or similar groups) for further development in the future because that is the way of many, many software engineers: the completion, extension and refactoring of other people’s codebases. (Something most students don’t realise is that it only takes a very short time for a codebase you knew like the back of your hand to resemble the product of alien invaders.)

I am quietly confident, and hopeful, that this bodes well for our Software Engineers and that we will start to see them all closely bunched towards the high-achieving end of the spectrum in terms of their ability to practise. We’re planning to keep running this in the future because the early results have been so promising. I suppose the only problem now is that I have to go and find a huge number of new projects for people to start on in 2013.

As problems go, I can certainly live with that one!


Great News, Another Group Paper Accepted!

Our Computer Science Education Research group has been doing the usual things you do when forming a group: stating a vision, setting goals, defining objectives and then working like mad. We’ve been doing a lot of research and we’ve been publishing our work to get peer review, general feedback and a lot of discussion going. This year, we presented a paper at SIGCSE, we’ve already had a paper accepted for DEXA in Vienna (go, Thushari!) and, I’m very pleased to say, we’ve just been notified that our paper “A Fast Measure for Identifying At-Risk Students in Computer Science” has been accepted as one of the research papers for ICER 2012, in Auckland, New Zealand.

This is great news for our group and I’m really looking forward to some great discussion on our work.

I’ll see some of you at ICER!



Graphs, DAGS and Inverted Pyramids: When Is a Prerequisite Not a Prerequisite?

I attended a very interesting talk at SIGCSE called “Bayesian Network Analysis of Computer Science Grade Distributions” (Anthony and Raney, Baldwin-Wallace College). Their fundamental question was how they could develop computational tools to increase the graduation rate of students in their 4-year degree. Motivated by a desire to make grade predictions, and to catch students before they fall, they started searching their records back to 1998 to find out if they could get some answers out of student performance data.

One of their questions was: are the prerequisites actually prerequisite? If so, there should be at least some sort of correlation between performance and attendance in a prerequisite course and performance in the courses that depend upon it. I liked their approach because it took advantage of structures and data that they already had, to which they applied a number of different analytical techniques.

They started from a graph of the prerequisites: a structure where you start from an entry subject and can progress all the way through to some sort of graduation point, but can only progress to later courses if you have the prereqs. (If we’re being Computer Science-y, prereq graphs can’t contain links that take you around in a loop – they must be directed acyclic graphs (DAGs) – but you can ignore that bit.) As it turns out, this structure can easily be converted to certain analytical structures, which makes the analysis a lot easier because we don’t have to justify any structural modification.
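To make the Computer Science-y aside concrete, here’s a minimal sketch of checking that a prerequisite structure really is a DAG, using a topological sort from Python’s standard library. The toy course codes are my own invention, not the paper’s.

```python
# Minimal sketch: a prereq map (course -> set of prereq courses) checked
# for cycles via a topological sort. Course codes invented for illustration.
from graphlib import TopologicalSorter, CycleError

prereqs = {
    "CS2":   {"CS1"},
    "CS3":   {"CS2"},
    "CS400": {"CS3"},
}

try:
    order = list(TopologicalSorter(prereqs).static_order())
    print("Valid DAG; one legal study order:", order)
except CycleError:
    print("Prerequisite loop detected - not a valid curriculum!")
```

The topological sort does double duty: if it succeeds, the curriculum is loop-free and the output is a legal order in which a student could take the courses; if it fails, someone has accidentally made two courses prerequisites of each other.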

Using one approach, the researchers found that they could estimate a missing mark in the list of student marks to an accuracy of 77% – that is, they correctly estimated the missing (A, B, C, D, F) grade 77% of the time, compared with 30% of the time when they didn’t take the prereqs into account.
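Their actual model is a Bayesian network over the whole prereq graph, but the intuition behind why prereq grades help can be sketched with the simplest possible special case: predict the missing grade as the most common grade observed among past students with the same prereq grade. The grade data below is invented purely for illustration.

```python
from collections import Counter, defaultdict

# Invented (prereq_grade, course_grade) pairs - illustration only.
history = [("A", "A"), ("A", "B"), ("A", "A"), ("B", "B"),
           ("B", "C"), ("C", "C"), ("C", "D"), ("B", "B")]

# Conditional frequency table: counts of course grade given prereq grade.
table = defaultdict(Counter)
for prereq_grade, course_grade in history:
    table[prereq_grade][course_grade] += 1

def predict(prereq_grade):
    """Predict the missing grade as the conditional mode."""
    return table[prereq_grade].most_common(1)[0][0]

print(predict("A"))  # 'A': two of the three A-prereq students got an A
```

The full network generalises this by conditioning on all of a course’s parents in the DAG at once, rather than a single prereq, which is where the jump from 30% to 77% accuracy comes from.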

They presented a number of other interesting results, but one that I found both informative and amusing was that they tried to use an automated network learning algorithm to pick the most influential course in assessing how a student will perform across their degree. However, as they said themselves, they didn’t constrain the order of their analysis – although CS400 might depend upon CS300 in the graph, their algorithm just saw them as connected. Because of this, the network learning picked their final year, top grade, course as the most likely indicator of good performance. Well, yes, if you get an A in CSC430 then you’ve probably done pretty well up until now. The machine learning involved didn’t have this requirement as a constraint so it just picked the best starting point – from its perspective. (I thought that this really reinforced what the researchers were talking about – that finding the answer here was more than just correlation and throwing computing power at it. We had to really understand what we wanted in order to make sure we got the right answer.)

Machine learning is going to give you an answer, in many cases, but it’s always interesting to see how many implicit assumptions there are that we ignore. It’s like trying to build a pyramid by saying “Which stone is placed to indicate that we’ve finished”, realising it’s the capstone and putting that down on the ground and walking away. We, of course, have points requirements for degrees, so it gets worse because now you have to keep building and doing it upside down!

[Picture of an upside-down pyramid.]

I’m certainly not criticising the researchers here – I love their work, I think that they’re very open about where they are trying to take this and I thought it was a really important point to drive home. Just because we see structures in a certain way, we always have to be careful how we explain them to machines because we need useful information that can be used in our real teaching worlds. The researchers are going to work on order-constrained network learning to refine this and I’m really looking forward to seeing the follow-up on this work.

I am also sketching out some similar analysis for my new PhD student to do when he starts in April. Oh, I hope he’s not reading this because he’s going to be very, VERY busy. 🙂


Staring In The SIGCSE Mirror

One of the great things about going to a top-notch conference like SIGCSE is that you get a lot of exposure to great educators bringing their A game to their presentations and workshops. It’s a great event in many ways but it’s also highly educational. I wrote furiously during the time that I was there (and there are still some blog posts to come) because there was so much knowledge flowing that I felt I had to get it all down.

It is also valuable because it is humbling. There are educators who are scraping together feathers, burnt cork and a few pebbles and producing educational materials and content that would knock your socks off. Given that attending SIGCSE is a significant financial expenditure for an Australian, it’s a quiet reminder that my journey to SIGCSE had better have a valuable outcome now that I’m back. A lot of my colleagues are doing amazing things with far less – I have no excuse to not do at least as well. (And I’ve certainly been trying.)

It’s inspirational. Sometimes it feels like we’re all adrift in a giant cold sea, in little boats, in the dark. We do what we can in our own space but have no idea how many people are out there. Yet there are so many other people out there. Holding up our lights allows us to see all of the other boats around us – not a small fishing fleet but a vast, floating city of light. Better still, you’ll see how many are close enough to you that you can ask them for help – or offer them assistance. Sound, ethical education is one of the great activities of our species, but it’s not always as valued as it could be – it’s easier when you have some inspiration and a sea full of stars.

It’s levelling. It doesn’t matter whether you’re from the greatest or the smallest college – if your work was accepted to SIGCSE then other people will hear about it. Your talk will be full of people from all over the globe who want to hear about your work.

It encourages people to try techniques so that they, in turn, may come back and present one day. It also reminds us that there is a place where CS Education is the primary, valued, topic of conversation and, in these days of the primacy of research value, that’s an important level of encouragement.

There are so many more things that I could say about the experience but I think that my volume of blogging speaks pretty much for itself on this event. How did it make me feel? It made me want to be better at what I did, a lot better, and it gave me ways to do that. It made me hungry for new challenges at the same time it gave me the materials and tools to bring a ladder to scale those challenges.

I’ve always said that I don’t pretend to be an expert – that this blog is reflective, not instructive or dogmatic – but that doesn’t mean that I don’t strive to master this area. Attending events like SIGCSE helps me to realise that, with work and application, one day I may even manage it.



SIGCSE, Why Can’t I Implement Everything?

I was going to blog about Mike Richards’ excellent paper on ubicomp, but Katrina did a much better job so I recommend that you go and look over there.

My observations on this session are more feeling-based, in that I’ve seen many things at this conference and, almost every time, I’ve wanted to tell more people about it, or adopt the mechanism. As Katrina said to me, when we were discussing it over lunch, you can’t do everything, because we only have so many people and not every idea has to be implemented at every University.

But it’s such a shame! I want small home-rolled mobile computing platforms and fascinating programming environments! Everything good I saw, I want to bring home and share with people. However, the hard part is that I want them to be as fascinated and as excited as I am – and they’re getting it from me second-hand.

The other thing that I have to remember is that whatever we do, we have to commit to and do well; we can’t just bring stuff in, try it and throw it away, in case our ad-hoc approach ends up hurting our students. We have to work out what we want to improve, measure it, try the change and then measure it again to see what has changed.

You’ll see a few more SIGCSE posts, because there are still some very interesting things to report and comment on, but an apparent movement away from the content here isn’t a sign that I’ve stopped thinking about it – it’s a sign that I’m thinking about which bits I can implement and which bits I have to put into the ‘long term’ box to bring up at a strategy level down the track.

I’ve met a lot of great people and heard many wonderful things – thanks to everyone at SIGCSE!


SIGCSE Wrap-up 2012

And SIGCSE is over! Raja and I presented the infamous puzzle-based learning (PBL) workshop. It took three years to get into a form where it was accepted – but it was worth it. All of the participants seemed to have a good time but, more importantly, seemed to get something useful. The workshop was about 12 hours of information jammed into 3 hours, but it’s a start.

Today’s lunch was pretty good but, despite the keynote being given by two really interesting people (Fernanda Viégas and Martin Wattenberg, from Google’s Big Picture visualisation group in Cambridge, MA) and the content being interesting – there wasn’t much room for us to take it further beyond contributing our datasets to the Many Eyes project and letting it go to the world. I suspect that, if this talk had preceded Hal’s yesterday, it would have been much better. After the walled garden talk, the discussion of what a small group of very clever people had done was both interesting and inspirational – but where was the generative content as a general principle?

I’m probably being too harsh – it’s not as if Fernanda and Martin didn’t give us a great and interesting talk. I suspect that Hal’s talk may just have made me a lot more aware of the many extended fingers in the data pies that I work with on a regular basis.

So let me step back and say that the current focus on presenting data in easily understood ways is important and exciting. It would be fantastic if all of the platforms available were open, extensible and generative. There we go – a nice positive message. Fernanda and Martin are doing great stuff and I’d love to see all of it in the public domain sometime. 🙂

Following the lunch, Raja and I had to set up for our workshop, and that meant that our audience was going to be the last SIGCSE people we’d see, as everyone else was leaving or heading off to another workshop. We think it went well but I guess we’ll see. I’ll try to put a PBL post in the queue before I start jumping on planes again.

Bye, SIGCSE, it’s been fun. See you… next year?


SIGCSE: Scratching Alice – What Do Students Learn About Programming From Game, Music Video, And Storytelling Projects?

I went to a fascinating talk that drew data from 11-14 year olds at a programming camp. Students used a 3D programming language called Alice or a visual programming language called Scratch, to tell stories, produce music videos and write games. The faculty running the program noticed that there appeared to be a difference in the style of programming that students mastered depending on whether they used Alice or Scratch. At first glance, these languages both provide graphical programming environments and can be very similarly used. They both offer loops, the ability to display text, can produce graphics and you can assign values to locations in memory – not surprising, given that these are what we would hope to find in any modern high-level programming language. For many years, students produced programs in Alice, with a strong storytelling focus, but from 2008, the camp switched to Scratch, and a game-writing and music-video focus.

And the questions that students asked started to change.

Students started to ask questions about selection statements and conditional expressions – choosing which piece of code to run at a given point and calculating true and false conditions. This was a large departure from the storytelling time when students, apparently, didn’t need this knowledge.

The paper is called “What Do Students Learn About Programming From Game, Music Video, And Storytelling Projects?”, by Adams and Webster, and they show a large number of interesting figures determined by data mining the code produced across all of the years of the camp. Unsurprisingly, the game programming required students to do a lot more of what we would generally recognise as programming – choosing between different pathways in the code, determining if a condition has been met – and this turns out to be statistically significant in this study. Yes, Scratch games use more if statements and conditionals than Alice storytelling activities, and this is a clear change in the nature and level of the concepts that the students have been exposed to.

Students tended to write longer programs as they got older, regardless of language, games were longer than other programs, IF statements were used 100 times more often in games than stories and LOOPS were used 100 times more often in games and videos than stories.
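The kind of data mining described above can be approximated very simply: scan each student program for branching and looping constructs and tally them. A minimal sketch follows – the function name and keywords are hypothetical stand-ins (the actual study analysed Alice and Scratch project files directly, not plain text), but it shows the shape of the analysis:

```python
import re

# Hypothetical helper: roughly count branching and looping constructs
# in a student's program, treated as plain text. The keyword list is
# illustrative only; real Scratch/Alice projects would need a proper parser.
def count_constructs(source: str) -> dict:
    patterns = {
        "if": r"\bif\b",                          # selection statements
        "loop": r"\b(repeat|forever|while)\b",    # iteration constructs
    }
    return {name: len(re.findall(pat, source.lower()))
            for name, pat in patterns.items()}

story = "say hello / move 10 steps / say goodbye"
game = "forever: if touching edge then bounce / repeat 10: move 5"
print(count_constructs(story))  # storytelling code: little or no branching
print(count_constructs(game))   # game code leans on ifs and loops
```

Run over every program from every camp year, counts like these are what let the authors compare construct usage between games, videos and stories.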

Some other, interesting, results include data on gender differences in the data:

  • Boys put, on average, 3.2 animations of fire into all of their games, compared to the girls’ rather dull 0.8. Come on, girls, why isn’t everything on fire?
  • Boys use infinite loops far more frequently than girls. (I’d love to see if there’s an age-adjusted pattern to this as well.)
  • Girls appear to construct more conditional statements. This would usually indicate a higher level of facility with the concepts.

We generally have two things that we try to do when we carry out outreach – amuse/engage the audience and educate the audience. There’s no doubt that the choice of language and the exercise are important, and this paper highlights it. They’re not saying that Alice is better or worse than Scratch but that, depending on what you want, your choice of activity is going to make students think in a certain way. If all you’re after is engagement then you don’t need students practising these higher-level programming skills – but if you’re trying to start out proto-programmers, maybe a storytelling approach isn’t what you’re after.