SIGCSE Day 3, “MOOCs”, Saturday, 10:45-12:00pm, (#SIGCSE2014)

This session is (unsurprisingly) of great interest to the Adelaide Computer Science Education Research group and, as the expeditionary force of CSER, I’ve been looking forward to attending. (I’d call myself an ambassador except I’m not carrying my tuxedo.) The opening talk was “Facilitating Human Interaction in an Online Programming Course”, presented by Joe Warren from Rice University. They’ve been teaching a MOOC for a while and had some observations to share on how to make things work better. The MOOC is an introduction to interactive programming in Python, based on a games-focused course that Joe had taught for years. The first online session was in Fall 2012, after a face-to-face test run. 19,000 students completed the three offerings over Fall ’12, Spring ’13 and Fall ’13.

The goal was to see how well they could put together a high-quality online course. They used recorded videos and machine-graded quizzes, with discussion forums and peer-assessed mini-projects, and they provided a help desk staffed by course staff. CodeSkulptor was the key tool for enabling human interaction: a browser-based IDE for Python that was easy to set up, with cloud-saved URLs for code that were easy to share. (It’s difficult to have novices install tools without causing problems, and code visibility is crucial for sharing.) Because they needed Python to run locally for interactivity (the games focus), they used Skulpt, which translates Python into JavaScript, combined it with the CodeMirror editor, and ran the result in the browser. CodeSkulptor was built on top of these.

Students could write code and compile it in the browser, but when they saved it a hash was generated for unique storage in a cloud-based account with an access URL – anyone can run your code if you share the URL. (The URL includes a link to CodeSkulptor.org.) CodeSkulptor has had about 12 million visits with 4 million files saved, which is pretty good. The demo shown had keyboard input, graphic images and sound output – and, for those of you who know about these things, this is a great result without having to install a local compiler: the browser-based solution works pretty well.
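The exact hashing scheme wasn’t described, but the content-addressed idea is easy to sketch. Here’s a hypothetical Python illustration – the function, the in-memory “storage” and the URL format are all mine, not CodeSkulptor’s actual implementation:

```python
import hashlib

def save_snippet(code, storage):
    # Toy illustration of content-addressed storage: hash the code, store it
    # under that key, hand back a shareable URL. The real CodeSkulptor hash
    # and URL format weren't described in the talk, so treat this as a sketch.
    key = hashlib.sha256(code.encode("utf-8")).hexdigest()[:12]
    storage[key] = code                     # the real system saves to the cloud
    return "https://www.codeskulptor.org/#user_" + key + ".py"

store = {}
url = save_snippet("print('Hello, SIGCSE!')", store)
print(url)   # anyone who can see this URL can load and run the same code
```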

Peer assessment happened on the weekly mini-projects: the Coursera course provided CodeSkulptor URLs and a grading rubric, which was sent to students as a web form. The system wasn’t anonymised, but students knew their work was shared and were encouraged to leave personal details out of their comments if they wanted to stay anonymous (the file handles themselves were anonymised). (Apparently, the bigger problem was inappropriate content rather than people worrying about anonymity.) Students run the code and assess it in about 10 minutes, so it takes about an hour to assess 6 peers. The big advantage is that the code from your URL is guaranteed to run on the grader’s machine because it’s the same browser-based environment. A very detailed rubric was required to ensure good grading: lots of small score items with little ambiguity. The rubric didn’t leave much room for judgement – the students were effectively human grading machines. Why? Because having humans grade was an educational experience: they learned from reading and looking at each other’s programs. Also, machine graders have difficulty with animated games, so this is a more generalisable approach.
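To give a flavour of how mechanical that grading could be, here’s a hypothetical sketch of a rubric as data plus a scorer – the item wording and weights are entirely invented, not Rice’s actual rubric:

```python
# Hypothetical mini-project rubric: each item is a short, unambiguous check
# worth a fixed number of points. Wording and weights are invented for
# illustration; the real Rice rubrics weren't reproduced in the talk.
RUBRIC = [
    ("Program loads and runs without errors from the shared URL", 2),
    ("Canvas is created at the required size", 1),
    ("Score is drawn on the canvas and updates correctly", 2),
    ("Game restarts cleanly when the restart button is pressed", 1),
]

def score_submission(ticks):
    """ticks: one boolean per rubric item, as checked off by the peer grader."""
    return sum(points for (_, points), ok in zip(RUBRIC, ticks) if ok)

print(score_submission([True, True, False, True]))   # 4 out of a possible 6
```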

The Help Desk addressed the problem of getting timely expert help to students – a big issue for them. The Code Clinic was a custom e-mail channel focused on coding problems (because posting code to the forums wasn’t allowed under the class Honour Code). Solutions to common problems were then shared with the rest of the class via the forum. (It looked like the code hash changed every time the file got saved? That’s a little odd from a naming perspective, if true.)

How did the Code Clinic work? In Spring 2013 they had about 2,500 help requests. On due days, response time was about 15 minutes (it usually averaged 40+), and overall handling time averaged 6 minutes (open the e-mail, solve the problem, respond). Over 70 days, 3–4 staff sent about 4,000 e-mails. That handling time for a student coding request is very short, and it’s a good approach to handling problems at scale. That whole issue of response time going DOWN on due dates is important – that’s normally where I get slammed and slow down! It’s also the most popular class at the university, which is great!

They chose substantial human–human interaction, using traditional methods online with peer assessment and help desks. MOOCs have some advantages over in-person classes: the forums are active because of their size, and the help desk scales really effectively because it’s always being used and hence it makes sense to always staff it. The takeaway is that if you choose your tools well, you’ll be able to do some good things.

The second talk was “An Environment for Learning Interactive Programming”, also from Rice, and presented by Terry Tang. There was a bit of adverblurb at the start but Terry was radiating energy so I can’t really blame him. He was looking at the same course as mentioned in the previous talk (which saves me some typing, thank you session organisers!). In this talk, Terry was going to focus on SimpleGUI, a browser-based Python GUI library, and Viz mode, a program visualisation tool. (A GUI is a Graphical User Interface. When you use shapes and windows to interact with a computer, that’s the GUI.)

Writing games requires a fully functional GUI library so, given the course is about games, this had to be addressed! One could use an existing Python library, but these are rarely designed to support Python in the browser and many of them have APIs that are too complicated for novice programmers (good to see this acknowledged!). The desired features of the new library: event-driven support, drawing support, and enabling students to create simple but interesting programs. So they wrote SimpleGUI. Terry presented a number of examples of this and you can read about them in the talk. (Nick code for “I’m not writing that bit.”) The example program was only 227 lines long because a lot of the tricky stuff was being done in the GUI library.
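For a sense of what students actually write, here’s a minimal sketch in the style of a SimpleGUI program – the calls (create_frame, set_draw_handler, add_button, create_timer) are from my memory of the CodeSkulptor simplegui documentation, so treat the exact names as approximate rather than authoritative:

```python
import simplegui   # available inside CodeSkulptor; not a standard Python package

# A counter that redraws every frame, ticks once a second and has a reset
# button -- small, but it already exercises draw, timer and button events.
count = 0

def tick():
    global count          # timer handler: fires every 1000 ms
    count += 1

def draw(canvas):
    canvas.draw_text(str(count), (70, 110), 48, "White")   # draw handler

def reset():
    global count          # button handler
    count = 0

frame = simplegui.create_frame("Counter", 200, 200)
frame.set_draw_handler(draw)
frame.add_button("Reset", reset)
timer = simplegui.create_timer(1000, tick)

frame.start()
timer.start()
```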

Terry showed some examples of student code built from scratch on SimpleGUI, including a Flappy Bird clone – scoring 3, which got a laugh from the crowd.

Terry then moved on to Viz mode, to meet the requirement of letting students visualise the execution of their own code. One existing solution is the Online Python Tutor, which runs code on a server, generates a log file, and then ships the trace to some visualisation code in the browser (in JavaScript) that processes the trace and produces a state diagram. The code, with annotations, is presented back to the user, who can step through it, with the visualisations showing the evolution of state and control over time. The resulting visualisation is pretty good and very easy to follow. Now this is great, but it runs on a backend server, which could melt on due dates, and OPT can’t visualise event-driven programs (for those who don’t know, game programming is MOSTLY event-driven). So they wrote Viz mode.
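The trace-then-visualise idea is worth a tiny illustration. This isn’t the Online Python Tutor’s actual code, just a minimal sketch of recording an execution trace with Python’s standard sys.settrace hook – the kind of log a browser front end could then replay as state diagrams:

```python
import sys

trace_log = []

def tracer(frame, event, arg):
    # record (line number, local variables) for every executed line
    if event == "line":
        trace_log.append((frame.f_lineno, dict(frame.f_locals)))
    return tracer

def demo():
    total = 0
    for i in range(3):
        total += i
    return total

sys.settrace(tracer)
demo()
sys.settrace(None)

for lineno, local_vars in trace_log:
    print(lineno, local_vars)   # a front end would render these as state diagrams
```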

From CodeSkulptor, you can run your program in regular or Viz mode. In Viz mode, a new panel with state diagrams shows up, and a console that shows end tags. This is all happening in the browser, which scales well although there are limits to computation in this environment, and is all integrated with the existing CodeSkulptor environment. Terry then showed some more examples.

An important note is that event handlers don’t automatically fire in Viz mode, so any GUI elements get additional buttons to explicitly fire events (like Draw for graphical panes, or Timers for events like that). It’s a pretty good tool, from what we were shown. Overall, the Rice experience looks very positive, but their tool set and approach to support appear to be the keys to their success. Only some of the code is open source, which is a pity.

Barb Ericson asked a great question: could you set up something where the students are forced to stop and then guess what is going to happen next? They haven’t done it yet but, as Joe said, they might do it now!

The final talk was not from Rice but from Australia, woohoo (Melbourne and NICTA)! “Teaching Creative Problem Solving in a MOOC” was presented by Carleton Coffrin from NICTA. Carleton was at Learning@Scale earlier, and what has been seen over the past year is MOOCs 1.0 – scaling content delivery, with linear delivery, multiple-choice questions and socialisation only in the forums. What is MOOC 2.0? Flexible delivery, specific assessments, gamification, pedagogy, personalised and adaptive approaches. Well, it turns out that they’ve done it, so let’s talk about it via a discrete optimisation MOOC offered on Coursera by the University of Melbourne. Carleton then explained what discrete optimisation was – left to you to research in detail, dear reader, but it’s hard and the problems are very complex (NP-hard, for those who care about such things). Discrete optimisation in practice is trying to apply known techniques to complicated real-world problems, and adaptation and modification of existing skills is a challenge.

How do we prepare students for new optimisation problems that we can’t anticipate? By teaching general problem-solving skills.
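To make “hard” a little more concrete (my example, not one from the talk): even the toy 0/1 knapsack problem shows why adapting known techniques matters, because the obvious greedy recipe can miss the optimum.

```python
from itertools import combinations

# A tiny 0/1 knapsack instance: (value, weight) pairs, capacity 50.
items = [(60, 10), (100, 20), (120, 30)]
capacity = 50

def greedy(items, capacity):
    # the "obvious" heuristic: take items by best value-per-weight first
    value, room = 0, capacity
    for v, w in sorted(items, key=lambda it: it[0] / it[1], reverse=True):
        if w <= room:
            value, room = value + v, room - w
    return value

def exhaustive(items, capacity):
    # brute force: fine for 3 items, hopeless at real assignment scale
    return max(
        sum(v for v, _ in subset)
        for r in range(len(items) + 1)
        for subset in combinations(items, r)
        if sum(w for _, w in subset) <= capacity
    )

print(greedy(items, capacity))      # 160 -- takes (60,10) and (100,20)
print(exhaustive(items, capacity))  # 220 -- (100,20) plus (120,30) is optimal
```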

What was the class design? The scope of the course covered six areas in the domain, which you can find in the paper, and five assignments of NP-hard complexity. The lectures used a weatherman format, with the lecturer projected over the slides with a great deal of enthusiasm – and a hat. (The research question of the global optimum for hats was not addressed.) The lecturer was very engaging and highly animated, which added to the appeal of the recorded lectures. The instructor constructs problems, students write code to generate a solution and encode it in a standard format, and this is passed back, graded, and feedback is returned. Students get the feedback and can then resubmit until they’re happy with their grade. (Yay! I love to see this kind of thing.) I will note that the feedback told them what quality of solution they had to present rather than suggesting how to achieve it. Where constraint violations occurred, there was some targeted feedback. Overall, the feedback was pretty reasonable – about what you’d expect from good automated feedback. The students did demonstrate persistence in response to it.
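The submit/grade/resubmit loop is easy to picture in code. The sketch below is hypothetical – the actual Coursera solution format and mark thresholds weren’t spelled out in the talk – but it captures the shape: the student encodes a solution in a plain-text format, and the grader checks feasibility and maps solution quality to a mark, with an invitation to resubmit.

```python
# Hypothetical auto-grader for a knapsack-style assignment. The solution
# format (objective value on line one, 0/1 decisions on line two) and the
# mark thresholds are invented for illustration, not the course's actual spec.

def grade(solution_text, values, weights, capacity, thresholds):
    obj_line, decision_line = solution_text.strip().splitlines()
    taken = [int(x) for x in decision_line.split()]

    weight = sum(w for w, t in zip(weights, taken) if t)
    value = sum(v for v, t in zip(values, taken) if t)

    if weight > capacity:
        return 0, "Infeasible: capacity exceeded by %d" % (weight - capacity)
    if value != int(obj_line):
        return 0, "Declared objective doesn't match the submitted decisions"

    for minimum, marks in thresholds:        # thresholds: best first
        if value >= minimum:
            return marks, "Solution value %d: %d/10, resubmit any time" % (value, marks)
    return 3, "Feasible but weak: 3/10, resubmit any time"

marks, feedback = grade("220\n0 1 1", [60, 100, 120], [10, 20, 30], 50,
                        [(220, 10), (160, 7)])
print(marks, feedback)   # 10 Solution value 220: 10/10, resubmit any time
```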

From a pedagogical perspective, discovery-based learning was seen as a very important part of the course. Rather than teach mass, volume and density from a naked formula, exemplars were presented using water and floating (or sinking) objects, letting the students explore the solutions and the factors involved. The material is all in the lectures, but it’s left to the students to find the right approach to new problems – they can try different lecture ideas on different problems.

The instructor can see all of the student results, rank them, strip out the details and then present a leader board to show solution quality. This does let students see that higher scores are achievable, but I’m not sure there’s any benefit beyond what’s given in the hints. They did add a distribution graph for really large courses, as the leader board got too long. (I’m not a big fan of leader boards, but you know that.)

The structure of the course was suggested, with introductory materials, but then students could bounce around. Online delivery doesn’t require a linear structure! The open framework was effectively required for iterative improvement of the assignments.

How well did it work? 17,000 people showed up. 795 stayed to the end, which is close to what we’d expect from previous MOOC data but still a bit depressing. However, only 4,000 even tried the assignments, and a lot of people dropped out after the warm-up assignment. Looking at it that way, 1,884 completed the warm-up and stayed (got qualified), which makes the stay rate about 42%. (Hmm, not sure I agree with this numerical handling but I don’t have a better solution.)

Did students use the open framework for structure? It looks like there was revision behaviour, with students using the freedom of the openness to improve previous solutions with new knowledge. The actual participation pattern was interesting, because some students completed in 20 and some in 60.

Was it a success or a failure? Well, the students loved it (yeah, you know how I feel about that kind of thing). They surveyed the students at the end, and the students had realised that optimisation takes time (which is very, very true). The overall experience was positive despite the amount of work involved, and the course was rated as being hard. The students were asked what their favourite part of the course was, and this was presented as a word cloud. Programming dominated (?!), followed by assignments (?!?!?!?!).

Their assignment choice was interesting because they deliberately chose examples that would work for one solution approach but not another. (For example, the Travelling Salesman Problem was provided at a scale where the Dynamic Programming solution wouldn’t fit into memory.)
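To put a number on that (my back-of-the-envelope, not figures from the talk): the Held–Karp dynamic program for TSP needs a table of roughly n × 2^n values, so its memory blows up well before runtime becomes the binding constraint.

```python
# Held-Karp dynamic programming for TSP stores a best-cost value for every
# (subset of visited cities, current city) pair: about n * 2**n entries.
for n in (20, 30, 40):
    entries = n * 2 ** n
    gigabytes = entries * 4 / 1e9          # assuming only 4 bytes per entry
    print("n = %d: %.1e table entries, about %.2f GB" % (n, entries, gigabytes))

# n = 20 fits comfortably in memory; by n = 40 the table needs ~176,000 GB,
# so the textbook DP simply cannot be applied as-is at assignment scale.
```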

There’s still a lot of dependency on this notion that “leaderboards are motivating”. From looking at the word cloud, which is a very high-level way to look at it, the students enjoyed the assignments and were happy to do programming in a safe, retry-friendly (and hence failure-tolerant) environment. In my opinion, the reminder of the work they’ve done is more likely to be the reason they liked leader boards than any motivating factor. (Time to set up a really good research study!)

Anyway, the final real session was a corker and I greatly enjoyed it! On to lunch and FRIED CHICKEN.


