ITiCSE 2014, Closing Session, #ITiCSE #ITiCSE2014

Well, thanks for reading over the last three days, I hope it’s been interesting. I’ve certainly enjoyed it and, tadahh, here we are at the finish line to close off the conference. Mats opened the session and is a bit sad because we’re at the end, but reflected on the work that has gone into it with Åsa, his co-chair. Tony and Arnold were thanked for being the Program Chairs and then Arnold insisted upon thanking us as well, which is nice. I have to start writing a paper for next year, apparently. Then there were a lot of thanks, with the occasional interruption of a toy car being dropped. You should go to the web site because there are lots of people mentioned there. (Student volunteers were thanked twice, to reflect their quality and dedication.)

Some words on ITiCSE 2015, which will be held next year in Vilnius, Lithuania from the 6th of July. There are a lot of lakes in Lithuania, apparently, and there’s something about the number of students in Sweden which I didn’t get. So, come to Lithuania because there are lots of students and a number of lakes.

The conference chairs got a standing ovation, which embarrassed me slightly because I had my laptop out, so I had to give them a crouching ovation to avoid tipping the machine onto the floor, a manoeuvre that nearly stripped a muscle off the bone. Kudos, organisers.

That’s it. We’re done. See you later, everyone!


ITiCSE 2014, Day 3, Final Session, “CS Ed Research”, #ITiCSE2014 #ITiCSE

The first paper, in the final session, was the “Effect of a 2-week Scratch Intervention in CS1 on Learners with Varying Prior Knowledge”, presented by Shitanshu Mishra, from IIT Bombay. The CS1 course context is a single programming course for all freshman engineering students, so it has to work for both novice and advanced learners. It’s the usual problem: novices get daunted and advanced learners get bored. (We had this problem in the past.) The proposed solution is to use Scratch, because it’s low-floor (easy to get started), high-ceiling (can build complex projects) and wide-walls (applies to a wide variety of topics and themes). Thus it should work for both novice and advanced learners.

The theoretical underpinning is that novice learners reach cognitive overload while trying to learn techniques for programming and a language at the same time. One way to reduce cognitive load is to use visual programming environments such as Scratch. For advanced learners, Scratch can provide a sufficiently challenging set of learning material. From the perspective of Flow theory, students need to reach equilibrium between challenge level and perceived skill.

The research goal was to investigate the impact of a two-week intervention in a college course that will transition to C++. What would novices learn in terms of concepts and C++ transition? What would advanced students learn? What was the overall impact on students?

The cohort was 450 students, no CS majors, with a variety of advanced and novice learners, with a course objective of teaching programming in C++ across 14 weeks. The Scratch intervention took place over the first four weeks in terms of teaching and assessment. Novice scaffolding was achieved by ramping up over the teaching time. Engagement for advanced learners was achieved by starting the project early (second week). Students were assessed by quizzes, midterms and project production, with very high quality projects being demonstrated as Hall of Fame projects.

Students were also asked to generate questions on what they learned and these could be used for other students to practice with. A survey was given to determine student perception of usefulness of the Scratch approach.

The results for Novices were presented. While the Novices were able to catch up in basic Scratch comprehension (predict output and debug code), this didn’t translate into writing code in Scratch or debugging programs in C++. For question generation, Novices were comparable to advanced learners in terms of the number of questions generated on sequences, conditionals and data. For threads, events and operators, Novices generated more questions – although I’m not sure I see the link that demonstrates that they definitely understood the material. Unsurprisingly, given the writing code results, Novices were weaker in loops and similar programming constructs. More than 53% of Novices thought the Scratch framing was useful.

In terms of Advanced learner engagement, there were more Advanced projects generated. Unsurprisingly, Advanced projects were far more complicated. (I missed something about Most-Loved projects here. Clarification in the comments please!) I don’t really see how this measures engagement – it may just be measuring the greater experience.

Summarising, Scratch seemed to help Novices but not with actual coding or working with C++, but it was useful for basic concepts. The author claims that the larger complexity of Advanced user projects shows increased engagement but I don’t believe that they’ve presented enough here to show that. The sting in the tail is that the Scratch intervention did not help the Novices catch up to the Advanced users for the type of programming questions that they would see in the exam – hence, you really have to question its utility.

The next paper is “Enhancing Syntax Error Messages Appears Ineffectual” presented by Paul Denny, from The University of Auckland. Apparently we could only have one of Paul or Andrew Luxton-Reilly, so it would be churlish to say anything other than hooray for Paul! (Those in the room will understand this. Sorry we missed you, Andrew! Catch up soon.) Paul described this as the least impressive title in the conference but that’s just what science is sometimes.

Java is the teaching language at Auckland, about to switch to Python, which means no fancy IDEs like Scratch or Greenfoot. Paul started by discussing a Java statement with a syntax error in it, which gave two different (but equally unhelpful) error messages for the same error.

if (a < 0) || (a > 100)
  error=true;

// The error is in the first line: the whole condition needs to be wrapped in surrounding parentheses.
// One compiler reports that a ';' is required at the ||, which doesn't address the actual problem.
// The other compiler says that another if statement is required at the ||.
// Both of these are unhelpful, as well as being wrong: neither message reflects what we intended.
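For reference, here is a minimal sketch of the corrected statement (my own illustration, wrapped in a runnable class; the fix is simply one outer pair of parentheses around the whole condition):

```java
public class RangeCheck {
    public static void main(String[] args) {
        int a = 150;
        boolean error = false;
        // The whole condition sits inside one outer pair of parentheses,
        // which is what both compilers' messages failed to suggest.
        if ((a < 0) || (a > 100)) {
            error = true;
        }
        System.out.println(error); // a = 150 is out of range, so this prints true
    }
}
```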

The conclusion (given early) is simple: a controlled empirical study of the enhanced error messages found no significant effect. This work came from thinking about an early programming exercise that was quite straightforward but seemed to cause students a lot of grief. For those who don’t know, programs won’t run until we fix the structural problems in how we put the program elements together: syntax errors have to be fixed before the program will run. Until the program runs, we get no useful feedback, just (often cryptic) error messages from the compiler. Students will give up if they don’t make progress in a reasonable interval, and a lack of feedback is very disheartening.

The hypothesis was that providing more useful error messages for syntax errors would “help” users, help being hard to quantify. These messages should be:

  • useful: simple language, informal language and targeting errors that are common in practice. Also providing example code to guide students.
  • helpful: reduce the number of non-compiling submissions in total, reduce number of consecutive non-compiling submissions AND reduce the number of attempts to resolve a specific error.

In related work, Kummerfeld and Kay (ACE 2003), “The neglected battle fields of Syntax Errors”, provided a web-based reference guide to search for the error text and then get some examples. (These days, we’d probably call this Stack Overflow. 🙂 ) Flowers, Carver and Jackson, 2004, developed Gauntlet to provide more informal error messages with user-friendly feedback and humour. The paper was published in Frontiers in Education, 2004, “Empowering Students and Building Confidence in Novice Programmers Through Gauntlet.” The next aspect of related work was from Tom Schorsch, SIGCSE 1995, with CAP, making specific corrections in an environment. Warren Toomey modified BlueJ to change the error subsystem but there’s no apparent published work on this. The final two were Dy and Rodrigo, Koli Calling 2010, with a detector for non-literal Java errors and Debugging Tutor: Preliminary evaluation, by Carter and Blank, KCSC, January 2014.

The work done by the authors was in CodeWrite (written up in SIGCSE 2011 and ITiCSE 2011, both under Denny et al). All students submit non-compiling code frequently. Maybe better feedback will help, and influence existing systems such as Nifty reflections (cloud bat) and CloudCoder. In the study, students had 10 problems they could choose from, each with a method, description and return result. The students were split in an A/B test, where half saw raw feedback and half saw the enhanced message. The team built an error recogniser that analysed over 12,000 submissions with syntax errors from a 2012 course; the raw compiler message identified errors 78% of the time (“All Syntax Errors are Not Equal”, ITiCSE 2012). In other cases, static analysis was used to work out what the error was. Eventually, 92% of the errors were classifiable from the 2012 dataset. Anything not in that group was shown as a raw error message to the student.

In the randomised controlled experiment, 83 students had to complete the 10 exercises (worth 1% each), using the measures of:

  • number of consecutive non-compiling submissions for each exercise
  • Total number of non-compiling submissions
  • … and others.

Do students even read the error messages? This would explain the lack of impact. However, examining student code changes, there does appear to be a response to the error messages received, although this can be a slow and piecemeal process. There was a difference between the groups, a 17% reduction in non-compiling submissions, but it wasn’t statistically significant.

I find this very interesting because the lack of significance is slightly unexpected, given that increased expressiveness and ease of reading should make it easier for people to find errors, especially with the provision of examples. I’m not sure that this is the last word on this (and I’m certainly not saying the authors are wrong because this work is very rigorous) but I wonder what we could be measuring to nail this one down into the coffin.

The final talk was “A Qualitative Think-Aloud Study of Novice Programmers’ Code Writing Strategies”, which was presented by Tony Clear, on behalf of the authors. The aim of the work was to move beyond the notion of levels of development and attempt to explore the process of learning, building on the notion of schemas and plans. Assimilation (using existing schemas to understand new information) and accommodation  (new information won’t fit so we change our schema) are common themes in psychology of learning.

We’re really not sure how novice programmers construct new knowledge and we don’t fully understand the cognitive process. We do know that learning to program is often perceived as hard. (Shh, don’t tell anyone.) At early stages, novice programmers have very few schemas to draw on, their knowledge is fragile and the cognitive load is very high.

Woohoo, Vygotsky reference to the Zone of Proximal Development – there are things students know, things they can learn with help, and then the stuff beyond that. Perkins talked about attitudinal factors – movers, tinkerers and stoppers. Stoppers stop and give up in the face of difficulty, tinkerers fiddle until it works and movers actually make good progress and know what’s going on. The final aspect of methodology was inductive theory construction, which I’ll let you look up.

Think-aloud protocol requires the student to clearly vocalise what they are thinking about as they complete computational tasks on a computer, using retrospective interviews to address those points in the videos where silence, incomprehensibility or confused articulation made interpreting the result impossible. The scaffolding involved tutoring, task performance and follow-up. The programming tasks were set in a virtual world-based programming environment, solving tasks of increasing difficulty.

How did they progress? Jacquie uses the term redirection to mean that the student has been directed to re-examine their work, but is not given any additional information. They’re just asked to reconsider what they’ve done. Some students may need a spur and then they’re fine. We saw some examples of students showing their different progression through the course.

Jacquie has added a new category, PLANNERS, which indicates that we can go beyond the Movers to explain the kind of behaviour we see in advanced students in the top quartile. Movers who stretch themselves can become planners if they can make it into the Zone of Proximal Development and, with assistance, develop their knowledge beyond what they’d be capable of by themselves. The More Competent Other plays a significant role in helping people to move up to the next level.

Full marks to Tony. Presenting someone else’s work is very challenging and you’d have to be a seasoned traveller to even reasonably consider it! (It was very nice to see the lead author recognising that in the final slide!)

 


ITiCSE 2014, Day 3, Session 7B, Peer Instruction, #ITiCSE2014 #ITiCSE

The first talk was “Peer Instruction: a Link to the Exam” presented by Daniel Zingaro from University of Toronto. Peer Instruction (PI) is an active learning pedagogy developed for physics and now heavily used in computing. Students complete a reading quiz prior to class and teachers use multiple-choice quizzes to assess knowledge. (You can look this one up in a number of places but I’ve discussed it here before a bit.) There’s a lot of research that shows gains between individual and group vote, with enduring improvements in student learning. (We can use isomorphic questions to reduce the likelihood of copying.) Both students and instructors value the learning.

PI appears to demonstrate improved learning outcomes on the final exam grades, as well as perceived depth of learning. (Couple of studies here from Beth Simon et al, and Daniel himself, checking Beth’s results.) But what leads to this improved outcome? The peer discussion? The class-wide discussion? Both? If one part isn’t useful then we can adapt it to make it more useful to Computer Scientists. Daniel is going to use isomorphic questions to investigate relationships between PI components and final exam grades.

The isomorphic questions test the same concept with different questions, where if they get the first one right, we hope that they get the second one right – and if people learn how to do one, then that knowledge flows on to the other. (The example given was of loop complexity in nested loops depending on different variables.)
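To make the idea concrete, here is a hypothetical sketch of an isomorphic pair (my own illustration, not a question from the paper): both fragments test the same concept, counting how many times a nested-loop body runs, but with different surface details.

```java
public class IsomorphicPair {
    // Q1: how many times does count increment? Answer: n * m.
    static int q1(int n, int m) {
        int count = 0;
        for (int i = 0; i < n; i++) {
            for (int j = 0; j < m; j++) {
                count++;
            }
        }
        return count;
    }

    // Q2 (isomorphic): identical structure and concept, but with renamed
    // variables and different bounds. Answer: rows * cols.
    static int q2(int rows, int cols) {
        int count = 0;
        for (int r = 0; r < rows; r++) {
            for (int c = 0; c < cols; c++) {
                count++;
            }
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(q1(3, 4)); // prints 12
        System.out.println(q2(2, 6)); // prints 12
    }
}
```

A student who has genuinely grasped the concept from Q1 should answer Q2 correctly regardless of the renaming, which is what makes the pair useful for separating learning from guessing.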

Daniel has two question modes in this experiment, which are slightly different. Both modes include the PI components, but the location of the isomorphic questions varies between the two approaches in the second question. In the Peer (P) mode, the isomorphic question comes directly after the group vote; in the second mode (Combined, C), the Q2 isomorphic questions occur directly after the instructor has had a chance to influence the class.

Are the questions really isomorphic and of the same difficulty? An external ranker was used to verify this and then the question pairs and mode were randomised. The difficulty of the questions was found to be statistically equivalent, based on the percentage of Q1 answers that were correct.

Daniel had two hypotheses. Firstly, that peer scores will correlate to final exam scores. Secondly, that combined scores will also correlate with final exam scores, but the correlation should be stronger than for Peer, with the Combined questions representing learning from the full PI cycle. In terms of the final exam, there were three measures of the final exam grades: total exam score, score on the tracing question (similar to PI questions) and score on a code-writing question (very different to PI questions).

The implementation was a CS1 course with 3 lectures/week, with reading quizzes worth 4% submitted prior to each lecture and clicker responses worth 5%. The lectures on average contained three PI cycles; one cycle per lecture contained the follow-up isomorphic question. Multiple regression was used to test relationships between PI and final exam scores.

All of the results were statistically significant. For code-tracing, what students know before the exam explains 13% of their scores in the final exam. With the peer questions, it goes up to 16%. With combined as well, it goes up to 19%. Is this practically significant? Daniel raised this question because it doesn’t rise very much.

In terms of code writing, Baseline is 16%, + Peers is 22% and +Combined is 25%, so we’re starting to see more contribution from peers than instructor in this case. Are we measuring the different difficulty of a problem that peers couldn’t correct, which is why the instructor does less?

Overall? Baseline 21%, Peer 30% and then Combined is 34%. (Any questions about the stats, please read the paper. 🙂 )

Maybe adding combined questions to peer questions increases our predictive accuracy just because we’re adding more data and thus being able to produce a better model?

In discussion, PI performance related to final exam scores (as expected). Peer learning alone is important and the instructor-led discussion is important, over and above peer learning. This validates the role of the instructor in a “student-centred” classroom. Given that PI uses MCQs, we might expect it to only correlate with code-tracing, but it does appear to correlate with code-writing problems as well – there may be deep conceptual similarities between PI questions and programming skills. But are the students who learned from PI also the students who would have learned from any other form of instruction? Still an open question and there’s a lot of ongoing work to do.

The next paper was “Comparing Outcomes in Inverted and Traditional CS1” presented by Diane Horton from U Toronto. I’ve been discussing early intervention and student attendance issues in inverted/hybrid courses with Jennifer Campbell and Michelle Craig, also from U Toronto and also on this paper, as part of an attempt to get some good answers so I’d just come straight out of a lunch, discussing inverted classrooms and their outcomes. (Again, this is why we come to conferences – much of the value is in the meetings and discussion that are just so hard to have when you’re doing your day job or fitting a Skype meeting into the wee small hours to bridge the continental time gap.)

As a reminder, in inverted teaching, some or all of the material is delivered outside the classroom. Work typically done as homework is done in the lecture with the help of instructor or TAs. There were three research questions. Would the inverted offerings have better outcomes? Would inverted teaching affect students’ behaviour or experience? Would particular subgroups respond differently, especially English-language learners and beginner programmers?

The CS1 course at Toronto is a 12 week course in Python, objects-early, classes-late, with most students in 1st year and less than half looking to major in CS. There are 5 lecture sections of roughly 200 students each.

Before the lecture, students prepared by watching videos from the instructors, mostly screencasts of live programming with voice-over; credit for attempting the quizzes embedded in the videos is 0.5% per week. (It’s scary how small that fraction has to be and really rather sad, from a behavioural perspective.)

During the lecture, the instructors used a worked example and students worked on worksheet-based exercises for most of the lectures, with assistance, solo or in pairs. This was a responsive teaching approach because the instructor could draw the class together as required. There was no mark reward or penalty that depended on attendance. If you were solid on the material, it was okay to miss the lecture. The mark scheme reflected some marks for lecture preparation with an increased number of online exercises and decreased weighting on labs.

The inverted CS1 course had gone well in the pilot in January 2013, which was published in a paper at SIGCSE ’14, but it was hard to compare this with the previous class as the make-up of the cohort varies from September to January courses. The study was run again with a more similar cohort the following September. The data presented here compares that cohort, with a high overlap of instructors, against the traditional offering.

For the present study, there were pre- and post-course surveys about attitude and behaviour, completed on paper in the lecture. Weekly lecture attendance counts were made and standard university course evaluations collected. In terms of attendance, the inverted pilot was the lowest, but the inverted class had lower attendance most of the time – an effect that we have also seen under some circumstances and are still thinking about. Interestingly, the inverted lectures weren’t seen by students as being as useful as face-to-face lectures, but the online support materials were seen to be very helpful. As a package, this seems to be an overall positive experience.

The hypothesis was that students in the inverted offering would self-report a higher quality of learning experience and greater enjoyment but this wasn’t supported in the data, nor was it for beginners in particular. However, when asked if they wanted more inverted courses, there was a very strong positive response to this question.

The authors expected that beginners would benefit more because they need more help, and that the gap between beginners and experienced students would reduce – this wasn’t supported by the data. Also, there was no reduction of gaps for English language learners, either.

Would the inverted course help people stay on and pass the course? Well, however success was defined, the pass rate was remarkably consistent, even when the beginners were isolated. However, it does appear that the overall level of knowledge, as measured by the final exam grades, actually improved in the inverted offerings, across two exams of similar difficulty, with a jump from an average grade of 66% to 74% between the terms. Is this just due to the inverted teaching?

Maybe students learned more in the inverted offering because they spent more time on task? Based on self-reported student time, this doesn’t appear to be true. Maybe the beginners got killed off early to reduce their numbers and raise the mark? No, the drop rates among beginners were the same. It appears that the 8 percentage point increase may be related to the inverted mode, although, obviously, more work is required.

Is it worth it? They used no additional TA resources but the development time was enormous. You may not be ready for this investment. There are other options, you don’t need to use videos and you can use pre-existing materials to reduce costs.

Future work involves looking at dropping patterns – who drops when – and student who stumble and recover. They’re also looking at a full online CS1 course for course credit.

The final talk was on “Making Group Processes Explicit to Students: A Case for Justice” presented by Ville Isomöttönen. Project courses have a very long history in Computer Science, as capstones, using authentic customer projects, and the intention is to provide a realistic experience. (Editor’s note: it’s worth noting that some of this may come from a “we got punished like this, so you can be too” attitude.) What do students actually learn from this? Are they learning what we want them to learn, or are they learning something very different and, potentially, much darker?

(This sounds like the kind of philosophical paper I’d give, let’s see where it goes! 🙂 )

If we have tertiary students, why can’t we just place them into a workplace for work experience? They’re adults – maybe we can separate this aspect from the pedagogy. The author’s study wants to look at how to promote conceptual learning in the context of realistic course work. Parker (1999) proposes that students are spending their effort on building working products, rather than actually learning about and reflecting upon the professional issues we consider important. The conjecture is that just because the situation is realistic doesn’t mean that the conceptual learning is happening as we intended.

The study is based around a fairly straightforward project-based learning structure, but had a Pass/Fail grade, with no distinction grading as far as I could tell. The teaching was based on weekly group discussions, with self/peer evaluations, also housed in a group situation, and technical supervision offered by teaching assistants. Throughout the course, students are prompted to think about their operation at a conceptual level. Hmm. I’m not sure what the speaker means by this as, without a very detailed description of what is going on, this could have many different implementations.

We then cut to a diagram of justice conceptualised – I may have missed something as I’m not quite sure how this sits with the group work. I can’t find the diagram online but it involves participation, involving and negotiating with others – fused together as the skill of justice. This sits above statuses, norms and roles. Some of the related work deals with fairness (Richards 2009) as a key attribute of successful group work, Clear 2002 uses it in a diagnostic technique, and Pieterse and Thompson 2010 mention ‘social loafers’ and ‘diligent isolates’.

I’m dreadfully sorry, dear reader, but I’m not following this properly so this may be a bit sketchy. Go and read the paper and I’ll try to get this together. Everyone else in the room appears to be getting this so I may just be tired or struggling with my (not very good) hearing and someone who is speaking rather quietly.

The underlying pedagogy comes from the social realist mindset (Moore, 2000; Maton and Moore, 2010) and “avoids the dilemma between constructivist relativism and positivist absolutism”. We should also look at Integrative Pedagogy (Tynjälä), where the speaker feels that what they are describing is a realist version of this.

The course was surveyed with a preliminary small study (N=21/26, which is curious. Which one is it? Ah, 21 out of 26 enrolled, there we go.). The survey questions were… rather loose and very open to influence, unfortunately, from my quick glance at them but I will have to read the original paper.

Justice is a difficult topic to address, especially where it’s reified as a professional skill that can be developed, and discussing the notion of justice in terms of the ways that a group can work together fairly is very important. I suppose I’m not 100% convinced how much is added in this context through the use of a new term that is an apparent parent to communication and negotiation, with the desired outcome of fairness, because the amalgamation seems to obscure the individual components that would be improved upon to return to a fair state. The very small study, and a small survey, is a valid approach for a case study or phenomenographic approach, but I get the feeling that I was seeing a grounded theory argument. We do have to expose our desired processes to students if we’re going to achieve cognitive apprenticeship and there is a great deal of tension between industrial practice and key concepts, so this is a very interesting area to work in. I completely agree with the speaker that our heavy technical focus often precludes discussions of the empathic, philosophical and intangible, but I’m yet to see how this approach contributes.

The discussions mentioned as important are indeed very important, but group reports and discussion are a built-in part of many SE process models, so I wonder how the justice theme amplifies these aspects. Again, getting students to engage in a dialogue that they do not expect to have in CS can be very challenging, but we could be discussing issues such as critical thinking and ethics, which are often equally alien and orthogonal to the technical, without forming a compound concept that potentially obscures the underlying component mechanisms.

Simon asked a very good question: you didn’t present anything that showed a problem where the students would have needed the concept of justice. Apparently, this is in the writings that are yet to be analysed. The answer to the question ended up as an unlabelled graph on the blackboard which was focused on a skill difference with more experienced peers. I still can’t see how justice ties into this. I have to go and get my hearing checked.


ITiCSE 2014, Day 3, Session 6A, “Digital Fluency”, #ITiCSE2014 #ITiCSE

I’m at the Ångström Laboratory of Uppsala, so this portrait hangs in the main hall. Hi, Anders!


The first paper was “A Methodological Approach to Key Competences in Informatics”, presented by Christina Dörge. The motivation for this study is moving educational standards from input-oriented approaches to output-oriented approaches – how students will use what you teach them in later life. Key competencies are important but what are they? What are the definitions, terms and real meaning of the words “key competencies”? A certificate of a certain grade or qualification doesn’t actually reflect true competency in many regards. (Bologna focuses on competencies, but what do we really mean?) Competencies also vary across different disciplines as skills are used differently in different areas – can we develop a non-normative approach to this?

The author discussed Qualitative Content Analysis (QCA) to look at different educational methods in the German educational system: hardware-oriented approaches, algorithm-oriented, application-oriented, user-oriented, information-oriented and, finally, system-oriented. The paradigm of teaching has shifted a lot over time (including the idea-oriented approach which is subsumed in system-oriented approaches). Looking across the development of the paradigms and trying to work out which categories developed requires a coding system over a review of textbooks in the field. If new competencies were added, then they were included in the category system and the coding started again. The resulting material could be referred to as “Possible candidates of Competencies in Informatics”, but those that are found in all of the previous approaches should be included as Competencies in Informatics. What about the key ones? Which of these are found in every part of informatics: theoretical, technical, practical and applied (under the German partitioning)? A key competency should be fundamental and ubiquitous.

The most important key competencies, by ranking, were algorithmic thinking, followed by design thinking, then analytic thinking (I must look up the subtle difference here). (The paper contains all of the details.) How can we gain competencies, especially these key ones, outside of a normative model that we have to apply to all contexts? We would like to be able to build on competencies, regardless of entry point, but taking into account prior learning so that we can build to a professional end point, regardless of starting point. What do we want to teach in the universities and to what degree?

The author finished on this point and it’s a good question: if we view our progression in terms of competency, then how can we use these as building blocks to higher-level competencies? This will help us in designing prerequisites and entry and exit points for all of our educational design.

The next talk was “Weaving Computing into all Middle School Disciplines”, presented by Susan Rodger from Duke. There were a lot of co-authors who were undergraduates (always good to see). The motivation for this project was that there are problems with CS in the K-12 grades. It’s not taught in many schools and definitely missing in many high schools – not all Unis teach CS (?!?). Students don’t actually know what it is (the classic CS identity problem). There are also under-represented groups (women and minorities). Why should we teach it? 21st-century skills, rewards and many useful skills – from NCWIT.org.

Schools are already content-heavy so how do we convince people to add new courses? We can’t really, so how about trying to weave it into the existing project framework? Instead of doing a poster or a PowerPoint presentation, why not provide an animation that’s interactive in some way and that will involve computing? One way to achieve this is to use Alice, creating interactive stories or games, learning programming and computation concepts in a drag-and-drop code approach. Why Alice? There are many other good tools (Greenfoot, Lego, Scratch, etc.) – well, it’s drag-and-drop, story-based and works well for women. The introductory Alice course in 2005 started to attract more women and now the class is more than 50% women. However, many people couldn’t come in because they didn’t have the prerequisites, so the initiative moved out to 4th-6th grade to develop these skills earlier. Alice Virtual Worlds excited kids about computing, even at the younger ages.

The course “Adventures in Alice Programming” is aimed at grades 5-12 as Outreach, without having to use computing teachers (which would be a major restriction). There are 2-week teacher workshops where, initially, the teachers are taught Alice for a week, then the following week they develop lesson plans. There’s a one-week follow-up workshop the following summer. This initiative has been run since 2008 and is funded until Summer 2015. There are sites in Durham, Charleston and Southern California. The teachers coming in are from a variety of disciplines.

How is this used in middle and high schools by teachers? Demonstrations, examples, interactive quizzes and making worlds for students to view. The students may be able to undertake projects, take and build quizzes, and view and answer questions about a world; the older the student, the more they can do.

Recruitment of teachers has been interesting. Starting from mailing lists and asking the teachers who come, the advertising has spread out across other conferences. It really helps to give them education credits and hours – but if we’re going to pay people to do this, how much do we need to pay? In the first workshop, paying $500 got a lot of teachers (some of whom were interested in Alice). The next workshop, they got gas money ($50/week) and this reduced the number down to the more interested teachers.

There are a lot of curriculum materials available for free (over 90 tutorials) with getting-started material as a one-hour tutorial showing basic set-up, placing objects, camera views and so on. There are also longer tutorials over several different stories. (Editor’s note: could we get away from the Princess/Dragon motif? The Princess says “Help!” and waits there to be rescued and then says “My Sweet Prince. I am saved.” Can we please arm the Princess or save the Knight?) There are also tutorial topics on inheritance, lists and parameter usage. The presenter demonstrated a lot of different things you can do with Alice, including book reports and tying Alice animations into the real world – such as boat trips which didn’t occur.

It was weird looking at the examples, and I’m not sure if it was just because of the gender of the authors, but the kitchen example in cooking with Spanish language instruction used female characters, the Princess/Dragon had a woman in a very passive role and the adventure game example had a male character standing in the boat. It was a small sample of the materials so, for the time being, I’m assuming that this was just a coincidence, or that it reflects the gender of the creator. Hmm. Another example, and this time the Punnett Squares example has a grey-haired male scientist standing there. Oh dear.

Moving on: if you’re a teacher, lots of helper objects are available for you to use, saving on development time, which is really handy if you want to get things going quickly.

Finally, on discussing the impact, over 200 teachers have attended the workshops since 2008, and they have gone on to teach 2,900 students (over 2012-2013). From Google Analytics, over 20,000 users have accessed the materials. Also, a number of small outreach activities, Alice for an hour, have been run across a range of schools.

The final talk in this session was “Early validation of Computational Thinking Pattern Analysis”, presented by Hilarie Nickerson, from University of Colorado at Boulder. Computational thinking is important and, in the US, there have been both scope and pedagogy discussions, as well as instructional standards. We don’t have as much teacher education as we’d like. Assuming that we want the students to understand it, how can we help the teachers? Scalable Game Design integrates game and simulation design into public school curricula. The intention is to broaden participation for all kinds of schools, as after-school classes had identified a lot of differences in the groups.

What’s the expectation of computational thinking? Administrators and industry want us to be able to take game knowledge and potentially use it for scientific simulation. A good game of a piece of ocean is also a predator-prey model, after all. Does it work? Well, it’s spread across a wide range of areas and communities, with more than 10,000 students (and a lot of different Frogger games). Do they like it? There’s a perception that programming is cognitively hard and boring (on the cognitive/affective graph ranging from easy-hard/exciting-boring). We want it to be easy and exciting. We can make it easier with syntactic support and semantic support, but making it exciting requires the students to feel ownership and to be able to express their creativity. And now they’re looking at the zone of proximal flow, which I’ve written about here. It’s good to see this working in a project-first, principles-first model for these authors. (Here’s that picture again)

Figure from A. Repenning, "Programming Goes to School", CACM, 55, 5, May, 2012.

The results? The study spanned 10,000 students, 45% girls and 55% boys (pretty good numbers!), 48% underrepresented, with some middle schools exposing 350 students per year. The motivation starts by making things achievable but challenging – starting from 2D basics and moving up to more sophisticated 3D games. For those who wish to continue: 74% boys, 64% girls and 69% of minority students want to continue. There are other aspects that can raise motivation.

What about the issue of Computing Computational Thinking? The authors have created a Computational Thinking Pattern Analysis (CTPA) instrument that can track student learning trajectories and outcomes. Guided discovery, as a pedagogy, is very effective in raising motivation for both genders, where direct instruction is far less effective for girls (and is also less effective for boys).

How do we validate this? There are several computational thinking patterns, grouped using latent semantic analysis. One of the simpler patterns for a game is the pair of generation and absorption, where we add things to the game world (trucks in Frogger or fish in predator/prey) and then remove them (the truck gets off the screen/the fish gets eaten). We also need collision detection. Measuring development across these skills allows comparison against the tutorial and against other students. What does CTPA actually measure? The presence of code patterns that correspond to computational thinking constructs suggests student skill with computational thinking (but doesn’t prove it) and is different from measuring learning. The graphs produced from this can be represented as a single number, which is used for validation. (See paper for the calculation!)
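The paper gives the actual calculation, but purely as an illustrative sketch of the idea (the pattern names and the aggregation below are my assumptions, not the authors’ formula), collapsing per-pattern evidence into a single number might look something like this:

```python
import math

# Illustrative sketch only: the pattern list and the aggregation are my
# assumptions, not the CTPA calculation from the paper.
PATTERNS = ["generation", "absorption", "collision", "transport"]

def ctpa_like_score(evidence):
    """Collapse per-pattern evidence (each clamped to 0..1) into a single
    number: the magnitude of the pattern vector, normalised to 0..1."""
    v = [min(1.0, max(0.0, evidence.get(p, 0.0))) for p in PATTERNS]
    return math.sqrt(sum(x * x for x in v)) / math.sqrt(len(PATTERNS))

# A hypothetical Frogger-style game with strong generation/absorption.
frogger = {"generation": 0.9, "absorption": 0.8, "collision": 1.0}
print(round(ctpa_like_score(frogger), 3))
```

The appeal of a single number, as the talk noted, is that it lets you line student games up against the tutorial and against each other for validation.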

This has been running for two years now, with 39 student grades for 136 games, and the two human graders were shown to have good inter-rater consistency. Frogger was not very heavily correlated (Spearman rank) but Sokoban, Centipede and The Sims weren’t bad, and removing the design aspects of the rubrics may improve this.
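If you want to run that kind of rank-correlation check on your own grading data, Spearman’s correlation is just the Pearson correlation of the ranks; here’s a minimal pure-Python sketch (the grader scores are made up for illustration):

```python
def average_ranks(xs):
    """Rank values from 1, giving tied values the average of their ranks."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # ranks are 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = average_ranks(x), average_ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Two hypothetical graders scoring the same five games.
grader_a = [70, 85, 60, 90, 75]
grader_b = [72, 80, 65, 95, 70]
```

In practice you would reach for `scipy.stats.spearmanr`, which also reports a p-value, but the hand-rolled version makes the rank-then-correlate idea explicit.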

Was there predictive validity in the project? Did the CTPA correlate with the skill score of the final game produced? Yes, it appears to be significant, although this is early work. CTPA does appear to be capable of measuring CT patterns in code that correlate with human skill development. Future work on this includes the refinement of CTPA by dealing with the issue of non-orthogonal constructs (collisions that include generative and absorptive aspects), using more information about the rules and examining alternative calculations. The group are also working on tools for teachers, including REACT (real-time visualisations for progress assessment) and recommending possible skill trajectories based on students’ skill progression.


ITiCSE 2014, Day 3, Keynote, “Meeting the Future Challenges of Education and Digitization”, #ITiCSE2014 #ITiCSE @jangulliksen

This keynote was presented by the distinguished Professor Jan Gulliksen (@jangulliksen) of KTH. He started with two strange things. He asked for a volunteer and, of course, Simon put his hand up. Jan then asked Simon to act as a support department to seek help with putting on a jacket. Simon was facing the other way so had to try and explain to Jan the detailed process of orientating and identifying the various aspects of the jacket in order. (Simon is an exceedingly thoughtful and methodical person so he had a far greater degree of success than many of us would.) We were going to return to this. The second ‘strange thing’ was a video of President Obama speaking on Computer Science. Professor Gulliksen asked us how often a world leader would speak to a discipline community about the importance of their discipline. He noted that, in his own country, there was very little discussion in the political parties on Computer Science and IT. He noted that Chancellor Merkel had expressed a very surprising position, in response to the video, as the Internet being ‘uncharted territory‘.

Professor Gulliksen then introduced himself as the Dean of the School of Computer Science and Communication at KTH, Stockholm, with 25 years of previous experience at Uppsala. Within this area, he has more than 20 years of experience working with the introduction of user-centred systems in public organisations. He showed two pictures, over 20 years apart, which showed how little the modern workspace has changed in that time, except that the number of post-it colours has increased! He has a great deal of interest in how we can improve design for all users. Currently, he is looking at IT for mental and psychological disabilities, funded by Vinnova and PTS, which is not a widely explored area and can be of great help to homeless people. His team have been running workshops with these people to determine the possible impact of increased IT access – which included giving them money to come to the workshop. But they didn’t come. So they sent railway tickets. But they still didn’t come. But when they used a mentor to talk them through getting up, getting dressed, going to the station – then they came. (Interesting reflection point for all teachers here.) It is difficult to work within the Swedish social security system because the homeless can be quite paranoid about revealing their data, and it can be hard to work with people who have no address, just a mobile number. This is, however, a place where our efforts can have great societal impact.

Professor Gulliksen asks his PhD students: What is really your objective with this research? And he then gives them three options: change the world, contribute new knowledge or you want your PhD. The first time he asked this in Sweden, the student started sweating and asked if they could have a fourth option. (Yes, but your fourth is probably one of the three.) The student then said that they wanted to change the world, but on thinking about it (what have you done), wanted to change to contribute new knowledge, then thought about it some more (ok, but what have you done), after further questioning it devolved to “I think I want my PhD”. All of these answers can be fine but you have to actually achieve your purpose.

Our biggest impact is on the people that we produce, in terms of our contribution to the generation and dissemination of knowledge. Jan wants to know how we can be more aware of this role in society. How can we improve society through IT? This led to the committee for Digitisation, 2012-2015: Sweden shall be the best country in the world when it comes to using the opportunities of digitisation. Sweden produced “ICT for Everyone”, a Digital Agenda for Sweden, which preceded the European initiative. There are 170 different things to be achieved in IT politics, and fewer than a handful of these have not been met since October 2011. As a researcher, Professor Gulliksen had to come to an agreement with the minister to ensure that his academic freedom, to speak truth to power, would not be overly infringed – even though he was Norwegian. (Bit of Nordic humour here, which some of you may not get.)

The goal was that Sweden would be the best country in the world when it came to seizing these opportunities. That’s a modest goal (the speaker is a very funny man) but how do we actually quantify this? The main tasks for the commission were to develop the action plan, analyse progress in relation to the goals, show the opportunities available, administer the organisations that signed the digital agenda (Nokia, Apple and so on) and collaborate with the players to increase digitisation. The committee itself is 7 people, with an ‘expert’ appointed because you have to do this, apparently. To extend the expertise, the government has appointed the small commission, a group of children aged 8-18, to support the main commission with input and proposals showing opportunities for all ages.

The committee started with three different areas: digital inclusion and equal opportunities; school, education and digital competence; and entrepreneurship and company development. The digital agenda itself has four strategic areas in terms of user participation:

  1. Easy and safe to use
  2. Services that create some utility
  3. Need for infrastructure
  4. IT’s role for societal development.

And there are 22 areas of mission under this that map onto the relevant ministries (you’ll have to look that up for yourself, I can’t type that quickly.) Over the year and a half that the committee has been running, they have achieved a lot.

The government needs measurements and rankings to show relative progress, so things like the World Economic Forum’s Networked Readiness Index (which Sweden topped) are often trotted out. But in 2013, Sweden had dropped to third, with Finland and Singapore going ahead – basically, and unsurprisingly, the Straits Tiger is advancing quickly. Other measures include the ICT Development Index (IDI), where Sweden is also doing well. You can look for this on the Digital Commission’s website (which is in Swedish but translates). The first report has tried to map out the digital Sweden – actions and measures carried out, key players and important indicators. Sweden is working a lot in the space but appears to be more passive in re-use than active in creativity, but I need to read the report on this (which is also in Swedish). (I need to learn another language, obviously.) There was an interesting quadrant graph of organisations, ranked by how active they were and how powerful their mandate was, which started a lot of interesting discussion. (This applies to academics in Unis as well, I realise.) (Jag behöver lära mig ett annat språk, uppenbarligen.)

The second report was released in March this year, focusing on the school system. How can Sweden produce recommendations on how the school system will improve? If the school system isn’t working well, you are going to fall behind in the rankings. (Please pay attention, Australian Government!) There is a range of access to schools across Sweden, but access is only one thing; actual use of the resources is another. Why should we do this? (Arguments to convince politicians.) Reduce the digital divide, the economy needs IT-skilled labour, digital skills are needed to be an active citizen, increased efficiency and speed of learning, and many other points! Sweden’s students are deteriorating in the PISA survey rankings, particularly the boys: 30% of Swedish boys are not reaching basic literacy in the first 9 years of school, which is well below the OECD average. Interestingly, Swedish teachers are among the lowest in the EU when it comes to work time spent on skills development. 18% of teachers spend more than 6 days, but 9% spend none at all, the second worst among European countries (Malta takes out the wooden spoon).

The concrete proposals in the SOU were:

  • Revised regulatory documents with a digital perspective
  • Digitally based national tests in primary/secondary
  • web-based learning in elementary and secondary schools
  • digital skilling of teachers
  • digital skilling for principals
  • clarifying the digital component of teacher education programs
  • research, method development and impact measurement
  • innovation projects for the future of learning

Universities are also falling behind so this is an area of concern.

Professor Gulliksen also spoke about the digital champions of the EU (all European countries had one except Germany, until recently, possibly reflecting the Chancellor’s perspective) where digital champion is not an award, it’s a job: a high profile, dynamic and energetic individual responsible for getting everyone on-line and improving digital skills. You need to generate new ideas to go forward, for your country, rather than just copying things that might not fit. (Hello, Hofstede!)

The European Digital Champions work for digital inclusion and help everyone, although we all have responsibility. This provides strategic direction for government but reinforces that the ICT competence required for tomorrow’s work life has to be put in place today. He asked the audience who their European digital champions were and, apart from Sweden, no-one knew. The Danish champion (Lars Frelle-Petersen) has worked with the tax office to force everyone on-line because it’s the only way to do your tax! “The only way to conduct public services should be on the Internet.” The digital champion of Finland (Linda Liukas, from Rails Girls) wants everyone to have three mandatory languages: English, Chinese and JavaScript. (Groans and chuckles from the audience for the language choice.) The digital champion of Bulgaria (Gergana Passy) wants Sofia to be the first free WiFi capital of Europe. Romania’s champion (Paul André Baran) is leading the library and wants libraries to rethink their role in the age of ICT. Ireland’s champion (Sir David Puttnam) believes that we have to move beyond a triage mentality in education to increase inclusion.

In Sweden, 89% of the population is on-line and it’s plateaued at that. Why? Of those that are not on the Internet, most are more than 76 years old. This is a self-correcting problem, most likely. (50% of two-year-olds are on the Internet in Sweden!) Of the 1.1 million Swedes not online, 77% are not interested and 18% think it’s too complicated.

Jan wanted to leave us with two messages. The first is that we need to increase the number of ICT practitioners. Demand is growing at 3% a year and the supply of trained ICT graduates is not keeping pace. If the EU wants to stay competitive, they either have to grow them (education) or import them. (Side note: The Grand Coalition for Digital Jobs)

The second thought is the development of digital competence and improvement of digital skills among ICT users. 19% of the work force is ICT-intensive, 90% of jobs require some IT skills, but 53% of the workforce are not confident enough in their IT skills to seek another job in that sphere. We have to build knowledge and self-confidence. Higher Ed institutions have to look beyond the basic degree to share the resources and guidelines to grow digital competence across the whole community. Push away from the focus on exams and graduation to concentrate on learning – which is anathema to the usual academic machine. We need to work on new educational and business models to produce mature, competent and self-confident people with knowledge, and make industry realise that this is actually what they want.

Professor Gulliksen believes that we need to recruit more ICT experience by bringing experts into the Universities to broaden academia and pedagogy with industry experience. We also really, really need to balance the gender differences, which show the same weird cultural trends in terms of self-perception rather than task description.

Overall, a lot of very interesting ideas – thank you, Professor Gulliksen!

Arnold Pears, Uppsala, challenged one of the points on engaging with, and training for, industry in that we prepare our students for society first, and industrial needs are secondary. Jan agreed with this distinction. (This followed on from a discussion that Arnold and I were having regarding the uncomfortable shoulder rubbing of education and vocational training in modern education. The reason I come to conferences is to have fascinating discussions with smart people in the breaks between interesting talks.)

The jacket came back up again at the end. When discussing Computer Science, Jan feels the need to use metaphors – as do we all. Basically, it’s easy to fall into the trap of thinking you can explain something as being simple when you’re drawing down on a very rich learned context for framing the knowledge. CS people can struggle with explaining things, especially to very new students, because we build a lot of things up to reach the “operational” level of CS knowledge, and everything, from the error messages presented when a program doesn’t work to the efficiency of long-running programs, depends upon understanding this rich context. Whether the threshold here is a threshold concept (Meyer and Land), neo-Piagetian, Learning Edge Momentum or a Bloom-related problem doesn’t actually matter – there’s a minimum amount of well-accepted context required for certain metaphors to work, or you’re explaining to someone how to put a jacket on with your eyes closed. 🙂

One of the final questions raised the issue of computing as a chore, rather than a joy. Professor Gulliksen noted that there are only two groups of people who are labelled as users, drug users and computer users, and the systematic application of computing as a scholastic subject often requires students to lock up more powerful computers (their mobile phones) to use locked-down, less powerful serried banks of computers (based on group purchasing and standard environments). (Here’s an interesting blog on a paper on why we should let students use their phones in classes.)


ITiCSE 2014, Day 2, Session4A, Software Engineering, #ITiCSE2014 #ITiCSE

The first talk, “Things Coming Together: Learning Experiences in a Software Studio”, was presented by Julia Prior, from UTS. (One of the nice things about conferences is catching up with people so Julia, Katrina and I got to have a great chat over breakfast before taxiing into the venue.)
Julia started with the conclusions. From their work, the group have evidence of genuine preparation for software practice; this approach works for complex technical problems and tools, encourages effective group work, builds self-confidence, builds the more elusive professional competencies, provides immersion in rich environments, and furnishes different paths to group development and success. Now for the details!
There are three different parts of a studio, based on the arts and architecture model:
  • People: a learning community of teachers and learners
  • Process: creative and reflective
    • interactions
    • physical space
    • collaboration
  • Product: a designed object – a single focus for the process
UTS have been working on designing and setting up a Software Development Studio for some time and have had a chance to refine their approach. The subject was project-based on a team project for parks and wildlife, using the Scrum development method. The room the students were working in was trapezoidal, with banks of computers up and down.
What happened? What made this experience different was that an ethnographer sat in and observed the class, as well as participating, for the whole class, and there was also an industry mentor who spent 2-3 hours a week with the students. There were also academic mentors. The first week started with Lego, where students had to build a mini-town based on a set of requirements, with colour and time constraints. Watching the two groups working at this revealed two different approaches: one planned up front, with components assigned to individuals, and finished well on time. The other group was in complete disarray: they took pieces out as they needed them and didn’t plan or allocate roles. This left all the building to two members, with two members passing blocks, and the rest standing around. (This was not planned – it just happened.)
The group that did the Lego game well quickly took on Scrum and got going immediately, (three members already knew about Scrum), including picking their team. The second group felt second-rate and this was reflected in their sprint – no one had done the required reading or had direction, thus they needed a lot of mentor intervention. After some time, during presentations, the second group presented first and, while it was unadventurous, they had developed a good plan. The other group, with strong leadership, were not prepared for their presentation and it was muddled and incomplete. Some weeks after that presentation practice, the groups had started working together with leaders communicating, which was at odds with the first part of the activity.
Finding 1: Group Relations.
  • Intra-Group Relations: Group 1 has lots of strong characters and appeared to be competent and performing well, with students in group learning about Scrum from each other. Group 2 was more introverted, with no dominant or strong characters, but learned as a group together. Both groups ended up being successful despite the different paths. Collaborative learning inside the group occurred well, although differently.
  • Inter-Group Relations: There was good collaborative learning across and between groups after the middle of the semester, where initially the groups were isolated (and one group was strongly focused on winning a prize for best project). Groups learned good practices from observing each other.
Finding 2: Things Coming Together
The network linking the students together doesn’t start off being there but is built up over time – it is strongly relational. The methodologies, mentors and students are tangible components but all of the relationships are intangible. Co-creation becomes a really important part of the process.
Across the whole process, integration became a large focus: getting things working in a complex context. Group relations took more effort and the group had to be strategic in investing their efforts. Doing time was an important part of the process – more time spent together helped things to work better. This time was an investment in developing a catalyst for deep learning that improved the design and development of the product. (Student feedback suggested that students should be timetabled into the studio more.) This time was also spent developing the professional competencies of professional graduates that are often not developed in this kind of environment.
(Apologies, Julia, for a slightly sketchy write-up. I had Internet problems at the start of the process so please drop me a line if you’d like to correct or expand upon anything.)
The next talk was on “Understanding Students’ Preferences of Software Engineering Projects”, presented by Robert McCartney. The talk was about a maintenance-centred Software Engineering course (a close analogue to industry, where you rarely build new but you often patch old).
We often teach SE with project work, where the current project approach usually has a generative aspect based on planning, designing and building. In professional practice, most SE effort involves maintenance and evolution. The authors developed a maintenance-focused SE course to change the focus to maintenance and evolution. Students start with some existing system and the project involves comprehending and documenting the existing code, proposing functional enhancements, and implementing, testing and documenting the changes.
This is a second-year course, with small teams (often pairs), but each team has to pick a project, comprehend it, propose enhancements, describe and document, implement enhancements, and present their results. (Note: this would often be more of a third-year course in its generative mode.) Since the students are early on, they are pretty fresh in their knowledge. They’ll have some Object Oriented programming and Data Structures, experience with UML class diagrams and experience using Eclipse. (Interesting – we generally avoid IDEs but it may be time to revisit this.)
The key to this approach is to have enough projects of sufficient scope to work on, and the authors went out to the open source project community to grab existing open source code and work on it, but without the intention to release it back into the wild. This lifts the chances of having good, authentic code, but it’s important to make sure that the project code works. There are many pieces of O/S code out there, with a wide range of diversity, but teachers have to be involved in the clearing process for these things as there are many crap ones out there as well. (My wording. 🙂 )
The paper by Smith et al., “Selecting Open Source Software Projects to Teach Software Engineering”, was presented at SIGCSE 2014 and described the project search process. Of the 1000 open source projects that were downloaded, 200 were an appropriate size and 20 were suitable (could build, had sensible structure and documentation). It takes a lot of time to get this number of projects and is labour-intensive.
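That winnowing (1000 downloaded, 200 the right size, 20 suitable) is essentially a filter pipeline, and the size pass at least is automatable. A hedged sketch of the idea follows; the metadata fields and thresholds are my assumptions for illustration, not the paper’s tooling:

```python
# Hypothetical project metadata; the field names and thresholds are
# assumptions for illustration, not the selection process from the paper.
projects = [
    {"name": "tiny-tool", "loc": 3_000, "builds": True, "has_docs": False},
    {"name": "rapla-like", "loc": 60_000, "builds": True, "has_docs": True},
    {"name": "huge-fw", "loc": 500_000, "builds": False, "has_docs": True},
]

# Pass 1: keep projects in a plausible size band (the course later settled
# on roughly 40-100k lines to find genuine group-developed projects).
right_size = [p for p in projects if 40_000 <= p["loc"] <= 100_000]

# Pass 2: 'suitable' meant it builds and has sensible documentation;
# judging the structure still needs a human reviewer.
suitable = [p for p in right_size if p["builds"] and p["has_docs"]]

print([p["name"] for p in suitable])
```

Even with a pass like this, the 200-to-20 step remains labour-intensive, since building each candidate and judging its structure can’t really be scripted.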
Results in the first year: finding suitable projects was hard; having each team work on a different project is too difficult for staff (the lab instructor has to know about 20 separate projects); and small projects are often not as good as the larger projects. Projects up to 10,000 lines of code were considered small, but these often turned out to be single-developer projects, which meant that there was no group communication structure and a lot of things didn’t get written down, so the software wouldn’t build as the single developer hadn’t needed to let anyone know the tricks and tips.
In the second year, the number of projects was cut down to make it easier on the lab instructors (down to 10) and the size of the projects went up (40-100k lines) in order to find the group development projects. The number of teams grew, and the teams could then pick whichever project they wanted, rather than assigning one team per project on a first-come first-served approach. (The first-come first-served approach meant students were picking based on the name and description of the project, which is very shallow.) To increase group knowledge, the group got a project description, with links to the source code and documentation, build instructions (which had been tested), the list of proposed enhancements and a screen shot of the working program. This gave the group a lot more information to make a deeper decision as to which project they wanted to undertake, and students could get a much better feeling for what they took on.
What the students provided, after reviewing the projects, was their top 3 projects and list of proposed enhancements, with an explanation of their choices and a description of the relationship between the project and their proposed enhancement. (Students would receive their top choice but they didn’t know this.)
The data were analysed with a thematic analysis, abstracting the codes into categories and then using axial coding to determine the relations between categories, combining the results into a single thematic diagram. The categories that attracted students were: Subject Appeal (domain of interest – is it cool or flashy?), Value Added (value of the enhancement, benefit to self or users), Difficulty (how easy or hard it is), and Planning (the match between the team’s skills and the skills the project required, and the effects of the project architecture). From the axial coding, centring on value-adding, the authors came up with a resulting thematic map.
Planning was seen as a sub-theme of difficulty, but both subject appeal and difficulty (although considered separately) were children of value-adding. (You can see elements of this in my notes above.) In the relationships among the themes there was a lot of linkage, leading to observations such as: weighing value-add against difficulty meant that enhancements still had to be achievable.
Looking at the most frequent choices across 26 groups: 9 chose an unexacting daily calendar scheduler (Rapla), 7 chose an infrastructure for games (Triple A) and a few chose a 3D home layout program (Sweet Home). Value-add and subject-appeal were dominant features for all of these. The only top-four project that didn’t mention difficulty was the game framework. What this means is that if we propose projects that exhibit these categories, we would expect them to be chosen preferentially.
The bottom line is that the choices would have been the same if the selection pool had been 5 rather than 10 projects, and there’s no evidence of that much collaboration and discussion between groups doing the same project. (The dreaded plagiarism problem raises its head.) The possible enhancements for such large projects were sufficiently different that the chance of two teams accidentally doing the same thing was quite small.
Caveats: these results are based on the students’ top choices only and these projects dominate the data. (Top 4 projects discussed in 47 answers, remaining 4 discussed in 15.) Importantly, there is no data about why students didn’t choose a given project – so there may have been other factors in play.
In conclusion, the students did make the effort to look past the superficial descriptions in choosing projects. Value adding is a really important criterion, often in conjunction with subject appeal and perceived difficulty. Having multiple teams enhancing the same project (independently) does not necessarily lead to collaboration.
But, wait, there’s more! Larger projects meant that teams faced more uniform comprehension tasks and generally picked different enhancements from each other. Fewer projects means less stress on the lab instructor. UML diagrams are not very helpful when trying to get a big-picture view of a project’s overall structure.
In the future, they’re planning to offer 10 projects to 30 teams, look at software metrics of the different projects, characterise the reasons that students avoid certain projects, and provide different tools to support the approach. Really interesting work and some very useful results that I suspect my demonstrators will be very happy to hear. 🙂
The questions were equally interesting, talking about the suitability of UML for large program representation (when it looks like spaghetti) and whether the position of projects in a list may have influenced the selection (did students download the software for the top 5 and then stop?). We don’t have answers to either of these but, if you’re thinking about offering a project selection for your students, maybe randomising the order of presentation might allow you to measure this!
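If you did want to try that randomisation, here is a minimal sketch (the project names and the seed-by-student-id scheme are my own invention, not anything from the talk):

```python
import random

def presentation_order(projects, student_id):
    """Return a per-student shuffled copy of the project list.

    Seeding the RNG with the student id keeps each student's ordering
    stable across page loads while varying it across the cohort, so you
    can later test whether chosen projects cluster near the top of lists.
    """
    rng = random.Random(student_id)  # deterministic per student
    order = list(projects)           # copy; leave the master list alone
    rng.shuffle(order)
    return order

projects = ["Rapla", "Triple A", "Sweet Home", "Project D", "Project E"]
print(presentation_order(projects, student_id=42))
```

With per-student orderings logged, comparing the list position of each group’s chosen project against a uniform distribution would give a direct measure of the “did they stop after the top 5?” effect.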

ITiCSE 2014, Session 3C: Gender and Diversity, #ITiCSE2014 #ITiCSE @patitsel

This session was dedicated to the very important issues of gender and diversity. The opening talk was “A Historical Examination of the Social Factors Affecting Female Participation in Computing”, presented by Elizabeth Patitsas (@patitsel). This paper was a literature review of the history of the social factors in computing, from the old professional association of the word “computer” with female arithmeticians to today’s very male computing culture. The review spanned 73 papers, 5 books, 2 PhD theses and a Computing Educators Oral History project – a pretty diverse mix of sources. The two big caveats were that it only looked at North America (which means that the sources tend to focus on research-intensive universities and white people) and that this is a big-picture talk, looking at social forces rather than individual experiences. Individuals may, of course, have had different experiences.

The story begins in the 19th Century, when “computer” was a job title for someone who did computations, for scientists, labs, or government. Even after first-wave feminism, female education wasn’t universally available and the women in education tended to be women of privilege. Towards the end of the 19th century, women started to enter traditional universities to attempt PhDs (although often receiving a Bachelors for the work) but had few job opportunities on graduation, except teaching or being a computer. Whatever work was undertaken was inherently short-term, as women were expected to leave the work force on marriage to focus on motherhood.

During the early 20th Century, quantitative work was seen as feminine and qualitative work required the rigour of a man – perceptions have changed, haven’t they! The women’s work was grunt work: calculating, microscopy. Then there was men’s work: designing and analysing. The Wars of the 20th Century changed this by removing men from the workforce, with women stepping into their roles. Notably, women were stereotyped as being better coders in this role because of their background as computers. Coding was clerical, performed by a woman under the direction of a male supervisor. This became male-typed over time. As programming became more developed over the 50s and 60s, the perception of it as a dark art started to form a culture of asociality. Random hiring processes started to hurt female participation because, if you could hire anyone, then (quoting the speaker) if you could hire a man, why hire a woman? (Sound of grinding teeth from across the auditorium as we were all reminded of stupid thinking, presented very well for our examination by Elizabeth.)

CS itself started out being taught in other departments but became its own discipline in the 60s and 70s, with the enrolment and graduation of women matching that of physics very closely. The development of the PC and its adoption in the 80s changed CS enrolments, and CS1 became a weeder course to keep the ‘under-qualified’ from going on to further studies in Computer Science. This led to fewer non-traditional CS students, especially women, as simple changes like requiring mathematics immediately restricted people without full access to high-quality schooling.

In the 90s, we all went mad and developed a hacker culture based around gamer culture, which we already know has had a strongly negative impact on female participation – let’s face it, you don’t want to be part of a club that you don’t like and that goes out of its way to say it doesn’t welcome you. This led to some serious organisation of women’s groups in CS: the Anita Borg Institute, CRA-W and the Grace Hopper Celebration.

Enrolments kept cycling. We saw an enrolment boom and bust (including a greater percentage of women) that matched the dot-com bubble. At the peak, female enrolment got as high as 30%, and the number of female faculty also increased. More women in academia corresponded to more investigation of the representation of women in Computer Science, but it took quite a long time to get serious discussions and evidence identifying how systematic the under-representation is.

Over these different decades, women had very different experiences. The first generation had a perception that they had to give up family and be tough cookies, and had a pretty horrible experience. The second generation in STEM, in the 80s/90s, had female classmates and wanted to be in science AND to have families. However, first-generation advisers were often very harsh on their second-generation mentees because their experiences were so dissimilar. The second generation in CS doesn’t neatly match that of science and biology, due to the enrolment cycles, and the negative nerd perception is far, far stronger in CS than in other disciplines.

Now to the third generation, starting in the 00s, outperforming their male peers in many cases and entering a University with female role models. They also share household duties with their partners, even when both are working and family are involved, which is a pretty radical change in the right direction.

If you’re running a mentoring program for incoming women, their experiences may be very, very different from those of the staff you have mentoring them. Finally, learning from history is essential. We are seeing more students coming in than, for a number of reasons, we may be able to teach. How will we handle increasing enrolments without putting on restrictions that disproportionately hurt our under-represented groups? We have to accept that most of our restrictions don’t apply uniformly, and that this cannot be allowed to continue. It’s wrong for enrolment restrictions to fall more heavily on one group when there’s no good reason to burden one group over another.

One of the things mentioned is that if you ask people to do something because they are from group X, and make this clear, then they are less likely to get involved. Important note: don’t ask women to do something because they’re women, even if your intention is to address under-representation.

The second paper, “Cultural Appropriation of Computational Thinking Acquisition Research: Seeding Fields of Diversity”, was presented by Martha Serra, who is from Brazil – good luck to them in the World Cup tonight! Brazil adapted Scalable Game Design to local educational needs, developing a web-based system, “PoliFacets”, to seed the reflection of IT and Education researchers.

Brazil is the B in BRICS, with nearly 200 million people and the 5th-largest land area in the world. Bigger than Australia! (But we try harder.) Brazil is very regionally diverse: rain forest, wetlands, drought, poverty, megacities, industry, agriculture – and, unsurprisingly, it’s very hard to deal with such diversity. 80% of the youth population fail to complete basic education. Only 26% of the adult population reach full functional literacy. (My jaw just dropped.)

Scalable Game Design (SGD) is a program from the University of Colorado Boulder to motivate all students in Computer Science through game design. The approach uses AgentSheets and AgentCubes as visual programming environments. (The image shown was of a very visual programming language reminiscent of Scratch – not surprising, as it is accepted that Scratch picked up some characteristics from AgentSheets.)

The SGD program started as an after-school program in 2010 with a public middle school, with a Geography teacher as the program leader. In the following year, at the same school, a 12-week program ran with a Biology teacher in charge. Some of the students who had done it before had, unfortunately, forgotten things by the next year. The year after, a workshop for teachers was introduced, along with the PoliFacets site. The next year introduced more schools, with the first school now considered autonomous, and the teacher workshops continued. Overall, a very positive development of sustainable change.

Learners need stimulation but teachers need training if we’re going to introduce technology – very similar to what we learned in our experience with digital technologies.

The PoliFacets system is a live-documentation web-based system used to assist with the process. A live demo was not available, as the Brazilian corner of the internet seemed to be full of football. It’s always interesting to look at a system that was developed in a different era – it makes you aware how much refactoring goes into the IDEs of modern systems to stop them looking like refugees from a previous decade. (Perhaps the less said about the “Mexican Frogger” game the better…)

The final talk (for both this session and the day) was “Apps for Social Justice: Motivating Computer Science Learning with Design and Real-World Problem Solving”, presented by Sarah Van Wart. Starting with motivation: tech has diversity issues, with differential access and exposure to CS across race and gender lines. The tech industry has similar problems recruiting and retaining more diverse candidates, but there are also some really large structural issues that overshadow the whole area.

Structurally, white families have 18–20 times the wealth of Latino and African-American families, while the jail population is skewed the opposite way. Schools start with the composition of their community and are supposed to solve these distribution issues, but instead they continue to reflect the composition they inherited. US schools are highly tracked, and White and Asian students tend to track into Advanced Placement, while Black and Latino students track into different (and possibly remedial) programs.

Some people are categorically under-represented and this means that certain perspectives are being categorically excluded – this is to our detriment.

The first aspect of the theoretical perspective is Conceptions of Equity, looking at Jaime Escalante and his work with students to do better in the AP Calculus exam. His idea of equity was access: access to a high-value test that could facilitate college access and thus more highly paid careers. The next aspect was Funds of Knowledge (González et al.), where focusing on a white context devalues the knowledge of other communities and reinforces one community’s privilege. The third part, Relational Equity (Jo Boaler), reduced streaming and tracking, focusing on group work where each student was responsible for every other student’s success. Finally, Rico Gutstein takes a socio-political approach with Social Justice Pedagogy, providing authentic learning frameworks and using statistics to expose the problems.

The next parts of the theoretical perspective were Computer Science Education and the Learning Sciences (a socio-cultural perspective on learning: who you are, and what it means to be ‘smart’).

In terms of learning science, Nasir and Hand, 2006, discussed Practice-linked Identities, with access to the domain (students know what CS people do), integral roles (there are many ways to contribute to a CS project) and self-expression and feeling competent (students can bring themselves to their CS practice).

The authors produced a short course for a small group of students to develop a small application. The outcome was BAYP (Bay Area Youth Programme), an App Inventor application that queried a remote database to answer user queries on local after-school program services.

How do we understand this in terms of an equity intervention? Let’s go back to Nasir and Hand.

  1. Access to the domain: Design and data used together is part of what CS people do, bridging students’ concepts and providing an intuitive way of connecting design to the world. When we have data, we can get categories, then schemas and so on. (This matters to CS people, if you’re not one. 🙂 )
  2. Integral Roles: Students got to see the importance of design, sketching things out, planning, coding, and seeing a segue from non-technical approaches to technical ones. One other very important aspect is that the oft-derided “liberal arts” skills may actually be useful, or a good basis on which to build coding, as long as you understand what programming is and how to get access to it.
  3. Making a unique contribution: The students felt that what they were doing was valuable and let them see what they could do.

Take-aways? CS can appeal to so many people if we think about how to do it. There are many pathways to help people, and we have to think about what we can be doing to help. Designing for their own community is going to be empowering for people.

Sarah finished on some great questions. How will they handle scaling it up? Apprenticeship is really hard to scale up but we can think about it. Does this make students want to take CS? Will this lead to AP? Can it be inter-leaved with a project course? Could this be integrated into a humanities or social science context? Lots to think about but it’s obvious that there’s been a lot of good work that has gone into this.

What a great session! Really thought-provoking and, while it was a reminder for many of us how far we have left to go, there were probably people present who had heard things like this for the first time.