ITiCSE 2014, Day 2, Session 4A, Software Engineering, #ITiCSE2014 #ITiCSE

The first talk, “Things Coming Together: Learning Experiences in a Software Studio”, was presented by Julia Prior, from UTS. (One of the nice things about conferences is catching up with people so Julia, Katrina and I got to have a great chat over breakfast before taxiing into the venue.)
Julia started with the conclusions. From their work, the group have evidence of genuine preparation for software practice: the approach works for complex technical problems and tools, encourages effective group work, builds self-confidence, builds the more elusive professional competencies, provides immersion in rich environments, and furnishes different paths to group development and success. Now for the details!
There are three different parts of a studio, based on the arts and architecture model:
  • People: a learning community of teachers and learners
  • Process: creative and reflective
    • interactions
    • physical space
    • collaboration
  • Product: a designed object – a single focus for the process
UTS have been working on designing and setting up a Software Development Studio for some time and have had a chance to refine their approach. The subject was project-based, built around a team project for parks and wildlife, using the Scrum development method. The room the students were working in was trapezoidal, with banks of computers up and down.
What happened? What made this experience different was that an ethnographer sat in and observed the class, as well as participating, for the whole class, and there was also an industry mentor who spent 2-3 hours a week with the students. There were also academic mentors. The first week started with Lego, where students had to build a mini-town based on a set of requirements, with colour and time constraints. Watching the two groups working at this revealed two different approaches: one planned up front, with components assigned to individuals, and finished well on time. The other group was in complete disarray, took pieces out as they needed them, and didn’t plan or allocate roles. This left all the building to two members, with two more passing blocks and the rest standing around. (This was not planned – it just happened.)
The group that did the Lego game well (three members already knew about Scrum) took to Scrum quickly and got going immediately, including picking their team. The second group felt second-rate and this was reflected in their sprint – no one had done the required reading or had direction, so they needed a lot of mentor intervention. Some time later, during presentations, the second group presented first and, while their work was unadventurous, they had developed a good plan. The other group, with strong leadership, were not prepared for their presentation and it was muddled and incomplete. Some weeks after that presentation practice, the groups had started working together, with leaders communicating – quite at odds with the first part of the activity.
Finding 1: Group Relations.
  • Intra-Group Relations: Group 1 had lots of strong characters and appeared to be competent and performing well, with students in the group learning about Scrum from each other. Group 2 was more introverted, with no dominant or strong characters, but learned as a group together. Both groups ended up being successful despite the different paths. Collaborative learning inside each group occurred well, although differently.
  • Inter-Group Relations: There was good collaborative learning across and between groups after the middle of the semester, where initially the groups were isolated (and one group was strongly focused on winning a prize for best project). Groups learned good practices from observing each other.
Finding 2: Things Coming Together
The network linking the students together doesn’t start off being there but is built up over time – it is strongly relational. The methodologies, mentors and students are tangible components but all of the relationships are intangible. Co-creation becomes a really important part of the process.
Across the whole process, integration became a large focus: getting things working in a complex context. Group relations took more effort and the groups had to be strategic in investing their efforts. Doing time was an important part of the process – more time spent together helped things to work better. This time was an investment in developing a catalyst for deep learning that improved the design and development of the product. (Student feedback suggested that students should be timetabled into the studio more.) This time was also spent developing the professional competencies, and professional graduates, that are often not developed outside this kind of environment.
(Apologies, Julia, for a slightly sketchy write-up. I had Internet problems at the start of the process so please drop me a line if you’d like to correct or expand upon anything.)
The next talk was on “Understanding Students’ Preferences of Software Engineering Projects”, presented by Robert McCartney. The talk was about a maintenance-centred Software Engineering course (a close analogue to industry, where you rarely build new but you often patch old).
We often teach SE with project work where the current project approach usually has a generative aspect based on planning, designing and building. In professional practice, however, most SE effort involves maintenance and evolution, so the authors developed a maintenance-focused SE course to change the focus accordingly. Students start with an existing system and the project involves comprehending and documenting the existing code, proposing functional enhancements, and implementing, testing and documenting the changes.
This is a second-year course, with small teams (often pairs), but each team has to pick a project, comprehend it, propose enhancements, describe and document, implement enhancements, and present their results. (Note: this would often be more of a third-year course in its generative mode.) Since the students are early on, they are pretty fresh in their knowledge. They’ll have some Object Oriented programming and Data Structures, experience with UML class diagrams and experience using Eclipse. (Interesting – we generally avoid IDEs but it may be time to revisit this.)
The key to this approach is to have enough projects of sufficient scope to work on, and the authors went out to the open source community to grab existing code and work on it, without the intention of releasing it back into the wild. This lifts the chances of having good, authentic code, but it’s important to make sure that the project code works. There are many pieces of O/S code out there, with a wide range of diversity, but teachers have to be involved in the vetting process because there are many crap ones out there as well. (My wording. 🙂 )
The paper by Smith et al., “Selecting Open Source Software Projects to Teach Software Engineering”, was presented at SIGCSE 2014 and described the project search process. Starting from the 1,000 open source projects that were downloaded, 200 were an appropriate size and 20 were suitable (could build, had sensible structure and documentation). Getting to this number of projects takes a lot of time and is labour intensive.
Results in the first year: finding suitable projects was hard, having each team work on a different project was too difficult for staff (the lab instructor has to know about 20 separate projects), and small projects were often not as good as the larger ones. Projects up to 10,000 lines of code were considered small, but these often turned out to be single-developer projects, which meant that there was no group communication structure and a lot of things didn’t get written down – the software wouldn’t build because the single developer had never needed to let anyone else know the tricks and tips.
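As a side note, the first, mechanical part of that filtering – sizing candidate projects by lines of code – is easy to automate. Here’s a rough sketch of the idea in Python; this is my own illustration, not the authors’ actual tooling, and the extension list and size band are assumptions:

    # Size candidate projects by (non-blank) lines of source code,
    # then keep only those in a target band.
    import os

    SOURCE_EXTS = {".java", ".py", ".c", ".cpp", ".h"}  # assumed languages

    def count_loc(root: str) -> int:
        """Count non-blank lines across recognised source files under root."""
        total = 0
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                if os.path.splitext(name)[1] in SOURCE_EXTS:
                    path = os.path.join(dirpath, name)
                    with open(path, errors="ignore") as f:
                        total += sum(1 for line in f if line.strip())
        return total

    # e.g. keep only projects in the 40k-100k band used in year two
    candidates = [p for p in os.listdir(".")
                  if os.path.isdir(p) and 40_000 <= count_loc(p) <= 100_000]
    print(candidates)

Of course, the expensive part (does it build? is the structure sensible? is it documented?) still needs a human, which is exactly why the authors found the process so labour intensive.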
In the second year, the number of projects was cut down (to 10) to make it easier on the lab instructors, and the size of the projects went up (40-100k lines) in order to find group-developed projects. The number of teams grew, and teams could pick whichever project they wanted, rather than being assigned one project per team on a first-come, first-served basis. (The first-come, first-served approach had meant students were picking based on the name and description of the project, which is very shallow.) To increase group knowledge, each group got a project description, with links to the source code and documentation, build instructions (which had been tested), the list of proposed enhancements and a screenshot of the working program. This gave the group a lot more information on which to make a deeper decision about which project they wanted to undertake, and students could get a much better feeling for what they took on.
What the students provided, after reviewing the projects, was their top 3 projects and a list of proposed enhancements, with an explanation of their choices and a description of the relationship between each project and their proposed enhancement. (Students would receive their top choice, but they didn’t know this.)
The data were analysed with a thematic analysis, abstracting the codes into categories and then using axial coding to determine the relations between categories, combining the results into a single thematic diagram. The attraction categories were: Subject Appeal (considering the domain of interest – is it cool or flashy?), Value Added (value of the enhancement, benefit to self or users), Difficulty (how easy or hard it is), and Planning (considering the match between team skills and the skills that the project required, and the effects of the project architecture). In the axial coding, centring on value-adding, the authors came up with a resulting thematic map.
Planning was seen as a sub-theme of difficulty, but both subject appeal and difficulty (although considered separately) were children of value-adding. (You can see elements of this in my notes above.) In the relationships among the themes, there was a lot of linkage, leading to ideas such as weighing value-add against difficulty: enhancements still had to be achievable.
Looking at the most frequent choices, for 26 groups, 9 chose an undemanding daily calendar scheduler (Rapla), 7 chose an infrastructure for games (Triple A) and a few chose a 3D home layout program (Sweet Home). Value-add and subject appeal were dominant features for all of these. The only top-four project that didn’t mention difficulty was a game framework. What this means is that if we propose projects that provide these categories, then we would expect them to be chosen preferentially.
The bottom line is that the choices would have been the same if the selection pool had been 5 rather than 10 projects, and there’s no evidence that there was much collaboration and discussion between groups doing the same projects. (The dreaded plagiarism problem raises its head.) The number of possible enhancements for such large projects was sufficiently large that the chance of two teams accidentally doing the same thing was quite small.
Caveats: these results are based on the students’ top choices only and these projects dominate the data. (Top 4 projects discussed in 47 answers, remaining 4 discussed in 15.) Importantly, there is no data about why students didn’t choose a given project – so there may have been other factors in play.
In conclusion, the students did make the effort to look past the superficial descriptions in choosing projects. Value adding is a really important criterion, often in conjunction with subject appeal and perceived difficulty. Having multiple teams enhancing the same project (independently) does not necessarily lead to collaboration.
But, wait, there’s more! Larger projects meant that teams faced more uniform comprehension tasks and generally picked different enhancements from each other. Fewer projects mean less stress on the lab instructor. UML diagrams were not very helpful when trying to get the big-picture view of a project’s overall structure.
In the future, they’re planning to offer 10 projects to 30 teams, look at software metrics of the different projects, characterise the reasons that students avoid certain projects, and provide different tools to support the approach. Really interesting work and some very useful results that I suspect my demonstrators will be very happy to hear. 🙂
The questions were equally interesting, talking about the suitability of UML for large program representation (when it looks like spaghetti) and whether the position of projects in a list may have influenced the selection (did students download the software for the top 5 and then stop?). We don’t have answers to either of these but, if you’re thinking about offering a project selection for your students, maybe randomising the order of presentation might allow you to measure this!

 


ITiCSE 2014: Working Groups Reports #ITiCSE2014 #ITiCSE

Unfortunately, there are too many working groups, reporting at too high a speed, for me to capture it here. All of the working groups are going to release reports and I suggest that you have a look into some of the areas covered. The topics reported on today were:

  • Methodology and Technology for In-Flow Peer Review

    In-flow peer review is the review of an exercise while it is still going on. Providing elements to review can be difficult, as it may encourage plagiarism, but there are many benefits to review, which generally justify the decision to do it. Picking who can review what for maximum benefit is also very difficult.

    We’ve tried to do a lot of work here but it’s really challenging because there are so many possibly right ways.

  • Computational Thinking in K-9 Education

    Given that there are national, and localised, definitions of what “Computational Thinking” is, it is challenging to identify. Many K-12 teachers are actually using CT techniques but wouldn’t think to answer “yes” if asked whether they were. There are many issues in play here, but the working group are a multi-national and thoughtful group with lots of ideas.

    As a note, K-9 refers to Kindergarten to Year 9, not dogs. Just to be clear.

  • Increasing Accessibility and Adoption of Smart Technologies for Computer Science Education

    How can you integrate all of the whizz-bang stuff into the existing courses and things that we already use every day? The working group have proposed an architecture to help with adoption. It’s a really impressive, if scary, slide but I’ll be interested to see where this goes. (Unsurprisingly, it’s a three-tier model that will look familiar to anyone with a networking or distributed systems background.) Basically, let’s not re-invent the wheel when it comes to using smarter technologies but let’s also find out the best ways to build these systems and then share that, as well as good content and content delivery. Identity management is, of course, a very difficult problem for any system, so this is a core concern.

    There’s a survey you can take to share your knowledge with this workgroup. (The feared and dreaded Simon noted that it would be nice if their survey was smarter.) A question from the floor was that, while the architecture was nice and standards were good, what impact would this have on the chalkface? (This is a neologism I’ve recently learned about, the equivalent of the coalface for the educational teaching edge.) This is a good question. You only have to look at how many standards there are to realise that standard construction and standard adoption are two very different beasts. Cultural change is something that has to be managed on top of technical superiority. The working group seems to be on top of this so it will be interesting to see where it goes.

  • Strengthening Methodology Education in Computing

    Unsurprisingly, computing is a very broad field and is methodologically diverse. There’s a lot of ‘borrowing’ from other fields, which is a nice way of saying ‘theft’. (Sorry, philosophers, but ontologies are way happier with us.) Our curricula have very few concrete references to methodology, with a couple of minor exceptions. The working group had a number of objectives, which they reduced down to fewer, removing the term methodology. Literature reviews on methodology education are sparse, but there is more on teaching research methods. Embarrassingly, the paper that shows up for this is a 2006 report from a working group at this very conference. Oops. As Matti asked, are we really so uninterested in this topic that we forget that we were previously interested in it? The group voted to change direction to get some useful work out of the group. They voted not to produce a report, as it was too challenging to repurpose things at this late stage: all their work would go toward annotating the existing paper rather than creating a new one.

    One of the questions was why the previous paper had so few citations, cited 5 times out of 3000 downloads, despite the topic being obviously important. One aspect mentioned is that CS researchers are a separate community and I reiterated some early observations that we have made on the pathway that knowledge takes to get from the CS Ed community into the CS ‘research’ community. (This summarises as “Do CS Ed research, get it into pop psychology, get it into the industrial focus and then it will sneak into CS as a curricular requirement, at which stage it will be taken seriously.” Only slightly tongue-in-cheek.)

  • A Sustainable Gamification Strategy for Education

    Sadly, this group didn’t show up, so this was disbanded. I imagine that they must have had a very good reason.

Interesting set of groups – watch for the reports and, if you use one, CITE IT! 🙂


ITiCSE 2014, Monday, Session 1A, Technology and Learning, #ITiCSE2014 #ITiCSE @patitsel @guzdial

(The speakers are going really, really quickly so apologies for any errors or omissions that slip through.)

The chair had thanked the Spanish at the opening for the idea of long coffee breaks and long lunches – a sentiment I heartily share as it encourages discussions, which are the life blood of good conferences. The session opened with “SPOC-supported Introduction to Programming”, presented by Marco Piccioni. SPOCs are Small Private On-line Courses and are part of the rich tapestry of hand-crafted terminology that we are developing around digital delivery. The speaker is from ETH Zürich and says that they took a cautious, step-by-step approach to taking an existing and successful course and moving it into the on-line environment. The classic picture from the University of Bologna of the readers/scribes was shown. (I was always the guy sleeping in the third row.)

No paper aeroplanes?

We want our teaching to be interesting and effective so there’s an obvious motivation to get away from this older approach. ETH has an interesting arrangement where the exam is 10 months after the lecture, which leads to interesting learning strategies for students who can’t solve the instrumentality problem of tying work now into success in the future. ETH also had to create an online platform to get around all of the “my machine doesn’t work” problems, removing the requirement to install an IDE locally. The final point of motivation was to improve their delivery.

The first residential version of the course ran in 2003, with lectures and exercise sessions. The lectures are in German and the exercise sessions are in English and German, because English is so dominant in CS. There are 10 extensive home assignments including programming, and exercise session groups are formed according to students’ perceived programming proficiency level. (Note on the last point: Hmmm, so people who can’t program are grouped together with other people who can’t program? I believe that the speaker clarified this as “self-perceived” ability but I’m still not keen on this kind of streaming. If this worked effectively, then any master/apprentice model should automatically fail.) Groups were able to switch after a week, for language reasons or because the group wasn’t working out.

The learning platform for the activity was Moodle and their experience with it was pretty good, although it didn’t do everything that they wanted. (They couldn’t put interactive sessions into a lecture, so they produced a lecture-quiz plug-in for Moodle. That’s very handy.) This is used in conjunction with a programming assessment environment, in the cloud, which ties together the student performance at programming with the LMS back-end.

The SPOC components are:

  • lectures, with short intros and video segments up to 17 minutes. (Going to drop to 10 minutes based on student feedback),
  • quizzes, during lectures, testing topic understanding immediately, and then testing topic retention after the lecture,
  • programming exercises, with hands-on practice and automatic feedback

Feedback given to the students included the quizzes, with a badge for a 100% score (over unlimited attempts, so this isn’t as draconian as it sounds), and a variety of feedback on programming exercises, including automated feedback (compiler/test suite based on test cases and output matching) and a link to a suggested solution. The predefined test suite was gameable (you could customise your code for the test suite) and some students engineered their output purely to match the test inputs. This kind of cheating was deemed not to be a problem by ETH but it was noted that this wouldn’t scale into MOOCs. Note that if someone got everything right then they got to see the answer – so bad behaviour got you the right answer. We’re all sadly aware that many students are convinced that having access to some official oracle is akin to having the knowledge themselves, so I’m a little cautious about this as a widespread practice: cheat, get the right answer, is a formula for delayed failure.
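To make the gaming point concrete, here’s a minimal sketch of why output-matching autograders are so easy to game – my own illustration in Python, not ETH’s actual infrastructure:

    # A naive autograder compares program output against expected
    # output for a fixed, visible set of test cases.
    TEST_CASES = [((2, 3), "5"), ((10, -4), "6")]

    def grade(student_fn):
        """Pass iff the student's output matches every known test case."""
        return all(student_fn(*args) == expected
                   for args, expected in TEST_CASES)

    def honest(a, b):
        return str(a + b)  # actually solves the problem

    def gamed(a, b):
        # Hard-codes the visible inputs and solves nothing.
        return {(2, 3): "5", (10, -4): "6"}[(a, b)]

    print(grade(honest), grade(gamed))  # True True -- both "pass"

Hidden test cases or randomised inputs close this particular hole, which is presumably part of why the visible-suite approach doesn’t scale to MOOC-sized anonymity.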

Reporting for each student included their best attempt and past attempts. For the TAs, they had a wider spread of metrics, mostly programmatic and mark-based.

On looking at the results, attendance at on-line lectures was 71%, while live course attendance remained stable. Neither on-line quizzes nor programming exercises counted towards the final grade. Quiz attempts were about 5x the attendance, and 48% scored 100% and got the badge – significantly more than the 5-10% that would usually do this.

Students worked on 50% of the programming exercises. 22% of students worked on 75-100% of the exercises. (There was a lot of emphasis on the badge – and I’m really not sure if there’s evidence to support this.)

The lessons learned summarised what I’ve put above: shortening video lengths, face-to-face is important, MCQs can be creative, gamification, and better feedback is required on top of the existing automatic feedback.

The group are scaling from SPOC to MOOC with a Computing: Art, Magic, Science course on EdX launching later on in 2014.

I asked a question about the badges because I was wondering if putting in the statement “100% in the quiz is so desirable that I’ll give you a badge” was what had led to the improved performance. I’m not sure I communicated that well but, as I suspected, the speaker wants to explore this more in later offerings and look at how this would scale.

The next session was “Teaching and learning with MOOCs: Computing academics’ perspectives and engagement”, presented by Anna Eckerdal. The work was put together by a group composed from Uppsala, Aalto, Maco and Monash – which illustrates why we all come to conferences, as this workgroup was put together in a coffee-shop discussion in Uppsala! The discussion stemmed from the early “high hype” mode of MOOCs, but they were highly polarising: colleagues either loved them or hated them. What was the evidence to support either position? Academics’ experience and views on MOOCs were sought via a questionnaire sent out to the main e-mail lists, to CS and IT people.

The study ran over June-July 2013, with 236 responses from more than 90 universities, using closed- and open-ended questions. The research questions: what are the community views on MOOCs from a teaching perspective (positive and negative) and how have people been incorporating them into their existing courses? (Editorial note: clearly defined study with a precise pair of research questions – nice.)

Interestingly, most people had heard concern expressed about MOOCs, followed by people who were positive, then confused, then negative, then excited, then uninformed, then uninterested and, finally, some 10% of people who have apparently been living in a time-travelling barrel in Ancient Greece, because in 2013 they had heard no MOOC discussion.

Several themes were identified as prominent in the positive/negative aspects, all associated with the core theme of teaching and learning. (The speaker outlined the way that the classification had been carried out, which is always interesting for a coding problem.) Anna reiterated the issue of a MOOC as a personal power enhancer: a MOOC can make a teacher famous, which may also be attractive to the university. The sub-themes were pedagogy and the learning environment, affordances of MOOCs, interaction and collaboration, assessment and certificates, and accessibility.

Interestingly, some of the positive answers included references to debunked approaches (such as learning styles) and the potential for improvements. The negatives (and there were many of them) referred to Stone Age learning and a lack of relationships.

On affordances of MOOCs, there were mostly positive comments: helping students with professional skills, refreshing existing skills and learning new ones, trying before you buy, and the ability to transcend the tyranny of geography. The negatives included the economic issue of only popular courses being available, the fact that not all disciplines can go on-line, and that there is no scaffolding for identity development in the professional sense, nor support for the development of critical thinking or teamwork. (Not sure if I agree with the last two, as that seems to depend on the way that you put the MOOC together.)

I’m afraid I missed the slide on interaction and collaboration so you’ll (or I’ll) have to read the paper at some stage.

There was nothing positive about assessment and certificates: course completion rates are low, it’s unclear what can reasonably be assessed, there’s plagiarism, and how do we certify any of it? How does a student from a MOOC compete with a student from a face-to-face university?

1/3 of the respondents answered about accessibility, with many positive comments on “anytime, anywhere, at one’s own pace”. We can (somehow) reach non-traditional student groups. (Note: there is a large amount of contradictory evidence on this one – MOOCs are even worse than traditional courses. Check out Mark Guzdial’s CACM blog on this.) Other answers included “access to world class teachers” and the “opportunity to learn from experts in the field”. Interesting, given that the mechanism (from other answers) is so flawed that world-class teachers would barely survive MOOC-ification!

On academics’ engagement with MOOCs, the largest group (49%) believed that MOOCs had had no effect at all, about 15% said they had inspired changes, and roughly 10% had incorporated some MOOC material. Very few had seen MOOCs as a threat requiring change, either personally or institutionally. Only one respondent said that their course was now a MOOC, although 6% had developed one and 12% wanted to.

For the open-ended question on academics’ engagement, most believed that no change was required because their teaching was superior. (Hmm.) A few reported changes to teaching that were similar to MOOCs (on-line materials or automated assessment) but weren’t influenced by them.

There’s still no clear vision of the role of MOOCs in the future: “concerned” is as prominent as “positive”. There is a lot of potential but many concerns.

The authors had several recommendations: focus on active learning, much more research into automatic assessment and feedback methods is needed, and universities need good policy regarding certification and the role of on-site and MOOC curricula. Uppsala have started the process of thinking about policy.

The first question was “how much of what is seen here would apply to any new technology being introduced?”, with an example of the similar reactions seen earlier to Second Life. Anna, in response, wondered why the MOOC has such a global identity as a game-changer, given its similarity to previous technologies. The global discussion gives the MOOC topic a greater influence, which is why answering these questions is more important in this context. Another issue raised in questions was the perceived value of MOOCs: many people who have taken MOOCs may not be advertising it, because of the inherent ranking of knowledge.

@patitsel raised the very important issue that under-represented groups are even more under-represented in MOOCs – you can read through Mark’s blog to find many good examples of this, from cultural issues to digital ghettoisation.

The session concluded with “Augmenting PBL with Large Public Presentations: A Case Study in Interactive Graphics Pedagogy”. The presenter was a freshly graduated student who had completed the course three weeks earlier, so he was here to learn and get constructive criticism. (Ed’s note: he’s in the right place. We’re very inquisitive.)

Ooh, brave move. He’s starting with anecdotal evidence. This is not really the crowd for that – we’re happy with phenomenographic studies and case studies that look at the existence of phenomena as part of a study, but anecdotes, even with pictures, are not the best use of your short time in front of a group of people. And already a couple of people have left, because that’s not a great way to start a talk in terms of framing.

I must be honest, I slightly lost track of the talk here. EBL was defined as project-based learning augmented with constructively aligned public expos, with gamers as the target audience. The speaker noted that “gamers don’t wait” as a reason to have strict deadlines. Hmm. Half-Life 3, anyone? The goal was to study the pedagogical impact of this approach. The students in the study had to build something large, original and stable, communicate the theory, work as a group, demonstrate in large venues and then collaborate with a school of communication. So, it’s a large-scale graphics-based project in teams with a public display.

Grading was composed of proposals, demos, presentations and open houses. Two projects (50% and 40%) and weekly assignments (10%) made up the whole grading scheme. The second project came out after the first big Game Expo demonstration. Project 1 had to be interactive, done in groups of 3-4. The KTH visualisation studio was an important part of this and it is apparently full of technology, which is nice, and we got to hear about a lot of it. Collaboration is a strong part of the visualisation studio, which was noted in response to the keynote. The speaker mentioned some of the projects and it’s obvious that they are producing some really good graphics projects.

I’ll look at the FaceUp application in detail as it was inspired by the idea to make people look up in the Metro rather than down at their devices. I’ll note that people look down for a personal experience in shared space. Projecting, even up, without capturing the personalisation aspect, is missing the point. I’ll have to go and look at this to work out if some of these issues were covered in the FaceUp application as getting people to look up, rather than down, needs to have a strong motivating factor if you’re trying to end digitally-inspired isolation.

The experiment was to measure the impact of EXPOs on ILOs (intended learning outcomes), using participation, reflection, surveys and interviews. The speaker noted that doing coding on a domain of knowledge you feel strongly about (potentially to the point of ownership) can be very hard, as biases creep in – I find this one of the real challenges in trying to do grounded theory work, personally. I’m not all that surprised that students felt that the EXPO had a greater impact than something smaller, especially where the experiment was effectively created with a heavier-weighted first project and a high-impact first deliverable. In a biological human sense, project 2 is always going to be at risk of being in the refractory period, the period after stimulation during which a nerve or muscle is less able to be stimulated. You can get excited about the development itself, because development is always going to be very similar, but it’s not surprising that a small-scale pop is not as exciting as a giant boom, especially when the boom comes first.

How do we grade things like this? It’s a very good question – of course the first question is why are we grading this? Do we need to be able to grade this sort of thing, or just note that it’s met a professional standard? How can we scale this sort of thing up, especially when the main function of the coordinator is as a cheerleader and relationships are essential? Scaling up relationships is very, very hard. Talking to everyone in a group means that the number of conversations you have grows quadratically with the size of the group. Plus, we know that we have an upper bound on the number of relationships we can actually maintain – remember Dunbar’s number of 120-150 or so? An interesting problem to finish on.
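To put a number on that, here’s a tiny back-of-envelope sketch (my own, not the speaker’s) of how pairwise relationships blow up with group size:

    # Pairwise relationships in a group of n people: n choose 2,
    # which grows quadratically with n.
    def pairwise(n: int) -> int:
        return n * (n - 1) // 2

    for n in (5, 30, 150):
        print(f"{n} people -> {pairwise(n)} pairwise relationships")
    # 5 -> 10, 30 -> 435, 150 -> 11175

Even at Dunbar-scale group sizes, the relationship graph is far beyond what any one coordinator can maintain, which is the heart of the scaling problem.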


ITiCSE 2014: Monday, Keynote 1, “New Technology, New Learning?” #ITiCSE2014 #ITiCSE

This keynote was presented by Professor Yvonne Rogers, from University College London. The talk discussed how we could make learning more accessible and exciting for everyone and encourage students to think, to create and to share. Professor Rogers started by sharing a tweet by Conor Gearty on a guerrilla lecture, with tickets to be issued at 6:45pm, for LSE students. (You can read about what happened here.) They went to the crypt of Westminster Cathedral and the group, split into three smaller groups, ended up discussing the nature of Hell and what it entailed. This was a discussion on religion but, because of the way that it was put together, it was more successful than a standard approach – context shift, suspense driving excitement and engagement. (I wonder how much suspense I could get with a guerrilla lecture on polymorphism… )

Professor Rogers says that suspense matters, as the students will be wondering what is coming next, and this will hopefully make them more inquisitive and thus drive them along the path to scientific enquiry. The Ambient Wood was a woodland full of various technologies and probes for student pairs: an explorative activity. You can read about the Ambient Wood here. The periscope idea ties videos into the direction that you are looking – a bit like Google Glass without the surveillance society aspect (a Woodopticon?). (We worked on similar ideas at Adelaide for an early project in the Arts Precinct, to allow student exploration to drive the experience in arts, culture and botanical science areas.) All of the probes were recorded in a virtual spatial environment matching the wood so that, after the activity, the students could then look at what they did. Thus, a group of 10-12 year olds had an amazing day exploring and discovering, but in a way that was strongly personalised, with the ability to see it from the bird’s eye view above them.

And, unsurprisingly, we moved on to MOOCs, with an excellent slide on MOOC HYSTERIA. Can we make these as engaging as the guerrilla lecture or the ambient wood?


MOOCs, as we know, are supposed to increase our reach and access to education but, as Professor Rogers noted, it is also a technology that can make the lecturer a “bit of a star”. This is one of the most honest assessments of some of the cachet that I’ve heard – bravo, Professor Rogers. What’s involved in a MOOC? Well, watching things, doing quizzes, and probably a lot of passive, rather than active, learning. Over 60% of the people who sign up to do a MOOC, from the Stanford experience, have a degree – doing Stanford for free is a draw for the already-degreed. How can we make MOOCs fulfil their promise, give us good learning, give us active learning and so on? Learning analytics give us some ideas and we can data mine to try and personalise the course to the student. But this has shifted what our learning experience is, and do we have any research to show the learning value of MOOCs?

In 2014, a study of 400 students taking a Harvard course found that they:

  1. Learned in a passive way
  2. Just wanted to complete the course
  3. Took the easy option
  4. Were unable to apply what they learned
  5. Didn’t reflect on the material or talk to their colleagues about it.

This is not what we want. What about the Flipped Classroom? Professor Rogers attributed this to Khan but I’m not sure I agree, as there were people, Mazur for example, who were doing this in Peer Instruction well before Khan – or at least I thought so. Corrections in the questions please! The idea of the flip is that we don’t have content delivery in lectures with the odd question – we have content beforehand and questions in class. What is the reality?

  1. Still based on chalk and talk.
  2. Is it simply a better version of a bad thing?
  3. Are students more motivated and more active?
  4. Very labour-intensive for the teacher.

So where’s the evidence? Well, flipping does increase interaction in class between instructors and students, and it does allow for earlier identification of misconceptions. Pierce and Fox (2012) found that it increased exam results for pharmacology students, and it also fostered critical thinking in case scenarios. Maybe this will work for tens to hundreds of students – what about classes of thousands? Can we flip at this scale? (Whether we should even have classes of this size is another good question.)

Then there’s PeerWise, from Paul Denny (NZ), where there is active learning in which students create questions, answer them and get feedback. Students create the questions, try other students’ questions, and can then rate the questions and the answers. (We see approaches like this, although not as advanced, in other technologies such as Piazza.)

How effective is this? Performance in PeerWise correlated with exam marks (Anyadi, Green and Tang, 2013), with active student engagement. It’s used for revision before the exams, and you get high-quality questions and answers, while supporting peer interaction. Professor Rogers then showed the Learning Pyramid, from the National Training Laboratories, Bethel, Maine. The PeerWise system plays into the very high retention area.


Professor Rogers then moved on to her own work, showing us a picture of the serried-rank nightmare of a computer-based classroom: students in rows, isolated and focused on their screens. Instead of ‘designing for one’, why don’t we design to orchestrate shared activities, with devices that link to public displays and can actively foster collaboration? One of Professor Rogers’ students is looking at ways to share simulations across tablets and screens. This included “4Decades”, a simulation of climate management, with groups representing the different stakeholders to look at global climate economics. We then saw a video that I won’t transcribe. The idea is that group work encourages discussion, however we facilitate it, and this tends to lead to teaching others in the sharing of ideas. Another technology that Professor Rogers’ group have developed in this space is UniPad: orchestrating collaborative activities across multiple types of devices, with one device per 6-7 students, and used in classes without many researchers present. Applications of this technology include budgeting for students (MyBank), with groups interacting and seeing the results on a public display. Given how many students operate in share houses collaboratively, this is quite an interesting approach to the problem. From studies on this, all group members participated and used the tablet as a token for discussion, taking ownership of a part of the problem. This also extended to reflection on others’ activities, including identifying selfish behaviour on the part of other people. (Everyone who has had flatmates is probably groaning at the moment. Curse you, Love Tarot Pay-By-The-Minute Telephone Number, which cost me and my flatmates a lot of dollars after a flatmate skipped out on us.)

The next aspect Professor Rogers discussed was physical creation toolkits, such as MaKey MaKey, where you can build alternative input for a computer, based on a simple printed circuit board with alligator clips and USB cables. The idea is simple: you can turn anything you like into a keyboard key. Demonstrations included a banana space bar, a play dough MarioKart gamepad, and many other things (a water bowl in front of the machine became a cat-triggered photo booth). This highlights one of the most important aspects of thinking about learning: learning for life. How can we keep people interested in learning in the face of busy, often overfull, lives when many people still think about learning as something that had to be endured on their pathway into the workforce? (Paging my climbing friends with their own climbing wall: you could make the wall play music if you wanted to. Just saying.)

One of the computers stopped working during a trial of the MaKey MaKey system with adult learners, and the collaboration that ensued changed the direction of the work: more people were assigned to a single kit. Professor Rogers showed a small video of a four-person fruit orchestra of older people playing Twinkle Twinkle Little Star. (MORE KIWI!) This elicited a lot of ideas, including some for their grandchildren and their own parents: transforming exercise to be more fun, helping people learn fundamental knowledge and skills, and giving good feedback. We often intervene heavily in the learning experience, and the reflection of the Fruit Orchestra was that intervening less in self-driven activities such as MaKey MaKey might be a better way to go, to increase autonomy and thus drive engagement.

Next was the important question: how can we get people to create and code, where coding is just part of the creating? Can we learn to code differently, beyond just choosing a particular language? We have many fascinating technologies, but what is the suite of tools over the top that will drive creativity and engagement in this area, to produce effective learning? The short video shown demonstrated a pop-out prefabricated system, where physical interfaces and gestures across those represented coding instructions: coding without any typing at all. (Previous readers will remember my fascination with pre-literate programming.) This early work, electronics on a sheet, is designed to be given away because the production cost is less than 3 Euros. The project is called “code me”, from University College London, and is designed to teach logic without people realising it: the fundamental building block of computational thinking. Future work includes larger blocks with Bluetooth input and sensors. (I can’t find a web page for this.)

What role should technology play in learning? Professor Rogers mentioned thinking about this in two ways. Inside learning uses technology to think about the levels we want students to reach and to foster attainment: personalise, monitor, motivate, flexible, adaptive. The outside learning approach is to work with other people, away from the screen: collaborate, create, connect, reflect and play. Professor Rogers believes that the choice is ours, but that technology should transform learning to make it active, creative, collaborative and exciting (some other things I didn’t catch), and to recognise the role of suspense in making people think.

An interesting and thought-provoking keynote.

 


CSEDU, Day 1, Keynote 2, “Mathematics Teaching: is the future syncretic?” (#csedu14 #csedu #AdelEd)

This is an extension of the position paper that was presented this morning. I must be honest and say that I have a knee-jerk reaction when I run across titles like this. There’s always the spectre of Rand or Gene Ray in compact phrases of slightly obscure terminology. (You should probably ignore me, I also twitch every time I run across digital hermeneutics and that’s perfectly legitimate.) The speaker is Larissa Fradkin who is trying to improve the quality of mathematics teaching and overall interest in mathematics – which is a good thing and so I should probably be far more generous about “syncretic”. Let’s review the definition of syncretic:

Again, from Wikipedia, Syncretism /ˈsɪŋkrətɪzəm/ is the combining of different, often seemingly contradictory beliefs, while melding practices of various schools of thought. (The speaker specified this to religious and philosophical schools of thought.)

There’s a reference in the talk to gnosticism, which combined oriental mysticism, Judaism and Christianity. Apparently, in this talk we are going to have myths debunked regarding the Maths Wars, including traditionalist myths and constructivist myths, and then discuss the realities in the classroom.

Two fundamental theories of learning were introduced: traditionalist and constructivist. Apparently, these are drummed into poor schoolteachers and yet we academics are sadly ignorant of them. Urm. You have to be pretty confident to have a go at Piaget: “Piaget studied urchins and then tried to apply it to kids.” I’m really not sure what is being said here, but the speaker has tried to tell two jokes which have fallen very flat and, regrettably, is making me think that she doesn’t quite grasp what discovery learning is. Now we are into Guided Teaching and scaffolding with Vygotsky who, apparently, as a language teacher, was slightly better than a teacher of urchins.

The first traditionalist myth is that intelligence = implicit memory (no conscious awareness) + basic pattern recognition. Oh, how nice, the speaker did a lot of IQ tests and went from 70 to 150 in 5 tests. I don’t think many people in the serious educational community place much weight on the assessment of intelligence through these sorts of tests – and the objection to standardised testing is coming from the education research community for exactly these reasons. I commented on this speaker earlier and noted that I felt that she was having an argument that was no longer contemporary. Sadly, my opinion is being reinforced. The next traditionalist myth is that mathematics should be taught using poetry, other mnemonics and coercion.

What? If the speaker is referring to the memorisation of the multiplication tables, we are talking about a definitional basis for further development that occupies a very short time in the learning phase. We are discussing a type of education that is already identified as negative, as if the realisation that mindless repetition and extrinsic motivational factors are counter-productive had never happened. Yes, coercion is an old method, but let’s get to what you’re proposing as an alternative.

Now we move on to the constructivist myths. I’m on the edge of my seat. We have a couple of cartoons which don’t do anything except recycle some old stereotypes. So, the first myth is “Only what students discover for themselves is truly learned.” The problem here is based on a Rebar (2007) meta-study. Revelation: child-centred, cognitively focused and open classroom approaches tend to perform poorly.

Hmm, not our experience.

The second myth is both advanced and debunked by a single paper: that there are only two separate and distinct ways to teach mathematics, conceptual understanding and drills. Revelation: conceptual advances are invariably built on the bedrock of technique.

Myth 3: math concepts are best understood and mastered when presented in context, such that the underlying math concept will follow automatically. The speaker used to teach with engineering examples but abandoned them because of the problem of having to explain the engineering context, the engineering language and then the problem itself. Ah, another paper from Hung-Hsi Wu, UCB, “The Mathematician and Mathematics Education Reform”. No, I really can’t agree with this one as a myth. Situated learning is valid and it works, provided that the context used is authentic and selected carefully.

Ok, I must confess that I have some red flags going up now – while I don’t know the work of Hung-Hsi Wu, depending on a single author, especially one whose revelatory heresy is close to 20 years old, is not the best basis for a complicated argument such as this. Any readers with knowledge in this should jump on to the comments and get us informed!

Looking at all of these myths, I don’t see myths, I see straw men. (A straw man is a deliberately weak argument chosen because it is easy to attack and based on a simplified or weaker version of the problem.)

I’m in agreement with many of the outcomes that Professor Fradkin is advocating. I want teachers to guide, but believe that they can do it within the construction of learning environments that support constructivist approaches. Yes, we should limit jargon. Yes, we should move away from death-by-test. Yes, Socratic dialogue is a great way to go.

However, as always, if someone says “Socratic dialogue is the way to go but I am not doing it now” then I have to ask “why not?” Anyone who has been to one of my sessions knows that when I talk about collaboration methods and student value generation, you will be collaborating before your seat has had a chance to warm up. It’s a cornerstone of authentic teaching that we use the methods that we advocate, or explain why they are not suitable – cognitive apprenticeship requires us to expose ourselves as we go through the process we’re trying to teach!

Regrettably, I think my initial reaction of cautious mistrust of the title may have been accurate. (Or I am just hopelessly biased by an initial reaction, although I have been trying to be positive.) I am trying very hard to reinterpret what has been said, but there is a lot of anecdote and dependency upon one or two “visionary debunkers” to support a series of strawmen presented as giant barriers to sensible teaching.

Yes, listening to students and adapting is essential but this does not actually require one to abandon constructivist or traditionalist approaches because we are not talking about the pedagogy here, we’re talking about support systems. (Your take on that may be different.)

There is some evidence presented at the end which is, I’m sorry to say, a little confusing, although there has obviously been a great deal of success for an unlisted, uncounted number and unknown level of course – success rates improved from 30% passing to 70% passing and no one had to be trained for the exam. I would very much like to get some more detail on this, as claiming that the syncretic approach is the only way to reach 70% is a big claim. Also, a 70% pass rate is not all that good – I would get called onto the carpet if I did that for a couple of offerings. (And, no, we don’t dumb down the course to improve the pass rate – we try to teach better.)

Now we move into on-line techniques. Is the flipped classroom a viable approach? Can technology “humanise” the classroom? (These two statements are not connected, for me, so I’m hoping that this is not an attempt to entail one by the other.) We then moved on to a discussion of Khan, who Professor Fradkin is not a fan of, and while her criticisms of Khan are semi-valid (he’s not a teacher and it shows), her final statement and dismissal of Khan as a cram-preparer is more than a little unfair and very much in keeping with the sweeping statements that we have been assailed by for the past 45 minutes.

I really feel that Professor Fradkin is conflating other mechanisms with blended and flipped learning – flipped learning is all about “me time” to allow students to learn at their own pace (as she notes) but then she notes a “Con” of the Khan method of an absence of “me time”. What if students don’t understand the recorded lectures at all? Well… how about we improve the material? The in-class activities will immediately expose faulty concept delivery and we adapt and try again (as the speaker has already noted). We most certainly don’t need IT for flipped learning (although it’s both “Con” point 3 AND 4 as to why Khan doesn’t work), we just need to have learning occur before we have the face-to-face sessions where we work through the concepts in a more applied manner.

Now we move onto MOOCs. Yes, we’re all cautious about MOOCs. Yes, there are a lot of issues. MOOCs will get rid of teachers? That particular strawman has been set on fire, pushed out to sea, brought back, set on fire again and then shot into orbit. Where they set it on fire again. Next point? Ok, Sebastian Thrun made an overclaim that the future will have only 10 higher ed institutions in 50 years. Yup. Fire that second strawman into orbit. We’ve addressed Professor Thrun before and, after all, he was trying to excite and engage a community over something new and, to his credit, he’s been stepping back from that ever since.

Ah, a Coursera course that came from a “high-quality” US university. It is full of imprecise language, saying how and not why, with a Monster generator approach. A quick ad hominem attack on the lecturer in the video (he looked like he had been on drugs for 10 years). Apparently, and with no evidence, Professor Fradkin can guarantee that no student picked up any idea of what a function was from this course.

Apparently some Universities are becoming more cautious about MOOCs. Really.

I’m sorry to have editorialised so badly during this session, but this has been a very challenging talk to listen to, as so much of the underlying material has been, to my understanding, misrepresented at least. A very disappointing talk overall, and one that could have been so much better – I agree with a lot of the outcomes but I don’t think that this is the way to lead towards them.

Sadly, already someone has asked to translate the speaker’s slides into German so that they can send it to the government! Yes, text books are often bad and a lack of sequencing is a serious problem. Once again I agree with the conclusion but not the argument… Heresy is an important part of our development of thought, and stagnation is death, but I think that we always need to be cautious that we don’t sensationalise and seek strawmen in our desire to find new truths that we have to reach through heresy.

 


SIGCSE 2014: Collecting and Analysing Student Data 1, “AP CS Data”, Thursday 3:15 – 5:00pm (#SIGCSE2014)

The first paper, “Measuring Demographics and Performance in Computer Science Education at a Nationwide Scale using AP CS Data”, from Barbara Ericson and Mark Guzdial, has been mentioned in these hallowed pages before, as well as on Mark’s blog (understandably). Barb’s media commitments have (fortunately) slowed down, but it was great to see so many people and the media taking the issue of under-representation in AP CS seriously for a change. Mark presented and introduced the Advanced Placement CS program, which is the only nationwide measure of CS education in the US. This allows us to compare AP CS with other AP exams, and to ask who is taking the AP CS exams and how well they perform. Looking longitudinally, how has this changed and what influences exam-taking? (There’s been an injection of funds into some states – did this work?)

APs are exams you can take while in secondary school that give you college credit or placement in college (similar to A levels, as Mark put it). There’s an audit process of the materials before a school can get accreditation. The AP exam is scored 1-5, where 3 is passing. The overall stats are a bit worrying, with Black, Hispanic and female students grossly under-represented. Look at AP Calculus and this really isn’t as true for female students, and there is better representation for Black and Hispanic students. (CS has 4% Black and 7.7% Hispanic students when America’s population is 13.1% Black and 16.9% Hispanic.) The pass rates for AP Calculus are about the same as for AP CS, so what’s happening?

Looking at a (very cool) diagram, you see that AP overall is female-heavy – CS is a teeny, tiny dot, the most male-dominated area and 1/10th the size of Calculus. Comparing AP CS to the others, there has been steady growth since 1997 in the Calculus, Biology, Stats, Physics, Chem and Env Science AP exams – but CS is a flat, sunken pancake that hasn’t grown much at all. Mark then analysed the data by state, counting the number of states in each category along features such as ‘schools passing audit/10K pop’, #exams/pop and % passing exams. Mark then moved on to diversity data: female, Black and Hispanic test takers. It’s worth noting that Jill Pala made the difference to the entire state she taught in, raising the number of women. Go, Jill! (And she asked a really good question in my talk, thanks again, Jill!)
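The per-state measures Mark listed are simple ratios, so they’re easy to reproduce if you have the underlying counts. A hypothetical sketch – the column names and the toy numbers are entirely my own invention, not the College Board’s data:

    # Hypothetical sketch: normalise AP CS figures by state population.
    import pandas as pd

    df = pd.DataFrame({
        "state": ["GA", "MA", "MI"],              # invented toy rows
        "population": [9_900_000, 6_600_000, 9_900_000],
        "schools_passing_audit": [200, 150, 90],
        "exams": [1500, 1200, 500],
        "exams_passed": [900, 800, 250],
    })

    df["schools_per_10k"] = df["schools_passing_audit"] / (df["population"] / 10_000)
    df["exams_per_pop"] = df["exams"] / df["population"]
    df["pass_rate"] = df["exams_passed"] / df["exams"]
    print(df[["state", "schools_per_10k", "exams_per_pop", "pass_rate"]])

Normalising by population is what makes cross-state comparison meaningful at all, given how different the state sizes are.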

How has this changed over time? California and Maryland have seen really rapid growth in exam takers over the last 6 years, with NSF involvement, but Michigan and Indiana have seen much less improvement. In Georgia, there’s overall improvement, mostly for women and Hispanic students, but not as much for Black students. The NSF funding appears to have paid off: GA and MA have improved over the last 6 years, but female test takers have still not exceeded 25% in that time.

Why? What influences exam taking?

  1. The wealth in the state influences number of schools passing audit
  2. Most of the variance in the states comes from under-representation in certain groups.

It’s hard to add wealth but, if you want more exam takers, increase the representation of your under-represented groups! That’s the difference between the states.

Conclusions? It’s hard to compare things most of the time and the AP CS is the best national pulse we have right now. Efforts to improve are having an effect but wealth matters, as in the rest of education.

All delivered at a VERY high speed but completely comprehensible – I think Mark was trying to see how fast I can blog!

 


Great News, Another Group Paper Accepted!

Our Computer Science Education Research group has been doing the usual things you do when forming a group: stating a vision, setting goals, defining objectives and then working like mad. We’ve been doing a lot of research and we’ve been publishing our work to get peer review, general feedback and a lot of discussion going. This year, we presented a paper at SIGCSE, we’ve already had a paper accepted for DEXA in Vienna (go, Thushari!) and, I’m very pleased to say, we’ve just been notified that our paper “A Fast Measure for Identifying At-Risk Students in Computer Science” has been accepted as one of the research papers for ICER 2012, in Auckland, New Zealand.

This is great news for our group and I’m really looking forward to some great discussion on our work.

I’ll see some of you at ICER!

Click here to go to the ICER 2012 Information Page!