ICER 2012 Research Paper Session 1

It would not be overstating the situation to say that every paper presented at ICER led to some interesting discussion and, in some cases, some more… directed discussion than others. This session started off with a paper entitled “Threshold Concepts and Threshold Skills in Computing” (Kate Sanders, Jonas Boustedt, Anna Eckerdal, Robert McCartney, Jan Erik Moström, Lynda Thomas and Carol Zander), on whether threshold skills, as distinct from threshold concepts, exist and, if they do, what their characteristics would be. Threshold skills were described as transformative, integrative, troublesome, semi-irreversible (in that they’re never really lost), and requiring practice to keep current. The discussion that followed raised a lot of questions, including whether you could learn a skill by talking about it or asking someone – questions of skill transfer versus environment. The consensus, as I judged it from the discussion, was that threshold skills don’t simply follow from threshold concepts, but there was a very rapid and high-level discussion that I didn’t quite follow, so any of the participants should feel free to leap in here!

The next talk was “On the Reliability of Classifying Programming Tasks Using a Neo-Piagetian Theory of Cognitive Development” (Richard Gluga, Raymond Lister, Judy Kay, Sabina Kleitman and Donna Teague), where Ray raised and extended a number of the points that he had originally shared with us in the workshop on Sunday. Ray described the talk as being a bit “Neo-Piagetian theory for dummies” (for which I am eternally grateful) and was seeking to address the question of where students are actually operating when we ask them to undertake tasks that require a reasonable to high level of intellectual development.

Ray raised the three bad programming habits he’d discussed earlier:

  1. Permutation programming (where students just try small things randomly and iteratively in the hope that they will finally arrive at the right solution – this is incredibly troublesome if the many small changes take them further away from the solution)
  2. Shotgun debugging (where a bug causes the student to put things in with no systematic approach and potentially fixing things by accident)
  3. Voodoo coding/Cargo cult coding (where code is added by ritual rather than by understanding)

These approaches show one very important thing: the student doesn’t understand what they’re doing. Why is this? Using a Neo-Piagetian framework, we consider the student as moving through the same cognitive development stages that they did as a child (Piagetian), but with this transition applying afresh to each new and significant knowledge framework, such as learning to program. Until they reach the concrete operational stage of their development, they will be applying poor or inconsistent models – logically inadequate models, to use the terminology of the area (assuming that they’ve reached the pre-operational stage at all). Once a student has made the next step in their development, they reach the concrete operational stage, characterised (among other things, but these were the ones that Ray mentioned) by:

  1. Transitivity: being able to recognise how things are organised if you can impose an order upon them.
  2. Reversibility: that we can reverse changes that we can impose.
  3. Conservation: realising that the number of things stays the same no matter how we organise them.

In coding terms, these can be interpreted in several ways, but the conservation idea is crucial to programming because understanding it frees the student from having to write the same code for the same algorithm every time. Grasping that conservation exists, and understanding it, means that you can alter the code without changing the algorithm that it implements – while achieving some other desirable result such as speeding the code up or moving to a different paradigm.
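
As a throwaway illustration (my own sketch, not from Ray’s paper): both functions below implement the same summation algorithm, so a student who has grasped conservation can see that the surface form of the code changes while the algorithm does not.

```python
# Two syntactically different implementations of the same algorithm:
# summing a list of numbers. A student who has grasped "conservation"
# recognises that rewriting the code does not change the algorithm.

def total_with_for(numbers):
    """Explicit accumulator with a for loop."""
    result = 0
    for n in numbers:
        result += n
    return result

def total_with_while(numbers):
    """The same algorithm, rewritten with an index and a while loop."""
    result, i = 0, 0
    while i < len(numbers):
        result += numbers[i]
        i += 1
    return result

# Both produce the same answer because the algorithm is conserved.
assert total_with_for([3, 1, 4, 1, 5]) == total_with_while([3, 1, 4, 1, 5]) == 14
```

A pre-operational student, by contrast, may treat these as two unrelated pieces of magic to be memorised separately.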

Ray’s paper discussed the fact that a vast number of our students are still pre-operational for most of first and second year, which changes the way that we actually try to teach coding. If a student can’t understand what we’re talking about, or has to resort to magical thinking to solve a problem, then we’ve not really achieved our goals. If we do start classifying the programming tasks that we ask students to achieve by the developmental stages that they require, we may be able to match task to ability, making everyone happy(er).

The final paper in the session was “Social Sensitivity Correlations with the Effectiveness of Team Process Performance: An Empirical Study” (Luisa Bender (presenting), Gursimra Walia, Krishna Kambhampaty, Travis Nygard and Kendall Nygard), which discussed the impact of socially sensitive team members in programming teams. (Social sensitivity is the ability to correctly understand the feelings and the viewpoints of other people.)

The “soft skills” are essential to the teamwork process, and a successful team enhances learning outcomes. Bad teams hinder team formation and progress, and things go downhill from there. From Woolley et al.’s study of nearly 700 participants, the collective intelligence of a team stems from how well the team works together rather than from the individual intelligence of its members. Groups whose members were more socially sensitive had a higher group intelligence.

Just to emphasise that point: a team of smart people may not be as effective, as a team, as a team of people who can understand each other’s feelings and perspectives. (This may explain a lot!)

Social sensitivity is a good predictor of team performance and the effectiveness of team-oriented processes, as well as the satisfaction of the team members. However, it is also apparent that we in Science, Technology, Engineering and Mathematics (STEM) have lower social sensitivity readings (supporting Baron-Cohen’s assertion – no, not that one) than some other areas. Future work in this area is looking at the impact of a single high or low socially sensitive person in a group, a study that will be of great interest to anyone who is running teams made up of randomly assigned students. How can we construct these groups for the best results for the students?


More MOOCs! (Still writing up ICER, sorry!)

The Gates Foundation is offering grants for MOOCs in Introductory Classes. I mentioned in an earlier post that if we can show that MOOCs work, then generally available and cheap teaching delivery is a fantastically transformative technology. You can read the press release, but it’s obvious that this involves some key research questions, much like those we’ve all been raising:

The foundation wants to know, for instance, which students benefit most from MOOC’s (sic) and which kinds of courses translate best to that format.

Yes! If these courses do work then for whom do they work and which courses? There’s little doubt that the Gates have been doing some amazing things with their money and this looks promising – of course, now I have to find out if my University has been invited to join and, if so, how I can get involved. (Of course, if they haven’t, then it’s time to put on my dancing trousers and try to remedy that situation.)

However, money plus research questions is a good direction to go in.


ICER 2012 Day 1: Discussion Papers Session 1

ICER contains a variety of sessions: research papers, discussion papers, lightning talks and elevator pitches. The discussion papers allow people to present ideas and early work in order to get the feedback of the community. This is a very vocal community, so opening yourself up to discussion is going to be a bit like drinking from the firehose: sometimes you quench your thirst for knowledge and sometimes you’re being water-cannoned.

Web-scale Data Gathering with BlueJ
Ian Utting, Neil Brown, Michael Kölling, Davin McCall and Philip Stevens

BlueJ is a very long-lived and widely used Java programming environment with a development environment designed to assist with the learning and teaching of object-oriented programming, as well as Java. The BlueJ project is now adding automated instrumentation to every single BlueJ installation and students can opt-in to a data reporting mechanism that will allow the collection and formation of a giant data repository: Project Blackbox. (As a note, that’s a bit of a super villain name, guys.)

BlueJ has 1-2 million new users per year, typically using it for around 90 days, and all of these users will be able to opt in (and can opt out later), although the mechanism can be disabled entirely in the configuration. To protect user identity, a locally generated anonymous UUID will be linked to each user and installation pair (so home and lab use won’t correlate). On the technical side, the stored data will include time-stamps, tool invocations, source code snapshots, and coarse-grained location. You can also connect locally available personal data about students and link it to the UUID data. Groups can be tagged and queries restricted to that tag (and that includes taxonomic data if you’re looking into the murky world of assessment taxonomy).
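
As a rough sketch of the anonymisation idea (my own hypothetical code, not BlueJ’s actual implementation): a random UUID is generated once per installation and then reused, so the repository can correlate sessions from one installation without ever learning who the user is.

```python
import uuid
from pathlib import Path

def installation_id(config_dir):
    """Return a stable, anonymous identifier for this installation.

    The first call generates a random UUID and stores it in the
    installation's config directory; later calls reuse it. Because the
    ID is random and generated locally, a user's home and lab machines
    get unrelated IDs and neither can be traced back to a person.
    """
    id_file = Path(config_dir) / "anon_id"
    if id_file.exists():
        return id_file.read_text().strip()
    new_id = str(uuid.uuid4())
    id_file.write_text(new_id)
    return new_id
```

(The function name and file layout here are invented for illustration; only the locally-generated-UUID idea comes from the talk.)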
In terms of making this work, ethical approval has been obtained from the hosting organisation. Access will be for verified academic researchers, initially via SQL queries on a multi-terabyte repository, but the data will not be fully public (this will be one of the largest repositories of assignment solutions in the world).
Timescale: private beta by the end of 2012, with a full-scale roll-out next Spring, for AY 2013. Very usefully, you can still get access to the data even if you don’t contribute.
There was a lot of discussion on this: we’re all hungry for the data. One question that struck me was from Sally Fincher: Given that we will have web-scale data gathering, do we have web scale questions? We can all think of things to do but this level of data is now open to entirely new analyses. How will we use this? What else do we need to do?

Evaluating an Early Software Engineering Course with Projects and Tools from Open Source Software
Robert McCartney, Swapna Gokhale and Therese Smith

We tend to give Software Engineering students a project that requires them to undertake design and then, as a group, produce a large software artefact from scratch. In this talk, Robert discussed using existing projects that exercise a range of skills directly relevant to one of the most common activities our students will carry out in industry: maintenance and evolution.

Under a model of developing new features in an open-source system, the instructors provide a pre-selected set of projects and then the two-person team:

  1. picks a project
  2. learns to comprehend code
  3. proposes enhancements
  4. describes and documents
  5. implements and presents

The evaluation seeks to understand how the students’ understanding of the issues has changed, especially regarding the importance of maintenance and evolution, the value of documentation, the importance of tools, and how reverse engineering can aid comprehension. This approach has been trialled and the early student response is positive, but the students thought that 10,000 Lines of Code (LOC) projects were too small, hence the project size has been increased to 100,000 LOC.

A Case Study of Environmental Factors Influencing Teaching Assistant Job Satisfaction
Elizabeth Patitsas

Elizabeth presented some interesting work on the impact of teaching spaces on what our TAs do. If the layout is hard to work with then, unsurprisingly, the TAs are less inclined to walk around and more inclined to disengage, sitting down the front checking e-mail. When we say ‘less inclined’, we mean that in closed lab layouts TAs spend 40% of their time interacting with students, versus 76% in an open layout. However, these effects are also seen in windowless spaces: make a space unpleasant and you reduce the time that people spend answering questions and engaging.

The value of a pair of TAs was stressed: a pair gives you a backup but doesn’t lead to decision problems when coming to consensus. However, the importance of training was also stressed, as already clearly identified in the literature.

Education and Research: Evidence of a Dual Life
Joe Miró Julià, David López and Ricardo Alberich

Joe provided a fascinating collaboration network analysis of the paper-writing groups at ICER and in CS generally. In CS education, we tend to work in smaller groups than in other CS research areas, and newcomers tend to come alone to conferences. The ICER collaboration network graph has a very well-defined giant component that centres around Robert (see above) but, across the board, roughly 50% of conference authors are newcomers. One of the most common ways for people to enter the traditional CS research community is through what can be described as a mentoring process: we extend the group through an existing connection and then these people join the giant component. There is, however, no significant evidence of mentoring in the CS education community.
Unsurprisingly, different countries and borders hinder the growth of the giant component.
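
To make the giant component idea concrete, here is a small self-contained sketch (toy data of my own, not the paper’s): treat each paper’s author list as a clique of co-authorship edges, then find the largest connected group of authors.

```python
from collections import defaultdict
from itertools import combinations

def giant_component(papers):
    """Return the largest set of authors connected by co-authorship.

    Each paper contributes an edge between every pair of its authors;
    the 'giant component' is the biggest connected group in that graph.
    """
    adjacency = defaultdict(set)
    for authors in papers:
        for a, b in combinations(authors, 2):
            adjacency[a].add(b)
            adjacency[b].add(a)
        for a in authors:                 # register solo authors too
            adjacency.setdefault(a, set())
    best, seen = set(), set()
    for start in adjacency:
        if start in seen:
            continue
        component, stack = set(), [start]  # iterative depth-first search
        while stack:
            node = stack.pop()
            if node in component:
                continue
            component.add(node)
            stack.extend(adjacency[node])
        seen |= component
        best = max(best, component, key=len)
    return best

# Toy data: two linked papers plus an isolated newcomer pair.
papers = [["Ann", "Bob"], ["Bob", "Cho", "Dee"], ["Eve", "Fay"]]
big = giant_component(papers)   # {'Ann', 'Bob', 'Cho', 'Dee'}
```

On real conference data, the interesting questions are then how large this component is relative to the whole author set, and whether newcomers eventually join it.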
There was a lot of discussion on this as well, as we tried to understand what was going on and, outside of the talk, I raised my suggestion with Joe that hemispherical separation was a factor worth considering because of the different timetables that we worked to. Right now, I am at a conference in the middle of teaching, while the Northern Hemisphere has only just gone back to school.

ICER 2012 Day 1 Keynote: How Are We Thinking?

We started off today with a keynote address from Ed Meyer, from the University of Queensland, on the Threshold Concepts Framework (also pedagogy and student learning). I am, regrettably, not as conversant with threshold concepts as I should be, so I’ll try not to embarrass myself too badly. Threshold concepts are central to the mastery of a given subject and are characterised by some key features (Meyer and Land):

  1. Grasping a threshold concept is transformative because it changes the way that we think about something. These concepts become part of who we are.
  2. Once you’ve learned the concept, you are very unlikely to forget it – it is irreversible.
  3. This new concept allows you to make new connections and allows you to link together things that you previously didn’t realise were linked.
  4. This new concept has boundaries – an area over which it applies. You need to be able to question within that area to work out where it applies. (Ultimately, this may identify the borders between schools of thought in an area.)
  5. Threshold concepts are ‘troublesome knowledge’. This knowledge can be counter-intuitive, even alien and will make no sense to people until they grasp the new concept. This is one of the key problems with discussing these concepts with people – they will wish to apply their intuitive understanding and fighting this tendency may take some considerable effort.

Meyer then discussed how we see with new eyes after we integrate these concepts. It can be argued that concepts such as these give us a new way of seeing that, because of inter-individual differences, students will experience in varying degrees as transformative, integrative, and (look out) provocative and troublesome. A student experiences this final one in many ways: the world doesn’t work as I think it should! I feel lost! Helpless! Angry! Why are you doing this to me?

How do you introduce a student to one of these troublesome concepts and, more importantly, how can you describe what you are going to talk about when the concept itself is alien: what do you put in the course description given that you know that the student is not yet ready to assimilate the concept?

Meyer raised a really good point: how do we get someone to think inside the discipline? Do they understand the concept? Yes. Does this mean that they think along the right lines? Maybe, maybe not. If I don’t think like a Computer Scientist, I may not understand why a CS person sees a certain issue as a problem. We have plenty of evidence that people who haven’t dealt with the threshold concepts in CS Education find it alien to contemplate that the lecture is not the be-all and end-all of teaching – their resistance and reliance upon folk pedagogies is evidence of this wrestling with troublesome knowledge.

A great deal to think about from this talk, especially in treating key aspects of CS education as the threshold concept that is causing many of our non-educational-research-oriented colleagues, as well as our students, so much trouble.



Loading the Dice: Show and Tell

I’ve been using a set of four six-sided dice to generate random numbers for one of my classes this year, generally to establish a presentation order or things like that. We’ve had a number of students getting the same number and so we have to have roll-offs. Now in this case, the most common number rolled so far has been in the range of 17-19 but we have only generated about 18-20 rolls so, while that’s a little high, it’s not high enough to arouse suspicion.

Today we rolled again, and one student wasn’t quite there yet so I did it with the rest of the class. Once again, 18 showed up a bit. This time I asked the class about it. Did that seem suspicious? Then I asked them to look at the dice.

Oh.

Only two of the dice are actually standard dice. One has the number five on every face. One has three sixes and three twos. The students have seen these dice numerous times and have never actually examined them – of course, I didn’t leave them lying around for them to examine but, despite one or two starting to think “Hey, that’s a bit weird”, nobody ever twigged to the loading.

All of the dice in this picture are loaded through weight manipulation, rather than dot alteration. You can buy them for just about any purpose. Ah, Internet!
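
Out of curiosity, we can enumerate what the altered faces alone do to the average total (my own quick sketch; it ignores any weight loading, which would push the ‘fair’ dice higher still):

```python
from itertools import product

# Face sets for the four dice described above: two fair d6, one die
# showing 5 on every face, and one with three sixes and three twos.
fair = [1, 2, 3, 4, 5, 6]
loaded = [fair, fair, [5] * 6, [6, 6, 6, 2, 2, 2]]

def expected_sum(dice):
    """Exact expected total, by enumerating every face combination."""
    rolls = list(product(*dice))
    return sum(sum(r) for r in rolls) / len(rolls)

# Fair 4d6 average 14.0; the altered faces alone push it to 16.0,
# so totals clustering around 17-19 quickly start to look suspicious.
print(expected_sum([fair] * 4), expected_sum(loaded))
```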

Having exposed this trick, to some amusement, the last student knocked on the door and I picked up the dice. He was then asked to roll for his position, with the rest of the class staying quiet. (Well, smirking.) He rolled something in the 17-19 range, I forget what, and I wrote it up on the board. Then I asked him whether it seemed high. On reflection, he said that these numbers all seemed pretty high, especially as the theoretical maximum was 24. I then asked if he’d like to inspect the dice.

He then did so, as I passed him the dice one at a time and stored the inspected dice in my other hand. (Of course, as he peered at each die to see if it was altered, I quickly swapped one of the ‘real’ dice back into position in my hand and, as the rest of the class watched and kept admirably quiet, forced a real die onto him. Magic is all about misdirection, after all.)

So, having inspected all of them, he was convinced that they were normal. I then plonked them down on the table and asked him to inspect them, to make sure. He lined them up, looked across the top face and, then, looked at the side. Light dawned. Loudly! What, of course, was so startling to him was that he had just inspected the dice and now they weren’t normal.

What was my point?

My students have just completed a project on data visualisation where they provided a static representation of a dataset. There is a main point to present, supported by static analysis and graphs, but the poster is fundamentally explanatory. The only room for exploration is provided by the poster producer and the reader is bound by the inherent limitations in what the producer has made available. Much as with our discussions of fallacies in argument from a recent tutorial, if information is presented poorly or you don’t get enough to go on, you can’t make a good decision.

Enter, the dice.

Because I deliberately kept the students away from them and never made a fuss about them, they assumed that they were normal dice. While the results were high, and suspicion was starting to creep in, I never gave them enough space to explore the dice and discern their true nature. Even today, while handing them to a student to inspect, I controlled the exploration and, by cherry picking and misdirection, managed to convey a false impression.

Now my students are moving into dynamic visualisation, and they must prepare for sharing data in a way that can be explored by other people. While the students have a lot of control over how this exploration takes place, they must prepare for people’s inquisitiveness, their desire to assemble evidence and their tendency to want to try everything. They can’t rely upon hiding difficult pieces of data in their representation, and they must be ready for users who want to keep exploring the data in ways that weren’t originally foreseen. Now, in exploratory mode, they must prepare for people who want to collect enough evidence to determine whether something is true or not, and to be able to interrogate the dataset accordingly.

Now I’m not saying that I believe that their static posters were produced badly, and I did require references to support statements, but the view presented was heavily controlled. They’ve now seen, in a simple analogue, how powerful that can be. Now, it’s time to break out of that mindset and create something that can be freely explored, letting their design guide the user to construct new things rather than to lead them down a particular path.

I can only hope that they’re excited by this, because I certainly am!


Gamification: What Happens If All Of The Artefacts Already Exist

Win! Win! Win! (via mindjumpers.com)

I was reading an article today in May/June’s “Information Age”, the magazine of the Australian Computer Society, entitled “Gamification Goes Mainstream”. The article identified the gaming mechanics that could be added to businesses to improve engagement and work quality/productivity by employees. These measures are:

  1. Points: Users get points for achievements and can spend the points on prizes.
  2. Levelling: Points get harder to get as the user masters the systems.
  3. Badges: Badges are awarded and become part of the user’s “trophy page”, accompanying any comments made by the user.
  4. Leader Boards: Users are ranked by points or achievement.
  5. Community: Collaborative tools, contests, sharing and forums.

Now, of course, there’s a reason that things like these exist in games, and that’s because most games sit outside the physical world: in the absence of the natural laws that normally make things happen and ground us, we rely upon these mechanics to help us assess our progress through the game and to provide some reward for our efforts. Now, while I’m a great believer in using whatever is necessary to make work engaging and to make life more enjoyable, I do wonder about the risk of setting up parallel systems that get people to focus on things other than their actual work.

Yes, yes, we all know I have issues with extrinsic motivations but let’s look again at the list of measures above, which would normally be provided in a game to allow us to make sense of the artificial world in which we find ourselves, and think about how they apply already in a workplace.

  1. Points that can be used to purchase things: I think that we call this money. If I provide a points system for buying company things then I’ve created a second economy that is not actually money.
  2. Levelling: Oh, wait, now it’s hard to spend the special points that I’ve been given so I’ve not only created a second economy, I’ve started down the road towards hyperinflation by devaluing the currency. (Ok, so the promotional system works here in my industry like that – our ranks are our levels, which isn’t that uncommon.)
  3. Badges: Plaques for special achievement, awards, post-nominal letters, Fellowships – anything that goes on the business card is effectively a badge.
  4. Leader Boards: Ok, this is something that we don’t often see in the professional world but, let’s face it, if you’re not on top then you’re not the best. Is that actually motivational or soul-destroying? Of course, if we don’t have it yet, then you do have to wonder why, given that every other management trend seems to get a workout occasionally. I should note that I have seen leader boards at my workplace which have been ‘anonymised’ but, given that I can see myself, I can see where I sit – now not only do I know that I am not on top, I don’t know who to ask about how to get better, which has been touted as one of the reasons for identifying the stars in the first place.
  5. Community: We do have collaborative tools, but they are focussed on helping us achieve our jobs, not on achieving orthogonal goals associated with a gaming system. We also have comment forums, discussion mechanisms such as mailing lists, and the like. Contests? No. We don’t have contests. Do we? Oh wait – national competitive grant schemes, local teaching schemes, competitive bidding for opportunities.

Now, if people aren’t engaging with the tasks that are expected of them (let’s assume reasonably), then, yes, we should find ways to make things more interesting to encourage participation. However, looking at all of the game mechanics above, it’s obviously going to take more thought than just picking a list of things that we are already doing and providing an alternative system that somehow makes everything really interesting again.

I should note that the article does strike a cautionary tone, from one of the participants, who basically says that it’s too soon to tell how effective these schemes are and, of course, Kohn is already waggling a finger at setting up a prize/compliance expectation. So perhaps the lesson here is “how can we take what we already have and work out how to make it more interesting”, rather than taking lessons in the required construction of phenomena from a completely artificial environment where we have to define gravity in order to make things fall. Gamification shows promise in certain directions, mainly because there’s a lot of fun implicit in the whole process, but the approaches need to be carefully designed to make sure that we don’t accidentally reinvent the same old wheel.



Let’s Transform Education! (MOOC Hijinks Hilarity! Jinkies!)

I had one of those discussions yesterday that everyone in Higher Education educational research comes to dread: a discussion with someone who basically doesn’t believe the educational research and, with varying degrees of politeness, comes close to ignoring or denigrating everything that you’re trying to do. Yesterday’s high point was the use of the term “Mr Chips” to describe the (from the speaker’s perspective) incredibly low possibility of actually widening our entrance criteria and still turning out “quality” graduates – his point was that more students would automatically mean much larger (70%) failure rates. My counter (and original point) was that, since there is such a low correlation between school marks and University GPA (roughly 40-45%, and it’s very noisy), successful learning and teaching strategies could deal with an influx of supposedly ‘lower quality’ students, because the quality metric that we’re using (terminal high school grade or equivalent) is not a reliable indicator of performance. My fundamental belief is that good education is transformative. We start with the students that schools give us, but good, well-constructed education can, in the vast majority of cases, successfully educate students and transform them into functioning, self-regulating graduates. We have, as a community, carried out a lot of research that says that this works, provided that we are happy to accept that we (academics) are not by any stretch of the imagination the target demographic or majority experience in our classes – so please, let’s look at new teaching methods and approaches that actually work in developing the knowledge and characteristics that we’re after.

The “Mr Chips” thing is a reference to a rather sentimental account of the transformative influence of a school master, the eponymous Chips. Using it in a discussion of the transformative power of education casts my comments on equality of access – linked with educational design and learning systems as transformative technologies – as naïve and (in a more personal reading) makes me uncomfortably aware that some people might think I’m talking about myself as the key catalyst of some sort. One of the nice things about being an academic is that you can have a discussion like this and not actually come to blows over it – we think and argue for a living, after all. But I find this dismissive and rude. If we’re not trying to educate people and transform them, then what the hell are we doing? Advocating inclusion and transformation shouldn’t be seen as grandstanding – it should be seen as our job. I don’t want to be the keystone; I want systems that work and survive individuals, but that individuals can work within to improve and develop – we know this is possible and it’s happening in a lot of places. There are, however, pockets of resistance: people who are using the same old approaches out of laziness, ignorance or a refusal to update, for what appear to be philosophical reasons that have no evidence to support them.

Frankly, I’m getting a little irritated by people doubting the value of the volumes of educational research. If I was dealing with people who’d read the papers, I’d be happier, but I’m often dealing with people who won’t read the papers because they just don’t believe that there’s a need to change or they refuse to accept what is in there because of a perceived difficulty in making it work. (A colleague demanded a copy of one of our papers showing the impact of our new approaches on retention – I haven’t heard from him since he got it. This probably means that he’s chosen to ignore it and is going to pretend that he never asked.) Over coffee this morning, musing on this, it occurred to me that at the same time that we’re not getting the greatest amount of respect and love in the educational research community, we’re also worried about the trend towards MOOCs. Many of our concerns about MOOCs are founded in the lack of evidence that they are educationally effective. And I saw a confluence.

All of the educational researchers who are not able to sway people inside their institutions – let’s just ignore them and surge into the MOOCs. We can still teach inside our own places, of course, and since MOOCs are free there’s no commercial conflict – but let’s take all of the research and practice and build a brave new world out in MOOC space that is the best of what we know. We can even choose to connect our in-house teaching into that system if we want. (Yes, we still have the face-to-face issue for those without a bricks-and-mortar campus, but how far could we go to make things better in terms of what MOOCs can offer?) We’re transformers, builders and creators. What could we do with the infinite canvas of the Internet and a lot of very clever people, working with a lot of other very clever people who are also driven and entrepreneurial?

The MOOC community will probably have a lot to say about this, which is why we shouldn’t see this as a hijack or a take-over, and I think it’s helpful to think of this very much as a confluence – a flowing together. I am, not for a second, saying that this will legitimise MOOCs, because this implies that they are illegitimate, but rather than keep fighting battles with colleagues and systems that can defeat 40 years of knowledge by saying “Well, I don’t think so”, let’s work with people who have already shown that they are looking to the future. Perhaps, combining people who are building giant engines of change with the people who are being frustrated in trying to bring about change might make something magical happen? I know that this is already happening in some places – but what if it was an international movement across the whole sector?

Jinkies! (Sorry, the title ran to this and I get to use a picture of a t-shirt with Velma on it!)

Relma!

The purpose of this is manifold:

  1. We get to build the systems that we want to, to deliver education to students in the best ways we know.
  2. We (potentially) help to improve MOOCs by providing strong theory to construct evidence gathering mechanisms that allow us to really get inside what MOOCs are doing.
  3. More students get educated. (Ok, maybe not in our host institutions, but what is our actual goal anyway?)
  4. We form a strong international community of educational researchers with common outputs and sharing that isn’t necessarily owned by one company (sorry, iTunesU).
  5. If we get it right, students vote with their feet and employers vote with their wallets. We make educational research important and impossible to ignore through visible success.

Now this is, of course, a pipe dream in many ways. Who will pay for it? How long will it take before even not-for-pay outside education becomes barred under new terms and conditions? Who will pay my mortgage if I get fired because I’m working on a deliberately external set of courses for students who are not paying to come to my institution?

But, the most important thing, for me, is that we should continue what has been proposed and work more and more closely with the MOOC community to develop exemplars of good practice that have strong, evidence-based outcomes that become impossible to ignore. Much as students use temporal discounting to procrastinate about their work, administrators tend to use a more traditional financial discounting when it comes to what they consider important. If it takes 12 papers and two years of study to justify spending $5,000 on a new tool or time spent on learning design – forget about it. If, however, MOOCs show strong evidence of improving student retention (*BING*), student attraction (*BING*), student engagement (*BING*) and employability – well, BINGO. People will pay money for that.

I’ve spoken before about how successful I had to be before I was tolerated in my pursuit of educational research and, while I don’t normally talk about it in detail because it smacks of hubris and I sincerely believe that I am not a role model of any kind, I hope that you will excuse me while I explain just how crazy it is that this is the level of success required in order to become tolerated – and not yet really believed. To summarise, I’m in three research groups, I’ve brought in (as part of a group and individually) somewhere in the order of $0.5M in one non-ed research area, I’ve brought in something like $30-50K in educational research money, I’ve published in two A journals (one CS research, one CS ed), two A conferences (both ed) and one B conference (ed/CS), I have a faculty-level position as an Associate Dean and I have a national learning and teaching presence. All of the things on that line – that’s 2012. 2011 wasn’t quite as successful but it wasn’t bad by any stretch of the imagination. I think that’s an unreasonably high bar to pass in order to be allowed the luxury of asking questions about what it is that we’re doing with learning and teaching. But if I can leverage that to work with other colleagues who can then refer to what we’ve done in a way that makes administrators and managers accept the real value of an educational revolution – then my effort is shared over many more people and it suddenly looks like a much better investment of my time.

This is more musing than mission, I’m afraid, and I realise that any amount of this could be shot down but I look forward to some discussion!


A Good Friday: Student Brainstorming Didn’t Kill Me!

We had 19 of last year’s Year 10 Tech School participants back for a brainstorming session yesterday, around the theme “What do you like about ICT/What would you say to other people about ICT.” I started them off with some warm-up exercises, as I only had three hours in total. We started with “One word to describe Tech School 2011”, “two words to describe anything you learnt or used from it”, and “three words to discuss what you think about ICT”. The last one got relaxed quickly as people started to ask whether they could extend it. We split them into tables and groups got pads of post-it notes. Get an idea, write it down, slam it on the table *thump*.

Nobody sketched Babbage and slammed it on the table. (I’m not all that surprised.)

After they had ideas all over the table, I asked them to start assembling them into themes – how would they make sentences or ideas out of this. The most excellent Justine, who did all of the hard work in setting this up (thank you!), had pre-printed some pages of images so the students could cut these out and paste them into places to convey the idea. We had four groups so we ended up with four initial posters.

Floating around, and helping me to facilitate, were Matt and Sami, both from my GC class and they helped to keep the groups moving, talking to students, drawing out questions and also answering the occasional question about the Bachelor of Computer Science (Advanced) and Grand Challenges.

We took a break for two puzzles (Eight Queens and combining the digits from 1 to 9 to equal 100 with simple arithmetic symbols) and then I split the groups up to get them to look at each other’s ideas and maybe get some new ideas to put onto another poster.
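For the curious, the second puzzle (inserting simple arithmetic symbols between the digits 1 to 9, kept in order, to make 100) yields nicely to a small brute-force search. This is a sketch of my own in Python, not anything we used on the day – it tries every way of filling the eight gaps between the digits with `+`, `-` or nothing (adjacent digits concatenate):

```python
from itertools import product

def digits_to_100(target=100):
    """Find every way to insert '+', '-' or nothing between the
    digits 1..9 (kept in order) so the expression equals target."""
    solutions = []
    # Eight gaps between nine digits; try all 3^8 = 6561 fillings.
    for ops in product(('', '+', '-'), repeat=8):
        expr = '1'
        for digit, op in zip('23456789', ops):
            expr += op + digit
        # Safe here: expr contains only digits, '+' and '-'.
        if eval(expr) == target:
            solutions.append(expr)
    return solutions

if __name__ == '__main__':
    for s in digits_to_100():
        print(s, '= 100')
```

The classic version of the puzzle has eleven solutions, including the well-known 123-45-67+89.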

Yeah, that didn’t go quite as well. We did get some new ideas, but it became obvious that we either needed to have taken a longer break, or we needed some more scaffolding to take the students forward along another path. Backtracking is a skill that takes a while to learn and, even with the graphic designer walking around trying to get some ideas going, we were tapping out a bit by the time we finished.

However, full marks to the vast majority of the participants, who gave me everything that they could think of. With a good spread across schools and regions, as well as a roughly 50:50 female-to-male split, we got a lot of great thoughts that will help us to talk to other students and let them know why they might want to go into ICT… or just come to Uni!

I didn’t let the teachers off the hook either, and they gave us lots of great stuff to put into our outreach program. As a hint, I’ve never yet met a teacher at one of these events who said “Oh no, we see enough Uni people and students in the schools”; the message is almost always “Please send more students to talk to our students! Send more info!” The teachers are as dedicated as we are, if not more so, to getting students into Uni, so that’s a great reminder that we’re all trying to do the same thing.

So, summary time, what worked:

  1. Putting the students into groups, armed with lots of creative materials, and asking them what they honestly thought. We got some great ideas from that.
  2. Warming them up and then getting them into story mode with associated pictures. We have four basic poster themes that we can work on.
  3. Giving everyone a small gift voucher for showing up, after the fact and with no judging of the quality of ideas. That just appeals to my nature – I have no real idea what effect that had, but I didn’t have to tell anyone that they were wrong (or less than right) because that wasn’t the aim of today.
  4. Getting teachers into a space where they could share what they needed from us as well.

What needs review or improvement:

  1. I need to look at how idea refinement and recombination might work in a tight time frame like this. I think, next time, I’ll get people to decompose the ideas to a mind map hexagon or something like that – maybe even sketch up the message graphically? Still thinking.
  2. I need more helpers. I had three and I think that a couple more would be good, as close to student age as possible.
  3. The puzzles in the middle should have naturally led to new group formation.
  4. Starting an hour later so that everyone can get there regardless of traffic.

So, thanks again to Justine and Joh for making this work and believing in it enough to give it a try – it really worked and, to be honest, far better than I thought it would, and I can see how to improve it. Thanks to Matt and Sami for their help – I really hope that seeing that I actually believe all that stuff I spout in lectures wasn’t too weird!

But, of course, my thanks to the students and teachers who came along and took part in something just because we asked if they’d like to come back. Yeah, I know the motives varied, but a lot of great ideas came out and I think it’ll be very helpful for everyone.

Onwards to the posters!


Musing on MOOCs

Mark Guzdial’s blog contains a number of posts where he looks at Massive Open On-line Courses (MOOCs), but a recent one on questionable student behaviour made me think about how students act and, from the link where students sign up multiple times so that they can accumulate a ‘perfect’ score for one of their doppelgängers, why a student would go to so much trouble in a course. As the post that Mark refers to asks, is this a student retaking the course/redoing an assignment until they achieve mastery (which is highly desirable), or are they recording their attempts and finding the right answer through exhausting the search space (which is not productive and starts to look like cheating, if it isn’t actually cheating – it’s certainly against the terms of service of the courses)?

Why is this important? It’s important because MOOCs look great in terms of investment and return. Set up a MOOC and you can have 100,000 students enrol! One instructor, maybe a handful of TAs, some courseware – 100,000 students! (Some of the administrators in my building have just had to break out the smelling salts at the thought of the income-to-expenditure ratio.) Of course, this assumes that we’re charging, which most don’t for simple participation, although you may be charged a fee for anything that allows you to derive accreditation. It also assumes that 100,000 sign-ups turn into some reasonable number of completions, which they often don’t, and, as has been discussed elsewhere, plagiarism/copying is a pretty big problem.

Hang on. The course is free. It’s voluntary to sign up for in the vast majority of cases. Why are people carrying out this kind of behaviour in a voluntary, zero-cost course? One influence is possible future accreditation, where students regard their previous efforts as a dry run to get a high percentage outcome on a course from a prestigious institution. I’ll leave those last two words hanging there while I talk about James Joyce for a moment.

If you know of James Joyce, or you’ve read any James Joyce, you may be able to guess the question that I’m about to ask.

“Have you read and finished Ulysses?”

Joyce’s Ulysses is regarded as one of the best English-language novels of the 20th Century. However, at over 250,000 words long (that’s longer than the longest Harry Potter, by the way, and about half the count of Lord of the Rings), full of experimental techniques, complexity and a stream-of-consciousness structure, it isn’t exactly accessible to a vast number of readers. But, because it is widely regarded as a very important novel, it is often a book that people are planning to read. Or, having started, that they plan to finish.

However, the number of people that have actually read Ulysses, all the way through and reading every word, is probably quite small. The whole ‘books I claim to have read’ effect is discussed reasonably often. From that link:

Asked if they had ever claimed to read a book when they had not, 65% of respondents said yes and 42% said they had falsely claimed to have read Orwell’s classic [1984] in order to impress. This is followed by Tolstoy’s War and Peace (31%), James Joyce’s Ulysses (25%) and the Bible (24%).

So, having possibly neither started nor finished it, they claim that they have read it, because of the prestige of the work. 42% of people claim to have read 1984 but haven’t – a book which, compared to Ulysses, is positively a pamphlet, a bus-ticket aphorism in terms of relative length and readability. And we see that the other three books on the list are large, long and somewhat ponderous. (Sorry, Tolstoy, but we don’t all get locked into our dachas for 6 months when it snows.) 1984, of course, is in the public eye because of the ‘Big Brother’ associations and the on-going misinterpretation of the work as predictive, rather than as an insightful and brooding reflection of Eric’s dislike of the BBC and post-war London. (Sorry, that’s a bit glib, but I’m trying to keep it short.)

I have read Ulysses but I think it fair to say that I read it, and forced myself to complete it, for entirely the wrong reasons. Now that I enjoy the work of the Modernists far more, I’m planning to return to Ulysses and see how much I enjoy the journey this time – especially as I shall be reading it for my own reasons. But, the first time, I read it and completed it because of the prestige of the work and because I wanted to see what all the fuss was about. (Hint: it’s about a day in Dublin.)

I think there’s an intersection between the mindset that would make you claim to read a book that you had not, for reasons of prestige rather than purely for tribal membership, and that required to take a MOOC from Stanford, Harvard or Berkeley, and to falsify your progress by copying answers from other people or by solemnly duplicating your identities to accumulate enough answers to be able to ‘graduate’ summa cum laude. In this case, taking a course from one of these august institutions, especially in these days of the necessity of having a college degree for many jobs that have no professional requirement for it, is better than not. Completing assignments to a high standard, however you achieve it, may start to define your worth – this is a conjunction of prestige and tribalism that may one day allow you to become a graduate of University X (even if it is tagged as on-line or there is subsequent charging or marking load for accreditation).

And here we find our strong need for real evidence of the efficacy of the MOOC approach. Let’s assume that we solve the identity problem and can now attach work to a person reliably – how will we measure whether someone is seeking mastery or is actually trying to cheat? We can ask that now: is the student who seeks out questions from previous examinations testing their understanding and knowledge, or conducting a brute-force attack against our test bank? If MOOCs can work, then the economies of scale make them a valuable tool for education, but there are so many confounding factors as we try to assess these new courses: high sign-up rates with very low completion rates, high levels of plagiarism, obvious and detectable levels of gaming – and all of this happening before they actually become strong alternatives to the traditional approach.

It would be easy to dismiss my comments as those of a disgruntled traditionalist but that would be wrong. What I need is evidence of what works. I have largely abandoned lectures in favour of collaborative and interactive sessions because the efficacy of the new approach became apparent – through research and evidence. Similarly for my investigation into deadlines and assessment, evidence drove me here.

If MOOCs work, then I would expect to see evidence that they do. If they don’t, then I don’t want students to sign up to something that doesn’t work, potentially at the expense of other educational opportunities that do work, any more than I want someone to stop taking their medication because someone convinces them that unverifiable alternatives are better. If MOOCs don’t quite work yet, by collecting evidence, maybe we can make them work, or part of our other courses, or produce something that benefits all of us.

It’s not about tradition or exclusivity, it’s about finding what works, which is all about collecting evidence, constructing hypotheses and testing them. Then we can find out what actually works.


Putting it all together – discussing curriculum with students

One of the nice things about my new grand challenges course is that the lecture slots are a pre-reading based discussion of the grand challenges in my discipline (Computer Science), based on the National Science Foundation’s Taskforce report. Talking through this with students allows us to identify the strengths of the document and, perhaps more interestingly, some of its shortfalls. For example, there is much discussion on inter-disciplinary and international collaboration as being vital, followed by statements along the lines of “We must regain the ascendancy in the discipline that we invented!” because the NSF is, first and foremost, a US-funded organisation. There’s talk about providing the funds for sustainability and then identifying the NSF as the organisation giving the money, and hence calling the shots.

The areas of challenge are clearly laid out, as are the often conflicting issues surrounding the administration of these kinds of initiative. Too often, we see people talking about some amazing international initiative – only to see it fail because nobody wants to go first, or no country/government wants to put money up that other people can draw on until everyone does it at the same time.

In essence, this is a timing and trust problem. If we may quote Wimpy from the Popeye cartoons:

A picture of Wimpy saying "I will gladly pay you Tuesday for a hamburger today!"

Via theawl.com. Click on the link for a very long discussion of Popeye and Wimpy related issues.

The NSF document lays bare the problem we always have: those who have the hamburgers are happy to talk about sharing the meal but there are bills to be paid. The person who owns the hamburger stand is going to have words with you if you give everything away with nothing to show in return except a promise of payment on Tuesday.

Having covered what the NSF considered important in terms of preparing us for the heavily computerised and computational future, my students finished with a discussion of educational issues and virtual organisations. The educational issues were extremely interesting because, having looked at the NSF Taskforce report, we then looked at the ACM/IEEE 2013 Computer Science Strawman curriculum to see how many areas overlapped with the task force report. Then we looked at the current curriculum of our school, which is undergoing review at the moment but was last updated for the 2008 ACM/IEEE Curriculum.

What was pleasing, from the range of students, was how many of the areas were being addressed throughout our course and how much overlap there was between the highlighted areas of the NSF Report and the Strawman. However, one of the key issues from the task force report was the notion of greater depth and breadth – an incredible challenge in the time-constrained curriculum implementations of the 21st century. Adding a new Knowledge Area (KA) to the Strawman for ‘Platform Dependent Computing’ reflects the rise of the embedded and mobile device yet, as the Strawman authors immediately admit, we start to make it harder and harder to fit everything into one course. Combine this with the NSF requirement for greater breadth, including scientific and mathematical aspects that have traditionally been outside of Computing, and their parallel requirement for the development of depth… and it’s not easy.

The lecture slot where we discussed this had no specific outcomes associated with it – it was a place to discuss the issues arising but also to explain to the students why their curriculum looks the way that it does. Yes, we’d love to bring in Aspect X but where does it fit? My GC students were looking at the Ethics aspects of the Strawman and wondered if we could fit Ethics into its own 3-unit course. (I suspect that’s at least partially my influence although I certainly didn’t suggest anything along these lines.) “That’s fine,” I said, “But what do we lose?”

In my discussions with these students, they’ve identified one of the core reasons that we changed teaching languages, but I’ve also been able to talk to them about how we think as we construct courses – they’ve also started to see the many drivers that we consider, which I believe helps them in working out how to give feedback in the form that is most useful for us to turn their needs and wants into improvements or developments in the course. I don’t expect the students to understand the details and practice of pedagogy but, unless I give them a good framework, it’s going to be hard for them to communicate with me in a way that leads most directly to an improved result for both of us.

I’ve really enjoyed this process of discussion and it’s been highly rewarding – again, I hope, for both sides of the group – to be able to discuss things without the usual level of reactive and (often) selfish thinking that characterises these exchanges. I hope this means that we’re on the right track for this course and this program.