ITiCSE 2014: Working Groups Reports #ITiCSE2014 #ITiCSE

Unfortunately, there are too many working groups, reporting at too high a speed, for me to capture everything here. All of the working groups are going to release reports and I suggest that you have a look into some of the areas covered. The topics reported on today were:

  • Methodology and Technology for In-Flow Peer Review

    In-flow peer review is the review of an exercise while it is still going on. Providing elements to review can be difficult, as it may encourage plagiarism, but the many benefits generally justify the decision to review anyway. Picking who can review what for maximum benefit is also very difficult.

    We’ve tried to do a lot of work here but it’s really challenging because there are so many possibly right ways.

  • Computational Thinking in K-9 Education

    Given that there are national, and localised, definitions of what “Computational Thinking” is, it is challenging to pin down. Many K-12 teachers are actually using CT techniques but wouldn’t know to answer “yes” if asked whether they were. There are many issues in play here, but the working group are a multi-national and thoughtful group who have lots of ideas.

    As a note, K-9 refers to Kindergarten to Year 9, not dogs. Just to be clear.

  • Increasing Accessibility and Adoption of Smart Technologies for Computer Science Education

    How can you integrate all of the whizz-bang stuff into the existing courses and things that we already use everyday? The working group have proposed an architecture to help with the adoption. It’s a really impressive, if scary, slide but I’ll be interested to see where this goes. (Unsurprisingly, it’s a three-tier model that will look familiar to anyone with a networking or distributed systems background.) Basically, let’s not re-invent the wheel when it comes to using smarter technologies but let’s also find out the best ways to build these systems and then share that, as well as good content and content delivery. Identity management is, of course, a very difficult problem for any system so this is a core concern.

    There’s a survey you can take to share your knowledge with this workgroup. (The feared and dreaded Simon noted that it would be nice if their survey was smarter.) A question from the floor was that, while the architecture was nice and standards were good, what impact would this have on the chalkface? (This is a neologism I’ve recently learned about, the equivalent of the coalface for the educational teaching edge.) This is a good question. You only have to look at how many standards there are to realise that standard construction and standard adoption are two very different beasts. Cultural change is something that has to be managed on top of technical superiority. The working group seems to be on top of this so it will be interesting to see where it goes.

  • Strengthening Methodology Education in Computing

    Unsurprisingly, computing is a very broad field and is methodologically diverse. There’s a lot of ‘borrowing’ from other fields, which is a nice way of saying ‘theft’. (Sorry, philosophers, but ontologies are way happier with us.) Our curricula have very few concrete references to methodology, with a couple of minor exceptions. The working group had a number of objectives, which they reduced to a smaller set, removing the term ‘methodology’. Literature reviews on methodology education are sparse but there is more on teaching research methods. Embarrassingly, the paper that shows up for this is a 2006 report from a working group from this very conference. Oops. As Matti asked, are we really so uninterested in this topic that we forget that we were previously interested in it? The group voted to change direction to get some useful work out of the group. They voted not to produce a report as it was too challenging to repurpose things at this late stage. All their work would be toward annotating the existing paper rather than creating a new one.

    One of the questions was why the previous paper had so few citations, cited 5 times out of 3000 downloads, despite the topic being obviously important. One aspect mentioned is that CS researchers are a separate community and I reiterated some early observations that we have made on the pathway that knowledge takes to get from the CS Ed community into the CS ‘research’ community. (This summarises as “Do CS Ed research, get it into pop psychology, get it into the industrial focus and then it will sneak into CS as a curricular requirement, at which stage it will be taken seriously.” Only slightly tongue-in-cheek.)

  • A Sustainable Gamification Strategy for Education

    Sadly, this group didn’t show up, so this was disbanded. I imagine that they must have had a very good reason.

Interesting set of groups – watch for the reports and, if you use one, CITE IT! 🙂


ITiCSE 2014, Monday, Session 1A, Technology and Learning, #ITiCSE2014 #ITiCSE @patitsel @guzdial

(The speakers are going really, really quickly so apologies for any errors or omissions that slip through.)

The chair had thanked the Spanish at the opening for the idea of long coffee breaks and long lunches – a sentiment I heartily share as it encourages discussions, which are the life blood of good conferences. The session opened with “SPOC-supported Introduction to Programming”, presented by Marco Piccioni. SPOCs are Small Private On-line Courses and are part of the rich tapestry of hand-crafted terminology that we are developing around digital delivery. The speaker is from ETH Zurich and says that they took a cautious, step-by-step approach: taking an existing and successful course and moving it into the on-line environment. The classic picture from the University of Bologna of the readers/scribes was shown. (I was always the guy sleeping in the third row.)

No paper aeroplanes?

We want our teaching to be interesting and effective so there’s an obvious motivation to get away from this older approach. ETH has an interesting setup where the exam is 10 months after the lecture, which leads to interesting learning strategies for students who can’t solve the instrumentality problem of tying work now to success in the future. ETH also had to create an on-line platform to sidestep all of the “my machine doesn’t work” problems that come with requiring students to install an IDE. The final motivation was to improve their delivery.

The first residential version of the course ran in 2003, with lectures and exercise sessions. The lectures are in German and the exercise sessions are in English and German, because English is so dominant in CS. There are 10 extensive home assignments, including programming, and exercise-session groups are formed according to students’ perceived programming proficiency level. (Note on the last point: Hmmm, so people who can’t program are grouped together with other people who can’t program? I believe that the speaker clarified this as “self-perceived” ability but I’m still not keen on this kind of streaming. If this worked effectively, then any master/apprentice model should automatically fail.) Groups were able to switch after a week, for language reasons or because the group wasn’t working out.

The learning platform for the activity was Moodle and their experience with it was pretty good, although it didn’t do everything that they wanted. (They couldn’t put interactive sessions into a lecture, so they produced a lecture-quiz plug-in for Moodle. That’s very handy.) This is used in conjunction with a programming assessment environment, in the cloud, which ties together the student performance at programming with the LMS back-end.

The SPOC components are:

  • lectures, with short intros and video segments up to 17 minutes. (Going to drop to 10 minutes based on student feedback),
  • quizzes, during lectures, testing topic understanding immediately, and then testing topic retention after the lecture,
  • programming exercises, with hands-on practice and automatic feedback

Feedback given to the students included the quizzes, with a badge for 100% score (over unlimited attempts so this isn’t as draconian as it sounds), and a variety of feedback on programming exercises, including automated feedback (compiler/test suite based on test cases and output matching) and a link to a suggested solution. The predefined test suite was gameable (you could customise your code for the test suite) and some students engineered their output to purely match the test inputs. This kind of cheating was deemed to be not a problem by ETH but it was noted that this wouldn’t scale into MOOCs. Note that if someone got everything right then they got to see the answer – so bad behaviour then got you the right answer. We’re all sadly aware that many students are convinced that having access to some official oracle is akin to having the knowledge themselves so I’m a little cautious about this as a widespread practice: cheat, get right answer, is a formula for delayed failure.
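Since we are on the topic of customising code for a published test suite, here is a hypothetical sketch (the exercise, function names and test cases are all invented for illustration) of why a grader that matches output on fixed, known inputs is so gameable, and why randomised inputs close the hole:

```python
import random

# Hypothetical grader for a "square the input" exercise.
# The predefined (input, expected output) pairs are published to students.
FIXED_TESTS = [(2, 4), (3, 9), (10, 100)]

def grade(submission):
    """Pass if the submission's output matches on every predefined test."""
    return all(submission(x) == expected for x, expected in FIXED_TESTS)

def honest_square(x):
    return x * x

def gamed_square(x):
    # A 'gamed' submission: a lookup table of the published test cases,
    # with no actual computation behind it.
    return {2: 4, 3: 9, 10: 100}.get(x, 0)

assert grade(honest_square)
assert grade(gamed_square)  # passes despite computing nothing

def grade_randomised(submission, trials=20):
    """Randomised inputs defeat the lookup table."""
    return all(submission(x) == x * x
               for x in (random.randint(11, 1000) for _ in range(trials)))

assert grade_randomised(honest_square)
assert not grade_randomised(gamed_square)
```

ETH deemed this kind of gaming acceptable at SPOC scale, but, as the speaker noted, at MOOC scale something like the randomised variant (or hidden test cases) becomes necessary.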

Reporting for each student included their best attempt and past attempts. For the TAs, they had a wider spread of metrics, mostly programmatic and mark-based.

On looking at the results, attendance at the on-line lectures was 71%, while attendance at the live course remained stable. Neither on-line quizzes nor programming exercises counted towards the final grade. Quiz attempts were about 5x the attendance, and 48% scored 100% and got the badge, significantly more than the 5-10% that would usually do this.

Students worked on 50% of the programming exercises. 22% of students worked on 75-100% of the exercises. (There was a lot of emphasis on the badge – and I’m really not sure if there’s evidence to support this.)

The lessons learned summarised what I’ve put above: shortening video lengths, face-to-face is important, MCQs can be creative, gamification, and better feedback is required on top of the existing automatic feedback.

The group are scaling from SPOC to MOOC with a Computing: Art, Magic, Science course on EdX launching later on in 2014.

I asked a question about the badges because I was wondering if putting in the statement “100% in the quiz is so desirable that I’ll give you a badge” was what had led to the improved performance. I’m not sure I communicated that well but, as I suspected, the speaker wants to explore this more in later offerings and look at how this would scale.

The next session was “Teaching and learning with MOOCs: Computing academics’ perspectives and engagement”, presented by Anna Eckerdal. The work was put together by a group from Uppsala, Aalto, Macao and Monash – which illustrates why we all come to conferences, as this workgroup was put together in a coffee-shop discussion in Uppsala! The discussion stemmed from the early “high hype” mode of MOOCs, but they were highly polarising: colleagues either loved them or hated them. What was the evidence to support either argument? Academics’ experience and views on MOOCs were sought via a questionnaire sent out to the main e-mail lists, to CS and IT people.

The study ran over June-July 2013, with 236 responses from more than 90 universities, using closed- and open-ended questions. The research questions were: what are the community views on MOOCs from a teaching perspective (positive and negative), and how have people been incorporating them into their existing courses? (Editorial note: clearly defined study with a precise pair of research questions – nice.)

Interestingly, the most common response was concern about MOOCs, followed by people who were positive, then confused, then negative, then excited, then uninformed, then uninterested and finally, some 10% of people who must have been living in a time-travelling barrel in Ancient Greece because, in 2013, they had heard no MOOC discussion.

Several prominent themes were identified in the positive/negative aspects, all associated with the core theme of teaching and learning. (The speaker outlined the way that the classification had been carried out, which is always interesting for a coding problem.) Anna reiterated the issue of a MOOC as a personal power enhancer: a MOOC can make a teacher famous, which may also be attractive to the university. The sub-themes were pedagogy and the learning environment, affordances of MOOCs, interaction and collaboration, assessment and certificates, and accessibility.

Interestingly, some of the positive answers included references to debunked approaches (such as learning styles) and the potential for improvements. The negatives (and there were many of them) referred to stone-age learning and a lack of relations.

On affordances of MOOCs, there were mostly positive comments: helping students with professional skills, refresh existing and learn new skills, try before they buy and the ability to transcend the tyranny of geography. The negatives included the economic issues of only popular courses being available, the fact that not all disciplines can go on-line, that there is no scaffolding for identity development in the professional sense nor support development of critical thinking or teamwork. (Not sure if I agree with the last two as that seems to be based on the way that you put the MOOC together.)

I’m afraid I missed the slide on interaction and collaboration so you’ll (or I’ll) have to read the paper at some stage.

There was nothing positive about assessment and certificates: course completion rates are low, it is unclear what can reasonably be assessed, plagiarism is a worry, and how do we certify any of this? How does a student from a MOOC compete with a student from a face-to-face university?

1/3 of the respondents answered about accessibility, with many positive comments on “Anytime, anywhere, at one’s own pace”. We can (somehow) reach non-traditional student groups. (Note: there is a large amount of contradictory evidence on this one; MOOCs are even worse than traditional courses. Check out Mark Guzdial’s CACM blog on this.) Another answer was “Access to world class teachers” and “opportunity to learn from experts in the field.” Interesting, given that the mechanism (from other answers) is so flawed that world-class teachers would barely survive MOOC-ification!

On academics’ engagement with MOOCs, the largest group (49%) believed that MOOCs had had no effect at all, about 15% said they had inspired changes, and roughly 10% had incorporated some MOOCs. Very few had seen MOOCs as a threat requiring change, either personally or institutionally. Only one respondent said that their course was now a MOOC, although 6% had developed them and 12% wanted to.

For the open-ended question on Academics’ engagement, most believed that no change was required because their teaching was superior. (Hmm.) A few reported changes to teaching that was similar to MOOCs (on line materials or automated assessment) but wasn’t influenced by them.

There’s still no clear vision of the role of MOOCs in the future: “concerned” is as prominent as “positive”. There is a lot of potential but many concerns.

The authors had several recommendations: focus on active learning, much more research into automatic assessment and feedback methods, and good policy from the universities regarding certification and the role of on-site and MOOC curricula. Uppsala has started the process of thinking about policy.

The first question was “how much of what is seen here would apply to any new technology being introduced” with an example of the similar reactions seen earlier to “Second Life”. Anna, in response, wondered why MOOC has such a global identity as a game-changer, given its similarity to previous technologies. The global discussion leads to the MOOC topic having a greater influence, which is why answering these questions is more important in this context. Another issue raised in questions included the perceived value of MOOCs, which means that many people who have taken MOOCs may not be advertising it because of the inherent ranking of knowledge.

@patitsel raised the very important issue that under-represented groups are even more under-represented in MOOCs – you can read through Mark’s blog to find many good examples of this, from cultural issues to digital ghettoisation.

The session concluded with “Augmenting PBL with Large Public Presentations: A Case Study in Interactive Graphics Pedagogy”. The presenter was a freshly graduated student who had completed the course three weeks ago, so he was here to learn and get constructive criticism. (Ed’s note: he’s in the right place. We’re very inquisitive.)

Ooh, brave move. He’s starting with anecdotal evidence. This is not really the crowd for that – we’re happy with phenomenographic studies and case studies to look at the existence of phenomena as part of a study, but anecdotes, even with pictures, are not the best use of your short term in front of a group of people. And already a couple of people have left because that’s not a great way to start a talk in terms of framing.

I must be honest, I slightly lost track of the talk here. EBL was defined as project-based learning augmented with constructively aligned public expos, with gamers as the target audience. The speaker noted that “gamers don’t wait” as a reason to have strict deadlines. Hmm. Half Life 3 anyone? The goal was to study the pedagogical impact of this approach. The students in the study had to build something large, original and stable, to communicate the theory, work as a group, demonstrate in large venues and then collaborate with a school of communication. So, it’s a large-scale graphics-based project in teams with a public display.

Grading was composed of proposals, demos, presentations and open houses. Two projects (50% and 40%) and weekly assignments (10%) made up the whole grading scheme. The second project came out after the first big Game Expo demonstration. Project 1 had to be interactive, done in groups of 3-4. The KTH visualisation studio was an important part of this and it is apparently full of technology, which is nice, and we got to hear about a lot of it. Collaboration is a strong part of the visualisation studio, which was noted in response to the keynote. The speaker mentioned some of the projects and it’s obvious that they are producing some really good graphics projects.

I’ll look at the FaceUp application in detail as it was inspired by the idea to make people look up in the Metro rather than down at their devices. I’ll note that people look down for a personal experience in shared space. Projecting, even up, without capturing the personalisation aspect, is missing the point. I’ll have to go and look at this to work out if some of these issues were covered in the FaceUp application as getting people to look up, rather than down, needs to have a strong motivating factor if you’re trying to end digitally-inspired isolation.

The experiment was to measure the impact of EXPOs on ILOs, using participation, reflection, surveys and interviews. The speaker noted that doing coding on a domain of knowledge you feel strongly about (potentially to the point of ownership) can be very hard as biases creep in; I find this one of the real challenges in trying to do grounded theory work, personally. I’m not all that surprised that students felt that the EXPO had a greater impact than something smaller, especially where the experiment was effectively created with a more heavily weighted first project and a high-impact first deliverable. In a biological human sense, project 2 is always going to be at risk of falling into the refractory period, the period after stimulation during which a nerve or muscle is less able to be stimulated. You can only get so excited about the development, because development is always going to be very similar, but it’s not surprising that a small-scale pop is not as exciting as a giant boom, especially when the boom comes first.

How do we grade things like this? It’s a very good question – of course the first question is why are we grading this? Do we need to be able to grade this sort of thing or just note that it’s met a professional standard? How can we scale this sort of thing up, especially when the main function of the coordinator is as a cheerleader and relationships are essential. Scaling up relationships is very, very hard. Talking to everyone in a group means that the number of conversations you have is going to grow at an incredibly fast rate. Plus, we know that we have an upper bound on the number of relationships we can actually have – remember Dunbar’s number of 120-150 or so? An interesting problem to finish on.
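The “incredibly fast rate” above is quadratic: with n people there are n(n-1)/2 distinct pairwise conversations, which is a back-of-the-envelope way to see why relationships don’t scale. A quick sketch:

```python
# Number of distinct pairwise conversations among n people: n choose 2.
def pairwise_conversations(n: int) -> int:
    return n * (n - 1) // 2

print(pairwise_conversations(4))    # a small project group: 6 conversations
print(pairwise_conversations(150))  # around Dunbar's number: 11175 conversations
```

Doubling the class size roughly quadruples the conversational load, which is why a coordinator-as-cheerleader model struggles well before you hit Dunbar’s limit.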


ITiCSE 2014: Monday, Keynote 1, “New Technology, New Learning?” #ITiCSE2014 #ITiCSE

This keynote was presented by Professor Yvonne Rogers, from University College London. The talk discussed how we could make learning more accessible and exciting for everyone and encourage students to think, to create and to share their views. Professor Rogers started by sharing a tweet by Conor Gearty on a guerrilla lecture, with tickets to be issued at 6:45pm, for LSE students. (You can read about what happened here.) They went to the crypt of Westminster Cathedral and the group, split into three smaller groups, ended up discussing the nature of Hell and what it entailed. This was a discussion on religion but, because of the way that it was put together, it was more successful than a standard approach – context shift, suspense driving excitement and engagement. (I wonder how much suspense I could get with a guerrilla lecture on polymorphism… )

Professor Rogers says that suspense matters, as the students will be wondering what is coming next, and this will hopefully make them more inquisitive and thus drive them along the path to scientific enquiry. The Ambient Wood was a woodland full of various technologies for student pairs, with technology and probes, an explorative activity. You can read about the Ambient Wood here. The periscope idea ties videos into the direction that you are looking – a bit like Google Glass without a surveillance society aspect (a Woodopticon?). (We worked on similar ideas at Adelaide for an early project in the Arts Precinct to allow student exploration to drive the experience in arts, culture and botanical science areas.) All of the probes were recorded in the virtual spatial environment matching the wood so that, after the activity, the students could then look at what they did. Thus, a group of 10-12 year olds had an amazing day exploring and discovering, but in a way that was strongly personalised, with an ability to see it from the bird’s eye view above them.

And, unsurprisingly, we moved on to MOOCs, with an excellent slide on MOOC HYSTERIA. Can we make these as engaging as the guerrilla lecture or the ambient wood?

hysteria

MOOCs, as we know, are supposed to increase our reach and access to education but, as Professor Rogers noted, it is also a technology that can make the lecturer a “bit of a star”. This is one of the most honest assessments of some of the cachet that I’ve heard – bravo, Professor Rogers. What’s involved in a MOOC? Well, watching things, doing quizzes, and probably a lot of passive, rather than active, learning. Over 60% of the people who sign up to do a MOOC, from the Stanford experience, have a degree – doing Stanford for free is a draw for the already-degreed. How can we make MOOCs fulfil their promise, give us good learning, give us active learning and so on? Learning analytics give us some ideas and we can data mine to try and personalise the course to the student. But this has shifted what our learning experience is and do we have any research to show the learning value of MOOCs?

In 2014, 400 students taking a Harvard course:

  1. Learned in a passive way
  2. Just wanted to complete
  3. Took the easy option
  4. Were unable to apply what they learned
  5. Didn’t reflect on it or talk to their colleagues about it.

Which is not what we want. What about the Flipped Classroom? Professor Rogers attributed this to Khan but I’m not sure I agree, as there were people (Mazur, for example) who were doing this in Peer Instruction well before Khan – or at least I thought so. Corrections in the questions please! The idea of the flip is that we don’t have content delivery in lectures with the odd question – we have content beforehand and questions in class. What is the reality?

  1. Still based on chalk and talk.
  2. Is it simply a better version of a bad thing?
  3. Are students more motivated and more active?
  4. Very labour-intensive for the teacher.

So where’s the evidence? Well, it does increase interaction in class between instructors and students. It does allow for earlier identification of misconceptions. Pierce and Fox, 2012, found that it increased exam results for pharmacology students. It also fostered critical thinking in case scenarios. Maybe this will work for 10s-100s – what about classes of thousands? Can we flip to this? (Should we even have classes of this size is another good question)

Then there’s PeerWise, from Paul Denny (NZ), where there is active learning in which students create questions, answer them and get feedback. Students create the questions and then they get to try other students’ questions and can then rate the question and rate the answer. (We see approaches like this, although not as advanced, in other technologies such as Piazza.)

How effective is this? Performance in PeerWise correlated with exam marks (Anyadi, Green and Tang, 2013), with active student engagement. It’s used for revision before the exams, and you get high-quality questions and answers, while supporting peer interaction. Professor Rogers then showed the Learning Pyramid, from the National Training Laboratories, Bethel, Maine. The PeerWise system plays into the very high retention area.

pyramid

Professor Rogers then moved on to her own work, showing us a picture of the serried-rank nightmare of a computer-based classroom: students in rows, isolated and focused on their screens. Instead of ‘designing for one’, why don’t we design to orchestrate shared activities, with devices that link to public displays and can actively foster collaboration? One of Professor Rogers’ students is looking at ways to share simulations across tablets and screens. This included “4Decades“, a simulation of climate management, with groups representing the different stakeholders to look at global climate economics. We then saw a video that I won’t transcribe. The idea is that group work encourages discussion, however we facilitate it, and this tends to lead to teaching others in the sharing of ideas. Another technology that Professor Rogers’ group have developed in this space is UniPad: orchestrating collaborative activities across multiple types of devices, with one device per 6-7 students, and used in classes without many researchers present. Applications of this technology include budgeting for students (MyBank), with groups interacting and seeing the results on a public display. Given how many students operate in share houses collaboratively, this is quite an interesting approach to the problem. From studies on this, all group members participated and used the tablet as a token for discussion, taking ownership of a part of the problem. This also extended to reflection on others’ activities, including identifying selfish behaviour on the part of other people. (Everyone who has had flatmates is probably groaning at the moment. Curse you, Love Tarot Pay-By-The-Minute Telephone Number, which cost me and my flatmates a lot of dollars after a flatmate skipped out on us.)

The next aspect Professor Rogers discussed was physical creation toolkits, such as MaKey MaKey, where you can build alternative input for a computer, based on a simple printed circuit board with alligator clips and USB cables. The idea is simple: you can turn anything you like into a keyboard key. Demonstrations included a banana space bar, a play dough MarioKart gamepad, and many other things (a water bowl in front of the machine became a cat-triggered photo booth). This highlights one of the most important aspects of thinking about learning: learning for life. How can we keep people interested in learning in the face of busy, often overfull, lives when many people still think about learning as something that had to be endured on their pathway into the workforce? (Paging my climbing friends with their own climbing wall: you could make the wall play music if you wanted to. Just saying.)

One of the computers stopped working during a trial of the MaKey MaKey system with adult learners and the collaboration that ensued changed the direction of the work: more people were assigned to a single kit. Professor Rogers showed a small video of a four-person fruit orchestra of older people playing Twinkle Twinkle Little Star. (MORE KIWI!) This elicited a lot of ideas, including for their grandchildren and their own parents, transforming exercise to be more fun, helping people learn fundamental knowledge skills and giving good feedback. We often intervene heavily in the learning experience, and the reflection of the Fruit Orchestra was that intervening less in self-driven activities such as MaKey MaKey might be a better way to go, to increase autonomy and thus drive engagement.

Next was the important question: how can we get kids to create and code, where coding is just part of the creating? Can we learn to code differently, beyond just choosing a particular language? We have many fascinating technologies but what is the suite of tools over the top that will drive creativity and engagement in this area, to produce effective learning? The short video shown demonstrated a pop-out prefabricated system, where physical interfaces and gestures across those represented coding instructions: coding without any typing at all. (Previous readers will remember my fascination with pre-literate programming.) This early work, electronics on a sheet, is designed to be given away because the production cost is less than 3 Euros. The project is called “code me” from University College London and is designed to teach logic without people realising it: the fundamental building block of computational thinking. Future work includes larger blocks with Bluetooth input and sensors. (I can’t find a web page for this.)

What role should technology play in learning? Professor Rogers suggested thinking about this in two ways. Inside learning uses technology to think about the levels we want students to reach, to foster attainment: personalise, monitor, motivate, flexible, adaptive. The outside learning approach is to work with other people away from the screen: collaborate, create, connect, reflect and play. Professor Rogers believes that the choice is ours but that technology should transform learning to make it active, creative, collaborative, exciting (and some other things I didn’t catch) and should recognise the role of suspense in making people think.

An interesting and thought-provoking keynote.



The Antagonistic Classroom Is A Dinosaur

It’s been a while since I’ve posted but, in that time, I’ve been doing a lot of reading and a lot of thinking. I’m aware that, as a CS Ed person whose background is in CS rather than Ed, I have a lot of catching up to do in terms of underlying theory and philosophy. I’ve been going further back to look at the changes in education, from the assumption that a student is a blank slate (tabula rasa) to be written on (or an empty bank account to be filled) rather than a person to be worked with. As part of my search I’ve been reading a lot, and what has become apparent is how long people have been trying to change education in order to improve the degree and depth of learning and student engagement. It’s actually mildly depressing to track the last 250 years of people trying to do anything other than rote learning, serried ranks of silent students and cultural crystallisation. As part of this reading, and via Rousseau and Hegel, I’ve wandered across the early works of Karl Marx who, before the proletariat began its ongoing efforts to not act as he had modelled them, was thinking about the role of work and life. In essence, if what you are doing is not really a part of your life then you are working at something alien in order to earn enough to live – in order to work again another day. I’m not a Marxist, by any stretch of the imagination and for a variety of reasons, but this applies well to the way that many people see study. For many people, education is not an end in itself but something to be endured in order to move on to the next stage, which is working in order to live until you stop working and then you die.

When you look at the methods that, from evidence and extensive research, now appear to be successful in developing student learning, we see something very different from what we have done before: we see cooperation, mutual respect, self-determination and a desire to learn that is facilitated by being part of the educational system. In this system, the school is not a cage for students and a trap for the spirit of education. However, this requires a distinct change in the traditional roles between student and teacher, and it’s one that some teachers still aren’t ready for and many students haven’t been prepared for. The future is creative and it’s now time to change our educational system to fully support that.

Ultimately, many of the ways that we educate place the teacher in a role of judgement and opposition to the student: students compete in order to secure the best marks, which may require them to withhold information from each other, and they must convince the teacher of their worth in order to achieve the best results. To maintain that separation in marks, we provide artificial mechanisms, over and above any notion of competency, such as limited attempts on assignments and late penalties. This places the teacher in opposition to the student: an adversary who must be bested. Is this really what we want in what should be a mutually enriching relationship? When we get it right, the more we learn, the more we can teach and hence the more everyone learns.

If we search for the opposite of an antagonist, we find the following words: ally, helper, supporter and friend. These are great words but they evoke roles that we can’t actually fill unless we step out from behind the lectern and the desk and work with our students. An ally doesn’t force students to compete against each other for empty honours that only a few can achieve. A helper doesn’t tell students that the world works as if every single piece of assignment work is the most important thing ever assigned. A supporter develops deep structures that will hold up the person and their world for their whole life. A friend has compassion for the frailties of the humans around them – although they still have to be honest as part of that friendship.

My students and I win together when they achieve things. I don’t need to be smarter than them in order to prove anything and I don’t need them to beat me before they can demonstrate that they’re ready to go out into the world. If I held a pebble out in my hand and asked the student how they could get it, I would hope that they would first ask me if they could have it, rather than attempting some bizarre demonstration of hand-eye coordination. Why compete when we could all excel together?

We stand in exciting times, where knowledge can be shared widely and semi-instantly, but we won’t see the best of what we can do with this until we see an antagonistic classroom for the dinosaur that it is and move on.


ASWEC Day 3 (SE Education Track), Keynote, “Teaching Gap: Where’s the Product Gene?” (#aswec2014 #AdelED @jmwind)

Today’s speaker is Jean-Michel Lemieux, the VP of Engineering for Atlassian, opening the Education track for the ASWEC conference. (I’m the track chair so I can’t promise an unbiased report of the day’s activities.) Atlassian has a post-induction reprogramming idea where they take in graduates and then get people to value products over software – it’s not about what’s in the software, it’s about who is going to be using it. The next thing is to value experiences over functionality.

What is the product “gene” and can we teach it? Atlassian has struggled with this, despite having hired good graduates, because those graduates were a bit narrow and focused on individual features rather than the whole product. Jean-Michel spoke about the “Ship-it” event where you have to write a product in 24 hours and then a customer comes and picks what they would buy.

Jean-Michel is proposing the addition of a new degree: a product engineering course or degree. Whether it’s a 1-year or a 4-year program is pretty much up to the implementers – i.e. us. EE is about curvy waves, Computer Engineering is about square waves, CS is about programs, SE is about processes and systems, and PE is about product engineering. PE still requires programming and overlaps with SE. Atlassian’s Vietnam experience indicates that teaching the basics earlier will be very helpful: algorithms, data structures, systems administration, programming languages, compilers, storage and so on. Atlassian wants the basics in earlier here as well (regular readers will be aware of the new digital technologies curriculum, though Jean-Michel may not be aware of it).

What is Product Engineering about? Customers, and desirable software built by a team as part of an ecosystem that functions for years. This gets away from the individual, mark-oriented, short-term focus that so many of our existing courses have (and of which I am not a great fan). From a systems-thinking perspective, we can look at the customer journey: if people are using your product then they’re going through a lifecycle with your product.

Atlassian have a strong culture of exposure and presentation: engineers are regularly explaining problems, existing solutions and demonstrating understanding before they can throw new things on top. Demoing is a very important part of Atlassian culture: you have to be able to sell it with passion. Define the problem. Tell a story. Make it work. Sell with passion.

There’s a hypothesis-driven development approach, starting from hypothesis generation and experimental design, leading to cohort selection, experiment development, measurement and analysis, and then the publishing of results. Ideally, a short experiment gives you a prediction of behaviour over a longer timeframe with a larger number of people. The results themselves have to be clearly communicated and, from what was demonstrated, associated with the experiment itself.
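The measurement-and-analysis step can be sketched very simply. This is purely illustrative Python, not Atlassian’s actual pipeline: the cohorts, their outcomes and the `conversion_rate` helper are all invented for the example, which just compares a target behaviour across a control and a treatment cohort:

```python
def conversion_rate(outcomes):
    """Fraction of users in a cohort who performed the target action."""
    return sum(outcomes) / len(outcomes)

# Invented data: 1 = a user who converted, 0 = one who did not.
# The control cohort saw the old flow; treatment saw the change under test.
control   = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]
treatment = [1, 1, 0, 1, 0, 1, 0, 1, 1, 0]

lift = conversion_rate(treatment) - conversion_rate(control)
print(f"lift: {lift:+.0%}")  # lift: +30%
```

In a real experiment you would also want a significance test before publishing the result, since a lift this size on ten users per cohort could easily be noise.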

Atlassian have a UI review process using peer review. This has two parts: “Learn to See” and “Learn to Seek”. For “Learning to See”, the important principles are consistency, alignment, contrast and simplicity. How much can you remove, reuse and set up properly so the UI does exactly what it needs to do and no more? For “Learning to Seek”, the key aspect is “bring it forward”: bring your data forward to make things easier – you can see the date even when your calendar app is closed. (This is based on work in Microinteractions, a book that I haven’t read.) The use of language in text and error messages is also very important and part of product thinking.

No-one works alone at Atlassian and teamwork is the default. There’s a lot of team archaeology: looking at what a team has been doing for the past few years and learning from it. The Team Fingerprint shows you how a team works, by looking at their commit history and bug tracking. If they reject commits, when do they do it and why? Where’s the supporting documentation and discussion? Which files are being committed or changed together? If two files are always worked on together, can we simplify this?
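The “which files change together” question is easy to prototype. Here is a minimal sketch in Python – my own illustration with invented commit data, not Atlassian’s Team Fingerprint tooling – that counts, over a commit history, how often each pair of files appears in the same commit:

```python
from collections import Counter
from itertools import combinations

def co_change_pairs(commits):
    """Count how often each pair of files is changed in the same commit."""
    pairs = Counter()
    for files in commits:
        # Sorting makes the pair order canonical, so (a, b) == (b, a).
        pairs.update(combinations(sorted(set(files)), 2))
    return pairs

# Invented history: each entry lists the files touched by one commit.
commits = [
    ["parser.py", "lexer.py"],
    ["parser.py", "lexer.py", "ast.py"],
    ["README.md"],
    ["lexer.py", "parser.py"],
]
print(co_change_pairs(commits).most_common(1))
# [(('lexer.py', 'parser.py'), 3)]
```

A pair that tops this list sprint after sprint is exactly the kind of signal a fingerprint might surface: perhaps those two files belong in one module.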

In terms of the ecosystem, Atlassian also have an API focus (as Google did yesterday) and they design for extensibility. They also believe in making tools available with a focus on determining whether the product will be open source or licensed and how the IP is going to be handled. Extensibility can be very hard because it’s a commitment over time and your changes today have to support tomorrow’s changes. It’s important to remember that extending something requires you to build a community who will use the extensions – again, communication is very important. An Atlassian platform team is done when their product has been adopted by another team, preferably without any meetings. If you’re open source then you live and die by the number of people who are actually using your product. Atlassian have a no-meeting clause: you can’t have a meeting to explain to someone why they should adopt your product.

When things last for years you have to prepare for it. You need to learn from your running code, rather than just trusting your test data. You need to validate assumptions in production and think like an “ops” person. This includes things like building in consistency checks across the board.

Where’s the innovation in this? The Atlassian approach is a little more prescriptive in some ways but it’s not mandating tools so there’s still room for the innovative approaches that Alan mentioned yesterday.

Question time was interesting, with as many comments as questions (if not more), but there was a question as to whether the idea for such a course should sit at a higher level than an individual university: with CORE, ACDICT, EA, or ACS, for instance. It will be interesting to see what comes out of this.


ASWEC 2014, Day 2, Keynote, “Innovation at Google” (#aswec2014 #AdelEd @scruzin @sallyannw)

Today’s keynote was given by Alan Noble, Engineering Director for Google Australia and long-term adjunct at the University of Adelaide, who was mildly delayed by Sydney traffic but this is hardly surprising. (Sorry, Sydney!) When asked to talk about Google’s Software Engineering (SE) processes, Alan thought “Wow, where do I begin?” Alan describes Google’s processes as “organic” and “changing over time” but no one label can describe an organisation that has over 30,000 employees.

So what does Alan mean by “organic”? Each team in Google is empowered to use the tools and processes that work best for them – there is no one true way (with some caveats). The process encouraged is “launch and iterate” and “release early, release often”, which many of us have seen in practice! You launch a bit, you iterate a bit, so you’re growing it piece by piece. As Alan noted, you might think that sounds random, so how does it work? There are some very important underlying commonalities. In the context of SE, you have an underlying platform and underlying common principles.

Everything is built on Google Three (Edit: actually it’s google3, from Alan’s comment below so I’ll change that from here on) – Google’s third iteration of their production codebase, which also enforces certain approaches to the codebase. At the heart of google3 is something called a package, which encapsulates a group of source files, and this is associated with a build file. Not exciting, but standard. Open Source projects are often outside: Chrome and Android are not in google3. Coming to grips with google3 takes months, and can be frustrating for new hires, who can spend weeks doing code labs to get a feeling for the codebase. It can take months before an engineer can navigate google3 easily. There are common tools that operate on this, but not that many of them and for a loose definition of “common”. There’s more than one source code control system, for example. (As a note, any third party packages used inside Google have the heck audited out of them for security purposes, unsurprisingly.) The source code system used to be Perforce by itself but it’s a highly centralised server architecture that hasn’t scaled for how Google is now. Google has a lot of employees spread around the world and this presents problems. (As a note, Sydney is the 10th largest engineering centre for Google outside of Mountain View.) In response to this scaling problem, Google have tried working with the vendor (which didn’t pan out) and have now started to produce their own source control system. Currently, the two source control systems co-exist while migration takes place – but there’s no mandated move. Teams will move based on their needs.
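google3 itself isn’t public, but Blaze’s open-source descendant, Bazel, gives a flavour of the package-plus-build-file idea. A hypothetical BUILD file (all target and path names invented for illustration) might declare:

```starlark
# This BUILD file defines a package: the build rules for one directory.
py_library(
    name = "parser",                  # target that other packages can depend on
    srcs = ["parser.py", "lexer.py"],
    deps = ["//common/strings"],      # dependency on another package
)

py_test(
    name = "parser_test",
    srcs = ["parser_test.py"],
    deps = [":parser"],               # dependency within the same package
)
```

The explicit dependency graph is part of what keeps a single enormous codebase buildable and navigable at all.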

Another tool is a tracking tool called Buganizer which does more than track bugs. What’s interesting is that there are tools that Google use internally that we will never see, to go along with their tools that are developed for public release.

There’s a really strong emphasis on making sure that the tools have well-defined, well-documented and robust APIs. They want to support customisation, which means documentation is really important so that sound extensions and new front ends can be built. By providing a strong API, engineering teams can build a sensible front end for their team – although complete reinvention of the wheel is frowned upon and controlled. Some of the front ends get adopted by other teams, such as the Mondrian UI front end for Buganizer; Maestro is another front end, this time for Google Spreadsheets. The API philosophy is carried from the internal tools to the external products.

Google makes heavy use of the external products it produces, such as Docs, Spreadsheets and Analytics. (See: dog food, the eating thereof.) This also allows the internal testing of pre-release and just-released products. Google engineers are slightly allergic to GANTT charts, but you can support those by writing an extension to Spreadsheets. There is a spreadsheet tool called Smartsheet that has been approved for internal use but is not widely used; scripting over existing tools is far more common.

And now we move onto programming languages. Or should I say that we Go onto programming languages. There are four major languages in use at Google: Java, C++, Python, and Go (the Google language). Alan’s a big fan of Go and recommends it for distributed and concurrent systems. (I’ve used it a bit and it’s quite interesting but I haven’t written enough in it to make much comment.) There are some custom languages as well, including scripting languages for production tasks. Teams can use their own language of choice, although it’s unlikely to be Ruby on Rails anytime soon.

Is letting engineers pick their language the key to Google’s success? Is it the common platform? The common tools? No. The platforms, tools and languages won’t matter if your organisational culture isn’t right. If the soil is toxic, the tree won’t grow. Google is in a highly competitive space and have to be continually innovating and improving or users will go elsewhere. The drive for innovation is the need to keep the users insanely happy. Getting the organisational settings right is essential: how do you foster innovation?

Well, how do they do it? First and foremost, it’s about producing a culture of innovation. The wrong culture and you won’t get interesting or exciting software. Hiring matters a LOT. Try to hire people who are smarter than you, are passionate and are quick learners – look for this when you’re interviewing. Senior people at Google need to have technical skills, yes, but they also have to be a cultural fit. Will this person be a great addition to the team? (Culture fit is actually something they assess for – it’s on the form.) Passion is essential, and not just for software: if people are passionate about one thing, you’d expect that passion to flow over into other things in their lives.

Second ingredient: instead of managing, you’re unmanaging. This is why Alan is able to talk today – he’s hired great people and can leave the office without things falling apart. You also need to hire technical managers: people who have forgotten their technical skills won’t work at Google, because managers have to provide a sounding board and be able to mentor members of the team.

The third aspect is being open to sharing information: share, share, share. The free exchange of information is essential in a collaborative environment, based on trust.

“Info sharing is power, info hoarding is impotence.” (Alan Noble)

The fourth thing is to recognise merit. It’s cool to do geeky things. Success is celebrated generously.

Finally, it’s important to empower teams to be agile and to break big projects into smaller, more manageable things. The unit of work at Google is about 3-4 engineers. Have 8 engineers? That’s two 4 person teams. What about meetings? Is face-to-face still important? Yes, despite all the tech. (I spoke about this recently.) Having a rich conversation is very high bandwidth and when you’re in the same room, body language will tell you if things aren’t going across. The 15 minute “stand up” meeting is a common form of meeting: stand up in the workplace and have a quick discussion, then break. There’s also often a more regular weekly meeting which is held in a “fun” space. Google wants you to be within 150m of coffee, food and fuel at all times to allow you to get what you need to keep going, so weekly meetings will be there. There’s also the project kick-off meeting, where the whole team of 20-30 will come together in order to break it down to autonomous smaller units.

People matter and people drive innovation. Googlers are supposed to adapt to fast-paced change and are encouraged to pursue their passions: taking their interests and applying them in new ways to get products that may excite other people. Another thing that happens is TGIF – which is now on Thursday, rather than Friday, where there is an open Q and A session with the senior people at Google. But you also need strong principles underlying all of this people power.

The common guiding principles that bring it all together need to be well understood and communicated. Here’s Alan’s list of guiding principles (the number varies by speaker, apparently.)

  1. Focus on the user. This keeps you honest and provides you with a source of innovation. Users may not be able to articulate what they want but that, of course, is one of our jobs: working out what the user actually wants and how many users want a particular feature.
  2. Start with problems. Problems are a fantastic source of innovation. We want to be solving real, important and big problems. There are problems everywhere!
  3. Experiment Often. Try things, try a lot of things, work out what works, detect your failures and don’t expose your users to any more failures than you have to.
  4. Fail Fast. You need to be able to tolerate failure: it’s the flip side of experimenting often. (A brief mention of Google Wave, *sniff*)
  5. Pay Attention to the Data. Listen to the data to find out what is and what is not working. Don’t survey, don’t hire marketing people; look at the data to find out what people are actually doing!
  6. Passion. Let engineers find their passion – people are always more productive when they can follow their passion. Google engineers can self-initiate a transfer to encourage them to follow their passion, and there is always the famous Google 20% time.
  7. Dogfood. Eat your own dogfood! Testing your own product in house and making sure that you want to use it is an essential step.

The Google approach to failure has benefited from the Silicon Valley origins of the company, with the approach to entrepreneurship and failure tolerance. Being associated with a failed start-up is not a bad thing: failure doesn’t have to be permanent. As long as you didn’t lie, cheat or steal, then you’ve gained experience. It’s not making the mistake, it’s how you recover from it and how you carry yourself through that process (hence being ethical even as the company is winding down).

To wind it all up, Google doesn’t have standard SE processes across the company: they focus on getting their organisation culture right with common principles that foster innovation. People want to do exciting things and follow new ideas so every team is empowered to make their own choices, select their own tools and processes. Launch, iterate, get it out, and don’t hold it back. Grow your software like a tree rather than dropping a monolith. Did it work? No? Wind it back. Yes? Build on it! Take the big bets sometimes because some big problems need big leaps forward: the moon shot is a part of the Google culture.

Embrace failure, learn from your mistakes and then move on.


CSEDU Wrap-up (#csedu14 #AdelEd)

Well, it’s the day after CSEDU and the remaining attendees are all checking out and leaving. All that remains now is lunch (which is not a minor thing in Spain) and heading to the airport. In this increasingly on-line age, the question is often asked “Why do you still go to conferences?”, meaning “Why do you still transport yourself to conferences rather than participating on-line?” It’s a pretty simple reason and it comes down to how well we can be somewhere using telepresence or electronic representations of ourselves in other places. Over the time of this conference, I’ve listened to a number of talks and spoken to a number of people, as you can see from my blog and (if you could see my wallet) the number of business cards I’ve collected. However, some of the most fruitful discussions took place over simple human rituals such as coffee, lunch, drinks and dinner.

Some might think that a travelling academic’s life is some non-stop whirl of dining and fun but what is actually happening is a pretty constant round of discussion, academic argument and networking. When we are on the road, we are generally doing a fair portion of our job back home and are going to talks and, in between all of this, we are taking advantage of being surrounded by like-minded people to run into each other and build up our knowledge networks, in the hope of being able to do more and to be able to talk with people who understand what we’re doing. Right now, telepresence can let me view lectures and even participate to an extent, but it cannot give me those accidental meetings with people where we can chat for 5 minutes and work out if we should be trying to work together.
Let’s face it, if we could efficiently send all of the signals that we need to know if another human is someone we want to work with or associate with, we’d have solved this problem for computer dating and, as I understand it, people are still meeting for dinners and lunch to see if what was represented on line had any basis in reality. (I don’t know about modern computer dating – I’ve been married for over 15 years – so please correct me if I’m wrong.)

Of course, for dating, most people choose to associate with someone who is already in their geographical locale but academics don’t have that luxury because we don’t tend to have incredible concentrations of similar universities and research groups in one place (although some concentrations do exist), and a conference provides us with a valuable opportunity to walk our raw ideas out into company and see what happens. There is also a lot to be said for the “defusing” nature of a face-to-face meeting, when e-mail can be so abrupt and video conferencing can provide quite jagged and harsh interactions, made more difficult by network issues and timezone problems. That is another good reason for conferences: everyone is away and everyone is in the same timezone. The worst conference to attend is one that is in your home town, because you will probably not take time off work, you’ll duck into the conference when you have a chance – and this reduces the chances of all of the good things we’ve talked about. It’s because you’re separated from your routine that you can have dinner with academic strangers or hang around after coffee to spend the time to talk about academic ideas. Being in the same timezone also makes it a lot easier, as multi-continent video conferences often select times based on what is least awful for everyone, so Americans are up too early, Australians are up too late, and the Europeans are missing their lunches. (Again, don’t mess with lunch.)

It’s funny that the longer I stay an academic, the harder I work at conferences but it’s such a good type of hard work. It’s productive, it’s exciting, it’s engaging and it allows us to all make more progress together. I’ve met some great people here and run into some friends, both of which make me very happy. It’s almost time to jump back on a plane and head home (where I turn around in less than 14 hours to go and run another conference) but I feel that we’ve done some good things here and that will lead to better things in the future.

A place for meeting people and taking the time for academic thought.

It’s been a blast, CSEDU, let’s do it again. ¡Buenos días!


CSEDU, Day 3, Final Keynote, “Digital Age Learning – The Changing Face of Online Education”, (#csedu14 #AdelED @timbuckteeth)

Now, I should warn you all that I’ve been spending time with Steve Wheeler (@timbuckteeth) and we agree on many things, so I’m either going to be in furious agreement with him or I will be in shock because he suddenly reveals himself to be a stern traditionalist who thinks blended learning is putting a textbook in the Magimix. Only time will tell, dear reader, so let’s crack on, shall we? Steve is from the Plymouth Institute of Education, conveniently located in Plymouth University, and is a ferocious blogger and tweeter (see his handle above).

Erik introduced Steve by saying that Steve didn’t need much introduction and noted that Steve was probably one of the reasons that we had so many people here on the last day! (This is probably true, the afternoon on the last day of a European conference is normally notable due to the almost negative number of participants.)

“When you’re a distance educator, the back of the classroom can be thousands of miles away” (Steve Wheeler)

Steve started with the idea that on-line learning is changing and that his presentation was going to be based on the idea that the future will be richly social and intensely personal. Paradoxical? Possibly but let’s find out. Oh, look, an Einstein quote – we should have had Einstein bingo cards. It’s a good one and it came with an anecdote (which was a little Upstairs Downstairs) so I shall reproduce it here.

“I never teach my students. I only provide the conditions in which they can learn.” Albert Einstein

There are two types of learning: shallow (rote) learning, which we see when cramming, where understanding is negligible at best; and fluid intelligence, the deeper kind of learning that draws on your previous learning and your knowledge structures. But what about strategic learning, where we switch quickly between the two? Poor pedagogy can suppress these transitions and lock people into one spot.

There are three approaches here: knowledge (knowing that, which is declarative), wisdom (knowing how, which is procedural) and transformation (knowing why, which is critical). I’ve written whole papers about the missing critical layer so I’m very happy to see Steve saying that the critical layer is the one that we often do the worst with. This ties back into Bloom’s taxonomy, where knowledge is cognitive, wisdom is application, and transformation is analysis and evaluation. Learning can be messy but it’s transformative and it can be intrinsically hard to define. Learning is many things – sorry, Steve, not going to summarise that whole sentence.

We want to move through to the transformational stage of learning.

What was the first attempt at distance learning? St Paul’s name was tossed out, as was Moses’, but St Paul’s epistles were noted as the first correspondence course offered. (What was the assessment model, I wonder, for Epistola?) More seriously, that was highly didactic and one-way; it was Pitman who established a two-way correspondence course that was both laborious and asynchronous, but it worked. Then we had television and, in 1968, the Stanford Instructional Television Network popped up. In 1970, Steve saw an example of video conferencing that had previously been confined to Star Trek. I was around in the early 70s and we were all agog about the potential of the future – where is my moon base, by the way? But the tools were big and bulky – old video cameras were incredibly large and ridiculously short-lived in their battery life… but it worked! Then people saw uses for the relationship between this new technology and pedagogy. Reel-to-reel, copiers, projectors, videos: all of these technologies were effective for their teaching uses at the time.

Of course, we moved on to computer technology including the BBC Model B (hooray!) and the reliable but hellishly noisy dot matrix printer. The learning from these systems was very instructional, text-based and very simplistic in its multiple-choice question approach. Highly behaviouristic, but this is how things were done and the teaching approach matched the technology. Now, of course, we’ve moved to tablet-based, on-line gaming environments with non-touch technologies such as Kinect, but the principle remains the same: over the years we’ve adapted technology to pedagogy.

It’s only now, after Sir Tim Berners-Lee gave us the World Wide Web, that on-line learning is available to everybody, where before it was sort-of available but nowhere near as multiplicable. Now, for our sins, we have Learning Management Systems, the most mixed of blessings, and we still have to ask what we are using them for and how we are using them. Is our pedagogy changing? Is our connection with our students changing? Illich (1972) criticised educational funnels with their one-directional approach, and instead advocated motivated educational webs that allow the transformation of each moment of living into one of learning, sharing and caring.

What about the Personal Learning Environment (PLE)? This is the interaction of tools such as blogs, Twitter and e-portfolios, plus the people we interact with, and then the other tools that we use – and this would be strongly personal to an individual. If you’ve ever tried to use your partner’s iPad, you know how quickly personalisation changes your perception of a tool! Wheeler and Malik (2010) discuss the PLE as comprising the personal learning network and personal web tools, with an eye on more than the classroom, as part of life-long learning. Steve notes (as Stephen Heppell did) that you may as well get students to use their PLEs in the open because they’ll be using them covertly otherwise: the dreaded phone under the table becomes a learning tool when it’s on top of the table. Steve discussed the embedded MOOC that Hugh discussed yesterday, to see how on-line and f2f students can benefit from each other.

In the late ’80s, the future was “multi-media” and everything had every other medium jammed into it (and they don’t like it up ’em), and then the future was going to converge on the web. Internet take-up is increasing: social, political and economic systems change incrementally, but technology changes exponentially. Steve thinks the future is smart, mobile and pervasive, due to the miniaturisation and capability of new devices. If you have WiFi then you have the world.

“Change is not linear, it’s exponential.” Kurzweil

Looking at the data, there are now more people in the world with mobile phones than people without, although some people have more than one. (Someone in the audience had four, perhaps he was a Telco?) Of course, one reason for this is that mobile phones replace infrastructure: there are entire African banks that run over mobile networks, for example. Given that we always have a computer in our pocket, how can we promote learning everywhere? We are using these all the time, everywhere, and this changes what we can do because we can mix leisure and learning without having to move to fixed spaces.

Steve then displayed the Intel infographic “What Happens in an Internet Minute“, and it’s scary to see how much paper is lagging these days. What will the future look like? What will future learning look like? If we think exponentially then things are changing fast. There is so much content being generated, there must be something that we can use for our teaching and learning (DOGE photos and Justin Bieber videos excepted). And, given that 70% of what we learn is informal and outside of the institution, this is great! But we need to be able to capture this, and that means that we should build a personal learning network, because trying to drink down all that content by yourself exceeds anyone’s ability! By building a network, we build a collection of filters and aggregators that help us bring sense out of the chaos. Given that nobody can learn everything, we can store our knowledge in other people and know where to go when we need that knowledge. This is a plank of connectivist theory, leading into paragogy, where we learn from each other. It also leads us to distributed cognition, where we think across the group (a hive mind, if you will) but, more simply, you learn from one person, then another, and it becomes highly social.

Steve showed us a video on “How have you used your own technology to enhance your learning“, which you can watch on YouTube. Lucky old 21st Century you! This is a recording of some of Steve’s students answering the question and sharing their personal learning networks with us. There’s an interesting range of ideas and technologies in use so it’s well worth a look. Steve runs a Twitter wall in his classroom and advertises the hashtag for a given session so questions, challenges and comments go out on to that board, which allows Steve to see them but also retweet them to his followers, to allow the exponential explosion that we would want in a personal learning network. Students succeed when they harness the tools they need to solve their problems.

Steve showed us a picture of about 10,000 Germans taking pictures of the then-President-Elect Barack Obama because he was speaking in Berlin and it was a historical moment that people wanted to share with other people. This is an example of the ubiquitous connection that we now enjoy and, in many ways, take for granted. It is a new way of thinking and it causes a lot of concern for people who want to stick to previous methods. (There will come a time when a paper exam for memorised definitions will make no sense because people have computers connected to their eyes – so let’s look at asking questions in ways that always require people to actually use their brains, shall we?) Steve then showed us a picture of students “taking notes” by taking pictures of the whiteboard: something that we are all very accustomed to now. Yes, some teachers are bothered by this but why? What is wrong with instantaneous capture versus turning a student into a slow organic photocopying machine? Let’s go to a Papert quote!

“I am convinced that the best learning takes place when the learner takes charge,” Seymour Papert

“We learn by doing“, Piaget, 1960

“We learn by making“, Papert, 1960.

Steve alluded to constructionist theory and pointed out how much we have to learn about learning by making. He, like many of us, doesn’t subscribe to generational or digital native/immigrant theory. It’s an easy way of thinking but it really gets in the way, especially when it makes teachers fearful of weighing in because they feel that their students know more than they do. Yes, they might, but there is no grand generational guarantee. It’s not about your age, it’s about your context. It’s about how we use the technology, it’s not about who we are and some immutable characteristics that define us as in or out. (WTF does not, for the record, mean “Welcome to Facebook”. Sorry, people.) There will be cultural differences but we are, very much, all in this together.

Steve showed us a second video, on the Future of Publishing, which you can watch again! Some of you will find it confronting that Gaga beats Gandhi but cultures change and evolve and you need to watch to the end of the video because it’s really rather clever. Don’t stop halfway through! As Steve notes, it’s about perception and, as I’ve noted before, I’m pretty sure that people put people into the categories that they were already thinking about – it’s one of the reasons I have such a strong interest in grounded theory. If you have a “Young bad” idea in your head then everything you see will tend to confirm this. Perception and preconception can heavily interfere with each other but using perception, and being open to change, is almost always a better idea.

Steve talked about Csíkszentmihályi’s Flow, the zone you’re in when the level of challenge roughly matches your level of skill and you balance anxiety and boredom. Then, for maximum Nick points, he got onto Vygotsky’s Zone of Proximal Development, where we build knowledge better and make leaps when we do it with other people, using the knowledgeable other to scaffold the learning. Steve also talked about mashing them up, and I draw the reader back to something I wrote on this a while ago on Repenning’s work.

We can do a lot of things with computers but we don’t have to do all the things that we used to do and slavishly translate them across to the new platform. Waters (2011) talks about new learners: learners who are more self-directed and able to make more and hence learn more.

There are many digital literacies: social networking, privacy management, identity management, creating content, organising content, reusing and repurposing, filtering and selection, self presentation, transliteracy (using any platform to get your ideas across). We build skills, that become competencies, that become literacies and, finally, potentially become masteries.

Steve finished by discussing the transportability of skills, using driving in the UK and the US as an example. The skill is pretty much the same but safe driving requires a new literacy when you make a large contextual change. Digital environments can be alien environments so you need to be able to take the skills that you have now and be able to put them into the new contexts. How do you know that THIS IS SHOUTING? It’s a digital literacy.

Steve presented a quote from Socrates, no, Socrates, no, Plato:

“Knowledge that is acquired under compulsion obtains no hold on the mind.”

and used the rather delightful neologism “Darwikianism” to illustrate evolving improvement on on-line materials over time. (And illustrated it with humour and pictures.) Great talk with a lot of content! Now I have to go and work on my personal learning network!

This is not actually Socrates. Sorry!


CSEDU, Day 3, “Through the Lens of Third Space Theory: Possibilities For Research Methodologies in Educational Technologies”, (#csedu14 #AdelEd)

This talk was presented by Kathy Jordan and Jennifer Elsden-Clifton, both from RMIT University. They discussed educational technologies through a framework that they have borrowed from another field: third space theory. This allows us to describe the complex roles that teachers and students take on in their activities.

HALlo.

A lot of educational research is focused on the use of technology and can be rather theory light (no arguments from me), leading to technological evangelism that is highly determinist. (I’m assuming that the speakers mean technological determinism, which is the belief that it’s a society’s technology that drives its culture and social structures, after Veblen.) The MOOC argument was discussed again. Today, the speakers were planning to offer an alternative way to think about technology and use of technology. As always, don’t just plunk technology down in the classroom and expect it to achieve your learning and teaching goals. Old is not always bad and new is not always good, in effect. (I often say this and then present the reverse as well. Binary thinking is for circuits.)

“The real voyage of discovery consists not in seeing new landscapes, but in having new eyes.” (Proust, cited in Canfield, Hanson and Zlkman, 2002)

“With whose eyes were my eyes crafted?” (Castor, 1991)

Basically, we bring ourselves to the landscape and have to think about why we see what we’re seeing. The new methodology proposed moves away from a simplistic, techno-centric approach and towards Third Space Theory. Third Space Theory is used to explore and understand the spaces in between two or more discourses, conceptualisations or binaries (Bhabha, 1994). Thirdspace is thus a “come together” space (Soja, 1996) that combines the first and second spaces and then enmeshes the binaries that characterise these spaces. This also reduces the implicit privileging of one conceptual space over another.

Conceptualisations of the third space include bridges, navigational spaces and transformative spaces. Interestingly, from an editorial perspective, I find that the binary notion of MOOC good/MOOC bad, to which we often devolve, is one of the key problems in discussing MOOCs because it often forces people into responding to a straw man, and I think that this work on Thirdspaces is quite strong without having to refer to a perceived necessity for MOOCs.

Thirdspace theory is used across a variety of disciplines at the moment. Firstspace in our context could be face-to-face learning, the second space is “on-line learning” and the speakers argue that this binary classification is inherently divisive. Well, yes it is, but this assumes that you are not perceiving these as naturally overlapping when we consider blended learning, which we’ve really had as a concept since 1999. There are definitely problems when people move through f2f and on-line as if they are exclusive binary aspects of some educational Janus, but I wonder how much of that is lack of experience and exposure rather than a strict philosophical structure – there is no doubt that thinking about these things as a continuum is beneficial and, if Thirdspace theory brings people to it, then hooray!

(As Hugh noted yesterday, MOOC got people interested in on-line learning, which made it worth running MOOCs. Then, hooray!)

A lot of the discussion of technology in education is a collection of Shibboleths and “top of the head” solutions that have little maturity or strategy behind them, so a new philosophical approach to this is most definitely welcome and I need to read up more on Thirdspace, obviously.

The speakers provided some examples, including some learning fusion around Blackboard Collaborate and the perceived inability of pre-service teachers to move personal technology literacy into their new workplace, due to fear. So, in the latter case, Thirdspace allowed an analysis of the tensions involved and helped the pre-service teachers to negotiate the “unfamiliar terrain” (Bhabha, 1992) of sanctioned technology frameworks in schools. (An interesting example was having to hand-write an e-mail first before being allowed to enter it electronically – which is an extreme sanctioning of the digital space.)

I like the idea of the lens that Thirdspace provides but wonder whether we are seeing the liminal state that we would normally associate with a threshold concept. Rather than a binary model, we are seeing a layered model, where the in-between is neither stable nor clearly understood, as it is heavily personalised. There is, of course, no guarantee that having a skill in one area makes it transferable to another, because of the inherently contextual nature of skills (hang on, have we gone neo-Piagetian?!).

Anything that removes the potential classification of any category as a lower value or undesirable other is a highly desirable thing for me. The notion that transitional states, however we define them, are a necessary space that occurs between two extremes, whether they are dependent or opposing concepts, strongly reduces the perceived privilege of the certainty that so many people confuse with being knowledgeable and informed. Our students, delightful dualists that they are, often seek black/white dichotomies and it is part of our job to teach them that grey is not only a colour but an acceptable colour.

I think that labelling the MOOC discussion as a techno-determinist and shallow argument doesn’t really reflect the maturity of the discussion in the contemporary MOOC space and is a bit of a dismissive binary, if I can be so bold. We did discuss this in the questions and the speakers agreed that the discussion of MOOC has matured and was definitely in advance of the rather binary and outmoded description presented in the first keynote that I railed against. Yes, MOOCs have been presented by evangelists and profit makers as something but the educational community has done a lot of work to refine this and very few of the practitioners I know who are still involved in MOOC are what I would call techno-determinists. Techno-utopians, maybe, techno-optimists, often, but techno-skeptics and serious, serious educational theorists who are also techno-optional, just as often.

The other potential of Third Space Theory is that it “provides a framework for destabilisation” and moving beyond past patterns rather than relying on old binary conceptualisations of new/old, good/bad, updated/outmoded. Projecting any single method onto everything is always challenging and I suspect it’s a little bit of a hay hominid, but the resulting questions clarified that the potential of Thirdspace is in being capable of deliberately rejecting staid and binary thinking, without introducing a new mode of privilege on to the new Thirdspace model. I’m not sure that I agree with all of the points here but I certainly have a lot to think about.


CSEDU, Day 2, Invited Talk, “How are MOOCs Disrupting the Educational Landscape?”, (#CSEDU14 #AdelEd)

I’ve already spent some time with Professor Hugh Davis, from Southampton, and we’ve had a number of discussions already around some of the matters we’re discussing today, including the issue when you make your slides available before a talk and people react to the content of the slides without having the context of the talk! (This is a much longer post for another time.) Hugh’s slides are available at http://www.slideshare.net/hcd99.

As Hugh noted, this is a very timely topic but he’s planning to go through the slides at speed so I may not be able to capture all of it. He tweeted his slides earlier, as I noted, and his comment that he was going to be debunking things earned him a minor firestorm. But, to summarise, his answer to the questions is “not really, probably” but we’ll come back to this. For those who don’t know, Southampton is about 25,000 students, Russell Group and Top 20 in the UK, with a focus on engineering and oceanography.

Back in 2012, the VC came back infused with the desire to put together a MOOC (apparently, Australians talked them into it – sorry, Hugh) and in December, 2012, Hugh was called in and asked to do MOOCs. Those who are keeping track will know that there was a lot of uncertainty about MOOCs in 2012 (and there still is) so the meeting called for staff to talk about this was packed – in a very big room. But this reflected excitement on the part of people – which waving around “giant wodges” of money to do blended learning had failed to engender, interestingly enough. Suddenly, people wanted to do blended learning, as long as you called it a MOOC. FutureLearn was produced and things went from there. (FutureLearn now has a lot of courses in it but I’ve mentioned this before. Interestingly, Monash is in this group so it’s not just a UK thing. Nice one, Monash!)

In this talk, Hugh’s planning to intro MOOCs, discuss the criticism, look at Higher Ed, ask why we are investing in MOOCs, what we can get out of it and then review the criticisms again. Hugh then defined what the term MOOC means: he defined it as a 10,000+, free and open registration, on-line course, where a course runs at a given time with a given cohort, without any guarantee of accreditation. (We may argue about this last bit later on.) MOOCs are getting shorter – with 4-6 weeks being the average for a MOOC, mostly due to fears of audience attrition over time.

The dreaded cMOOC/xMOOC timeline popped up from Florida Institute of Technology’s History of MOOCs:

[Image: the cMOOC/xMOOC timeline]

and then we went into the discussion of the stepped xMOOC, with instructor-led and a well-defined and assessable journey, and the connectivist cMOOC, where the network holds the knowledge and the learning comes from connections. Can we really actually truly separate MOOCs into such distinct categories? A lot of xMOOC forums show cMOOC characteristics and you have to wonder how much structure you can add to a cMOOC without it getting “x”-y. So what can we say about the definition of courses? How do we separate courses you can do any time from the cohort structure of the MOOC? The synchronicity of human collision is a very connectivisty idea which is embedded implicitly in every xMOOC because of the cohort.

What do you share? Content or the whole course? In MOOCs, the whole experience is available to you rather than just bits and pieces. And students tend to dip in and out when they can, rather than just eating what is doled out, which suggests that they are engaging. There are a lot of providers, who I won’t list here, but many of them are doing pretty much the same thing.

What makes a MOOC? Short videos, on-line papers, on-line activities, links to external resources, discussions and off-platform activity – but we can no longer depend upon students being physical campus students and thus we can’t guarantee that they share our (often privileged) access to resources such as published journals. So Southampton often offer précis of things that aren’t publicly available. Off-platform is an issue for people who are purely on-line.

If you have 13,000 people you can’t really offer to mark all their essays so assessment has to depend upon the self-motivated students and they have to want to understand what is going on – self evaluation and peer review have to be used. This is great, according to Hugh, because we will have a great opportunity to find out more about peer review than we ever have before.

What are the criticisms? Well, they’re demographically pants – most of the students are UK (77%) and then a long way down US (2%), with some minor representation from everywhere else. This isn’t isolated to this MOOC. 70% of MOOC users come from the home country, regardless of where it’s run. Of course, we also know that the people who do MOOCs also tend to have degrees – roughly 70% from the MOOCS@Edinburgh2013 Report #1. These are serial learners (philomaths) who just love to learn things but don’t necessarily have the time or inclination (or resources) to go back to Uni. But for those who register, many don’t do anything, and those who do drop out at about 20% a week – more weeks, more drop-out. Why didn’t people continue? We’ll talk about this later. (See http://moocmoocher.wordpress.com) But is drop out a bad thing? We’ll come back to this.
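That “20% a week” figure compounds quickly. A quick back-of-envelope sketch (my own, not from Hugh's talk) shows what a constant weekly attrition rate does to a cohort; the starting size of 13,000 is the essay-marking example from earlier, used here purely for illustration:

```python
# Illustrative only: assumes a constant 20% weekly drop-out rate,
# which real MOOC attrition curves only roughly follow.
def remaining(cohort, weekly_attrition, weeks):
    """Return the approximate number of active learners after `weeks`."""
    for _ in range(weeks):
        cohort -= cohort * weekly_attrition  # lose a fixed fraction each week
    return round(cohort)

start = 13_000
for week in range(7):
    print(f"week {week}: {remaining(start, 0.20, week)} learners")
# After six weeks only about a quarter of the original cohort remains,
# which is one reason MOOCs have crept down to 4-6 weeks in length.
```

Under this assumption, a six-week course finishes with roughly 0.8^6 ≈ 26% of its starting cohort, before even counting the registrants who never show up at all.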

Then we have the pedagogy, where we attempt to put learning design into our structure in order to achieve learning outcomes – but this isn’t leading-edge pedagogy and there is no real interaction between educators and learners. There are many discussions, and they happen in volume, but discussion involves only about 10% of the community, with 1% making the leading and original contributions. Still, 1% of 10-100,000 can be a big number compared to a standard classroom.

What about the current Higher Ed context – let’s look at “The Avalanche Report“. Basically, the education business is doomed!!! DOOOMED, I tell you! which is hardly surprising for a report that mostly originates from a publishing house that wants to be a financially successful disruptor. Our business model is going to collapse! We are going to have our Napster moment! Cats lying down with dogs! In the HE context, fees are going up faster than the value of a degree (across most of the developed world, apparently). There is an increased demand for flexibility of study, especially for professional development, in the time that they have. The alternative educational providers are also cashing up and growing. With all of this in mind, on-line education should be a huge growing market and this is what the Avalanche report uses to argue that the old model is doomed. To survive, Unis will have to either globalise or specialise – no room in the middle. MOOCs appear to be the vanguard of the on-line program revolution, which explains why there is so much focus.

Is this the end of the campus? It’s not the end of the pithy slogan, that’s for sure. So let’s look at business models. How do we make money on MOOCs? Freemium, where there are free bits and value-added bits. The value-adds can be statements of achievement or tutoring. There are also sponsored MOOCs, where someone pays us to make a MOOC (for their purposes) or someone pays us to make a MOOC they want (that we can then use elsewhere). Of course there’s also just the old “having access to student data”, which is a very tasty dish for some providers.

What does this mean to Southampton? Well, it’s a kind of branding and advertising for Southampton to extend their reputation. It might also generate new markets, bringing people in via Informal Learning, moving to Non-Formal Learning, then up to the Modules of Formal Learning and then doing whole programmes under more Formal learning. Hugh thinks this is optimistic, not least because not many people have commodified their product into individual modules for starters. Hugh thinks it’s about 60,000 pounds to make a MOOC, which is a lot of money, and so you need a good business model to justify dropping this wad of cash. But you can get 60K back from enough people with a small fee. Maybe on-line learning is another way to get students than the traditional UK “boarding school” degrees. But the biggest thing is when people accept on-line certification, as this is when the product becomes valuable to the people who want the credentials. Dear to my heart is, of course, that this also assists in the democratisation of education – which is a fantastic thing.

What can we gain from MOOCs? Well, we can have a chunk of a running course for face-to-face students that runs as a MOOC and the paying students have benefited from interacting with the “free attendees” on the MOOC but we have managed to derive value from it. It also allows us to test things quickly and at scale, for rapid assessment of material quality and revision – it’s hard not to see the win-win here. This automatically drives the quality up as it’s for all of your customers, not just the scraps that you can feed to people who can’t afford to pay for it. Again, hooray for democratisation.

Is this the End of the Lecture? Possibly, especially as we can use the MOOC for content and flip to use the face-to-face for much more valuable things.

There are on-line degrees and there is a lot of money floating around looking for brands that will go on-line (and by brand, we mean the University of X). Venture capitalists, publishers and start-ups are sniffing around on-line, so there’s a lot of temptation out there and a good brand will mean a lot to the right market. What about fusing this and articulating the degree programme, combining F2F modules, on-line, MOOC, and other aspects?

Ah, the Georgia Tech On-line Masters in Computer Science has been mentioned. This was going to be a full MOOC with free and paying but it’s not fully open, for reasons that I need to put into another post. So it’s called a MOOC but it’s really an on-line course. You may or may not care about this – I do, but I’m in agreement with Hugh.

The other thing about MOOC is that we are looking at big, big data sets where these massive cohorts can be used to study educational approaches and what happens when we change learning and assessment at the big scale.

So let’s address the criticisms:

  1. Pedagogically Simplistic! Really, as simple as a lecture? Is it worse? No, not really, and we have space to innovate!
  2. No support and feedback! There could be, we’d just have to pay for it.
  3. Poor completion rates! Retention is not the aim, satisfaction is. We are not dealing with paying students.
  4. No accreditation! There could be but, again, you’d have to pay for someone to mark and accredit.
  5. This is going to kill Universities! Hugh doesn’t think so but we’ll have to get a bit nimble. So only those who are not agile and responsive to new business models may have problems – and we may have to do some unbundling.

Who is actually doing MOOCs? The life-long learner crowd (25-65, 50/50 M/F and nearly always holding a degree). People who are after a skill (PD and CPD). Those with poor access to Higher Education, unsurprisingly. There’s also a tiny fourth cohort of those who are dipping a toe into Uni, so small as to be insignificant. (The statistics source was questioned, somewhat abruptly, in the middle of Hugh’s flow, so you should refer to the Edinburgh report.)

The patterns of engagement were identified as auditing, completing and sampling, from the Coursera “Emerging Student Patterns in Open-Enrollment MOOCs”.

To finish up, MOOCs can give us more choice and more flexibility. Hugh’s happy because people want to do on-line learning and this helps to develop capacity to develop high-quality on-line courses. This does lead to challenges for institutional strategy: changing beliefs, changing curriculum design, working with the right academic staff (and who pays them), growing teams of learning designers and multimedia producers, legal matters, speed and agility, budget and marketing. These are commercial operations so you have a lot of commercial issues to worry about! (For our approach, going Creative Commons was one of the best things we ever did.)

Is it the end of the campus? … No, not really, Hugh thinks that the campus will keep going and there’ll just be more on-line learning. You don’t stop going to see good music because you’ve got a recording, for example.

And now for the conclusions! MOOCs are a great marketing device and have a good reach for people who were out of reach before. We can take high-quality content and re-embed it back into blended learning, use it to drive change in teaching practice, gather some big data, and build capacity for on-line learning.

This may be the vanguard of on-line disruption but, if we’re ready for it, we can live with it!

Well, that was a great talk but goodness, does Hugh speak quickly! Have a look at his slides in the context of this because I think he’s balanced an optimistic view of the benefits with a sufficiently cynical eye on the weasels who would have us do this for their own purposes.