Is an IDE an E3? Maybe an E2?

Earlier, I split the evaluation resources of a course into:

  • E1 (the lecturer and course designer),
  • E2 (human work that can be based on rubrics, including peer assessment and casual markers),
  • E3 (complicated automated evaluation mechanisms),
  • E4 (simple automated evaluation mechanisms, often for acceptance testing).

Everyone tends to understand E1 and E2, because the culture of Prof+TA is widespread, as is the concept of peer assessment. In a computing course, we can define E3 as complex marking scripts that perform amazing actions in response to input (or even carry out formal analysis, if we're being really keen), with E4 as simple file checks, program compilation and dumb scripts that jam in a set of data and see what comes out.
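To make E4 concrete, here's a tiny sketch of the sort of "dumb script" I mean. It's in Python rather than anything we'd actually deploy, and the toy submission, the input and the expected output are all invented for illustration: run the student's program, jam in a set of data, and diff what comes out.

```python
import pathlib
import subprocess
import sys
import tempfile

def e4_check(source_path, test_input, expected_output):
    """E4-style acceptance test: run the submission with a fixed
    input and compare its output against what we expect."""
    run = subprocess.run([sys.executable, source_path],
                         input=test_input, capture_output=True,
                         text=True, timeout=5)
    if run.returncode != 0:
        return "FAIL: program crashed"
    if run.stdout == expected_output:
        return "PASS"
    return f"FAIL: expected {expected_output!r}, got {run.stdout!r}"

# A toy 'submission' that reads two numbers and prints their sum.
submission = pathlib.Path(tempfile.mkdtemp()) / "submission.py"
submission.write_text("a, b = map(int, input().split())\nprint(a + b)\n")

print(e4_check(str(submission), "3 4\n", "7\n"))  # PASS
```

Nothing clever is happening here, which is exactly the point: E4 is cheap to write and cheap to run.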

But let’s get back to my first year, first exposure, programming class. What I want is hands-on, concrete, active participation and constructive activity and lots of it. To support that, I want the best and most immediate feedback I can provide. Now I can try to fill a room with tutors, or do a lot of peer work, but there will come times when I want to provide some sort of automated feedback.

Given how inexperienced these students are, it could be quite a lot to expect them to get their code together, submit it to a separate evaluation system, and then interpret the results. (Remember I noted earlier how code tracing correlates with code ability.)

Thus, the best way to get that automated feedback is probably working with the student in place. And that brings us to the Integrated Development Environment (IDE). An IDE is an application that provides facilities to computer programmers and helps them to develop software. They can be very complicated and rich (Eclipse), simple (Processing) or aimed at pedagogical support (Scratch, BlueJ, Greenfoot et al) but they are usually made up of a place in which you can assemble code (typing or dragging) and a set of buttons or tools to make things happen. These are usually quite abstract for early programmers, built on notional machines rather than requiring a detailed knowledge of hardware.


The Processing IDE. Type in one box. Hit play. Rectangle appears.

Even simple IDEs will tell you things that provide immediate feedback. We know that these environments are generally well received and have some demonstrated benefits, although I recommend reading Sorva et al.'s "A Review of Generic Program Visualization Systems for Introductory Programming Education" to see the open research questions. In particular, people employing IDEs in teaching often worry about the time needed to teach the environment (as well as the language), about software visualisations, about time on task, about lack of integration, and about the often short lifespan of many of the simpler, pedagogically focused IDEs. Even for well-established systems such as BlueJ, there's always concern over whether the investment of time in learning the tool is going to pay off.

In academia, time is our currency.

But let me make an aesthetic argument for IDEs, based on the feedback that I’ve already put into my beautiful model. We want to maximise feedback in a useful way for early programmers. Early programmers are still learning the language, still learning how to spell words, how to punctuate, and are building up to a grammatical understanding. An IDE can provide immediate feedback as to what the computer ‘thinks’ is going on with the program and this can help the junior programmer make immediate changes. (Some IDEs have graphical representations for object systems but we won’t discuss these any further here as the time to introduce objects is a subject of debate.)

Now there’s a lot of discussion over the readability of computer error messages but let me show you an example. What’s gone wrong in this program?

 

[Screenshot: the Processing IDE flagging an error, with a small red line at the end of the first line of code and the message "missing a semicolon" at the bottom of the window.]

See where that little red line is, just on the end of the first line? Down the bottom there’s a message that says “missing a semicolon”. In the Processing language, almost all lines end with a “;” so that section of code should read:

size(200,200);
rect(0,10,100,100);

Did you get that? That missing-semicolon problem has been an issue for years, because many systems report the semicolon as missing on the next line, due to the way that compilers work. Here, Processing is clearly saying: "Oi! Put a semicolon on the red squiggle."

I’m an old programmer, who currently programs in Java, C++ and Processing, so typing “;” at the end of a line is second nature to me. But it’s an easy mistake for a new programmer to make because, between all of the ( and the ) and the , and the numbers and the size and the rect… what do I do with the “;”?

The Processing IDE is functioning in at least an E4 mode: simple acceptance testing that won’t let anything happen until you fix that particular problem. It’s even giving you feedback as to what’s wrong. Now this isn’t to say that it’s great but it’s certainly better than a student sitting there with her hand up for 20 minutes waiting for a tutor to have the time to come over and say “Oh, you’re missing a semicolon.”

We don’t want shotgun coding, where random fixes and bashed-in attempts are made desperately to solve a problem. We want students to get used to getting feedback on how they’re going and using this to improve what they do.

Because of Processing’s highly visual mode, I think it’s closer to E3 (complex scripting) in many ways because it can tell you if it doesn’t understand what you’re trying to do at all. Beyond just not doing something, it can clearly tell you what’s wrong.

But what if it works and then the student puts something up on the screen, a graphic of some sort and it’s not quite right? Then the student has started to become their own E2, evaluating what has happened in response to the code and using human insight to address the shortfall and make changes. Not as an expert but, with support and encouragement, a developing expertise.

Feedback is good. Immediacy is good. Student involvement is good. Code tracing is linked to coding ability. A well-designed IDE can be simple and engage the student to an extent that is potentially as high as E2, although it won’t be as rich, without using any other human evaluation resources. Even if there is no other benefit, the aesthetic argument is giving us a very strong nudge to adopt an appropriate IDE.

Maybe it’s time to hang up the command line and live in a world where IDEs can help us to get things done faster, support our students better and make our formal human evaluation resources go further.

What do you think?


Four tiers of evaluators

We know that we can, and do, assess different levels of skill and knowledge. We know that we can, and do, often resort to testing memorisation, simple understanding and, sometimes, the application of the knowledge that we teach. We also know that the best evaluation of work tends to come from the teachers who know the most about the course and have the most experience, but we also know that these teachers have many demands on their time.

The principles of good assessment can be argued but we can probably agree upon a set much like this:

  1. Valid, based on the content. We should be evaluating things that we’ve taught.
  2. Reliable, in that our evaluations are consistent and return similar results for different evaluators, that re-evaluating would give the same result, that we’re not unintentionally raising or lowering difficulty.
  3. Fair.
  4. Motivating, in that we know how much influence feedback and encouragement have on students, so we should be maximising the motivation and, we hope, this should drive engagement.
  5. Finally, we want our assessment to be as relevant to us, in terms of being able to use the knowledge gained to improve or modify our courses, as it is to our students. Better things should come from having run this assessment.

Notice that nothing here says "We have to mark or give a grade", yet we can all agree on these principles, and on any scheme that adheres to them, as a good set of characteristics to build upon. Let me label these as the aesthetics of assessment; now let's see if I can make something beautiful. Let me put together my shopping list.

  • Feedback is essential. We can see that. Let’s have lots of feedback and let’s put it in places where it can be the most help.
  • Contextual relevance is essential. We’re going to need good design and work out what we want to evaluate and then make sure we locate our assessment in the right place.
  • We want to encourage students. This means focusing on intrinsics and support, as well as well-articulated pathways to improvement.
  • We want to be fair and honest.
  • We don’t want to overload either the students or ourselves.
  • We want to allow enough time for reliable and fair evaluation of the work.

What are the resources we have?

  • Course syllabus
  • Course timetable
  • The teacher’s available time
  • TA or casual evaluation time, if available
  • Student time (for group work or individual work, including peer review)
  • Rubrics for evaluation
  • Computerised/automated evaluation systems, to varying degrees

Wait, am I suggesting automated marking belongs in a beautiful marking system? Why, yes, I think it has a place, if we are going to look at those things we can measure mechanistically. Checking to see if someone has ticked the right box for a Bloom’s “remembering” level activity? Machine task. Checking to see if an essay has a lot of syntax or grammatical errors? Machine task. But we can build on that. We can use human markers and machine markers, in conjunction, to the best of their strengths and to overcome each other’s weaknesses.

Some cast-iron wheels and gears, connected with a bicycle chain.

We’ve come a long way in terms of machine-based evaluation. It doesn’t have to be steam-driven.

If we think about it, we really have four separate tiers of evaluators to draw upon, who have different levels of ability. These are:

  1. E1: The course designers and subject matter experts who have a deep understanding of the course and could, possibly with training, evaluate work and provide rich feedback.
  2. E2: Human evaluators who have received training or are following a rubric provided by the E1 evaluators. They are still human-level reasoners but are constrained in terms of breadth of interpretation. (It’s worth noting that peer assessment could fit in here, as well.)
  3. E3: High-level machine evaluation of work, which could include structural, sentiment or topic analysis, as well as running complicated acceptance tests that look for specific results, coverage of topics or, in the case of programming tasks, certain output in response to given input. E3 evaluation mechanisms require some work to set up but can evaluate large classes in hours, rather than days.
  4. E4: Low-level machine evaluation, checking for conformity in terms of length of assignment, names, type of work submitted, and plagiarism detection. In the case of programming assignments, E4 would check that the filenames were correct, that the code compiled, and may also run some very basic acceptance tests. E4 evaluation mechanisms should be quick to set up and very quick to execute.

This separation clearly shows us a graded increase of expertise that corresponds to an increase in time spent and, unfortunately, a decrease in time available. E4 evaluation is very easy to set up and carry out but it’s not fantastic for detailed feedback or higher Bloom’s levels. Yet we have an almost infinite amount of this marking time available. E1 markers will (we hope) give the best feedback but they take a long time, and this immediately reduces the amount of time available for other things. How do we handle this and select the best mix?

While we’re thinking about that, let’s see if we are meeting the aesthetics.

  1. Valid? Yes. We’ve looked at our design (we appear to have a design!) and we’ve specifically set up evaluation into different areas while thinking about outcomes, levels and areas that we care about.
  2. Reliable? Looks like it. E3 and E4 are automated and E2 has a defined marking rubric. E1 should also have guidelines but, if we’ve done our work properly in design, the majority of marks, if not all of them, are going to be assigned reliably.
  3. Fair? We’ve got multiple stages of evaluation but we haven’t yet said how we’re going to use this so we don’t have this one yet.
  4. Motivating? Hmm, we have the potential for a lot of feedback but we haven’t said how we’re using that, either. Don’t have this one either.
  5. Relevant to us and the students? No, for the same reasons as 3 and 4: we haven’t yet shown how this can be useful to us.

It looks like we’re half-way there. Tomorrow, we finish the job.

 


Promoting acceptance by understanding people.

Let me start by putting up a picture of some people celebrating!


Wow, that’s a really happy group of people!

My first confession is that the ‘acceptance’ I’m talking about is for academic and traditional fiction publishing. The second confession is that I have attempted to manipulate you into clicking through by using a carefully chosen title and image. This leads into the point I wish to make today: we are a mess of implicit and explicit cognitive biases, and to assume that we have anything approximating a fair evaluation mechanism for getting work published is, sadly, to be making a far-reaching assumption.

If you’ve read this far, my simple takeaway is “If people don’t even start reading your work with a positive frame of mind and a full stomach, your chances of being accepted are dire.”

If you want to hang around, my argument is going to be simple. I’m going to demonstrate that, for much simpler assessments than research papers or stories, simple cognitive biases have a strong effect. I’m going to follow this by indicating how something as simple as how hungry you are can affect your decision making. I’m then going to identify a difference between scientific and non-scientific publishing in terms of feedback, and why expecting to keep getting good results from both approaches is probably too optimistic. I am going to make some proposals as to how we might start thinking about a fix, but only to start discussion, because my expertise in non-academic publishing is not all that deep and is limited by my not being an editor or publisher!

[Full disclosure: I am happily published in academia but I am yet to be accepted for publication in non-academic approaches. I am perfectly comfortable with this so please don’t read sour grapes into this argument. As you’ll see, with the approaches I propose, I would in fact strip myself of some potential bias privileges!]

I’ve posted before on an experiment [1] where the only change to the qualifications of a prospective lab manager was to change the name from male to female. The ‘female’ version of this CV was offered less money, less development support and was ‘obviously’ less qualified. And this effect occurred whether the assessor was a man or a woman. This is pretty much the gold standard for experiments of this type, because it reduced any possibility of someone acting out of character because they knew what the experiment was trying to prove. There’s a lot of discussion in fiction at the moment about gendered bias, as there is in academia. You’re probably aware of the Bechdel Test, which simply asks whether there are two named women in a film who talk to each other about something other than men, and how often the mainstream media fails that test. But let’s look at something else. Antony LaPaglia tells a story that he used to get pulled up on his American accent whenever anyone knew that he was Australian. So he started passing as American. Overnight, complaints about his accent went away.

Compared to assessing a manuscript, reading a CV, bothering to put two women with names into a story, and spotting an accent are trivial, and yet we can’t get these right without bias.

There’s another thing called the Matthew Effect, which basically says that the more you have, the more you’re going to get (terrible paraphrasing). Thus, the first paper in a field will be one of the most cited, people are comfortable giving opportunities to people who have used them well before, and so on. It even shows up in graph theory, where the first group of things connected together tend to become the most connected!

So, we have lots of examples of bias that comes in, if we know enough about someone that the bias can engage. And, for most people who aren’t trying to be discriminatory, it’s actually completely unconscious. Really? You don’t think you’d notice?

Let’s look at the hunger argument. An incredible study [2] (Economist link for summary) shows that Israeli judges are less likely to grant parole, the longer they’ve waited since they ate, even when taking other factors into account. Here’s a graph. Those big dips are meal breaks.


Perhaps don’t schedule your hearing for just before lunch…

When confronted with that terrifying graph, the judges were totally unaware of it. The people in the court every day hadn’t noticed it. The authors of the study looked at a large number of factors and found some things that you’d expect in terms of sentencing but the meal break plunges surprised everyone because they had never thought to look for it. The good news is that, most days, the most deserving will still get paroled but, and it’s a big but, you still have to wonder about the people who should have been given parole who were denied because of timing and also the people who were paroled who maybe should not have been.

So what distinguishes academia and non-academic publishing? Shall we start by saying that, notionally, many parts of academic publishing subscribe to the Popperian model of development where we expose ideas to our colleagues and they tear at them like deranged wolves until we fashion truth? As part of that, we expect to get reviews from almost all submissions, whether accepted or not, because that is how we build up academic consensus and find out new things. Actual publication allows you to put your work out to everyone else where they can read it, work with it or use it to fashion a counter-claim.

In non-academic publishing, the publisher wants something that is saleable in the target market and the author wants to provide this. The author probably also wants to make some very important statements about truth, beauty, the lizard people or anything else (much as in academic publishing, the spread of ideas is crucial). However, from a publisher’s perspective, they are not after peer-verified work of sufficient truth, they are after something that matches their needs in order to publish it, most likely for profit.

Both are directly or indirectly prestige markers and often have some form of financial rewards, as well as some truth/knowledge construction function. Non-academic authors publish to eat, academic authors publish to keep their jobs or get tenure (often enough to allow you to eat). But the key difference is the way that feedback is given because an academic journal that gave no feedback would have trouble staying in business (unless it had incredible acceptance already, see Matthew Effect) because we’re all notionally building knowledge. But “no feedback” is the default in other publishing.

When I get feedback academically, I can quickly work out several things:

  1. Is the reviewer actually qualified to review my work? If someone doesn’t have the right background, they start saying things like ‘surely’ when they mean ‘I don’t know’, and it quickly tells you that this review will be uninformative.
  2. Has the reviewer actually read the work? I would ask all the academics reading this to send me $1 if they’ve ever been told to include something that is obviously in the paper and takes up 1-2 pages already, except I am scared of the tax and weight implications.
  3. How the feedback can be useful. Good feedback is great. It spots holes, it reinforces bridges, it suggests new directions.
  4. If I want to publish in that venue again. If someone can’t organise their reviewers and oversee the reviews properly? I’m not going to get what I need to do good work. I should go and publish elsewhere.

My current exposure to non-academic publishing has been: submit story, wait, get rejection. Feedback? “Not suitable for us but thank you for your interest”, “not quite right for us”, “I’m going to pass on this”. I should note that the editors have all been very nice, timely (scarily so, in some cases) and all of my interactions have been great; my problem is mechanistic, not personal. I should clearly state that I assume that point 1 from above holds for all non-academic publishing, that is, that the editors have chosen someone to review in a genre that they don’t actually hate and know something about. So 1 is fine. But 2 is tricky when you get no feedback.

But that tricky #2, “Has the reviewer actually read the work?”, in the context of my previous statements really becomes “HOW has the reviewer read my work?” Is there an informal ordering, even an unconscious one, from people you think you’ll enjoy down to newbies? How hungry is the reviewer when they’re working? Do they clear up ‘simple checks’ just before lunch? In the absence of feedback, I can’t assess the validity of the mechanism. I can’t improve the work with no feedback (step 3) and I’m now torn as to whether this story was bad for a given venue or whether my writing is just so awful that I should never darken their door again! (I accept, dear reader, that this may just be the sad truth and they’re all too scared to tell me.)

Let me remind you that implicit bias is often completely unconscious and many people are deeply surprised when they discover what they have been doing. I imagine that there are a number of reviewers reading this who are quite insulted. I certainly don’t mean to offend but I will ask if you’ve sat down and collected data on your practice. If you have, I would really love to see it because I love data! But, if what you have is your memory of trying to be fair… Many people will be in denial because we all like to think we’re rational and fair decision makers. (Looks back at those studies. Umm.)

We can deal with some aspects of implicit bias by using blind review systems, where the reviewer only sees the work and we remove any clues as to who wrote it. In academia this can get hard because some people’s contributed signature is so easy to see but it is still widely used. (I imagine it’s equally hard for well known writers.) This will, at least, remove gender bias and potentially reduce the impact of “famous people”, unless they are really distinctive. I know that a blinding process isn’t happening in all of the parts of non-academic publishing because my name is all over my manuscripts. (I must note that there are places that use blind submission, such as Andromeda Spaceways Inflight Magazine and Aurealis, for initial reading, which is a great start.) Usually, when I submit, my covering letter has to clearly state my publication history. This is the very opposite of a blind process because I am being asked to rate myself for Matthew Effect scaling every time I submit!

(There are also some tips and tricks in fiction, where your rejections can be personalised, yet contain no improvement information. This is still “a better rejection” but you have to know this from elsewhere because it’s not obvious. Knowing better writers is generally the best way to get to know about this. Transparency is not high, here.)

The timing one is harder because it requires two things: multiple reviewers and a randomised reading schedule, neither of which take into account the shoe string budgets and volunteer workforce associated with much of fiction publishing. Ideally, an anonymised work gets read 2-3 times, at different times relative to meals and during the day, taking into account the schedule of the reader. Otherwise, that last manuscript you reject before rushing home at 10pm to reheat a stale bagel? It would have to be Hemingway to get accepted. And good Hemingway at that.
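Mechanically, that anonymised, randomised reading could be as simple as the sketch below (Python; the manuscript titles, reader names and parameters are all hypothetical): every manuscript gets an opaque ID and is placed at independently shuffled positions in several readers’ queues, so no work is always the last one before dinner.

```python
import random
import uuid

def randomised_schedule(manuscripts, readers, reads_per_ms=2, seed=None):
    """Anonymise manuscripts and spread each one across several readers,
    with every reader's queue independently shuffled."""
    rng = random.Random(seed)
    # Strip identifying information: readers only ever see opaque IDs.
    anon = {uuid.uuid4().hex[:8]: ms for ms in manuscripts}
    queues = {reader: [] for reader in readers}
    for ms_id in anon:
        # Each manuscript goes to several *different* readers...
        for reader in rng.sample(readers, reads_per_ms):
            queues[reader].append(ms_id)
    # ...and each queue is shuffled, so the same manuscript doesn't
    # always land in someone's pre-lunch slot.
    for reader in readers:
        rng.shuffle(queues[reader])
    return anon, queues
```

Two or three reads at different times won’t eliminate the meal-break effect, but they do stop it from concentrating on the same unlucky manuscripts.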

And I’d like to see randomised reading applied across academic publishing as well. And we keep reviewing it until we actually reach a consensus. I’ve been on a review panel recently where we had two ‘accepts’, two ‘mehs’ and two ‘kill it with fires’ for the same paper. After group discussion, we settled for ‘a weak accept/strong meh’. Why? Because the two people who had rated it right down weren’t really experts so didn’t recognise what was going on. Why were they reviewing? Because it’s part of the job. So don’t think I’m going after non-academic publishing here. I’m exposing problems in both because I want to try and fix both.

But I do recognise that the primary job of non-academic publishing is getting people to read the publication, which means targeting saleable works. Can we do this in a way that is more systematic than “I know good writing when I see it”? Because (a) that doesn’t scale and (b) the chances of that standard aligning across more than two people are tiny.

This is where technological support can be invaluable. Word counting, spell checking and primitive grammar checking are all the dominion of the machine, as is plagiarism detection on existing published works. So step one is a brick wall that says “This work has not been checked against our submissions standards: problems are…” and this need not involve a single human (unless you are trying to spellcheck The Shugenkraft of Berzxx, in which case have a tickbox for ‘Heavy use of neologisms and accents’.) Plagiarism detection is becoming more common in academic writing and it saves a lot of time because you don’t spend it reading lifted work. (I read something that was really familiar and realised someone had sent me some of my own work with their name on it. Just… no.)
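A first pass at that brick wall might look like the sketch below (Python; the word limits and the wording of the messages are invented placeholders, not any real house’s standards). The important property is that no human reads a word until the machine checks pass:

```python
def submission_gate(text, min_words=1000, max_words=8000):
    """Machine-only pre-screen: report problems against the submission
    standards before any human spends time reading."""
    problems = []
    word_count = len(text.split())
    if word_count < min_words:
        problems.append(f"too short: {word_count} words (minimum {min_words})")
    if word_count > max_words:
        problems.append(f"too long: {word_count} words (maximum {max_words})")
    if problems:
        return ("This work has not been checked against our submissions "
                "standards: problems are: " + "; ".join(problems))
    return "OK: queued for human reading"
```

Spell checking and plagiarism detection would slot in as further checks of the same shape, with the ‘heavy use of neologisms and accents’ tickbox acting as an override on the spelling pass.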

What we want is to go from a flood, to a river, then to manage that river and direct it to people who can handle a stream at a time. Human beings should not be the cogs and failure points in the high volume non-academic publishing industry.

Stripping names, anonymising and randomly distributing work is fairly important if we want to remove time biases. Even the act of blinding and randomising is going to reduce the chances that the same people get the same good or bad slots, which makes us at least partially systematic. Almost everyone in the industry is overworked, doing vast and wonderful things and, in the face of that, tired and biased behaviour becomes more likely.

The final thing that would be useful is something along the lines of a floating set of checkboxes that sits with the document, if it’s electronic. (On paper, have a separate sheet that you can scan in once it’s filled in and then automatically extract the info.) What do you actually expect? What is this work/story not giving you? Is it derivative work? Is it just all talk and no action? Is it too early and just doesn’t go anywhere? Separating documents from any form of feedback automation (or expecting people to type sentences) is going to slow things down and make it impossible to give feedback. Every publishing house has a list of things not to do; let’s start with the 10 worst of those and see how many more we can get onto the feedback screen.

I am thinking of an approach that makes feedback an associated act of reading and can then be sent, with accept or reject, in the same action. Perhaps it has already been created and is in use in fine publishing houses, but my work hasn’t hit a bar where I even get that feedback? I don’t know. I can see that distributed editorial boards, like Andromeda, are obviously taking steps down this path because they have had to get good at shunting stuff around at scale and I would love to know how far they’ve got. For me, a mag that said “We will always give you even a little bit of feedback” will probably get all of my stuff first. (Not that they want it but you get the idea.)

I understand completely that publishers are under no obligation whatsoever to do this. There is no right to feedback nor is there an expectation outside of academia. But if we want good work, then I think I’ve already shown that we are probably missing out on some of it and, by not providing feedback, some (if not many) of those stories will vanish, never worked on again, never seen again, because the authors have absolutely no guidance on how to change their work.

I have already discussed mocking up a system, building from digital humanist approaches and using our own expertise, with one of my colleagues and we hope to start working on something soon. But I’d rather build something that works for everyone and lets publishers get more good work, authors recognised when they get it right, and something that brings more and more new voices into the community. Let me know if it’s already been written or take me to school in the comments below. I can’t complain about lack of feedback and then ignore it when I get it!

[1] Corinne A. Moss-Racusin et al., PNAS, vol. 109, no. 41, pp. 16474–16479, doi: 10.1073/pnas.1211286109

[2] Shai Danziger et al., PNAS, vol. 108, no. 17, pp. 6889–6892, doi: 10.1073/pnas.1018033108


Designing a MOOC: how far did it reach? #csed

Mark Guzdial posted over on his blog on “Moving Beyond MOOCS: Could we move to understanding learning and teaching?” and discusses aspects (that still linger) of MOOC hype. (I’ve spoken about MOOCs done badly before, as well as recording the thoughts of people like Hugh Davis from Southampton.) One of Mark’s paragraphs reads:

“The value of being in the front row of a class is that you talk with the teacher.  Getting physically closer to the lecturer doesn’t improve learning.  Engagement improves learning.  A MOOC puts everyone at the back of the class, listening only and doing the homework”

My reply to this was:

“You can probably guess that I have two responses here, the first is that the front row is not available to many in the real world in the first place, with the second being that, for far too many people, any seat in the classroom is better than none.

But I am involved in a, for us, large MOOC so my responses have to be regarded in that light. Thanks for the post!”

Mark, of course, called my bluff and responded with:

“Nick, I know that you know the literature in this space, and care about design and assessment. Can you say something about how you designed your MOOC to reach those who would not otherwise get access to formal educational opportunities? And since your MOOC has started, do you know yet if you achieved that goal — are you reaching people who would not otherwise get access?”

So here is that response. Thanks for the nudge, Mark! The answer is a bit long but please bear with me. We will be posting a longer summary after the course is completed, in a month or so. Consider this the unedited taster. I’m putting this here, early, prior to the detailed statistical work, so you can see where we are. All the numbers below are fresh off the system, to drive discussion and answer Mark’s question at, pretty much, a conceptual level.

First up, as some background for everyone, the MOOC team I’m working with is the University of Adelaide‘s Computer Science Education Research group, led by A/Prof Katrina Falkner, with me (Dr Nick Falkner), Dr Rebecca Vivian, and Dr Claudia Szabo.

I’ll start by noting that we’ve been working to solve the inherent scaling issues at the front of the classroom for some time. If I had a class of 12 then there’s no problem in engaging with everyone, but I keep finding myself in rooms of 100+, which forces some people to sit away from me and also limits the number of meaningful interactions I can make with individuals in one setting. While I take Mark’s point about the front of the classroom, and the associated research is pretty solid on this, we encountered an inherent problem when we identified that students were better off down the front… and yet we kept teaching to rooms with more students than front. I’ll go out on a limb and say that this is actually a moral issue that we, as a sector, have had to look at and ignore in the face of constrained resources. The nature of large spaces and people, coupled with our inability to hover, means that we can either choose to have a row of students effectively in a semi-circle facing us, or we accept that after a relatively small number of students or rows, we have constructed a space that is inherently divided by privilege and will lead to disengagement.

So, Katrina’s and my first foray into this space was dealing with the problem in the physical lecture spaces that we had, with the 100+ classes that we had.

Katrina and I published a paper on “contributing student pedagogy” in Computer Science Education 22 (4), 2012, to identify ways for forming valued small collaboration groups as a way to promote engagement and drive skill development. Ultimately, by reducing the class to a smaller number of clusters and making those clusters pedagogically useful, I can then bring the ‘front of the class’-like experience to every group I speak to. We have given talks and applied sessions on this, including a special session at SIGCSE, because we think it’s a useful technique that reduces the amount of ‘front privilege’ while extending the amount of ‘front benefit’. (Read the paper for actual detail – I am skimping on summary here.)

We then got involved in the support of the national Digital Technologies curriculum for primary and middle school teachers across Australia, after being invited to produce a support MOOC (really a SPOC, small, private, on-line course) by Google. The target learners were teachers who were about to teach or who were teaching into, initially, Foundation to Year 6 and thus had degrees but potentially no experience in this area. (I’ve written about this before and you can find more detail on this here, where I also thanked my previous teachers!)

The motivation of this group of learners was different from a traditional MOOC because (a) everyone had both a degree and probable employment in the sector which reduced opportunistic registration to a large extent and (b) Australian teachers are required to have a certain number of professional development (PD) hours a year. Through a number of discussions across the key groups, we had our course recognised as PD and this meant that doing our course was considered to be valuable although almost all of the teachers we spoke to were furiously keen for this information anyway and my belief is that the PD was very much ‘icing’ rather than ‘cake’. (Thank you again to all of the teachers who have spent time taking our course – we really hope it’s been useful.)

To discuss access and reach, we can measure teachers who’ve taken the course (somewhere in the low thousands) and then estimate the number of students potentially assisted and that’s when it gets a little crazy, because that’s somewhere around 30-40,000.

In his talk at CSEDU 2014, Hugh Davis identified the student groups who get involved in MOOCs as follows. The majority of people undertaking MOOCs were life-long learners (older, degreed, M/F 50/50), people seeking skills via PD, and those with poor access to Higher Ed. There is also a small group who are Uni ‘tasters’ but very, very small. (I think we can agree that tasting a MOOC is not tasting a campus-based Uni experience. Less ivy, for starters.) The three approaches to the course once inside were auditing, completing and sampling, and it’s this final one that I want to emphasise because this brings us to one of the differences of MOOCs. We are not in control of when people decide that they are satisfied with the free education that they are accessing, unlike our strong gatekeeping on traditional courses.

I am in total agreement that a MOOC is not the same as a classroom but, also, that it is not the same as a traditional course, where we define how the student will achieve their goals and how they will know when they have completed. MOOCs function far more like many people’s experience of web browsing: they hunt for what they want and stop when they have it, thus the sampling engagement pattern above.

(As an aside, does this mean that a course that is perceived as ‘all back of class’ will rapidly be abandoned because it is distasteful? This makes the student-consumer a much more powerful player in their own educational market and is potentially worth remembering.)

Knowing these different approaches, we designed the individual subjects and overall program so that it was very much up to the participant how much they chose to take and individual modules were designed to be relatively self-contained, while fitting into a well-designed overall flow that built in terms of complexity and towards more abstract concepts. Thus, we supported auditing, completing and sampling, whereas our usual face-to-face (f2f) courses only support the first two in a way that we can measure.

As Hugh notes, and we agree through growing experience, marking/progress measures at scale are very difficult, especially when automated marking is not enough or not feasible. Based on our earlier work in contributing collaboration in the classroom, for the F-6 Teacher MOOC we used a strong peer-assessment model where contributions and discussions were heavily linked. Because of the nature of the cohort, geographical and year-level groups formed who then conducted additional sessions and produced shared material at a slightly terrifying rate. We took the approach that we were not telling teachers how to teach but we were helping them to develop and share materials that would assist in their teaching. This reduced potential divisions and allowed us to establish a mutually respectful relationship that facilitated openness.

(It’s worth noting that the courseware is creative commons, open and free. There are people reassembling the course for their specific take on the school system as we speak. We have a national curriculum but a state-focused approach to education, with public and many independent systems. Nobody makes any money out of providing this course to teachers and the material will always be free. Thank you again to Google for their ongoing support and funding!)

Overall, in this first F-6 MOOC, we had higher than usual retention of students and higher than usual participation, for the reasons I’ve outlined above. But this material was for curriculum support for teachers of young students, all of whom were pre-programming, and it could be contained in videos and on-line sharing of materials and discussion. We were also in the MOOC sweet-spot: existing degreed learners, PD driver, and their PD requirement depended on progressive demonstration of goal achievement, which we recognised post-course with a pre-approved certificate form. (Important note: if you are doing this, clear up how the PD requirements are met and how they need to be reported back, as early on as you can. It meant that we could give people something valuable in a short time.)

The programming MOOC, Think. Create. Code on EdX, was more challenging in many regards. We knew we were in a more difficult space and would be more in what I shall refer to as ‘the land of the average MOOC consumer’. No strong focus, no PD driver, no geographically guaranteed communities. We had to think carefully about what we considered to be useful interaction with the course material. What counted as success?

To start with, we took an image-based approach (I don’t think I need to provide supporting arguments for media-driven computing!) where students would produce images and, over time, refine their coding skills to produce and understand how to produce more complex images, building towards animation. People who have not had good access to education may not understand why we would use programming in more complex systems but our goal was to make images and that is a fairly universally understood idea, with a short production timeline and very clear indication of achievement: “Does it look like a face yet?”

In terms of useful interaction, if someone wrote a single program that drew a face, for the first time – then that’s valuable. If someone looked at someone else’s code and spotted a bug (however we wish to frame this), then that’s valuable. I think that someone writing a single line of correct code, where they understand everything that they write, is something that we can all consider to be valuable. Will it get you a degree? No. Will it be useful to you in later life? Well… maybe? (I would say ‘yes’ but that is a fervent hope rather than a fact.)

So our design brief was that it should be very easy to get into programming immediately, with an active and engaged approach, and that we have the same “mostly self-contained week” approach, with lots of good peer interaction and mutual evaluation to identify areas that needed work to allow us to build our knowledge together. (You know I may as well have ‘social constructivist’ tattooed on my head so this is strongly in keeping with my principles.) We wrote all of the materials from scratch, based on a 6-week program that we debated for some time. Materials consisted of short videos, additional material as short notes, participatory activities, quizzes and (we planned for) peer assessment (more on that later). You didn’t have to have been exposed to “the lecture” or even the advanced classroom to take the course. Any exposure to short videos or a web browser would be enough familiarity to go on with.

Our goal was to encourage as much engagement as possible, taking into account the fact that any number of students over 1,000 would be very hard to support individually, even with the 5-6 staff we had to help out. But we wanted students to be able to develop quickly, share quickly and, ultimately, comment back on each other’s work quickly. From a cognitive load perspective, it was crucial to keep the number of things that weren’t relevant to the task to a minimum, as we couldn’t assume any prior familiarity. This meant no installers, no linking, no loaders, no shenanigans. Write program, press play, get picture, share to gallery, winning.
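To make the “write program, press play, get picture” model concrete, here is a sketch of the kind of complete program that approach implies. This is an illustration in Processing syntax, not one of the actual course exercises:

```processing
// A whole program: no installers, no linking, no loaders.
size(200, 200);
background(255);
fill(255, 220, 180);
ellipse(100, 100, 120, 150);   // head
fill(0);
ellipse(75, 85, 12, 18);       // left eye
ellipse(125, 85, 12, 18);      // right eye
noFill();
arc(100, 125, 50, 30, 0, PI);  // smile: does it look like a face yet?
```

Every line maps directly onto something visible on screen, which is what keeps the extraneous cognitive load so low: there is nothing in the program that isn’t about the picture.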

As part of this, our support team (thanks, Jill!) developed a browser-based environment for Processing.js that integrated with a course gallery. Students could save their own work easily and share it trivially. Our early indications show that a lot of students jumped in and tried to do something straight away. (Processing is really good for getting something up, fast, as we know.) We spent a lot of time testing browsers, testing software, and writing code. All of the recorded materials used that development environment (this was important as Processing.js and Processing have some differences) and all of our videos show the environment in action. Again, as little extra cognitive load as possible – no implicit requirement for abstraction or skills transfer. (The AdelaideX team worked so hard to get us over the line – I think we may have eaten some of their brains to save those of our students. Thank you again to the University for selecting us and to Katy and the amazing team.)

The actual student group, about 20,000 people over 176 countries, did not have the “built-in” motivation of the previous group although they would all have their own levels of motivation. We used ‘meet and greet’ activities to drive some group formation (which worked to a degree) and we also had a very high level of staff monitoring of key question areas (which was noted by participants as being very high for EdX courses they’d taken), everyone putting in 30-60 minutes a day on rotation. But, as noted before, the biggest trick to getting everyone engaged at the large scale is to get everyone into groups where they have someone to talk to. This was supposed to be provided by a peer evaluation system that was initially part of the assessment package.

Sadly, the peer assessment system didn’t work as we wanted it to and we were worried that it would form a disincentive, rather than a supporting community, so we switched to a forum-based discussion of the works on the EdX discussion forum. At this point, a lack of integration between our own UoA programming system and gallery and the EdX discussion system allowed too much distance – the close binding we had in the F-6 MOOC wasn’t there. We’re still working on this because everything we know and all evidence we’ve collected before tells us that this is a vital part of the puzzle.

In terms of visible output, the amount of novel and amazing art work that has been generated has blown us all away. The degree of difference is huge: armed with approximately 5 statements, the number of different pieces you can produce is surprisingly large. Add in control statements and repetition? BOOM. Every student can write something that speaks to her or him and show it to other people, encouraging creativity and facilitating engagement.
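As a hypothetical illustration of that “approximately 5 statements, then BOOM” effect (again, not taken from the course materials), wrapping a single drawing statement in one loop multiplies the possibilities enormously:

```processing
size(200, 200);
background(0);
noFill();
// One stroke call and one ellipse call, inside a loop: nudge any of
// these numbers and you get a visibly different picture.
for (int i = 0; i < 20; i++) {
  stroke(255, i * 12, 0);
  ellipse(100, 100, 10 + i * 9, 10 + i * 6);
}
```

Twenty concentric ellipses, shading from red to orange, from two statements and a loop – which is the sense in which control statements turn a handful of primitives into an expressive space.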

From the stats side, I don’t have access to the raw stats, so it’s hard for me to give you a statistically sound answer as to who we have or have not reached. This is one of the things with working with a pre-existing platform and, yes, it bugs me a little because I can’t plot this against that unless someone has built it into the platform. But I think I can tell you some things.

I can tell you that roughly 2,000 students attempted quiz problems in the first week of the course and that over 4,000 watched a video in the first week – no real surprises, registrations are an indicator of interest, not a commitment. During that time, 7,000 students were active in the course in some way – including just writing code, discussing it and having fun in the gallery environment. (As it happens, we appear to be plateauing at about 3,000 active students but time will tell. We have a lot of post-course analysis to do.)

It’s a mistake to focus on the “drop” rates because the MOOC model is different. We have no idea if the people who left got what they wanted or not, or why they didn’t do anything. We may never know but we’ll dig into that later.

I can also tell you that only 57% of the students currently enrolled have declared themselves explicitly to be male and that is the most likely indicator that we are reaching students who might not usually be in a programming course, because the remaining 43%, of whom 33% have self-identified as women, is far higher than we ever see in classes locally. If you want evidence of reach then it begins here, as part of the provision of an environment that is, apparently, more welcoming to ‘non-men’.

We have had a number of student comments that reflect positive reach and, while these are not statistically significant, I think that this also gives you support for the idea of additional reach. Students have been asking how they can save their code beyond the course and this is a good indicator: ownership and a desire to preserve something valuable.

For student comments, however, this is my favourite.

I’m no artist. I’m no computer programmer. But with this class, I see I can be both. #processingjs (Link to student’s work) #code101x .

That’s someone for whom this course had them in the right place in the classroom. After all of this is done, we’ll go looking to see how many more we can find.

I know this is long but I hope it answered your questions. We’re looking forward to doing a detailed write-up of everything after the course closes and we can look at everything.


EduTech AU 2015, Day 2, Higher Ed Leaders, “Change and innovation in the Digital Age: the future is social, mobile and personalised.” #edutechau @timbuckteeth

And heeere’s Steve Wheeler (@timbuckteeth)! Steve is an A/Prof of Learning Technologies at Plymouth in the UK. He and I have been at the same event before (CSEDU, Barcelona) and we seem to agree on a lot. Today’s cognitive bias warning is that I will probably agree with Steve a lot, again. I’ve already quizzed him on his talk and, as I understand it, what he wants to talk about is how our students can have altered expectations without necessarily becoming some sort of different species. (There are no Digital Natives. No, Prensky was wrong. Check out Helsper, 2010, from the LSE.) So, on to the talk and enough of my nonsense!

Steve claims he’s going to recap the previous speaker, but in an English accent. Ah, the Mayflower steps on the quayside in Plymouth, except that they’re not, because the real Mayflower steps are in a ladies’ loo in a pub, 100m back from the quay. The moral? What you expect to be getting is not always what you get. (Tourists think they have the real thing, locals know the truth.)

“Any sufficiently advanced technology is indistinguishable from magic” – Arthur C. Clarke.

Educational institutions are riddled with bad technology purchases where we buy something, don’t understand it, don’t support it and yet we’re stuck with it or, worse, try to teach with it when it doesn’t work.

Predicting the future is hard but, for educators, we can do it better if we look at:

  • Pedagogy first
  • Technology next (that fits the pedagogy)

Steve then plugs his own book with a quote on technology not being a silver bullet.

But who will be our students? What are their expectations for the future? Common answers include: collaboration (student and staff), and more making and doing. They don’t like being talked at. Students today do not have a clear memory of the previous century; their expectations are based on the world that they are living in now, not the world that we grew up in.

Meet Student 2.0!

The average digital birth of children happens at about six months – but they can be on the Internet before they are born, via ultrasound photos. (Anyone who has tried to swipe or pinch-zoom a magazine knows why kids take to it so easily.) Students of today have tools and technology and this is what allows them to create, mash up, and reinvent materials.

What about Game Based Learning? What do children learn from playing games?

Three biggest fears of teachers using technology

  • How do I make this work?
  • How do I avoid looking like an idiot?
  • They will know more about it than I do.

Three biggest fears of students

  • Bad wifi
  • Spinning wheel of death
  • Low battery

The laptops and devices you see in lectures are personal windows on the world, ongoing conversations and learning activities – it’s not purely inattention or anti-learning. Student questions on Twitter can be answered by people all around the world and that’s extending the learning dialogue out a long way beyond the classroom.

One of these is Voltaire, one is Steve Wheeler.

Voltaire said that we were products of our age. Walrick asks how we can prepare students for a future. Steve showed us a picture of him as a young boy, who had been turned off asking questions by a mocking teacher. But the last two years of his schooling were in Holland, where he went to the Philips ‘flying saucer’, which was a technology museum. There, he saw an early video conferencing system and that inspired him with a vision of the future.

Steve wanted to be an astronaut but his career advisor suggested he aim lower, because he wasn’t an American. The point is not that Steve wanted to be an astronaut but that he wanted to be an explorer, the role that he occupies now in education.

Steve shared a quote that education is “about teaching students not subjects” and he shared the awesome picture of ‘named quadrilaterals’. My favourite is ‘Bob’. We have a very definite idea of what we want students to write as an answer but we suppress creative answers and we don’t necessarily drive the approach to learning that we want.

Ignorance spreads happily by itself, we shouldn’t be helping it. Our visions of the future are too often our memories of what our time was, transferred into modern systems. Our solution spaces are restricted by our fixations on a specific way of thinking. This prevents us from breaking out of our current mindset and doing something useful.

What will the future be? It was multi-media, it was web, but where is it going? Mobile devices became the most likely web browser platform in 2013 and their share is growing.

What will our new technologies be? Things get smaller, faster, lighter as they mature. We have to think about solving problems in new ways.

Here’s a fire hose sip of technologies: artificial intelligence is on the way up, touch surfaces are getting better, wearables are getting smarter, we’re looking at remote presence, immersive environments, 3D printers are changing manufacturing and teaching, gestural computing, mind control of devices, actual physical implants into the body…

From Nova Spivack, we can plot information connectivity against social connectivity and what we want is growth on both axes – a giant arrow pointing up to the top right. We don’t yet have a Web form that connects information, knowledge and people – i.e. linking intelligence and people. We’re already seeing some of this with recommenders, intelligent filtering, and sentiment tracking. (I’m still waiting for the Semantic Web to deliver, I started doing work on it in my PhD, mumble years ago.)

A possible topology is: infrastructure is distributed and virtualised, our interfaces are 3D and interactive, built onto mobile technology and using ‘intelligent’ systems underneath.

But you cannot assume that your students are all at the same level or have all of the same devices: the digital divide is as real and as damaging as any social divide. Steve alluded to the Personal Learning Network, which you can read about in my previous blog on him.

How will teaching change? It has to move away from cutting down students into cloned templates. We want students to be self-directed, self-starting, equipped to capture information, collaborative, and oriented towards producing their own things.

Let’s get back to our roots:

  1. We learn by doing (Piaget, 1950)
  2. We learn by making (Papert, 1960)

Just because technology is making some of this doing and making easier doesn’t mean we’re making it worthless, it means that we have time to do other things. Flip the roles, not just the classroom. Let students be the teacher – we do learn by teaching. (Couldn’t agree more.)

Back to Papert, “The best learning takes place when students take control.” Students can reflect in blogging as they present their information to a hidden audience that they are actually writing for. These physical and virtual networks grow, building their personal learning networks as they connect to more people who are connected to more people. (Steve’s a huge fan of Twitter. I’m not quite as connected as he is but that’s like saying this puddle is smaller than the North Sea.)

Some of our students are strongly connected and they do store their knowledge in groups and friendships, which really reflects how they find things out. This rolls into digital cultural capital and who our groups are.

(Then there was a stream of images at too high a speed for me to capture – go and download the slides, they’re creative commons and a lot of fun.)

Learners will need new competencies and literacies.

Always nice to hear Steve speak and, of course, I still agree with a lot of what he said. I won’t prod him for questions, though.


EduTech AU 2015, Day 2, Higher Ed Leaders, “Assessment: The Silent Killer of Learning”, #edutechau @eric_mazur

No surprise that I’m very excited about this talk as well. Eric is a world renowned educator and physicist, having developed Peer Instruction in 1990 for his classes at Harvard as a way to deal with students not developing a working physicist’s approach to the content of his course. I should note that Eric also gave this talk yesterday and the inimitable Steve Wheeler blogged that one, so you should read Steve as well. But after me. (Sorry, Steve.)

I’m not an enormous fan of most of the assessment we use as most grades are meaningless, assessment becomes part of a carrot-and-stick approach and it’s all based on artificial timelines that stifle creativity. (But apart from that, it’s fine. Ho ho.) My pithy statement on this is that if you build an adversarial educational system, you’ll get adversaries, but if you bother to build a learning environment, you’ll get learning. One of the natural outcomes of an adversarial system is activities like cheating and gaming the system, because people start to treat beating the system as the goal itself, which is highly undesirable. You can read a lot more about my views on plagiarism here, if you like. (Warning: that post links to several others and is a bit of a wormhole.)

Now, let’s hear what Eric has to say on this! (My comments from this point on will attempt to contain themselves in parentheses. You can find the slides for his talk – all 62MB of them – from this link on his website.) It’s important to remember that one of the reasons that Eric’s work is so interesting is that he is looking for evidence-based approaches to education.

Eric discussed the use of flashcards. A week after flashcard study, students retain 35%. After two weeks, it’s almost gone. He tried to communicate this to someone who was launching a cloud-based flashcard app. Her response was “we only guarantee they’ll pass the test”.

*low, despairing chuckle from the audience*

Of course most students study to pass the test, not to learn, and they are not the same thing. For years, Eric has been bashing the lecture (yes, he noted the irony) but now he wants to focus on changing assessment and getting it away from rote learning and regurgitation. The assessment practices we use now are not 21st century focused, they are used for ranking and classifying but, even then, doing it badly.

So why are we assessing? What are the problems that are rampant in our assessment procedure? What are the improvements we can make?

How many different purposes of assessment can you think of? Eric gave us 90s to come up with a list. Katrina and I came up with about 10, most of which were serious, but it was an interesting question to reflect upon. (Eric snuck some audience participation into the talk here.) Eric’s list:

  1. Rate and rank students
  2. Rate professor and course
  3. Motivate students to keep up with work
  4. Provide feedback on learning to students
  5. Provide feedback to instructor
  6. Provide instructional accountability
  7. Improve the teaching and learning.

Ah, but look at the verbs – they are multi-purpose and in conflict. How can one thing do so much?

So what are the problems? Many tests are fundamentally inauthentic – regurgitation in useless and inappropriate ways. Many problem-solving approaches are inauthentic as well (a big problem for computing, we keep writing “Hello, World”). What does a real problem look like? It’s an interruption in our pathway to our desired outcome – it’s not the outcome that’s important, it’s the pathway and the solution to reach it that are important. Typical student problem? Open the book to chapter X to apply known procedure Y to determine an unknown answer.

Shout out to Bloom’s! Here’s Eric’s slide to remind you.

Rights reside with Eric Mazur.

Eric doesn’t think that many of us, including Harvard, even reach the Applying stage. He referred to a colleague in physics who used baseball problems throughout the course in assignments, until he reached the final exam where he ran out of baseball problems and used football problems. “Professor! We’ve never done football problems!” Eric noted that, while the audience were laughing, we should really be crying. If we can’t apply what we’ve learned then we haven’t actually learned it.

Eric sneakily put more audience participation into the talk with an open ended question that appeared to not have enough information to come up with a solution, as it required assumptions and modelling. From a Bloom’s perspective, this is right up the top.

Students loathe assumptions? Why? Mostly because we’ll give them bad marks if they get it wrong. But isn’t the ability to make assumptions a really important skill? Isn’t this fundamental to success?

Eric demonstrated how to tame the problem by adding in more constraints, but this came at the cost of the creating stage of Bloom’s, and then the evaluating and analysing stages. (Check out his slides, pages 31 to 40, for details of this.) If you add in the memorisation of the equation, we have taken all of the guts out of the problem, dropping down to the lowest level of Bloom’s.

But, of course, computers can do most of the hard work that is mechanistic. Problems at the bottom layer of Bloom’s are going to be solved by machines – this is not something we should train 21st Century students for.

But… real problem solving is erratic. Riddled with fuzziness. Failure prone. Not guaranteed to succeed. Most definitely not guaranteed to be optimal. The road to success is littered with failures.

But, if you make mistakes, you lose marks. But if you’re not making mistakes, you’re very unlikely to be creative and innovative and this is the problem with our assessment practices.

Eric showed us a picture of a traditional exam room: stressful, isolated, deprived of calculators and devices. Eric’s joke was that we are going to have to take exams naked to ensure we’re not wearing smart devices. We are in a time and place where we can look up whatever we want, whenever we want. But it’s how you use that information that makes a difference. Why are we testing and assessing students under such a set of conditions? Why do we imagine that the result we get here is going to be any indicator at all of the likely future success of the student with that knowledge?

Cramming for exams? Great, we store the information in short-term memory. A few days later, it’s all gone.

Assessment produces a conflict, which Eric noticed when he started teaching a team- and project-based course. He was coaching for most of the course, switching to a judging role for the monthly fair. He found it difficult to judge them because he had a coach/judge conflict. Why do we combine it in education when it would be unfair or unpleasant in every other area of human endeavour? We hide behind the veil of objectivity and fairness. It’s not a matter of feelings.

But… we go back to Bloom’s. The only thinking skill that can be evaluated truly objectively is remembering, at the bottom again.

But let’s talk about grade inflation and cheating. Why do people cheat at education when they don’t generally cheat at learning? But educational systems often conspire to rob us of our ownership and love of learning. Our systems set up situations where students cheat in order to succeed.

  • Mimic real life in assessment practices!

Open-book exams. Information sticks when you need it and use it a lot. So use it. Produce problems that need it. Eric’s thought is you can bring anything you want except for another living person. But what about assessment on laptops? Oh no, Google access! But is that actually a problem? Any question to which the answer can be Googled is not an authentic question to determine learning!

Eric showed a video of excited students doing a statistics test as a team-based learning activity. After an initial pass at the test, the individual response is collected (for up to 50% of the grade), and then students work as a group to confirm the questions against an IF-AT scratchy card for the rest of the marks. Discussion, conversation, and the students do their own grading for you. They’ve also had the “A-ha!” moment. Assessment becomes a learning opportunity.

Eric’s not a fan of multiple choice so his Learning Catalytics software allows similar comparison of group answers without having to use multiple choice. Again, the team-based activities are social, interactive and much less stressful.

  • Focus on feedback, not ranking.

Objective ranking is a myth. The amount of, and success with, advanced education is no indicator of overall success in many regards. So why do we rank? Eric showed some graphs of his students (in earlier courses) plotting final grades in physics against the conceptual understanding of force. Some people still got top grades without understanding force as it was redefined by Newton. (For those who don’t know, Aristotle was wrong on this one.) Worse still is the student who mastered the concept of force and got a C, when a student who didn’t master force got an A. Objectivity? Injustice?

  • Focus on skills, not content

Eric referred to Wiggins and McTighe, “Understanding by Design.” The traditional approach is that course content drives assessment design. Wiggins advocates identifying what the outcomes are and formulating these as action verbs: ‘doing’ x rather than ‘understanding’ x. You use this to identify what you think the acceptable evidence is for these outcomes and then you develop the instructional approach. This is totally outcomes-based.

  • Resolve the coach/judge conflict

In his project-based course, Eric brought in external evaluators, leaving his coach role unsullied. This also validates Eric’s approach in the eyes of his colleagues. Peer- and self-evaluation are also crucial here. Reflective time to work out how you are going is easier if you can see other people’s work (even anonymously). Calibrated peer review, cpr.molsci.ucla.edu, is another approach but Eric ran out of time on this one.

If we don’t rethink assessment, the result of our assessment procedures will never actually provide vital information to the learner or us as to who might or might not be successful.

I really enjoyed this talk. I agree with just about all of this. It’s always good when an ‘internationally respected educator’ says it as then I can quote him and get traction in change-driving arguments back home. Thanks for a great talk!

 


EduTech Australia 2015, Day 1, Session 1, Part 2, Higher Ed Leaders #edutechau

The next talk was a video conference presentation, “Designed to Engage”, from Dr Diane Oblinger, formerly of EDUCAUSE (USA). Diane was joining us by video on her first day of retirement – that’s keen!

Today, technology is not enough; it’s about engagement. Diane believes that the student experience can be a critical differentiator in this. In many institutions, the student experience will be the differentiator. She asked us to consider three different things:

  1. What would life be like without technology? How does this change our experiences and expectations?
  2. Does it have to be human-or-machine? We often construct a false dichotomy of online versus face-to-face rather than thinking about them as a continuum.
  3. Changes in demography are causing new consumption patterns.

Consider changes in the four key areas:

  • Learning
  • Pathways
  • Credentialing
  • Alternate Models

To speak to learning, Diane wants us to think about learning for now, rather than based on our own experiences. What will happen when classic college meets online?

Diane started from the premise that higher order learning comes from complex challenges – how can we offer this to students? Well, there are game-based, highly experiential activities. They’re complex, interactive, integrative, driven by information gathering, team focused, and failure is part of the process. They also develop tenacity (with enough scaffolding, of course). We also get, almost for free, vast quantities of data to track how students performed their solving activities, which is far more than “right” or “wrong”. Does a complex world need more of these?

The second point for learning environments is that, sometimes, massive and intensive can go hand-in-hand. The Georgia Tech Online Master of Science in Computer Science runs on Udacity, with assignments, TAs, social media engagement and problem-solving. (I need to find out more about this. Paging the usual suspects.)

The second area discussed was pathways. Students lose time, lose track and lose credits when they start to make mistakes along the way, and this can lead to them getting lost in the system. Cost is a huge issue in the US (and, yes, it’s a growing issue in Australia, hooray.) Can you reduce cost without reducing learning? Students are benefiting from guided pathways to success. Georgia State and their predictive analytics were mentioned again here – leading students to more successful pathways to get better outcomes for everyone. Greatly increased retention, greatly reduced wasted tuition fees.

We now have a lot more data on what students are doing – the challenge for us is how we integrate this into better decision making. (Ethics, accuracy, privacy are all things that we have to consider.)

Learning needs to not be structured around seat time and credit hours. (I feel dirty even typing that.) Our students learn how to succeed in the environments that we give them. We don’t want to train them into mindless repetition. Once again, competency based learning, strongly formative, reflecting actual knowledge, is the way to go here.

(I really wish that we’d properly investigated the CBL first year. We might have done something visionary. Now we’ll just look derivative if we do it three years from now. Oh, well, time to start my own University – Nickapedia, anyone?)

Credentials reared their ugly head again – it’s one of the things that Unis have had in the bag. What is the new approach to credentials in the digital environment? Certificates and diplomas can be integrated into your on-line identity. (Again, security, privacy, ethics are all issues here but the idea is sound.) The example given was “Degreed”, a standalone credentialing site that can work to bridge recognised credentials from provider to employer.

Alternatives to degrees are being co-created by educators and employers. (I’m not 100% sure I agree with this. I think that some employers have great intentions but, very frequently, it turns into a requirement for highly specific training that might not be what we want to provide.)

Can we reinvent an alternative model that reinvents delivery systems, business models and support models? Can a curriculum be decentralised in a centralised University? What about models like Minerva? (Jeff mentioned this as well.)

(The slides got out of whack with the speaker for a while, apologies if I missed anything.)

(I should note that I get twitchy when people set up education for-profit. We’ve seen that this is a volatile market and we have the tension over where money goes. I have the luxury of working for an entity where its money goes to itself, somehow. There are no shareholders to deal with, beyond the 24,000,000 members of the population, who derive societal and economic benefit from our contribution.)

As noted on the next slide, working learners represent a sizeable opportunity for increased economic growth and mobility. More people in college is actually a good thing. (As an aside, it always astounds me when someone suggests that people are spending too much time in education. It’s like the insult “too clever by half”, you really have to think about what you’re advocating.)

For her closing thoughts, Diane thinks:

  1. The boundaries of the educational system must be re-conceptualised. We can’t ignore what’s going on around us.
  2. The integration of digital and physical experiences are creating new ways to engage. Digital is here and it’s not going away. (Unless we totally destroy ourselves, of course, but that’s a larger problem.)
  3. Can we design a better future for education?

Lots to think about and, despite some technical issues, a great talk.

 


The Sad Story of Dr Karl Kruszelnicki

Dr Karl is a very familiar face and voice in Australia, for his role in communicating and demystifying science. He’s a polymath and skeptic, with a large number of degrees and a strong commitment to raising awareness on crucial issues such as climate change and puncturing misconceptions and myths. With 33 books and an extensive publishing career, it’s no surprise that he’s a widely respected figure in the area of scientific communication and he holds a fellowship at the University of Sydney on the strength of his demonstrated track record and commitment to science.

It is a very sad turn of events that has led to this post, where we have to talk about how his decision to get involved in a government-supported advertising campaign has had some highly undesirable outcomes. The current government of Australia has had, being kind, a questionable commitment to science: not appointing a science minister for the first time in decades, undermining national initiatives in alternative and efficient energy, and showing a great deal of resistance to issues such as the scientific consensus on climate change. However, as part of the responsibilities of government, the Intergenerational Report is produced at least every 5 years (this one was a wee bit late) and has the tricky job of crystal-balling the next 40 years to predict change and allow government policy to be shaped to handle that change. Having produced the report, the government looked for a respected science-focused speaker to front the advertising campaign and they recruited Dr Karl, who has recently been on TV talking about some of the things in the report as part of a concerted effort to raise awareness of it.

But there’s a problem.

And it’s a terrible problem because it means that Dr Karl didn’t follow some of the most basic requirements of science. From an ABC article on this:

“Dr Kruszelnicki said he was only able to read parts of the report (emphasis mine) before he agreed to the ads as the rest was under embargo.”

Ah. But that’s ok, if he agrees with the report as it’s been released, right? Uhh. About that, from a Fairfax piece on Tuesday.

“The man appearing on television screens across the country promoting the Abbott government’s Intergenerational Report – science broadcaster Karl Kruszelnicki – has hardened his stance against the document, describing it as “flawed” and admitting to concerns that it was “fiddled with” by the government.” (emphasis mine, again)

Dr Karl now has concerns over the independence of the report (he now sees it as a primarily political document) and much of its content. Ok, now we have a problem. He’s become the public face of a report that he didn’t read and that he took, very naïvely, on faith that the parts he hadn’t seen would (somewhat miraculously) reverse the well-known direction of the current government on climate change. But it’s not as if he just took money to front something he didn’t read, is it? Oh. He hasn’t been paid for it yet but this was a paid gig. Obviously, the first thing to do is to not take the money, if you’re unhappy with the report, right? Urm. From the SMH link above:

“What have I done wrong?” he told Fairfax Media. “As far as I’m concerned I was hired to bring the public’s attention to the report. People have heard about this one where they hadn’t heard about IGR one, two or three.”

But then public reaction on Twitter and social media started to rise and, last night, this was released on his Twitter account:

“I have decided to donate any moneys received from the IGR campaign to needy government schools. More to follow tomorrow. Dr Karl.”

This is a really sad day for science communication in Australia. As far as I know, the campaign is still running and, while you can easily find the criticism of the report if you go looking for it, a number of highly visible media sites are not showing anything that would lead readers to think that anyone had any issues with the report. So, while Dr Karl is regretting his involvement, it continues to persuade people around Australia that this friendly, trusted and respected face of science communication is associated with the report. Giving the money away (finally) to needy schools does not fix this. But let me answer Dr Karl’s question from above, “What have I done wrong?” With the deepest regret, I can tell you that what you have done wrong is:

  • You presented a report with your endorsement without reading it,
  • You arranged to get paid to do this,
  • You consumed the goodwill and trust of the identity that you have formed over decades of positive scientific support to do this,
  • You assisted the on-going anti-scientific endeavours of a government that has a bad track record in this area, and
  • You did not immediately realise that what you had done was a complete failure of the implicit compact that you had established with the Australian public as a trusted voice of rationality, skepticism and science.

We all make mistakes and scientists do it all the time, on outdated, inaccurate, misreported and compromised data. But the moment we know that the data is wrong, we have to find the truth and come up with new approaches. It is a serious breach of professional scientific ethics to present something that you cannot vouch for through established soundness of reputation (a ‘good’ journal), trusted reference (an established scientific expert) or personal investigation (your own research). And that’s purely in papers or at conferences, where you are looking at a relatively small issue of personal reputation or paper citations. To stand up on the national stage, for money, and to effectively say “This is Dr Karl approved”, when you have not read it, and then to initially hide behind a “but I was only presenting it” defence is to take an error of judgement and compound it into an offence that would potentially strip some scientists of their PhDs.

It’s great that Dr Karl is now giving the money away but the use of his image, with his complicity, has done its damage. There are people around Australia who will think that a report that is heavily politicised and has a great deal of questionable science in it is now scientific because Dr Karl stood there. We can’t undo that easily.

I don’t know what happens now. Dr Karl is a broadcaster but he does have a University appointment. I’ll be interested to see if his own University convenes an ethics and professional practice case against him. Maybe it will fade away, in time. Dr Karl has done so much for science in Australia that I hope that he has a good future after this but he has to come to terms with the fact that he has made the most serious error a scientist can make and I think he has a great deal of thinking to do as to what he wants to do next. Taking money from a till and then giving it back when you get caught doesn’t change the fact of what you did to get it – neither does agreeing to take cash to shill a report that you haven’t read.

But maybe I am being too harsh? The fellowship Dr Karl holds is named after Dr Julius Sumner Miller, a highly respected scientific communicator, who then used his high profile to sell chocolate in his final years (much to my Grandfather’s horror). Because nothing says scientific integrity like flogging chocolate for hard cash. Perhaps the fellowship has some sort of commercialising curse?

Actually, I don’t think I’m being too harsh, I wish him the best but I think he’s put himself into the most awful situation. I know what I would do but, then, I hope I would have seen that, just possibly, the least scientifically focused government in Australia might not be able to be trusted with an unseen report. If he had seen the report and then they had changed it all, that’s a scandal and he is a hero of scientific exposure. But that’s not the case. It’s a terribly sad day and a sad story for someone who, up until now, I had the highest respect for.


I want you to be sad. I want you to be angry. I want you to understand.

Nick Falkner is an Australian academic with a pretty interesting career path. He is also a culture sponge and, given that he’s (very happily) never going to be famous enough for a magazine style interview, he interviews himself on one of the more confronting images to come across the wires recently. For clarity, he’s the Interviewer (I) when he’s asking the questions.

The photo of Indian parents helping their children to "cheat" (Original: STRDEL/AFP/GETTY IMAGES)


Interviewer: As you said, “I want you to be sad. I want you to be angry. I want you to understand.” I’ve looked at the picture and I can see a lot of people who are being associated with a mass cheating scandal in the Indian state of Bihar. This appears to be a systematic problem, especially as even more cheating has been exposed in the testing for the police force! I think most people would agree that it’s certainly a sad state of affairs and there are a lot of people I’ve heard speaking who are angry about this level of cheating – does this mean I understand? Basically, cheating is wrong?

Nick: No. What’s saddening me is that most of the reaction I’ve seen to this picture is lacking context, lacking compassion and, worse, perpetrating some of the worst victim blaming I’ve ever seen. I’m angry because people still don’t get that the system in place in Bihar, and wherever else we put systems like this, is going to lead to behaviour like this out of love and a desire for children to have opportunity, rather than some grand criminal scheme of petty advancement.

Interviewer: Well, ok, that’s a pretty strong set of statements. Does this mean that you think cheating is ok?

Nick: (laughs) Well, we’ve got the most usual response out of the way. No, I don’t support “cheating” in any educational activity because it means that the student is bypassing the learning design and, if we’ve done our job, this will be to their detriment. However, I also strongly believe that some approaches to large-scale education naturally lead to a range of behaviours where external factors can affect the perceived educational benefit to the student. In other words, I don’t want students to cheat but I know that we sometimes set things up so that cheating becomes a rational response and, in some cases, the only difference between a legitimate advantage and “cheating” is determined by privilege, access to funds and precedent.

Interviewer: Those are big claims. And you know what that means…

Nick: You want evidence! Ok. Let’s start with some context. Bihar is the third-largest state in India by population, with over 100 million people, the highest density of population in India, the largest number of people under 25 (nearly 60%), a heavily rural population (~85%) and a literacy rate around 64%. Bihar is growing very quickly but has put major work into its educational systems. From 2001 to 2011, literacy jumped from 48 to 64%, with women’s literacy alone rising by around 20 percentage points.

If we took India out of the measurement, Bihar is in the top 11 countries in the world by population. And it’s accelerating in growth. At the same time, Bihar has lagged behind other Indian states in socio-economic development (for a range of reasons – it’s very … complicated). Historically, Bihar has been a seat of learning but recent actions, including losing an engineering college in 2000 due to boundary re-alignment, means that they are rebuilding themselves right now. At the same time, Bihar has a relatively low level of industrialisation by Indian standards although it’s redefining itself away from agriculture to services and industry at the moment, with some good economic growth. There are some really interesting projects on the horizon – the Indian Media Hub, IT centres and so on – which may bring a lot more money into the region.

Interviewer: Ok, Bihar is big, relatively poor … and?

Nick: And that’s the point. Bihar is full of people, not all of whom are literate, and many of whom still live in Indian agricultural conditions. The future is brightening for Bihar but if you want to be able to take advantage of that, then you’re going to have to be able to get into the educational system in the first place. That exam that the parents are “helping” their children with is one that is going to have an almost incomprehensibly large impact on their future…

Interviewer: Their future employment?

Nick:  Not just that! This will have an impact on whether they live in a house with 24 hour power. On whether they will have an inside toilet. On whether they will be able to afford good medicine when they or their family get sick. On whether they will be able to support their parents when they get old. On how good the water they drink is. As well, yes, it will help them to get into a University system where, unfortunately, existing corruption means that money can smooth a path where actual ability has led to a rockier road. The article I’ve just linked to mentions pro-cheating rallies in Uttar Pradesh in the early 90s but we’ve seen similar arguments coming from areas where rote learning, corruption and mass learning systems are thrown together and the students become grist to a very hard stone mill.  And, by similar arguments, I mean pro-cheating riots in China in 2013. Student assessment on the massive scale. Rote learning. “Perfect answers” corresponding to demonstrating knowledge. Bribery and corruption in some areas. Angry parents because they know that their children are being disadvantaged while everyone else is cheating. Same problem. Same response.

Interviewer: Well, it’s pretty sad that those countries…

Nick: I’m going to stop you there. Every time that we’ve forced students to resort to rote learning and “perfect answer” memorisation to achieve good outcomes, we’ve constructed an environment where carrying in notes, or having someone read you answers over a wireless link, suddenly becomes a way to successfully reach that outcome. The fact that this is widely used in the two countries that have close to 50% of the world’s population is a reflection of the problem of education at scale. Are you volunteering to sit down and read the 50 million free-form student essays that are produced every year in China under a fairer system? The US approach to standardised testing isn’t any more flexible. Here’s a great article on what’s wrong with the US approach because it identifies that these tests are good for measuring conformity to the test and its protocol, not the quality of any education received or the student’s actual abilities. But before we get too carried away about which countries cheat most, here are some American high school students sharing answers on Twitter.

Every time someone talks about the origin of a student, rather than the system that a student was trained under, we start to drift towards a racist mode of thinking that doesn’t help. Similar large-scale, unimaginative, conform-or-perish tests that you have to specifically study for across India, China and the US. What do we see? No real measurement of achievement or aptitude. Cheating. But let’s go back to India because the scale of the number of people involved really makes the high stakes nature of these exams even more obvious. Blow your SATs or your GREs and you can still do OK, if possibly not really well, in the US. In India… let’s have a look.

State Bank of India advertised some entry-level vacancies back in 2013. They wanted 1,500 people. 17 million applied. That’s roughly the adult population of Australia applying for some menial work at the bank. You’ve got people who are desperate to work, desperate to do something with their lives. We often think of cheats as being lazy or deceitful when it’s quite possible to construct a society so that cheating is part of a wider spectrum of behaviour that helps you achieve your goals. Performing well in exams in India and China is a matter of survival when you’re talking about those kinds of odds, not whether you get a great or an ok job.
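To give a rough sense of that scale, here is a back-of-the-envelope sketch using only the two figures quoted above (1,500 vacancies, 17 million applicants); the numbers are the article’s, not independently verified:

```python
# Back-of-the-envelope odds for the 2013 State Bank of India recruitment
# round described above: 1,500 entry-level vacancies, 17 million applicants.
vacancies = 1_500
applicants = 17_000_000

acceptance_rate = vacancies / applicants        # fraction of applicants hired
applicants_per_vacancy = applicants / vacancies # competition per position

print(f"Acceptance rate: {acceptance_rate:.4%}")               # 0.0088%
print(f"About 1 in {applicants_per_vacancy:,.0f} applicants")  # about 1 in 11,333
```

At roughly one success per eleven thousand applicants, for entry-level bank work, an exam mark stops being a ranking exercise and becomes a lottery with life-changing stakes.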

Interviewer: You’d use a similar approach to discuss the cheating on the police exam?

Nick: Yes. It’s still something that shouldn’t be happening but the police force is a career and, rather sadly, can also be a lucrative source of alternative income in some countries. It makes sense that this is also something that people consider to be very, very high stakes. I’d put money on similar things happening in countries where obtaining a driving licence is a high-stakes activity. (Oh, good, I just won money.)

Interviewer: So why do you want us to be sad?

Nick: I don’t actually want people to be sad, I’d much prefer it if we didn’t need to have this discussion. But, in a nutshell, every parent in that picture is actually demonstrating their love and support for their children and family. That’s what the human tragedy is here. These Biharis probably don’t have the connections or money to bypass the usual constraints so the best hope that their kids have is for their parents to risk their lives climbing walls to slip them notes.

I mean, everyone loves their kids. And, really, even those of us without children would be stupid not to realise that all children are our children in many ways, because they are the future. I know a lot of parents who saw this picture and they didn’t judge the people on the walls because they could see themselves there once they thought about it.

But it’s tragic. When the best thing you can do for your child is to help them cheat on an exam that controls their future? How sad is that?

Interviewer: Do you need us to be angry? Reading back, it sounds like you have enough anger for all of us.

Nick: I’m angry because we keep putting these systems in place despite knowing that they’re rubbish. Rousseau knew it hundreds of years ago. Dewey knew it in the 1930s. We keep pretending that exams like this sort people on merit when all of our data tells us that the best indicator of performance is the socioeconomic status of the parents, rather than which school they go to. But, of course, choosing a school is a kind of “legal” cheating anyway.

Interviewer: Ok, now there’s a controversial claim.

Nick: Not really. Studies show us that students at private schools tend to get higher University entry marks, which is the gateway to getting into courses and also means that they’ve completed their studies. Of course, the public school students who do get in go on to get higher GPAs… (This article contains the data.)

Interviewer: So it all evens out?

Nick: (laughs) No, but I have heard people say that. Basically, sending your kids to a “better” school, one of the private schools or one of the high-performing publics, especially those that offer International Baccalaureate, is not going to hurt your child’s chances of getting a good Tertiary entry mark. But, of course, the amount of money required to go to a private school is not… small… and the districting of public schools means that you have to be in the catchment to get one of these more desirable schools. And, strangely enough, once you factor in the socio-economic factors and outlook for a school district, it’s amazing how often the high-performing schools map onto higher SEF areas. Not all of them, and there are some magnificent efforts in innovative and aggressive intervention in South Australia alone, but even these schools have limited spaces and depend upon the primary feeder schools. Which school you go to matters. It shouldn’t. But it does.

So, you could bribe someone to make the exam easier or you could pay up to AUD $24,160 in school fees every year to put your child into a better environment. You could go to your local public school or, if you can manage the difficulty and cost of upheaval, you could relocate to a new suburb to get into a “better” public school. Is that fair to the people that your child is competing against to get into limited places at University if they can’t afford that much or can’t move? That $24,000 figure is from this year’s fees for one of South Australia’s most highly respected private schools. That figure is 10 times the nominal median Indian income and roughly the same as an experienced University graduate would make in Bihar each year. In Australia, the 2013 median household income was about twice that figure, before tax. So you can probably estimate how many Australian families could afford to put one, or more, children through that kind of schooling for 5-12 years and it’s not a big number.

The Biharis in the picture don’t have a better option. They don’t have the money to bribe or the ability to move. Do you know how I know? Because they are hanging precariously from a wall trying to help their children copy out a piece of information in perfect form in order to get an arbitrary score that could add 20 years to their lifespan and save their own children from dying of cholera or being poisoned by contaminated water.

Some countries, incredibly successful education stories like Finland (seriously, just Google “Finland Educational System” and prepare to have your mind blown), take the approach that every school should be excellent, every student is valuable, every teacher is a precious resource and worthy of respect and investment and, for me, these approaches are the only way to actually produce a fair system. Excellence in education that is only available to the few makes everyone corrupt, to a greater or lesser degree, whether they realise it or not.

So I’m angry because we know exactly what happens with high stakes exams like this and I want everyone to be angry because we are making ourselves do some really awful things to ourselves by constantly bending to conform to systems like this. But I want people to be angry because the parents in the picture have a choice of “doing the right thing” and watching their children suffer, or “doing the wrong thing” and getting pilloried by a large and judgemental privileged group on the Internet. You love your kids. They love their kids. We should all be angry that these people are having to scramble for crumbs at such incredibly high stakes.

But demanding that the Indian government do something is hypocritical while we use similar systems and we have the ability to let money and mobility influence the outcome for students at the expense of other students. Go and ask Finland what they do, because they’re happy to tell you how they fixed things but people don’t seem to want to actually do most of the things that they have done.

Interviewer:  We’ve been talking for a while so we had better wrap up. What do you want people to understand?

Nick: What I always want people to understand – I want them to understand “why”. I want them to be able to think about and discuss why these images from a collapsing educational system are so sad. I want them to understand why our system is really no better. I want them to think about why struggling students do careless, thoughtless and, by our standards, unethical things when they see all the ways that other people are sliding by in the system or we don’t go to the trouble to construct assessment that actually rewards creative and innovative approaches.

I want people to understand that educational systems can be hard to get right but it is both possible and essential. It takes investment, it takes innovation, it takes support, it takes recognition and it takes respect. Why aren’t we doing this? Delaying investment will only make the problem harder!

Really, I want people to understand that we would have to do a very large amount of house cleaning before we could have the audacity to criticise the people in that photo and, even then, it would be an action lacking in decency and empathy.

We have never seen enough of a level playing field to make a meritocratic argument work because of ingrained privilege and disparity in opportunity.

Interviewer: So, basically, everything most people think about how education and exams work is wrong? There are examples of a fairer system but most of us never see it?

Nick: Pretty much. But I have hope. I don’t want people to stay sad or angry, I want those to ignite the next stages of action. Understanding, passion and action can change the world.

Interviewer: And that’s all we have time for. Thank you, Nick Falkner!


Large Scale Authenticity: What I Learned About MOOCs from Reality Television

The development of social media platforms has allowed us to exchange information and, well, rubbish very easily. Whether it’s the discussion component of a learning management system, Twitter, Facebook, Tumblr, Snapchat or whatever will be the next big thing, we can now chat to each other in real time very, very easily.

One of the problems with any on-line course is trying to maintain a community across people who are not in the same timezone, country or context. What we’d really like is for the community communication to come from the students, with guidance and scaffolding from the lecturing staff, but sometimes there’s priming, leading and… prodding. These “other” messages have to be carefully crafted and they have to connect with the students or they risk being worse than no message at all. As an example, I signed up for an on-line course and then wasn’t able to do much in the first week. I was sitting down to work on it over the weekend when a mail message came in from the organisers on the community board congratulating me on my excellent progress on things I hadn’t done. (This wasn’t isolated. The next year, despite not having signed up, the same course sent me even more congratulations on totally non-existent progress.) This sends the usual clear messages that we expect from false praise and inauthentic communication: the student doesn’t believe that you know them, they don’t feel part of an authentic community and they may disengage. We have, very effectively, sabotaged everything that we actually wanted to build.

Let’s change focus. For a while, I was watching a show called “My Kitchen Rules” on local television. It pretends to be about cooking (with competitive scoring) but it’s really about flogging products from a certain supermarket while delivering false drama in the presence of dangerously orange chefs. An engineered activity to ensure that you replace an authentic experience with consumerism and commodities? Paging Guy Debord on the Situationist courtesy phone: we have a Spectacle in progress. What makes the show interesting is the associated Twitter feed, where large numbers of people drop in on the #mkr hashtag to talk about the food, discuss the false drama, exchange jokes and develop community memes, such as sharing pet pictures with each other over the (many) ad breaks. It’s a community. Not everyone is there for the same reasons: some are there to be rude about people, some are actually there for the cooking (??) and some are… confused. But the involvement in the conversation, interplay and development of a shared reality is very real.

Chef is not quite this orange. Not quite.

And this would all be great except for one thing: Australia is a big country and spans a lot of timezones. My Kitchen Rules is broadcast at 7:30pm, starting in Melbourne, Canberra, Tasmania and Sydney, then 30 minutes later in Adelaide, then 30 minutes later again in Queensland (they don’t do daylight savings), then later again for Perth. So now we have four different time groups to manage, all watching the same show.

But the Twitter feed starts at the first time point: Adelaide picks up discussions from the middle of the show as they’re starting, and then gets discussions on scores as the first half completes for them… and this is repeated for Queensland viewers and then for Perth. Now, in the community itself, people go on and off the feed as their version of the show starts and stops and, personally, I don’t find score discussions very distracting because I’m far more interested in the Situation being created in the Twitter stream.

Enter the “false tweets” of the official MKR Social Media team who ask questions that only make sense in the leading timezone. Suddenly, everyone who is not quite at the same point is then reminded that we are not in the same place. What does everyone think of the scores? I don’t know, we haven’t seen it yet. What’s worse are the relatively lame questions that are being asked in the middle of an actual discussion that smell of sponsorship involvement or an attempt to produce the small number of “acceptable” tweets that are then shared back on the TV screen for non-connected viewers. That’s another thing – everyone outside of the first timezone has very little chance of getting their Tweet displayed. Imagine if you ran a global MOOC where only the work of the students in San Francisco got put up as an example of good work!

This is a great example of an attempt to communicate that fails dismally because it doesn’t take into account how people are using the communications channel, isn’t inclusive (dismally so) and constantly reminds people who don’t live in a certain area that they really aren’t being considered by the program’s producers.

You know what would fix it? Putting it on at the same time everywhere but that, of course, is tricky because of the way that advertising is sold and also because it would force poor Perth to start watching dinner television just after work!

But this is a very important warning of what happens when you don’t think about how you’ve combined the elements of your environment. It’s difficult to do properly but it’s terrible when done badly. And I don’t need to go and enrol in a course to show you this – I can just watch a rather silly cooking show.