Collaboration and community are beautiful

There are many lessons to be learned from what is going on in the MOOC sector. The first is that we have a lot to learn, even for those of us who are committed to doing it ‘properly’, whatever that means. I’m not trying to convince you of “MOOC yes” or “MOOC no”. We can have that argument some other time. I’m talking about what we already know from using these tools.

We’ve learned (again) that producing a broadcast video set of boring people reading the book at you in a monotone is, amazingly, not effective, no matter how fancy the platform. We know that MOOCs are predominantly taken by people who have already ‘succeeded’ at learning, often despite our educational system, and are thus not as likely to have an impact in traditionally disadvantaged areas, especially without an existing learning community and culture. (No references, you can Google all of this easily.)

We know that online communities can and do form. OK, it’s not the same as twenty people in a room with you, but our own work in this space confirms that students can experience a genuine feeling of belonging, facilitated through course design and forum interaction.

“Really?” you ask.

In a MOOC we ran with over 25,000 students, a student wrote a thank-you note to us at the top of his code for the final assignment. He had moved from non-coder to coder with us and had created some beautiful things. He left a note in his code because he thought that someone would read it. And we did. There is evidence of this everywhere in the forums and their code. No, we don’t have a face-to-face relationship. But we made them feel something and, from what we’ve seen so far, it doesn’t appear to be a bad something.

But we, as in the wider on-line community, have learned something else that is very important. Students in MOOCs often set their own expectations of achievement. They come in, find what they’re after, and leave, much like they are asking a question on Quora or StackExchange. Much like you check reviews on-line before you start watching a show, or download one or two episodes to try it out. You know, 21st Century life.

Once you see that self-defined achievement and engagement, a lot of things about MOOCs, including drop rates and strange progression, suddenly make sense. As does the realisation that this is a total change from what we have accepted for centuries as desirable behaviour. This is something that we are going to have a lot of trouble fitting into our existing system. It also indicates how much work we’re going to have to do in order to bring in traditionally disadvantaged communities, first-in-family and any other under-represented group. Because they may still believe that we’re offering Perry’s nightmare in on-line form: serried ranks with computers screaming facts at you.

We offer our students a lot of choice but, as Universities, we mostly work on the idea of ‘follow this program to achieve this qualification’. Despite notionally being in the business of knowledge for the sake of knowledge, our non-award and ‘not for credit’ courses are dwarfed in enrolments by the ‘follow the track, get a prize’ streams. And that, of course, is where the diminishing bags of dollars come from. That’s why retention is such a hot-button issue: even 1% more retained students is worth millions to most Universities. A hunt and peck community? We don’t even know what retention looks like in that context.

Pretending that this isn’t happening is ignoring evidence. It’s self-deceptive, disingenuous, hypocritical (for we are supposed to be the evidence junkies) and, once again, we have a failure of educational aesthetics. Giving people what they don’t want isn’t good. Pretending that they just don’t know what’s good for them is really not being truthful. That’s three Socratic strikes: you’re out.

[Image: The Eiffel Tower, Paris, at night, struck at the apex by three bolts of lightning simultaneously.]

We had better be ready to redirect that energy or explode.

We have a message from our learning community. They want some control. We have to be aware that, if we really want them to do something, they have to feel that it’s necessary. (So much research supports this.) By letting them run around in the MOOC space, artificial and heavily instrumented, we can finally see what they’re up to without having to follow them around with clipboards. We see them on the massive scale, individuals and aggregates. Remember, on average these are graduates; these are students who have already been through our machine and come out. These are the last people, if we’ve convinced them of the rightness of our structure, who should be rocking the boat and wanting to try something different. Unless, of course, we haven’t quite been meeting their true needs all these years.

I often say that the problem we have with MOOC enrolments is that we can see all of them. There is no ‘peeking around the door’ in a MOOC. You’re in or you’re out, because you have to sign up to get access or updates.

If we were collaborating with all of our students to produce learning materials and structures, not just the subset who go into MOOCs, I wonder what we would end up turning out? We still need to apply our knowledge of pedagogy and psychology, of course, to temper desire with what works but I suspect that we should be collaborating with our learner community in a far more open way. Everywhere else, technology is changing the relationship between supplier and consumer. Name any other industry and we can probably find a new model where consumers get more choice, more knowledge and more power.

No-one (sensible) is saying we should raze the Universities overnight. I keep being told that allowing more student control is going to lead to terrible things but, frankly, I don’t believe it and I don’t think we have enough evidence to stop us from at least exploring this path. I think it’s scary, yes. I think it’s going to challenge how we think about tertiary education, absolutely. I also think that we need to work out how we can bring together the best of face-to-face with the best of on-line, for the most people, in the most educationally beautiful way. Because anything else just isn’t that beautiful.


Updated previous post: @WalidahImarisha #worldcon #sasquan

Walidah Imarisha very generously continued the discussion of my last piece with me on Twitter and I have updated that piece to include her thoughts and to provide vital additional discussion. As always, don’t read me talking about things when you can read the words of the people who are out there fixing, changing the narrative, fighting and winning.

Thank you, Walidah!


The Only Way Forward is With No Names @iamajanibrown @WalidahImarisha #afrofuturism #worldcon #sasquan

Edit: Walidah Imarisha and I had a discussion in Twitter after I released this piece and I wanted to add her thoughts and part of our discussion. I’ve added it to the end so that you’ll have context but I mention it here because her thoughts are the ones that you must read before you leave this piece. Never listen to me when you can be listening to the people who are living this and fighting it.

I’m currently at the World Science Fiction Convention in Spokane, Washington state. As always, my focus is education and (no surprise to long term readers) equity. I’ve had the opportunity to attend some amazing panels. One was on the experience of women in the art, publishing and production of female characters for video gaming. Others discussed issues such as non-white presence in fiction (#AfroFuturism with Professor Ajani Brown), the changes between the Marvel Universe in film and comic form, and how we can use Science Fiction & Fantasy in the classroom to address social issues without having to directly engage the (often depressing) news sources. Both of the latter panels were excellent and, in the Marvel one, with Tom Smith, Annalee Flower Horne, Cassandra Rose Clarke and Professor Brown, there was a lot of discussion of the new Afro-American characters in movies and TV (Deathlok, Storm and Falcon), as well as how much they had changed from the comics.

I’m going to discuss what I saw and lead towards my point: that all assessment of work for its publishing potential should, where it is possible and sensible, be carried out blind, without knowledge of who wrote it.

I’ve written on this before, both here (where I argue that current publishing may not be doing what we want for the long term benefit of the community and the publishers themselves) and here, where we identify that systematic biases against people who are not western men are rampant and apparently almost inescapable as long as we can see a female name. Very recently, this Jezebel article identified that changing the author’s name on a manuscript, from female to male, not only improved the response rate and reduced the waiting time, it changed the type of feedback given. The woman’s characters were “feisty”, the man’s weren’t. Same characters. It doesn’t matter if you think you’re being sexist or not, it doesn’t even matter (from the PNAS study in the second link) if you’re a man or a woman: the presence of a female name changes the level of respect attached to a work and also the level of reward/appreciation offered in an assessment process. There are similar works that clearly identify that this problem is even worse for People of Colour. (Look up Intersectionality if you don’t know what I’m talking about.) I’m not saying that all of these people are trying to discriminate but the evidence we have says that the social conditioning that leads to sexism is powerful and dominating.

Now let’s get back to the panels. The first panel was “Female Characters in Video Games”, with Andrea Stewart, Maurine Starkey, Annalee Flower Horne, Lauren Roy and Tanglwyst de Holloway. While discussing the growing market for female characters, the panel identified the ongoing problems and discrimination against women in the industry. 22% of professionals in the field are women, which sounds awful until you realise that this figure was 11% in 2009. However, Maurine had had her artwork recognised as “great” when someone thought her work was a man’s, and as “oh, drawn like a woman” when the true owner was revealed. And this is someone being explicit. The message of the panel was very positive: things are getting better. However, it was obvious that knowing someone was a woman changed how people valued their work, or even how their activities were described. “Casual gaming” is often a term that describes what women do; if women take up a gaming platform (and they are a huge portion of the market), then it often gets labelled “casual gaming”.

So, point 1, assessing work at a professional level is apparently hard to do objectively when we know the gender of people. Moving on.

The first panel on Friday dealt with AfroFuturism, which looks at the long-standing philosophical and artistic expression of alternative realities relating to people of African descent. This can be traced from the Egyptian origins of mystic and astrological architecture and religions, through the tribal dances and mask ceremonies of other parts of Africa, to the P.Funk mothership and science-fiction works published in the middle of vinyl albums. There are strong notions of carving out or refining identity in order to break oppressive narratives and re-establish agency. AfroFuturism looks into creating new futures and narratives, also allowing for reinvention to escape the past, which is a powerful tool for liberation. People can be put into boxes and they want to break out to liberate themselves and, too often, if we know that someone can be put into a box then we have a nasty tendency (implicit cognitive bias) to jam them back in. No wonder AfroFuturism is seen as a powerful force: it is an assault on the whole mean, racist narrative that does things like call groups of white people “protesters” or “concerned citizens”, and groups of black people “rioters”.

(If you follow me on Twitter, you’ve seen a fair bit of this. If you’re not following me on Twitter, @nickfalkner is the way to go.)

So point 2, if we know someone’s race, then we are more likely to enforce a narrative that is stereotypical and oppressive when we are outside of their culture. Writers inside the culture can write to liberate and to redefine identity and this probably means we need to see more of this.

I want to focus on the final panel, “Saving the World through Science Fiction: SF in the Classroom”, with Ben Cartwright, Ajani Brown (again!), Walidah Imarisha and Charlotte Lewis Brown. There are many issues facing our students on a day-to-day basis and it can be very hard to engage with some of them because it is confronting to have to address your own biases when you talk about the real world. But you can talk about racism with aliens, xenophobia with a planetary invasion, the horrors of war with apocalyptic fiction… and it’s not the nightly news. People can confront their biases without confronting them. That’s a very powerful technique for changing the world. It’s awesome.

Point 3, then, is that narratives are important: with careful framing, we can step away from the sheer weight of our biases and reframe a discussion to talk about very complicated things, without having to resort to violence or conflict. This reinforces Point 2: we need more stories from other viewpoints to allow us to think about important issues.

We are a narrative and a mythic species: storytelling allows us to explain our universe. Storytelling defines our universe, whether it’s religion, notions of family or sense of state.

What I take from all of these panels is that many of the stories that we want to be reading, that are necessary for the healing and strengthening of our society, should be coming from groups who are traditionally not proportionally represented: women, People of Colour, Women of Colour, basically anyone who isn’t recognised as a white man in the Western Tradition. This isn’t to say that everything has to be one form but, instead, that we should be putting systems in place to get the best stories from as wide a range as possible, in order to let SF&F educate, change and grow the world. This doesn’t even touch on the Matthew Effect, where we are more likely to positively value a work if we have an existing positive relationship with the author, even if said work is not actually very good.

And this is why, with all of the evidence we have on cognitive biases changing the way people value work based on the author’s name, the most likely approach to improving the range of stories that we end up publishing is to judge as many works as we can without knowing who wrote them. If we wanted to take it further, we could even ask people to briefly explain why they did or didn’t like a piece. The comments on the Jezebel author’s book make it clear that, with those comments, we can clearly identify a bias in play. “It’s not for us” and things like that are not sufficiently transparent for us to see if the system is working. (Apologies to the hard-working editors out there, I know this is a big demand. Anonymity is a great start. 🙂 )

Now, for some books/works, you have to know who wrote them; my textbook, for example, depends upon my academic credentials and my published work, hence my identity is a part of the validity of the academic work. But for short fiction, for books? Perhaps it’s time to look at all of the evidence, and at all of the efforts to widen the range of voices we hear, and consider a commitment to anonymous review so that SF&F will be a powerful force for thought and change in the decades to come.

Thank you to all of the amazing panellists. You made everyone think and sent out powerful and positive messages. Thank you, so much!

Edit: As mentioned above, Walidah and I had a discussion that extended from this on Twitter. Walidah’s point was about changing the system so that we no longer have to hide identity to eliminate bias, and I totally agree with this. Our goal has to be to create a space where bias no longer exists, where the assumption that the dominant position in the hierarchy is white, cis, straight and male is no longer the default. Also, while SF&F is a great tool, it does not replace having the necessary and actual conversations about oppression. Our goal should never be to erase people of colour and replace them with aliens and dwarves just because white people don’t want to talk about race. While narrative engineering can work, many people do not transfer the knowledge from analogy to reality, and this is why these authentic discussions of real situations must also exist. When we sit purely in analogy, we risk reinforcing inequality if we don’t tie it back down to Earth.

I am still trying to attack a biased system to widen the narrative to allow more space for other voices but, as Walidah notes, this is catering to the privileged, rather than empowering the oppressed to speak their stories. And, of course, talking about oppression leads those on top of the hierarchy to assume you are oppressed. Walidah mentioned Katherine Burdekin & Swastika Nights as part of this. Our goal must be to remove bias. What I spoke about above is one way but it is very much born of the privileged and we cannot lose sight of the necessity of empowerment and a constant commitment to ensuring the visibility of other voices and hearing the stories of the oppressed from them, not passed through white academics like me.

Seriously, if you can read me OR someone else who has a more authentic connection? Please read that someone else.

Walidah’s recent work includes, with adrienne maree brown, editing the book of 20 short stories I have winging its way to me as we speak, “Octavia’s Brood: Science Fiction Stories from Social Justice Movements” and I am so grateful that she took the time to respond to this post and help me (I hope) to make it stronger.


Designing a MOOC: how far did it reach? #csed

Mark Guzdial posted over on his blog on “Moving Beyond MOOCS: Could we move to understanding learning and teaching?” and discusses aspects (that still linger) of MOOC hype. (I’ve spoken about MOOCs done badly before, as well as recording the thoughts of people like Hugh Davis from Southampton.) One of Mark’s paragraphs reads:

“The value of being in the front row of a class is that you talk with the teacher.  Getting physically closer to the lecturer doesn’t improve learning.  Engagement improves learning.  A MOOC puts everyone at the back of the class, listening only and doing the homework”

My reply to this was:

“You can probably guess that I have two responses here, the first is that the front row is not available to many in the real world in the first place, with the second being that, for far too many people, any seat in the classroom is better than none.

But I am involved in a, for us, large MOOC so my responses have to be regarded in that light. Thanks for the post!”

Mark, of course, called my bluff and responded with:

“Nick, I know that you know the literature in this space, and care about design and assessment. Can you say something about how you designed your MOOC to reach those who would not otherwise get access to formal educational opportunities? And since your MOOC has started, do you know yet if you achieved that goal — are you reaching people who would not otherwise get access?”

So here is that response. Thanks for the nudge, Mark! The answer is a bit long but please bear with me. We will be posting a longer summary after the course is completed, in a month or so. Consider this the unedited taster. I’m putting this here, early, prior to the detailed statistical work, so you can see where we are. All the numbers below are fresh off the system, to drive discussion and answering Mark’s question at, pretty much, a conceptual level.

First up, as some background for everyone, the MOOC team I’m working with is the University of Adelaide‘s Computer Science Education Research group, led by A/Prof Katrina Falkner, with me (Dr Nick Falkner), Dr Rebecca Vivian, and Dr Claudia Szabo.

I’ll start by noting that we’ve been working to solve the inherent scaling issues in the front of the classroom for some time. If I had a class of 12 then there’s no problem in engaging with everyone, but I keep finding myself in rooms of 100+, which forces some people to sit away from me and also limits the number of meaningful interactions I can have with individuals in one setting. While I take Mark’s point about the front of the classroom, and the associated research is pretty solid on this, we encountered an inherent problem when we identified that students were better off down the front… and yet we kept teaching to rooms with more students than front. I’ll go out on a limb and say that this is actually a moral issue that we, as a sector, have had to look at and ignore in the face of constrained resources. The nature of large spaces and people, coupled with our inability to hover, means that we can either choose to have a row of students effectively in a semi-circle facing us, or we accept that after a relatively small number of students or number of rows, we have constructed a space that is inherently divided by privilege and will lead to disengagement.

So, Katrina’s and my first foray into this space was dealing with the problem in the physical lecture spaces that we had, with the 100+ classes that we had.

Katrina and I published a paper on “contributing student pedagogy” in Computer Science Education 22 (4), 2012, to identify ways for forming valued small collaboration groups as a way to promote engagement and drive skill development. Ultimately, by reducing the class to a smaller number of clusters and making those clusters pedagogically useful, I can then bring the ‘front of the class’-like experience to every group I speak to. We have given talks and applied sessions on this, including a special session at SIGCSE, because we think it’s a useful technique that reduces the amount of ‘front privilege’ while extending the amount of ‘front benefit’. (Read the paper for actual detail – I am skimping on summary here.)

We then got involved in the support of the national Digital Technologies curriculum for primary and middle school teachers across Australia, after being invited to produce a support MOOC (really a SPOC, small, private, on-line course) by Google. The target learners were teachers who were about to teach or who were teaching into, initially, Foundation to Year 6 and thus had degrees but potentially no experience in this area. (I’ve written about this before and you can find more detail on this here, where I also thanked my previous teachers!)

The motivation of this group of learners was different from a traditional MOOC because (a) everyone had both a degree and probable employment in the sector which reduced opportunistic registration to a large extent and (b) Australian teachers are required to have a certain number of professional development (PD) hours a year. Through a number of discussions across the key groups, we had our course recognised as PD and this meant that doing our course was considered to be valuable although almost all of the teachers we spoke to were furiously keen for this information anyway and my belief is that the PD was very much ‘icing’ rather than ‘cake’. (Thank you again to all of the teachers who have spent time taking our course – we really hope it’s been useful.)

To discuss access and reach, we can measure teachers who’ve taken the course (somewhere in the low thousands) and then estimate the number of students potentially assisted and that’s when it gets a little crazy, because that’s somewhere around 30-40,000.

In his talk at CSEDU 2014, Hugh Davis identified the student groups who get involved in MOOCs as follows. The majority of people undertaking MOOCs were life-long learners (older, degreed, M/F 50/50), people seeking skills via PD, and those with poor access to Higher Ed. There is also a group of Uni ‘tasters’, but it is very, very small. (I think we can agree that tasting a MOOC is not tasting a campus-based Uni experience. Less ivy, for starters.) The three approaches to the course once inside were auditing, completing and sampling, and it’s this final one that I want to emphasise because it brings us to one of the differences of MOOCs. We are not in control of when people decide that they are satisfied with the free education that they are accessing, unlike our strong gatekeeping on traditional courses.

I am in total agreement that a MOOC is not the same as a classroom but, also, that it is not the same as a traditional course, where we define how the student will achieve their goals and how they will know when they have completed. MOOCs function far more like many people’s experience of web browsing: they hunt for what they want and stop when they have it, thus the sampling engagement pattern above.

(As an aside, does this mean that a course that is perceived as ‘all back of class’ will rapidly be abandoned because it is distasteful? This makes the student-consumer a much more powerful player in their own educational market and is potentially worth remembering.)

Knowing these different approaches, we designed the individual subjects and overall program so that it was very much up to the participant how much they chose to take and individual modules were designed to be relatively self-contained, while fitting into a well-designed overall flow that built in terms of complexity and towards more abstract concepts. Thus, we supported auditing, completing and sampling, whereas our usual face-to-face (f2f) courses only support the first two in a way that we can measure.

As Hugh notes, and we agree through growing experience, marking/progress measures at scale are very difficult, especially when automated marking is not enough or not feasible. Based on our earlier work in contributing collaboration in the classroom, for the F-6 Teacher MOOC we used a strong peer-assessment model where contributions and discussions were heavily linked. Because of the nature of the cohort, geographical and year-level groups formed who then conducted additional sessions and produced shared material at a slightly terrifying rate. We took the approach that we were not telling teachers how to teach but that we were helping them to develop and share materials that would assist in their teaching. This reduced potential divisions and allowed us to establish a mutually respectful relationship that facilitated openness.

(It’s worth noting that the courseware is creative commons, open and free. There are people reassembling the course for their specific take on the school system as we speak. We have a national curriculum but a state-focused approach to education, with public and many independent systems. Nobody makes any money out of providing this course to teachers and the material will always be free. Thank you again to Google for their ongoing support and funding!)

Overall, in this first F-6 MOOC, we had higher than usual retention of students and higher than usual participation, for the reasons I’ve outlined above. But this material was for curriculum support for teachers of young students, all of whom were pre-programming, and it could be contained in videos and on-line sharing of materials and discussion. We were also in the MOOC sweet-spot: existing degreed learners, a PD driver, and a PD requirement that depended on progressive demonstration of goal achievement, which we recognised post-course with a pre-approved certificate form. (Important note: if you are doing this, clear up how the PD requirements are met and how they need to be reported back, as early on as you can. It meant that we could give people something valuable in a short time.)

The programming MOOC, Think. Create. Code on EdX, was more challenging in many regards. We knew we were in a more difficult space and would be more in what I shall refer to as ‘the land of the average MOOC consumer’. No strong focus, no PD driver, no geographically guaranteed communities. We had to think carefully about what we considered to be useful interaction with the course material. What counted as success?

To start with, we took an image-based approach (I don’t think I need to provide supporting arguments for media-driven computing!) where students would produce images and, over time, refine their coding skills to produce and understand how to produce more complex images, building towards animation. People who have not had good access to education may not understand why we would use programming in more complex systems but our goal was to make images and that is a fairly universally understood idea, with a short production timeline and very clear indication of achievement: “Does it look like a face yet?”

In terms of useful interaction, if someone wrote a single program that drew a face, for the first time – then that’s valuable. If someone looked at someone else’s code and spotted a bug (however we wish to frame this), then that’s valuable. I think that someone writing a single line of correct code, where they understand everything that they write, is something that we can all consider to be valuable. Will it get you a degree? No. Will it be useful to you in later life? Well… maybe? (I would say ‘yes’ but that is a fervent hope rather than a fact.)

So our design brief was that it should be very easy to get into programming immediately, with an active and engaged approach, and that we have the same “mostly self-contained week” approach, with lots of good peer interaction and mutual evaluation to identify areas that needed work to allow us to build our knowledge together. (You know I may as well have ‘social constructivist’ tattooed on my head so this is strongly in keeping with my principles.) We wrote all of the materials from scratch, based on a 6-week program that we debated for some time. Materials consisted of short videos, additional material as short notes, participatory activities, quizzes and (we planned for) peer assessment (more on that later). You didn’t have to have been exposed to “the lecture” or even the advanced classroom to take the course. Any exposure to short videos or a web browser would be enough familiarity to go on with.

Our goal was to encourage as much engagement as possible, taking into account the fact that any number of students over 1,000 would be very hard to support individually, even with the 5-6 staff we had to help out. But we wanted students to be able to develop quickly, share quickly and, ultimately, comment back on each other’s work quickly. From a cognitive load perspective, it was crucial to keep the number of things that weren’t relevant to the task to a minimum, as we couldn’t assume any prior familiarity. This meant no installers, no linking, no loaders, no shenanigans. Write program, press play, get picture, share to gallery, winning.

As part of this, our support team (thanks, Jill!) developed a browser-based environment for Processing.js that integrated with a course gallery. Students could save their own work easily and share it trivially. Our early indications show that a lot of students jumped in and tried to do something straight away. (Processing is really good for getting something up, fast, as we know.) We spent a lot of time testing browsers, testing software, and writing code. All of the recorded materials used that development environment (this was important as Processing.js and Processing have some differences) and all of our videos show the environment in action. Again, as little extra cognitive load as possible – no implicit requirement for abstraction or skills transfer. (The AdelaideX team worked so hard to get us over the line – I think we may have eaten some of their brains to save those of our students. Thank you again to the University for selecting us and to Katy and the amazing team.)

The actual student group, about 20,000 people across 176 countries, did not have the "built-in" motivation of the previous group, although each student would have arrived with their own level of motivation. We used 'meet and greet' activities to drive some group formation (which worked to a degree) and we also maintained a very high level of staff monitoring of key question areas (which participants noted as unusually high compared with other EdX courses they'd taken), with everyone putting in 30-60 minutes a day on rotation. But, as noted before, the biggest trick to engagement at large scale is to get everyone into groups where they have someone to talk to. This was supposed to be provided by a peer evaluation system that was initially part of the assessment package.

Sadly, the peer assessment system didn’t work as we wanted it to and we were worried that it would form a disincentive, rather than a supporting community, so we switched to a forum-based discussion of the works on the EdX discussion forum. At this point, a lack of integration between our own UoA programming system and gallery and the EdX discussion system allowed too much distance – the close binding we had in the R-6 MOOC wasn’t there. We’re still working on this because everything we know and all evidence we’ve collected before tells us that this is a vital part of the puzzle.

In terms of visible output, the amount of novel and amazing artwork that has been generated has blown us all away. The degree of difference is huge: armed with approximately five statements, the number of different pieces you can produce is surprisingly large. Add in control statements and repetition? BOOM. Every student can write something that speaks to her or him and show it to other people, encouraging creativity and facilitating engagement.

From the stats side, I don’t have access to the raw stats, so it’s hard for me to give you a statistically sound answer as to who we have or have not reached. This is one of the things with working with a pre-existing platform and, yes, it bugs me a little because I can’t plot this against that unless someone has built it into the platform. But I think I can tell you some things.

I can tell you that roughly 2,000 students attempted quiz problems in the first week of the course and that over 4,000 watched a video in the first week – no real surprises, registrations are an indicator of interest, not a commitment. During that time, 7,000 students were active in the course in some way – including just writing code, discussing it and having fun in the gallery environment. (As it happens, we appear to be plateauing at about 3,000 active students but time will tell. We have a lot of post-course analysis to do.)

It’s a mistake to focus on the “drop” rates because the MOOC model is different. We have no idea if the people who left got what they wanted or not, or why they didn’t do anything. We may never know but we’ll dig into that later.

I can also tell you that only 57% of the students currently enrolled have explicitly declared themselves to be male, and that is the most likely indicator that we are reaching students who might not usually be in a programming course: that remaining 43% (33% of all students have self-identified as women) is far higher than we ever see in classes locally. If you want evidence of reach, then it begins here, as part of the provision of an environment that is, apparently, more welcoming to 'non-men'.

We have had a number of student comments that reflect positive reach and, while these are not statistically significant, I think that this also gives you support for the idea of additional reach. Students have been asking how they can save their code beyond the course and this is a good indicator: ownership and a desire to preserve something valuable.

For student comments, however, this is my favourite.

I’m no artist. I’m no computer programmer. But with this class, I see I can be both. #processingjs (Link to student’s work) #code101x .

That's someone this course put in the right place in the classroom. After all of this is done, we'll go looking to see how many more we can find.

I know this is long but I hope it answered your questions. We’re looking forward to doing a detailed write-up of everything after the course closes and we can look at everything.


EduTech AU 2015, Day 2, Higher Ed Leaders, “Innovation + Technology = great change to higher education”, #edutechau

Big session today. We're starting with Nicholas Negroponte, founder of the MIT Media Lab and founder of One Laptop Per Child (OLPC), an initiative to create and provide affordable educational devices for children in the developing world. (Nicholas is coming to us via video conference – hooray, 21st Century – so this may or may not work well in translation to blogging. Please bear with me if it's a little disjointed.)

Nicholas would rather be here but he’s bravely working through his first presentation of this type! It’s going to be a presentation with some radical ideas so he’s hoping for conversation and debate. The presentation is broken into five parts:

  1. Learning learning. (Teaching and learning as separate entities.)
  2. What normal market forces will not do. (No real surprise that standard market forces won’t work well here.)
  3. Education without curricula. (Learning comes from many places and situations. Understanding and establishing credibility.)
  4. Where do new ideas come from? (How do we get them, how do we not get in the way.)
  5. Connectivity as a human right. (Is connectivity a human right or a means to rights such as education and healthcare? Human rights are free, so that raises a lot of issues.)

Nicholas then drilled down into "Learning learning", starting with a reference to Seymour Papert, and reflected, from a personal perspective, on the sadness of Seymour's serious accident and its effect on his health. Nicholas referred to Papert's and Minsky's work on trying to understand how children and machines learned, respectively. Seymour started thinking about this in 1968 and, on April 9, 1970, he gave a talk on his thoughts. Seymour realised that thinking about programs gave insight into thinking itself, relating to the deconstruction and stepwise solution building (algorithmic thinking) that novice programmers, such as children, had to go through.

These points were up on the screen as Nicholas spoke:

  1. Construction versus instruction
  2. Why reinventing the wheel is good
  3. Coding as thinking about thinking

How do we write code? Write it, see if it works, identify the behaviours that aren't working, change the code (in an informed way, with any luck) and try again. (It's a little more complicated than that, but that's the core.) We're now into the area of transferable skills: it appeared that children writing computer programs learned a skill that transferred over into their ability to spell, potentially from the methodical application of debugging techniques.

Nicholas talked about a spelling bee system where you would focus on the 8 out of 10 you got right and ignore the 2 you didn't get. The 'debugging' kids would talk about the ones that they didn't get right because they were analysing their mistakes, as a peer group and through individual reflection.

Nicholas then moved on to the failure of market forces. Why does Finland do so well when they have no tests, no homework, and the fewest school hours per day and school days per year? One reason: no competition between children. No movement of core resources into the private sector (education as a poorly functioning profit machine). Nicholas identified the core difference between the mission and the market, which beautifully summarises my thinking.

The OLPC program started in Cambodia for a variety of reasons, including someone associated with the lab being a friend of the King. OLPC laptops could go into areas where the government wasn't providing schools for safety reasons, as the ground first needed minesweeping and the like. Nicholas' son came from Italy to connect the school in Cambodia to the Internet. What would the normal market not do? Telecoms would come and get cheaper. Power would come and get cheaper. Laptops? Hmm. The software companies were pushing the hardware companies, so they were both caught in a spiral of increasing power consumption for utility. Where was the point where we could build a simple laptop, as a mission of learning, that could have a smaller energy footprint and bring laptops and connectivity to billions of people?

This is one of the reasons why OLPC is a non-profit – you don't have to sell laptops to support the system, you're supporting a mission. You didn't need to sell or push to justify staying in a market, as the production volume was already at a good price. Why did this work well? You can make partnerships that weren't possible otherwise. It derails the "ah, you need food and shelter first" argument because you can change the "why do we need a laptop?" argument into "why do we need education?", at which point education leads to improved societal conditions. Why laptops? Tablets are more consumer-focused than construction-focused. (Certainly true of how I use my tech.)

(When we launched the first of the Digital Technologies MOOCs, the deal we agreed upon with Google was that it wasn’t a profit-making venture at all. It never will be. Neither we nor Google make money from the support of teachers across Australia so we can have all of the same advantages as they mention above: open partnerships, no profit motive, working for the common good as a mission of learning and collegial respect. Highly recommended approach, if someone is paying you enough to make your rent and eat. The secret truth of academia is that they give you money to keep you housed, clothed and fed while you think. )

Nicholas told a story of kids changing from being scared or bored of school to using an approach that brings kids flocking in. A great measure of success.

Now, onto Education without curricula, starting with public versus private. This is a sensitive subject for many people. The biggest problem for public education in many cases is the private educational system, dragging caring educators out into a closed system. Remember Finland? There are essentially no private schools and their educational system is astoundingly good. Nicholas' points were:

  1. Public versus private
  2. Age segregation
  3. Stop testing. (Yay!)

The public sector is losing the imperative of the civic responsibility for education. Nicholas thinks it doesn’t make sense that we still segregate by ages as a hard limit. He thinks we should get away from breaking it into age groups, as it doesn’t clearly reflect where students are at.

Oh, testing. Nicholas correctly labelled the parental complicity in the production of the testing pressure cooker. “You have to get good grades if you’re going to Princeton!” The testing mania is dominating institutions and we do a lot of testing to measure and rank children, rather than determining competency. Oh, so much here. Testing leads to destructive behaviour.

So where do new ideas come from? (A more positive note.) Nicholas is interested in Higher Ed as sources of new ideas. Why does HE exist, especially if we can do things remotely or off campus? What is the role of the Uni in the future? Ha! Apparently, when Nicholas started the MIT media lab, he was accused of starting a sissy lab with artists and soft science… oh dear, that’s about as wrong as someone can get. His use of creatives was seen as soft when, of course, using creative users addressed two issues to drive new ideas: a creative approach to thinking and consulting with the people who used the technology. Who really invented photography? Photographers. Three points from this section.

  1. Children: our most precious natural resource
  2. Incrementalism is the enemy of creativity
  3. Brain drain

On the brain drain: we lose many, many students to other places. Unis are a place to solve huge problems rather than small, profit-oriented ones. The entrepreneurial focus leads to small-problem solutions, which is sucking a lot of big thinking out of the system. The app model is leading to a human resource deficit because the start-up phenomenon is ripping away some of our best problem solvers.

Finally, to connectivity as a human right. This is something that Nicholas is very, very passionate about. Not content. Not laptops. Being connected.  Learning, education, and access to these, from early in life to the end of life – connectivity is the end of isolation. Isolation comes in many forms and can be physical, geographical and social. Here are Nicholas’ points:

  1. The end of isolation.
  2. Nationalism is a disease (oh, so much yes.) Nations are the wrong taxonomy for the world.
  3. Fried eggs and omelettes.

Fried eggs and omelettes? In general, the world used to have crisp boundaries, yolk versus white: at work/at home, at school/not at school. We are moving to a more blended, less dichotomous approach because we are mixing our lives together. This is both bad (you're getting work in my home life) and good (I'm getting learning in my day).

Can we drop kids into a reading environment and hope that they'll learn to read? Reading is only 3,500 years old, versus our much older language skills, so it has to be learned. But do we have to do it the way that we did it? Hmm. Interesting questions. This is where the tablets were dropped into illiterate villages without any support. (Does this require a seed autodidact in the group? There's a lot to unpack here.) Nicholas says he made a huge mistake in naming the village in Ethiopia, which has corrupted the experiment, but at least the kids are getting to give press conferences!

Another massive amount of interesting information – sadly, no question time!

 


EduTECH AU 2015, Day 1, Higher Ed Leaders, “Revolutionising the Student Experience: Thinking Boldly” #edutechau

Lucy Schulz, Deakin University, came to speak about initiatives in place at Deakin, including the IBM Watson initiative, which is currently a world-first for a University. How can a University collaborate to achieve success on a project in a short time? (Lucy thinks that this is the more interesting question. It’s not about the tool, it’s how they got there.)

Some brief facts on Deakin: 50,000 students, 11,000 of whom are on-line. Deakin's question: how can we make the on-line experience as good as, if not better than, face-to-face, and how can on-line make face-to-face better?

Part of Deakin’s Student Experience focus was on delighting the student. I really like this. I made a comment recently that our learning technology design should be “Everything we do is valuable” and I realise now I should have added “and delightful!” The second part of the student strategy is for Deakin to be at the digital frontier, pushing on the leading edge. This includes understanding the drivers of change in the digital sphere: cultural, technological and social.

(An aside: I’m not a big fan of the term disruption. Disruption makes room for something but I’d rather talk about the something than the clearing. Personal bug, feel free to ignore.)

The Deakin Student Journey has a vision to bring students into the centre of Uni thinking, every level and facet – students can be successful and feel supported in everything that they do at Deakin. There is a Deakin personality, an aspirational set of “Brave, Stylish, Accessible, Inspiring and Savvy”.

Not feeling this as much but it’s hard to get a feel for something like this in 30 seconds so moving on.

What do students want in their learning? Easy to find and to use, it works and it’s personalised.

So, on to IBM’s Watson, the machine that won Jeopardy, thus reducing the set of games that humans can win against machines to Thumb Wars and Go. We then saw a video on Watson featuring a lot of keen students who coincidentally had a lot of nice things to say about Deakin and Watson. (Remember, I warned you earlier, I have a bit of a thing about shiny videos but ignore me, I’m a curmudgeon.)

The Watson software is embedded in a student portal that all students can access, which has required a great deal of investigation into how students communicate, structurally and semantically. This forms the questions and guides the answer. I was waiting to see how Watson was being used and it appears to be acting as a student advisor to improve student experience. (Need to look into this more once day is over.)

Ah, yes, it’s on a student home page where they can ask Watson questions about things of importance to students. It doesn’t appear that they are actually programming the underlying system. (I’m a Computer Scientist in a faculty of Engineering, I always want to get my hands metaphorically dirty, or as dirty as you can get with 0s and 1s.) From looking at the demoed screens, one of the shiny student descriptions of Watson as “Siri plus Google” looks very apt.

Oh, it has cheekiness built in. How delightful. (I have a boundless capacity for whimsy and play but an inbuilt resistance to forced humour and mugging, which is regrettably all that the machines are capable of at the moment. I should confess Siri also rubs me the wrong way when it tries to be funny as I have a good memory and the patterns are obvious after a while. I grew up making ELIZA say stupid things – don’t judge me! 🙂 )

Watson has answered 26,000 questions since February, with an 80% accuracy for answers. The most common questions change according to time of semester, which is a nice confirmation of existing data. Watson is still being trained, with two more releases planned for this year and then another project launched around course and career advisors.

What they’ve learned – three things!

  1. Student voice is essential and you have to understand it.
  2. Have to take advantage of collaboration and interdependencies with other Deakin initiatives.
  3. Gained a new perspective on developing and publishing content for students. Short. Clear. Concise.

The challenges of revolution? (Oh, they're always there.) Trying to prevent students from falling through the cracks and making sure that this tool helps students feel valued and stay in contact. The introduction of new technologies has to be recognised in terms of what it changes and what it improves.

Collaboration and engagement with your University and student community are essential!

Thanks for a great talk, Lucy. Be interesting to see what happens with Watson in the next generations.


Think. Create. Code. Vis! (@edXOnline, @UniofAdelaide, @cserAdelaide, @code101x, #code101x)

I just posted about the massive growth in our new on-line introductory programming course but let’s look at the numbers so we can work out what’s going on and, maybe, what led to that level of success. (Spoilers: central support from EdX helped a huge amount.) So let’s get to the data!

I love visualised data so let’s look at the growth in enrolments over time – this is really simple graphical stuff as we’re spending time getting ready for the course at the moment! We’ve had great support from the EdX team through mail-outs and Twitter and you can see these in the ‘jumps’ in the data that occurred at the beginning, halfway through April and again at the end. Or can you?


Rapid growth in enrolment! But it’s a little hard to see in this data.

Hmm, this is a large number, so it's not all that easy to see the detail at the end. Let's zoom in and change the layout of the data over to steps so we can see things more easily. (It's worth noting that I'm using the free R statistical package to do all of this. I can change one line in my R program and regenerate all of my graphs and check my analysis. When you can program, you can really save time on things like this by using tools like R.)

Now you can see where that increase started and then the big jump around the time that e-mail advertising started, circled. That large spike at the end is around 1500 students, which means that we jumped 10% in a day.
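The post's actual analysis was done in R, but the jump-spotting idea is easy to sketch in any language. Here's a rough Python equivalent with made-up numbers (not the real course data): scan the daily cumulative totals for day-over-day increases big enough to be a mail-out spike.

```python
# Hypothetical daily cumulative enrolment totals (illustrative only).
totals = [14000, 14200, 14450, 15950, 16100, 16300]

# A mail-out "jump" shows up as day-over-day growth above some
# threshold -- say 800 students, the low end of the observed boosts.
jumps = [day for day in range(1, len(totals))
         if totals[day] - totals[day - 1] >= 800]

print(jumps)  # the 1,500-student spike lands on day 3
```

Swapping the threshold or the data means changing one line and re-running, which is exactly the convenience described above.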

When we started looking at this data, we wanted to get a feeling for how many students we might get. This is another common use of analysis – trying to work out what is going to happen based on what has already happened.

As a quick overview, we tried to predict the future based on three different assumptions:

  1. that the growth from day to day would be roughly the same, which is assuming linear growth.
  2. that the growth would increase more quickly, with the amount of increase doubling every day (this isn’t the same as the total number of students doubling every day).
  3. that the growth would increase even more quickly than that, although not as quickly as if the number of students were doubling every day.

If Assumption 1 was correct, then we would expect the graph to look like a straight line, rising diagonally. It's not. (As it is, this model predicted that we would only get 11,780 students. We crossed that line about two weeks ago.)

So we know that our model must take into account the faster growth, but those leaps in the data are changes caused by things outside of our control – EdX sending out a mail message appears to cause a jump of roughly 800-1,600 students, and it persists for a couple of days.

Let’s look at what the models predicted. Assumption 2 predicted a final student number around 15,680. Uhh. No. Assumption 3 predicted a final student number around 17,000, with an upper bound of 17,730.

Hmm. Interesting. We’ve just hit 17,571 so it looks like all of our measures need to take into account the “EdX” boost. But, as estimates go, Assumption 3 gave us a workable ballpark and we’ll probably use it again for the next time that we do this.
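The gap between Assumption 1 and reality is easy to reproduce. This is a toy sketch in Python (again, the real analysis used R) with invented numbers: fit a straight line to the early days of an accelerating series and watch the extrapolation undershoot the final count.

```python
# Toy cumulative enrolment series (illustrative, not the course data):
# growth accelerates, as in Assumption 3.
days = list(range(10))
enrolled = [100 * d * d + 500 for d in days]

def linear_fit(xs, ys):
    """Ordinary least-squares line through (xs, ys)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Assumption 1: fit a straight line to the first five days only,
# then extrapolate to the final day.
slope, intercept = linear_fit(days[:5], enrolled[:5])
prediction = slope * days[-1] + intercept

print(prediction, enrolled[-1])  # 3900.0 vs 8600: a big undershoot
```

Any model that assumes constant daily growth will lag an accelerating series like this, which is why the linear prediction of 11,780 was blown past so quickly.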

Now let's look at demographic data. We know we have 171-172 countries (it varies a little), but how are we going for participation across gender, age and degree status? Giving this information to EdX is totally voluntary but, as long as we take that into account, we can make some interesting discoveries.

Age demographic data from EdX

Our median student age is 25, with roughly 40% under 25 and roughly 40% from 26 to 40. That means roughly 20% are 41 or over. (It’s not surprising that the graph sits to one side like that. If the left tail was the same size as the right tail, we’d be dealing with people who were -50.)

The gender data is a bit harder to display because we have four categories: male, female, other and not saying. In terms of female representation, we have 34% of students who have defined their gender as female. If we look at the declared male numbers, we see that 58% of students have declared themselves to be male. Taking into account all categories, this means that our female participant percentage could be as high as 40% but is at least 34%. That’s much higher than usual participation rates in face-to-face Computer Science and is really good news in terms of getting programming knowledge out there.
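The bounds in that paragraph come from simple arithmetic over the declared categories. A sketch in Python, assuming the declared 'other' group is about 2% (a guess to make the 34-40% range work out; the real split isn't given here):

```python
declared_female = 34.0   # % who explicitly selected "female"
declared_male = 58.0     # % who explicitly selected "male"
declared_other = 2.0     # assumed; not stated in the post
undeclared = 100.0 - declared_female - declared_male - declared_other

# Every undeclared student might or might not be female, so:
lower_bound = declared_female               # none of them are
upper_bound = declared_female + undeclared  # all of them are

print(lower_bound, upper_bound)  # 34.0 40.0
```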

We're currently analysing our growth by all of these groupings to work out which approach is the best for which group. Do people prefer Twitter, a mail-out, community linkage, or something else, when it comes to getting them into the course?

Anyway, lots more to think about and many more posts to come. But we’re on and going. Come and join us!


Large Scale Authenticity: What I Learned About MOOCs from Reality Television

The development of social media platforms has allowed us to exchange information and, well, rubbish very easily. Whether it's the discussion component of a learning management system, Twitter, Facebook, Tumblr, Snapchat or whatever will be the next big thing, we can now chat to each other in real time very, very easily.

One of the problems with any on-line course is trying to maintain a community across people who are not in the same timezone, country or context. What we’d really like is for the community communication to come from the students, with guidance and scaffolding from the lecturing staff, but sometimes there’s priming, leading and… prodding. These “other” messages have to be carefully crafted and they have to connect with the students or they risk being worse than no message at all. As an example, I signed up for an on-line course and then wasn’t able to do much in the first week. I was sitting down to work on it over the weekend when a mail message came in from the organisers on the community board congratulating me on my excellent progress on things I hadn’t done. (This wasn’t isolated. The next year, despite not having signed up, the same course sent me even more congratulations on totally non-existent progress.) This sends the usual clear messages that we expect from false praise and inauthentic communication: the student doesn’t believe that you know them, they don’t feel part of an authentic community and they may disengage. We have, very effectively, sabotaged everything that we actually wanted to build.

Let's change focus. For a while, I was watching a show called "My Kitchen Rules" on local television. It pretends to be about cooking (with competitive scoring) but it's really about flogging products from a certain supermarket while delivering false drama in the presence of dangerously orange chefs. An engineered activity to ensure that you replace an authentic experience with consumerism and commodities? Paging Guy Debord on the Situationist courtesy phone: we have a Spectacle in progress. What makes the show interesting is the associated Twitter feed, where large numbers of people drop in on the #mkr hashtag to talk about the food, discuss the false drama, exchange jokes and develop community memes, such as sharing pet pictures with each other over the (many) ad breaks. It's a community. Not everyone is there for the same reasons: some are there to be rude about people, some are actually there for the cooking (??) and some are… confused. But the involvement in the conversation, interplay and development of a shared reality is very real.

Chef is not quite this orange. Not quite.

And this would all be great except for one thing: Australia is a big country and spans a lot of timezones. My Kitchen Rules is broadcast at 7:30pm, starting in Melbourne, Canberra, Tasmania and Sydney, then 30 minutes later in Adelaide, then 30 minutes later again in Queensland (they don’t do daylight savings), then later again for Perth. So now we have four different time groups to manage, all watching the same show.

But the Twitter feed starts at the first time point, Adelaide picks up discussions from the middle of the show as it's starting, and then gets discussions on scores as the first half completes for them… and this is repeated for Queensland viewers and then for Perth. Now, in the community itself, people go on and off the feed as their version of the show starts and stops and, personally, I don't find score discussions very distracting because I'm far more interested in the Situation being created in the Twitter stream.

Enter the “false tweets” of the official MKR Social Media team who ask questions that only make sense in the leading timezone. Suddenly, everyone who is not quite at the same point is then reminded that we are not in the same place. What does everyone think of the scores? I don’t know, we haven’t seen it yet. What’s worse are the relatively lame questions that are being asked in the middle of an actual discussion that smell of sponsorship involvement or an attempt to produce the small number of “acceptable” tweets that are then shared back on the TV screen for non-connected viewers. That’s another thing – everyone outside of the first timezone has very little chance of getting their Tweet displayed. Imagine if you ran a global MOOC where only the work of the students in San Francisco got put up as an example of good work!

This is a great example of an attempt to communicate that fails dismally because it doesn’t take into account how people are using the communications channel, isn’t inclusive (dismally so) and constantly reminds people who don’t live in a certain area that they really aren’t being considered by the program’s producers.

You know what would fix it? Putting it on at the same time everywhere but that, of course, is tricky because of the way that advertising is sold and also because it would force poor Perth to start watching dinner television just after work!

But this is a very important warning of what happens when you don’t think about how you’ve combined the elements of your environment. It’s difficult to do properly but it’s terrible when done badly. And I don’t need to go and enrol in a course to show you this – I can just watch a rather silly cooking show.


Teleportation and the Student: Impossibility As A Lesson Plan

Tricking a crew-mate into looking at their shoe during a transport was a common prank in the 23rd Century.

Teleporters, in one form or another, have been around in Science Fiction for a while now. Most people’s introduction was probably via one of the Star Treks (the transporter) which is amusing, as it was a cost-cutting mechanism to make it easy to get from one point in the script to another. Is teleportation actually possible at the human scale? Sadly, the answer is probably not although we can do some cool stuff at the very, very small scale. (You can read about the issues in teleportation here and here, an actual USAF study.) But just because something isn’t possible doesn’t mean that we can’t get some interesting use out of it. I’m going to talk through several ways that I could use teleportation to drive discussion and understanding in a computing course but a lot of this can be used in lots of places. I’ve taken a lot of shortcuts here and used some very high level analogies – but you get the idea.

  1. Data Transfer

    The first thing to realise is that the number of atoms in the human body is huge (one octillion, 1E27, roughly, which is a thousand million million million million), but the amount of information stored in the human body is much, much larger than that again. If we wanted to get everything, we're looking at transferring a quattuordecillion bits (1E45), and that's about a million million million times the number of atoms in the body. All of this, however, ignores the state of all the bacteria and associated hosted entities that live in the human body, and the fact that the number of neural connections in the brain appears to be larger than we think. There are roughly 9 non-human cells associated with your body (bacteria et al) for every human cell.

    Put simply, the easiest way to get the information in a human body to move around is to leave it in a human body. But this has always been true of networks! In the early days, it was more efficient to mail a CD than to use the (at the time) slow download speeds of the Internet and home connections. (Actually, it still is easier to give someone a CD because you’ve just transferred 700MB in one second – that’s 5.6 Gb/s and is just faster than any network you are likely to have in your house now.)

    Right now, the fastest network in the world clocks in at 255 Tbps and that’s 255,000,000,000,000 bits in a second. (Notice that’s over a fixed physical optical fibre, not through the air, we’ll get to that.) So to send that quattuordecillion bits, it would take (quickly dividing 1E45 by 255E12) oh…

    about 100,000,000,000,000,000,000,000

    years. Um.
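That figure is easy to check with a few lines of arithmetic (Python here, but any calculator will do):

```python
bits = 1e45                    # estimated information content of a body
link_bps = 255e12              # the 255 Tbps record optical link
seconds_per_year = 60 * 60 * 24 * 365.25

years = bits / link_bps / seconds_per_year
print(f"{years:.3g}")  # about 1.24e+23 years
```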

  2. Information Redundancy and Compression

    The good news is that we probably don't have to send all of that information because, apart from anything else, it appears that a large amount of human DNA doesn't seem to do very much and there's a lot of repeated information. Because we also know that humans have similar chromosomes and things like that, we can probably compress a lot of this information and send a compressed version of it.

    The problem is that compression takes time and we have to compress things in the right way. Sadly, human DNA by itself doesn’t compress well as a string of “GATTACAGAGA”, for reasons I won’t go into here. So we have to try to send a shortcut that means “use this chromosome here” – but then we also have to send a lot of bookkeeping like “where is this thing and where should it be”, so we’re still sending a lot.

    There are also two types of compression: lossless (where we keep everything) and lossy (where we lose bits, and lose more on each regeneration). You can work out whether it’s worth doing by looking at the smallest number of bits needed to encode what you’re after. If you’ve ever seen a really bad Internet image with strange lines around the high-contrast bits, you’re seeing lossy compression artefacts. You probably don’t want those in your genome. However, how much compression you achieve depends on the nature and size of the thing you’re trying to compress, so now you have to work out whether the time to transmit everything raw is still worse than the time taken to compress it and then send the shorter version.
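    To see why near-random sequences compress poorly, here’s a small sketch using Python’s zlib (DEFLATE) on a made-up random DNA-like string – an illustration, not real genomic data. The compressor can pack the four-letter alphabet into roughly 2 bits per base instead of 8, but it can’t get much below the string’s own entropy:

```python
import random
import zlib

# A random 100,000-base "sequence" over the DNA alphabet.
random.seed(42)
dna = "".join(random.choice("GATC") for _ in range(100_000)).encode()

compressed = zlib.compress(dna, level=9)
ratio = len(compressed) / len(dna)
# Roughly 4:1 (about 2 bits per base), but no better: random data has
# no remaining structure for the compressor to exploit.
print(len(dna), len(compressed), f"{ratio:.2f}")
```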

    So let’s be generous and say that, through amazing compression tricks – some sort of shared human pattern to build upon, and the like – we can get our transferred data requirement down to the number of atoms in the body: 1E27 bits. That’s only going to take…

    124,267

    years. Um, again. Let’s assume we want to be able to do the transfer in at most 60 minutes. Using the fastest network in the world right now, we’re going to have to get our data footprint down to about 900,000,000,000,000,000 bits. Whew, that’s some serious compression and, even on computers that probably won’t be ready until 2018, the compression itself would take about 3 million million million years. But let’s ignore that, because now our real problems are starting…
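    Both of those figures fall straight out of the same arithmetic, again assuming the 255 Tbps link:

```python
LINK_BPS = 255e12
SECONDS_PER_YEAR = 365.25 * 24 * 3600

# Sending 1e27 bits (one bit per atom) at full speed:
years = 1e27 / LINK_BPS / SECONDS_PER_YEAR
print(f"{years:,.0f} years")          # ~124,268 years

# The bit budget for a 60-minute transfer:
max_bits = LINK_BPS * 60 * 60
print(f"{max_bits:.2e} bits")         # ~9.18e+17 bits
```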

  3. Signals Ain’t Simple and Networks Ain’t Wires.

    In the earlier days of the telephone, the movement of the diaphragm in the mouthpiece generated electricity that was sent down the wires, amplified along the way, and finally used to make movement in the earpiece that you interpreted as sound. Changes in the electrical values weren’t limited to strict values of on or off and, when the signal got interfered with, all sorts of weird things happened. Remember analog television and all those shadows, snow and fuzzy images? Digital encoding takes measurements of the analog world and turns them into a set of 0s and 1s. You send the 0s and 1s (binary) and they are turned back into something recognisable (or used appropriately) at the other end. So now we get amazingly clear television until too much of the signal is lost and then we get nothing. But, up until then, progress!

    But we don’t send giant long streams down one long wire; we send information in small packets that contain some data and some information on where to send it, and each packet goes through an array of active electronic devices that take your message from one place to another. The problem is that those packet headers add overhead, just as mailing a book through the postal service with each page in its own addressed envelope would. It takes time to get something onto the network and it adds more bits! Argh! More bits! But it can’t get any worse, can it?
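    As a rough illustration of the header overhead, using standard present-day numbers (a 1500-byte Ethernet MTU carrying TCP/IPv4 with no options – nothing teleporter-specific):

```python
MTU = 1500                 # bytes of IP packet per Ethernet frame
IP_HEADER = 20             # IPv4 header, no options
TCP_HEADER = 20            # TCP header, no options
payload = MTU - IP_HEADER - TCP_HEADER    # 1460 bytes of actual data

overhead = (IP_HEADER + TCP_HEADER) / payload
print(f"{overhead:.1%} extra bits on every single packet")   # 2.7%
```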

  4. Networks Aren’t Perfectly Reliable

    If you’ve ever had variable performance on your home WiFi, you’ll understand that transmitting things over the air isn’t 100% reliable. There are two things we have to think about in terms of getting stuff through the network: flow control (where we stop our machine from talking to other things too quickly) and congestion control (where we try to manage the limited network resources so that everyone gets a share). We’ve already got all of these packets that should be able to be directed to the right location but, well, things can get mangled in transmission (especially over the air) and sometimes things have to be thrown away: when the network is badly congested, packets get dropped to try and keep overall network throughput up. (Interference and absorption are possible even if we don’t use wireless technology.)

    Oh, no. It’s yet more data to send. And what’s worse is that a loss close to the destination will require you to send all of that information from your end again. Suddenly that Earth-Mars teleporter isn’t looking like such a great idea, is it, what with the 8-16 minute delay every time a cosmic ray interferes with your network transmission in space. And if you’re trying to send from a wireless terminal in a city? Forget it – the WiFi network is so saturated in many built-up areas that your error rates are going to be huge. For a web page, eh, it will take a while. For a Skype call, it will get choppy. For a human information sequence… not good enough.

    Could this get any worse?

  5. The Square Dance of Ordering and Re-ordering

    Well, yes. Sometimes things don’t just get lost; they show up at weird times and in weird orders. Now, for some things, like a web page, this doesn’t matter because your computer can wait until it gets all of the information and then show you the page. But, for telephone calls, it does matter: a second of audio lost from a minute ago won’t make any sense if it shows up now, when you’re trying to keep things real time.

    For teleporters there’s a weird problem in that you have to start asking questions like “how much of a human is contained in that packet?” Do you actually want to allow the possibility of duplicate messages in the network – or have you accidentally created extra humans? Without the possibility of duplication, your error recovery rate will plummet, unless you build in a lot more error correction, which adds computation time and, sorry, increases the number of bits to send yet again. This is a core consideration of any distributed system, where we have to think about how many copies of something we need to send to ensure that we get one – or whether we care if we have more than one.
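    The receiver’s side of this can be sketched with sequence numbers – a toy model, nothing like a real transport protocol, but it shows where re-ordering, duplication and loss all surface:

```python
def reassemble(packets):
    """packets: (sequence_number, data) pairs, possibly out of order,
    duplicated or missing. Returns (reassembled stream, lost numbers)."""
    seen = {}
    for seq, data in packets:
        if seq in seen:
            continue                  # duplicate: deliver exactly one copy
        seen[seq] = data
    missing = [s for s in range(max(seen) + 1) if s not in seen]
    stream = b"".join(seen[s] for s in sorted(seen))
    return stream, missing

# Arrives out of order, packet 2 duplicated, packet 3 lost in transit.
arrived = [(2, b"C"), (0, b"A"), (2, b"C"), (1, b"B"), (4, b"E")]
stream, missing = reassemble(arrived)
print(stream, missing)                # b'ABCE' [3]
```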

    PLEASE LET THERE BE NO MORE!

  6. Oh, You Wanted Security, Integrity and Authenticity, Did You?

    I’m not sure I’d want people reading my genome or mind state as it traversed the Internet and, while we could pretend that we have a super-secret private network, security through obscurity (hiding our network or data) really doesn’t work. So, sorry to say, we’re going to have to encrypt our data to make sure that no-one else can read it, but we also have to carry out integrity tests to make sure that what arrived is what we thought we sent – we don’t want to send a NICK packet and end up with a MICE packet, to give a simplistic example. And this is all going to have to be sent down the same network as before, so we’re putting yet more data bits down that poor beleaguered network.
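    A minimal sketch of such an integrity check – sending a cryptographic hash alongside the data and comparing on arrival. (In practice you’d want an authenticated construction such as an HMAC or an AEAD cipher, since a bare hash protects against corruption, not against an attacker who can rewrite both parts.)

```python
import hashlib

def send(data: bytes):
    """Return the payload plus a SHA-256 digest to travel with it."""
    return data, hashlib.sha256(data).hexdigest()

def verify(data: bytes, digest: str) -> bool:
    """Did what arrived match what was sent?"""
    return hashlib.sha256(data).hexdigest() == digest

payload, digest = send(b"NICK")
print(verify(payload, digest))    # True  - arrived intact
print(verify(b"MICE", digest))    # False - corrupted in transit, caught
```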

    Oh, and did I mention that encryption will also cost you more computational overhead? Not to mention the question of how we manage this security, because we have a basic requirement to protect all of this biodata in our system forever and eliminate the possibility that someone could ever reproduce a copy of the data – because that would produce another person. (Ignore the fact that storing this much data is crazy anyway, and that the world’s current storage couldn’t hold it all.)

    And who holds the keys to the kingdom anyway? Lenovo recently compromised a whole heap of machines (the Superfish debacle) by putting what’s called a “self-signed root certificate” on their machines to allow an adware partner to insert ads into your viewing. This is the equivalent of selling you a house with a secret door that you don’t know about, secured by a four-digit pin lock – it’s not secure and, because you don’t know about it, you can’t fix it. Every person who worked for the teleporter company would have to be treated as a hostile entity, because the value of a secretly tele-cloned person is potentially immense: from the point of view of slavery, organ harvesting, blackmail, stalking and forced labour…

    But governments can get in the way, too. For example, the FREAK security flaw is a hangover from 1990s restrictions on strong encryption that was never cleaned up. Will governments demand in-transit inspection of certain travellers, or the removal of contraband encoded elements prior to materialisation? How do you patch a hole that might have secretly removed essential proteins from the livers of every consular official of a particular country?

    The security protocols and approaches required for a teleporter culture could fill an entire freshman seminar in maths and CS, and you would still barely have scratched the surface. But we are now wandering into the most complex areas of all.

  7. Ethics and Philosophy

    How do we define what it means to be human? Is it the information associated with our physical state (locations, spin states and energy levels) or do we have to duplicate all of the atoms? If we can produce two different copies of the same person, the dreaded transporter accident, what does this say about the human soul? Which one is real?

    How do we deal with lost packets? Are they a person? What state do they have? To whom do they belong? If we transmit to a site that is destroyed just after materialisation, can we then transmit to a safe site to restore the person or is that on shaky ground?

    Do we need to develop special programming languages that make it impossible to carry out actions that would violate certain ethical or established protocols? How do we sign off on code for this? How do we test it?

    Do we grant full ethical and citizenship rights to people who have been through transporters, when they are very much no longer natural born people? Does country of birth make any sense when you are recreated in the atoms of another place? Can you copy yourself legitimately? How much of yourself has to survive in order for it to claim to be you? If someone is bifurcated and ends up, barely alive, with half in one place and half in another …

There are many excellent Science Fiction works referenced in the earlier links, and many more out there, although writers of harder SF are backing away from teleportation because it does appear to be basically impossible. But if a networking student could understand all of the issues that I’ve raised here and discuss solutions in detail, they’d basically have passed my course. And all by discussing an impossible thing.

With thanks to Sean Williams, Adelaide author, who has been discussing this a lot as he writes about teleportation from the SF perspective and inspired this post.


Why “#thedress” is the perfect perception tester.

I know, you’re all over the dress. You’ve moved on to (checks Twitter) “#HouseOfCards”, Boris Nemtsov and the new Samsung gadgets. I wanted to touch on some of the things I mentioned in yesterday’s post and why that dress picture was so useful.

The first reason is that issues of conflict caused by different perception are not new. You only have to look at the furore surrounding the introduction of Impressionism, the scandal of the colour palette of the Fauvists, the outrage over Marcel Duchamp’s readymades and Dada in general, to see that art is an area that is constantly generating debate and argument over what is, and what is not, art. One of the biggest changes has been the move away from representative art to abstract art, mainly because we are no longer capable of making the simple objective comparison of “that painting looks like the thing that it’s a painting of.” (Let’s not even start on the ongoing linguistic violence over ending sentences with prepositions.)

Once we move art into the abstract, suddenly we are asking a question beyond “does it look like something?” and move into the realm of “does it remind us of something?”, “does it make us feel something?” and “does it make us think about the original object in a different way?” You don’t have to go all the way to using body fluids and live otters in performance pieces to start running into the refrains so often heard in art galleries: “I don’t get it”, “I could have done that”, “It’s all a con”, “It doesn’t look like anything” and “I don’t like it.”

Kazimir Malevich’s Suprematism with Blue Triangle and Black Square (1915).

This was a radical departure from the art of the time, part of the Suprematism movement that flourished briefly before Stalin suppressed it, heavily and brutally. Art like this was considered subversive, dangerous and a real threat to the morality of the citizenry. Not bad for two simple shapes, is it? And, yet, many people will look at this and use one of the phrases above. There is an enormous range of perception on this very simple (yet deeply complicated) piece of art.

The viewer is, of course, completely entitled to their subjective opinion on art but this is, in many cases, a perceptual issue caused by a lack of familiarity with the intentions, practices and goals of abstract art. When we were still painting pictures of houses and rich people, many pictures from the 16th to 18th centuries contained really badly painted animals. It’s worth going to an historical art museum just to look at all the crap animals. Looking at early European artists trying to capture Australian fauna gives you the same experience – people weren’t painting what they were seeing, they were painting a reasonable approximation of the accepted representation and putting that into the picture. Yet this was accepted, and it was accepted because it was a commonly held perception. This also explains offensive (and totally unrealistic) caricatures along racial, gender or religious lines: you accept the stereotype as a reasonable portrayal because of shared perception. (And, no, I’m not putting pictures of that up.)

But, when we talk about art or food, it’s easy to get caught up in things like cultural capital: the assets we have that aren’t money but allow us to be more socially mobile. “Knowing” about art, wine or food has real weight in certain social situations, so the background here matters. Thus, the fact that two people can look at the same abstract piece and have one be enraptured while the other wants their money back is not a clean perceptual distinction, free of outside influence. We can’t say “human perception is a very personal business” based on this alone, because there are too many arguments to be made about prior knowledge, art appreciation, socioeconomic factors and cultural capital.

But let’s look at another argument starter, the dreaded Monty Hall Problem: there are three doors, a good prize behind one, and you have to pick a door to try to win it. If the host then opens a door showing you where the prize isn’t, do you switch or not? (The correctly formulated problem is designed so that switching is the right thing to do but, again, so much argument.) This is, again, a perceptual issue because of how people think about probability, how much weight they invest in their decision-making process, how they feel when discussing it and so on. I’ve seen people get into serious arguments about this, and that doesn’t even scratch the surface of the incredible abuse Marilyn vos Savant suffered when she had the audacity to publish the correct solution to the problem.
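If you don’t want to argue about it, you can simulate it. A quick sketch (the host here always opens the lowest-numbered valid door, which doesn’t change the odds):

```python
import random

def play(switch: bool) -> bool:
    """One round of Monty Hall; returns True if the player wins."""
    prize = random.randrange(3)
    pick = random.randrange(3)
    # The host opens a door that is neither the pick nor the prize.
    opened = next(d for d in range(3) if d != pick and d != prize)
    if switch:
        pick = next(d for d in range(3) if d != pick and d != opened)
    return pick == prize

random.seed(0)
trials = 100_000
win_switch = sum(play(True) for _ in range(trials)) / trials
win_stay = sum(play(False) for _ in range(trials)) / trials
print(win_switch, win_stay)   # ~0.667 vs ~0.333
```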

This is another great example of what happens when the human perceptual system, environmental factors and facts get jammed together but… it’s also not clean, because you can start talking about previous mathematical experience, logical thinking approaches, textual analysis and so on. It’s easy to say “ah, this isn’t just a human perceptual thing, it’s everything else.”

This is why I love that stupid dress picture. You don’t need any prior knowledge of art, cultural capital, mathematical background, history of game shows or whatever. All you need are eyes and a relatively functional sense of colour. (The dress doesn’t even hit most of the colour blindness issues, interestingly.)

The dress is the clearest example we have that two people can look at the same thing and have inbuilt perceptual differences, beyond their control, cause a difference of opinion. We finally have a universal example of how being human means not being sure of the world we live in – and one that we can reproduce anytime we want, without having to carry out any more preparation than “have you seen this dress?”

What we do with it is, as always, the important question now. For me, it’s a reminder to think about issues of perception before I explode with rage across the Internet. Some things will still just be dumb, cruel or evil – the dress won’t heal the world but it does give us a new filter to apply. But it’s simple and clean, and that’s why I think the dress is one of the best things to happen recently to help to bring us together in our discussions so that we can sort out important things and get them done.