A Year of Beauty

Plato: Unifying key cosmic values of Greek culture into a useful conceptual trinity.

Ever since education became something we discussed, teachers and learners alike have had strong opinions about the quality of education and how it can be improved. What is surprising, as you look at these discussions over time, is how often we come back to the same ideas. We read Dewey and we hear echoes of Rousseau. So many echoes and so much careful thought, found again as we built new modern frames with Vygotsky, Piaget, Montessori, Papert and so many more. But little of this should really be a surprise, because we can go back to the writings of Marcus Fabius Quintilianus (Quintilian) and his twelve books of The Orator’s Education and find discussion of small class sizes, of constructive student-focused teaching, and of the idea that more people were capable of thought and far-reaching intellectual pursuits than was popularly believed.

“… as birds are born for flying, horses for speed, beasts of prey for ferocity, so are [humans] for mental activity and resourcefulness.” (Quintilian, The Orator’s Education, Book I, p. 65)

I used to say that it was stunning how slowly contemporary education has moved in directions first suggested by Dewey a hundred years ago; then I discovered that Rousseau had said much the same 150 years before that. Now I find that Quintilian wrote such things nearly 2,000 years ago. And Marcus Aurelius, among other Stoics, made much of approaches to thinking that, somehow, were put to one side as we industrialised education much as we had industrialised everything else.

This year I have accepted that we have had 2,000 years of thinking (and as much evidence, when we are bold enough to experiment) and yet we just have not seen enough change. Dewey’s critique of the University is still valid. Rousseau’s lament on attaining true mastery of knowledge stands. Quintilian’s distrust of mere imitation would not be quieted by much of modern repetitive examination practice.

What stops us from changing? We have more than enough evidence of discussion and thought, from some of the greatest philosophers we have seen. When we start looking at education, in varying forms, we wander across Plato, Hypatia, Hegel, Kant, Nietzsche, in addition to all of those I have already mentioned. But evidence, as it stands, does not appear to be enough, especially in the face of personal perception of achievement, contribution and outcomes, whether supported by facts or not.

Evidence of uncertainty is not enough. Evidence of the lack of efficacy of techniques, now that we can and do measure them, is not enough. Evidence that students who fail under one tutor or approach then mysteriously flourish elsewhere is not enough.

Authority, by itself, is not enough. We can be told to do more or to do things differently but the research we have suggests that an externally applied control mechanism just doesn’t work very well for areas where thinking is required. And thinking is, most definitely, required for education.

I have already commented elsewhere on Mark Guzdial’s post that attracted so much attention and, yet, all he was saying was what we have seen repeated throughout history and is now supported in this ‘gilt age’ of measurement of efficacy. It still took local authority to stop people piling onto him (even under the rather shabby cloak of ‘scientific enquiry’ that masks so much negative activity). Mark is repeating the words of educators throughout the ages who have stepped back and asked “Is what we are doing the best thing we could be doing?” It is human to ask “If I know that this is the evidence, why am I acting as if it were not true?” But it is quite clear that this is still challenging and, amazingly, heretical to an extent, despite these (apparently controversial) ideas pre-dating most of what we know as the trappings and establishments of education. Here is our evidence that evidence is not enough. This experience also shows that, while authority can halt a debate, it cannot force people to alter such a deeply complex and cognitive practice in a useful manner. Nobody is necessarily agreeing with Mark; they’re just no longer arguing. That’s not helpful.

So, where to from here?

We should not throw out everything old simply because it is old; that is meaningless without evidence, and it is as wrong as autocratically rejecting everything new because it is new.

The challenge is to find a way of explaining how things could change without forcing conflict between evidence and personal experience and without having to resort to an argument by authority, whether moral or experiential. And this is a massive challenge.

This year, I looked back to find other ways forward. I looked back to the three values of Ancient Greece, brought together as a trinity through Socrates and Plato.

These three values are: beauty, goodness and truth. Here, truth means seeing things as they are (non-concealment). Goodness denotes the excellence of something and often refers to a purpose or meaning for existence, in the sense of a good life. Beauty? Beauty is an aesthetic delight, pleasing to those senses that value certain criteria. It does not merely mean pretty, as there are many ways that something can be aesthetically pleasing. For Dewey, equality of access was an essential criterion of education; education could only be beautiful to Dewey if it was free and easily available. For Plato, the revelation of knowledge was good, and beauty could arouse a love for this knowledge that would lead to such a good. By revealing the good, and reality, to ourselves and our world, we are ultimately seeking truth: seeing the world as it really is.

In the Platonic ideal, a beautiful education leads us to fall in love with learning and gives us momentum to strive for good, which will lead us to truth. Is there any better expression of what we all would really want to see in our classrooms?

I can speak of efficiencies of education, of retention rates and average grades. Or I can ask you if something is beautiful. We may not all agree on details of constructivist theory but if we can discuss those characteristics that we can maximise to lead towards a beautiful outcome, aesthetics, perhaps we can understand where we differ and, even more optimistically, move towards agreement. Towards beautiful educational practice. Towards a system and methodology that makes our students as excited about learning as we are about teaching. Let me illustrate.

A teacher stands in front of a class, delivering the same lecture that has been delivered for the last ten years. From the same book. The classroom is half-empty. There’s an assignment due tomorrow morning. Same assignment as the last three years. The teacher knows roughly how many people will ask for an extension an hour beforehand, how many will hand up and how many will cheat.

I can talk about evidence, about pedagogy, about political and class theory, about all forms of authority, or I can ask you, in the privacy of your head, to think about these questions.

  • Is this beautiful? Which of the aesthetics of education are really being satisfied here?
  • Is it good? Is this going to lead to the outcomes that you want for all of the students in the class?
  • Is it true? Is this really the way that your students will be applying this knowledge, developing it, exploring it and taking it further, to hand on to other people?
  • And now, having thought about this yourself, what do you think your students would say? Would they think this was beautiful, once you explained what you meant?

Over the coming year, I will be writing a lot more on this. I know that this idea is not unique (Dewey wrote on this, to an extent, and, more recently, several books in the dramatic arts have taken up the case of beauty and education) but it is one that we do not often address in science and engineering.

My challenge, for 2016, is to try to provide a year of beautiful education. Succeed or fail, I will document it here.


Updated previous post: @WalidahImarisha #worldcon #sasquan

Walidah Imarisha very generously continued the discussion of my last piece with me on Twitter and I have updated that piece to include her thoughts and to provide vital additional discussion. As always, don’t read me talking about things when you can read the words of the people who are out there fixing, changing the narrative, fighting and winning.

Thank you, Walidah!


The Only Way Forward is With No Names @iamajanibrown @WalidahImarisha #afrofuturism #worldcon #sasquan

Edit: Walidah Imarisha and I had a discussion in Twitter after I released this piece and I wanted to add her thoughts and part of our discussion. I’ve added it to the end so that you’ll have context but I mention it here because her thoughts are the ones that you must read before you leave this piece. Never listen to me when you can be listening to the people who are living this and fighting it.

I’m currently at the World Science Fiction Convention in Spokane, Washington state. As always, my focus is education and (no surprise to long-term readers) equity. I’ve had the opportunity to attend some amazing panels. One was on the experience of women in art, publishing and game production, and on the portrayal of female characters in video gaming. Others discussed issues such as non-white presence in fiction (#AfroFuturism with Professor Ajani Brown) and the changes between the Marvel Universe in film and comic form, as well as how we can use Science Fiction & Fantasy in the classroom to address social issues without having to directly engage the (often depressing) news sources. Both of the latter panels were excellent and, in the Marvel one, with Tom Smith, Annalee Flower Horne, Cassandra Rose Clarke and Professor Brown, there was a lot of discussion of the new Afro-American characters in movies and TV (Deathlok, Storm and Falcon) as well as how much they had changed from the comics.

I’m going to discuss what I saw and lead towards my point: that all assessment of work for its publishing potential should, where it is possible and sensible, be carried out blind, without knowledge of who wrote it.

I’ve written on this before, both here (where I argue that current publishing may not be doing what we want for the long-term benefit of the community and the publishers themselves) and here, where we identify that systematic bias against people who are not Western men is rampant and apparently almost inescapable as long as we can see a female name. Very recently, this Jezebel article identified that changing the author’s name on a manuscript, from female to male, not only improved the response rate and reduced the waiting time, it changed the type of feedback given. The woman’s characters were “feisty”; the man’s weren’t. Same characters. It doesn’t matter if you think you’re being sexist or not; it doesn’t even matter (from the PNAS study in the second link) if you’re a man or a woman: the presence of a female name changes the level of respect attached to a work and also the level of reward/appreciation offered in an assessment process. There are similar works that clearly identify that this problem is even worse for People of Colour. (Look up intersectionality if you don’t know what I’m talking about.) I’m not saying that all of these people are trying to discriminate, but the evidence we have says that the social conditioning that leads to sexism is powerful and dominating.

Now let’s get back to the panels. The first panel was “Female Characters in Video Games”, with Andrea Stewart, Maurine Starkey, Annalee Flower Horne, Lauren Roy and Tanglwyst de Holloway. While discussing the growing market for female characters, the panel identified the ongoing problems and discrimination against women in the industry. 22% of professionals in the field are women, which sounds awful until you realise that this figure was 11% in 2009. However, Maurine had had her artwork recognised as “great” when someone thought her work was a man’s and as “oh, drawn like a woman” when the true creator was revealed. And this is someone being explicit. The message of the panel was very positive: things are getting better. However, it was obvious that knowing someone was a woman changed how people valued their work, or even how their activities were described. “Casual gaming” is often a term that describes what women do; if women take up a gaming platform (and they are a huge portion of the market), then it often gets labelled “casual gaming”.

So, point 1, assessing work at a professional level is apparently hard to do objectively when we know the gender of people. Moving on.

The first panel on Friday dealt with AfroFuturism, which looks at the long-standing philosophical and artistic expression of alternative realities relating to people of African descent. This can be traced from the Egyptian origins of mystic and astrological architecture and religions, through the tribal dances and mask ceremonies of other parts of Africa, to the P-Funk mothership and science-fiction works published in the middle of vinyl albums. There are strong notions of carving out or refining identity in order to break oppressive narratives and re-establish agency. AfroFuturism looks into creating new futures and narratives, also allowing for reinvention to escape the past, which is a powerful tool for liberation. People can be put into boxes and they want to break out to liberate themselves; too often, if we know that someone can be put into a box, then we have a nasty tendency (implicit cognitive bias) to jam them back in. No wonder AfroFuturism is seen as a powerful force: it is an assault on the whole mean, racist narrative that does things like call groups of white people “protesters” or “concerned citizens”, and groups of black people “rioters”.

(If you follow me on Twitter, you’ve seen a fair bit of this. If you’re not following me on Twitter, @nickfalkner is the way to go.)

So point 2, if we know someone’s race, then we are more likely to enforce a narrative that is stereotypical and oppressive when we are outside of their culture. Writers inside the culture can write to liberate and to redefine identity and this probably means we need to see more of this.

I want to focus on the final panel, “Saving the World through Science Fiction: SF in the Classroom”, with Ben Cartwright, Ajani Brown (again!), Walidah Imarisha and Charlotte Lewis Brown. There are many issues facing our students on a day-to-day basis and it can be very hard to engage with some of them because it is confronting to have to address your own biases when you talk about the real world. But you can talk about racism with aliens, xenophobia with a planetary invasion, the horrors of war with apocalyptic fiction… and it’s not the nightly news. People can confront their biases without confronting them. That’s a very powerful technique for changing the world. It’s awesome.

Point 3, then, is that narratives are important and, with careful framing, we can discuss very complicated things and get away from the sheer weight of biases and reframe a discussion to talk about difficult things, without having to resort to violence or conflict. This reinforces Point 2, that we need more stories from other viewpoints to allow us to think about important issues.

We are a narrative and a mythic species: storytelling allows us to explain our universe. Storytelling defines our universe, whether it’s religion, notions of family or sense of state.

What I take from all of these panels is that many of the stories that we want to be reading, that are necessary for the healing and strengthening of our society, should be coming from groups who are traditionally not proportionally represented: women, People of Colour, Women of Colour, basically anyone who isn’t recognised as a white man in the Western Tradition. This isn’t to say that everything has to be one form but, instead, that we should be putting systems in place to get the best stories from as wide a range as possible, in order to let SF&F educate, change and grow the world. This doesn’t even touch on the Matthew Effect, where we are more likely to positively value a work if we have an existing positive relationship with the author, even if said work is not actually very good.

And this is why, with all of the evidence we have of cognitive biases changing the way people think about work based on the name, the most likely approach to improve the range of stories that we end up publishing is to judge as many works as we can without knowing who wrote them. If we wanted to take it further, we could even ask people to briefly explain why they did or didn’t like a work. The comments on the Jezebel author’s book make it clear that, with those comments, we can clearly identify a bias in play. “It’s not for us” and things like that are not sufficiently transparent for us to see if the system is working. (Apologies to the hard-working editors out there, I know this is a big demand. Anonymity is a great start. 🙂 )
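To make the suggestion concrete: the anonymisation step itself is mechanically simple. Here is a minimal sketch in Python of what it could look like; the function and field names are entirely hypothetical, and this illustrates the idea rather than describing any real editorial system.

```python
import uuid

def anonymise(submissions):
    """Split submissions into reviewer-facing manuscripts and a
    separately held identity table, keyed by a random ID.

    Reviewers see only {"id", "text"}; the editor keeps the
    id-to-author mapping until decisions are made."""
    manuscripts, identities = [], {}
    for sub in submissions:
        key = uuid.uuid4().hex
        identities[key] = sub["author"]  # held by the editor only
        manuscripts.append({"id": key, "text": sub["text"]})
    return manuscripts, identities

# Example: the reviewer-facing record carries no author name.
blind, names = anonymise([{"author": "A. Writer", "text": "Chapter one…"}])
assert "author" not in blind[0]
```

The design choice that matters is not the code but the separation of concerns: identity is re-attached only after the assessment (and, ideally, the written rationale for it) has been recorded.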

Now, for some books and works you have to know who wrote them; my textbook, for example, depends upon my academic credentials and my published work, hence my identity is a part of the validity of academic work. But for short fiction, for novels? Perhaps it’s time to look at all of the evidence and all of the efforts to widen the range of voices we hear, and consider a commitment to anonymous review so that SF&F will be a powerful force for thought and change in the decades to come.

Thank you to all of the amazing panellists. You made everyone think and sent out powerful and positive messages. Thank you, so much!

Edit: As mentioned above, Walidah and I had a discussion that extended from this on Twitter. Walidah’s point was about changing the system so that we no longer have to hide identity to eliminate bias, and I totally agree with this. Our goal has to be to create a space where bias no longer exists, where the assumption that the hierarchical dominance is white, cis, straight and male is no longer the default. Also, while SF&F is a great tool, it does not replace having the necessary and actual conversations about oppression. Our goal should never be to erase people of colour and replace them with aliens and dwarves just because white people don’t want to talk about race. While narrative engineering can work, many people do not transfer the knowledge from analogy to reality, and this is why these authentic discussions of real situations must also exist. When we sit purely in analogy, we risk reinforcing inequality if we don’t tie it back down to Earth.

I am still trying to attack a biased system to widen the narrative and allow more space for other voices but, as Walidah notes, this is catering to the privileged, rather than empowering the oppressed to speak their stories. And, of course, talking about oppression leads those on top of the hierarchy to assume you are oppressed. Walidah mentioned Katharine Burdekin and Swastika Night as part of this. Our goal must be to remove bias. What I spoke about above is one way, but it is very much born of the privileged, and we cannot lose sight of the necessity of empowerment and a constant commitment to ensuring the visibility of other voices and hearing the stories of the oppressed from them, not passed through white academics like me.

Seriously, if you can read me OR someone else who has a more authentic connection? Please read that someone else.

Walidah’s recent work includes, with adrienne maree brown, editing the book of 20 short stories I have winging its way to me as we speak, “Octavia’s Brood: Science Fiction Stories from Social Justice Movements” and I am so grateful that she took the time to respond to this post and help me (I hope) to make it stronger.


Musings of an Amateur Mythographer I: Islands of Certainty in a Sea of Confusion

If that’s the sea of confusion, I’ll be floating in it for a while. (Wikipedia – Mokoli’i)

I’ve been doing a lot of reading recently on the classification of knowledge, the development of scientific thinking, the ways different cultures approach learning, and the relationship between myths and science. Now, some of you are probably wondering why I can’t just watch “Agents of S.H.I.E.L.D.” like a normal person, but others of you have already started to shift uneasily because I’ve talked about a relationship between myths and science, as if we do not consider science to be the natural successor to preceding myths. Well, let me go further. I’m about to start drawing on thinking about myths and science, and even on how myths themselves teach us about the importance of evidence, the foundation of science, albeit for their own purposes.

Why?

Because much of what we face as opposition in educational research is pre-existing stereotypes and misconceptions that people employ where there is a lack of (and sometimes in the face of) evidence. Yet this collection of beliefs is powerful because it prevents people from adopting verified and validated approaches to learning and teaching. What can we call these? Are they myths? What do I even mean by that term?

It’s important to realise that the use of the term myth has evolved from earlier, rather condescending, classifications of any culture’s pre-scientific thinking as dismissively primitive and unworthy of contemporary thought. This is a rich topic by itself, but let me refer to Claude Lévi-Strauss and his identification of myth as a form of thinking and classification, rather than simple story-telling, and thus as proto-scientific rather than anti-scientific. This includes rejecting the knee-jerk classification of a less scientifically advanced people as emotional and practical, rather than intellectual (or even capable of being so). By moving myth forms onto an intellectual footing, Lévi-Strauss allows a non-pejorative assessment of the potential value of myth forms. I note that I have done the study of mythology a grave disservice with such an abbreviated telling; further reading, to understand precisely what Lévi-Strauss was refuting, could involve Tylor, Malinowski and Lévy-Bruhl.

In many situations we treat myth and folklore as the same thing, from a Western post-Enlightenment viewpoint, only accepting those elements that we can validate. Thus we choose not to believe that Olympus holds the Greek Pantheon, as we cannot locate the Gods reliably, but the pre-scientific chewing of willow bark to relieve pain was validated once we constructed aspirin (and willow bark tea). It’s worth noting that the early identification of willow bark, as part of its scientific ‘discovery’, was inspired by an (effectively random) approach called the doctrine of signatures, which assumed that the cause and the cure of a disease would be located near each other. The folkloric doctrine of signatures led the explorers to a plant that tasted like another one but had a different use.

Myth, folklore and science, dancing uneasily together. Does this mean that what we choose to call myth now may or may not be myth in the future? We know that to use something, or to recommend it, in our endorsed and academic contexts usually requires it to become science. But what is science?

Karl Popper’s (heavily summarised) view is that we have a set of hypotheses that we test to destruction, and this is the foundation of our contemporary view of science. If the evidence we have doesn’t fit the hypothesis, then we must reject the hypothesis. When we have enough evidence, and enough hypotheses, we have a supported theory. However, this has a natural knock-on effect in that we cannot actually prove anything; we just have enough evidence to support the hypothesis. Kuhn (again, heavily summarised) has a model of “normal science”, in which most science proceeds as in Popper’s model, incrementally extending a body of existing work, but there are times when this continuity gives way to revolutionary change. At these times, we see an accumulation of contradictory evidence that illustrates that it’s time to think very differently about the world. Ultimately, we discover the need for a new coherency, where we need new exemplars to make the world make sense. (And, yes, there’s still a lot of controversy over this.)

Let me attempt to bring this all together, finally. We, as humans, live in a world full of information; some of it, even in our post-scientific world, we incorporate into our lives without evidence, and some we need evidence to accept. Do you want some evidence that we live our lives without, or even in spite of, evidence? The median length of a marriage in the United States is 11 years and 40–50% of marriages will end in divorce, yet many still swear ‘until death do us part’ or ‘all of my days’. But the myth of ‘marriage forever’ is still powerful. People have children, move, buy houses and totally change their lives based on this myth. The actions that people take here have a significant impact on the world around them, and yet they seem at odds with the evidence. (Such examples are not uncommon and, in a post-scientific-revolution world, must force us to question earlier suggestions that myth-based societies move seamlessly to a science-based intellectual utopia. This is why Lévi-Strauss is interesting to read. Our evidence is that our evidence is not sufficient evidence, so we must seek to better understand ourselves.) Even those components of our shared history and knowledge that are constructed to be based on faith, such as religion, understand how important evidence is to us. Let me give an example.

In the fourth book of the New Testament of the Christian Bible, the Gospel of John, we find the story of the Resurrection of Lazarus. Lazarus is sick and Jesus Christ waits until he dies to go to where he is buried and raise him. Jesus deliberately delays because the glory to the Christian God will be far greater and more will believe, if Lazarus is raised from the dead, rather than just healed from illness. Ultimately, and I do not speak for any religious figure or God here, anyone can get better from an illness but to be raised from the dead (currently) requires a miracle. Evidence, even in a book written for the faithful and to build faith, is important to humans.

We also know that there is a very large amount of knowledge that is accepted as being supported by evidence but the evidence is really anecdotal, based on bias and stereotype, and can even be distorted through repetition. This is the sea of confusion that we all live in. The scientific method (Popper) is one way that we can try to find firm ground to stand on but, if Kuhn is to be believed, there is the risk that one day we stand on the islands and realise that the truth was the sea all along. Even with Popper, we risk standing on solid ground that turns out to be meringue. How many of these changes can one human endure and still be malleable and welcoming in the face of further change?

Our problem with myth arises when it forces us to reject something that we can demonstrate to be both valuable and scientifically valid because, right now, the world that we live in is constructed on scientific foundations and coherence is maintained by adding to those foundations. Personally, I don’t believe that myth and science have to be at odds (many disagree with me, including Richard Dawkins, of course); this seems an acceptable view, as the two already co-exist in ways that actively shape society, for both good and ill.

Recently I made a comment on MOOCs that contradicted something someone said and I was (quite rightly) asked to provide evidence to support my assertions. That is the post before this one and what you will notice is that I do not have a great deal of what we would usually call evidence: no double-blind tests, no large-n trials with well-formed datasets. I had some early evidence of benefit, mostly qualitative and relatively soft, but, and this is important to me, what I didn’t have was evidence of harm. There are many myths around MOOCs and education in general. Some of them fall into the realm of harmful myths, those that cause people to reject good approaches to adhere to old and destructive practices. Some of them are harmful because they cause us to reject approaches that might work because we cannot find the evidence we need.

I am unsurprised that so many people adhere to folk pedagogy, given the vast amounts of information out there and the natural resistance to rejecting something that you think works, especially when someone sails in and tells you that you’ve been wrong for years. The fact that we are still discussing the nature of myth and science gives insight into how complicated this issue is.

I think that the path I’m on could most reasonably be called that of the mythographer, but the cataloguing of the edges of myth and the intersections of science is not in order to condemn one or the other but to find out what the truth is to the best of our knowledge. I think that understanding why people believe what they believe allows us to understand what they will need in order to believe something that is actually, well, true. There are many articles written on this, on the difficulty of replacing one piece of learning with another and the dangers of repetition in reinforcing previously-held beliefs, but there is hope in that we can construct new elements to replace old information if we are careful and we understand how people think.

We need to understand the delicate relationships between myth, folklore and science, our history as separate and joined peoples, if only to understand when we have achieved new forms of knowing. But we also need to be more upfront about when we believe we have moved on, including actively identifying areas that we have labelled as “in need of much more evidence” (such as learning styles, for example) to assist people in doing valuable work if they wish to pursue research.

I’ll go further. If we have areas where we cannot easily gain evidence, yet we have competing myths in that space, what should we do? How do we choose the best approach to achieve the most effective educational outcomes? I’ll let everyone argue in the comments for a while and then write that as the next piece.


Designing a MOOC: how far did it reach? #csed

Mark Guzdial posted over on his blog on “Moving Beyond MOOCS: Could we move to understanding learning and teaching?” and discusses aspects (that still linger) of MOOC hype. (I’ve spoken about MOOCs done badly before, as well as recording the thoughts of people like Hugh Davis from Southampton.) One of Mark’s paragraphs reads:

“The value of being in the front row of a class is that you talk with the teacher. Getting physically closer to the lecturer doesn’t improve learning. Engagement improves learning. A MOOC puts everyone at the back of the class, listening only and doing the homework.”

My reply to this was:

“You can probably guess that I have two responses here, the first is that the front row is not available to many in the real world in the first place, with the second being that, for far too many people, any seat in the classroom is better than none.

But I am involved in a, for us, large MOOC so my responses have to be regarded in that light. Thanks for the post!”

Mark, of course, called my bluff and responded with:

“Nick, I know that you know the literature in this space, and care about design and assessment. Can you say something about how you designed your MOOC to reach those who would not otherwise get access to formal educational opportunities? And since your MOOC has started, do you know yet if you achieved that goal — are you reaching people who would not otherwise get access?”

So here is that response. Thanks for the nudge, Mark! The answer is a bit long, but please bear with me. We will be posting a longer summary after the course is completed, in a month or so; consider this the unedited taster. I’m putting this here, early, prior to the detailed statistical work, so you can see where we are. All the numbers below are fresh off the system, to drive discussion and to answer Mark’s question at, pretty much, a conceptual level.

First up, as some background for everyone, the MOOC team I’m working with is the University of Adelaide’s Computer Science Education Research group, led by A/Prof Katrina Falkner, with me (Dr Nick Falkner), Dr Rebecca Vivian, and Dr Claudia Szabo.

I’ll start by noting that we’ve been working to solve the inherent scaling issues at the front of the classroom for some time. If I had a class of 12 then there’d be no problem engaging with everyone, but I keep finding myself in rooms of 100+, which forces some people to sit away from me and also limits the number of meaningful interactions I can have with individuals in one setting. While I take Mark’s point about the front of the classroom, and the associated research is pretty solid on this, we encountered an inherent problem when we identified that students were better off down the front… and yet we kept teaching to rooms with more students than front-row seats. I’ll go out on a limb and say that this is actually a moral issue that we, as a sector, have had to look at and ignore in the face of constrained resources. The nature of large spaces and people, coupled with our inability to hover, means that we can either choose to have a single row of students in a semi-circle facing us, or accept that beyond a relatively small number of students or rows, we have constructed a space that is inherently divided by privilege and will lead to disengagement.

So, Katrina’s and my first foray into this space was dealing with the problem in the physical lecture spaces that we had, with the 100+ classes that we had.

Katrina and I published a paper on “contributing student pedagogy” in Computer Science Education 22 (4), 2012, identifying ways of forming valued small collaboration groups to promote engagement and drive skill development. Ultimately, by reducing the class to a smaller number of clusters and making those clusters pedagogically useful, I can bring the ‘front of the class’-like experience to every group I speak to. We have given talks and applied sessions on this, including a special session at SIGCSE, because we think it’s a useful technique that reduces the amount of ‘front privilege’ while extending the amount of ‘front benefit’. (Read the paper for actual detail – I am skimping on summary here.)

We then got involved in the support of the national Digital Technologies curriculum for primary and middle school teachers across Australia, after being invited to produce a support MOOC (really a SPOC, small, private, on-line course) by Google. The target learners were teachers who were about to teach or who were teaching into, initially, Foundation to Year 6 and thus had degrees but potentially no experience in this area. (I’ve written about this before and you can find more detail on this here, where I also thanked my previous teachers!)

The motivation of this group of learners was different from a traditional MOOC because (a) everyone had both a degree and probable employment in the sector, which reduced opportunistic registration to a large extent, and (b) Australian teachers are required to have a certain number of professional development (PD) hours a year. Through a number of discussions across the key groups, we had our course recognised as PD, and this meant that doing our course was considered to be valuable, although almost all of the teachers we spoke to were furiously keen for this information anyway and my belief is that the PD was very much ‘icing’ rather than ‘cake’. (Thank you again to all of the teachers who have spent time taking our course – we really hope it’s been useful.)

To discuss access and reach, we can measure teachers who’ve taken the course (somewhere in the low thousands) and then estimate the number of students potentially assisted and that’s when it gets a little crazy, because that’s somewhere around 30-40,000.

In his talk at CSEDU 2014, Hugh Davis identified the student groups who get involved in MOOCs as follows. The majority of people undertaking MOOCs were life-long learners (older, degreed, M/F 50/50), people seeking skills via PD, and those with poor access to Higher Ed. There is also a group of Uni ‘tasters’, but it is very, very small. (I think we can agree that tasting a MOOC is not tasting a campus-based Uni experience. Less ivy, for starters.) The three approaches to the course, once inside, were auditing, completing and sampling, and it’s this final one that I want to emphasise, because it brings us to one of the differences of MOOCs: we are not in control of when people decide that they are satisfied with the free education that they are accessing, unlike our strong gatekeeping on traditional courses.

I am in total agreement that a MOOC is not the same as a classroom but, also, that it is not the same as a traditional course, where we define how the student will achieve their goals and how they will know when they have completed. MOOCs function far more like many people’s experience of web browsing: they hunt for what they want and stop when they have it, thus the sampling engagement pattern above.

(As an aside, does this mean that a course that is perceived as ‘all back of class’ will rapidly be abandoned because it is distasteful? This makes the student-consumer a much more powerful player in their own educational market and is potentially worth remembering.)

Knowing these different approaches, we designed the individual subjects and overall program so that it was very much up to the participant how much they chose to take and individual modules were designed to be relatively self-contained, while fitting into a well-designed overall flow that built in terms of complexity and towards more abstract concepts. Thus, we supported auditing, completing and sampling, whereas our usual face-to-face (f2f) courses only support the first two in a way that we can measure.

As Hugh notes, and we agree through growing experience, marking/progress measures at scale are very difficult, especially when automated marking is not enough or not feasible. Based on our earlier work on contributing collaboration in the classroom, for the F-6 Teacher MOOC we used a strong peer-assessment model where contributions and discussions were heavily linked. Because of the nature of the cohort, geographical and year-level groups formed, who then conducted additional sessions and produced shared material at a slightly terrifying rate. We took the approach that we were not telling teachers how to teach but were helping them to develop and share materials that would assist in their teaching. This reduced potential divisions and allowed us to establish a mutually respectful relationship that facilitated openness.

(It’s worth noting that the courseware is creative commons, open and free. There are people reassembling the course for their specific take on the school system as we speak. We have a national curriculum but a state-focused approach to education, with public and many independent systems. Nobody makes any money out of providing this course to teachers and the material will always be free. Thank you again to Google for their ongoing support and funding!)

Overall, in this first F-6 MOOC, we had higher than usual retention of students and higher than usual participation, for the reasons I’ve outlined above. But this material was curriculum support for teachers of young students, all of whom were pre-programming, and it could be contained in videos and on-line sharing of materials and discussion. We were also in the MOOC sweet-spot: existing degreed learners, a PD driver, and a PD requirement that depended on progressive demonstration of goal achievement, which we recognised post-course with a pre-approved certificate form. (Important note: if you are doing this, clear up how the PD requirements are met and how they need to be reported back, as early on as you can. It meant that we could give people something valuable in a short time.)

The programming MOOC, Think. Create. Code on EdX, was more challenging in many regards. We knew we were in a more difficult space and would be more in what I shall refer to as ‘the land of the average MOOC consumer’. No strong focus, no PD driver, no geographically guaranteed communities. We had to think carefully about what we considered to be useful interaction with the course material. What counted as success?

To start with, we took an image-based approach (I don’t think I need to provide supporting arguments for media-driven computing!) where students would produce images and, over time, refine their coding skills to produce, and understand how to produce, more complex images, building towards animation. People who have not had good access to education may not understand why we would use programming in more complex systems, but our goal was to make images, and that is a fairly universally understood idea, with a short production timeline and a very clear indication of achievement: “Does it look like a face yet?”
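(As a concrete illustration – this is my own minimal sketch in the spirit of the course, not the actual course material – a first ‘face’ needs only a handful of Processing statements, each with a visible effect:)

```processing
// A first face: a few statements, one immediately visible result.
size(200, 200);
background(255);
fill(255, 220, 150);
ellipse(100, 100, 120, 140);   // head
fill(0);
ellipse(75, 85, 12, 12);       // left eye
ellipse(125, 85, 12, 12);      // right eye
noFill();
arc(100, 125, 50, 30, 0, PI);  // smile (lower half of an ellipse)
```

Press play, get picture: every line maps directly to something on screen, so the “does it look like a face yet?” check is the feedback loop.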

In terms of useful interaction, if someone wrote a single program that drew a face, for the first time – then that’s valuable. If someone looked at someone else’s code and spotted a bug (however we wish to frame this), then that’s valuable. I think that someone writing a single line of correct code, where they understand everything that they write, is something that we can all consider to be valuable. Will it get you a degree? No. Will it be useful to you in later life? Well… maybe? (I would say ‘yes’ but that is a fervent hope rather than a fact.)

So our design brief was that it should be very easy to get into programming immediately, with an active and engaged approach, and that we have the same “mostly self-contained week” approach, with lots of good peer interaction and mutual evaluation to identify areas that needed work to allow us to build our knowledge together. (You know I may as well have ‘social constructivist’ tattooed on my head so this is strongly in keeping with my principles.) We wrote all of the materials from scratch, based on a 6-week program that we debated for some time. Materials consisted of short videos, additional material as short notes, participatory activities, quizzes and (we planned for) peer assessment (more on that later). You didn’t have to have been exposed to “the lecture” or even the advanced classroom to take the course. Any exposure to short videos or a web browser would be enough familiarity to go on with.

Our goal was to encourage as much engagement as possible, taking into account the fact that any number of students over 1,000 would be very hard to support individually, even with the 5-6 staff we had to help out. But we wanted students to be able to develop quickly, share quickly and, ultimately, comment back on each other’s work quickly. From a cognitive load perspective, it was crucial to keep the number of things that weren’t relevant to the task to a minimum, as we couldn’t assume any prior familiarity. This meant no installers, no linking, no loaders, no shenanigans. Write program, press play, get picture, share to gallery, winning.

As part of this, our support team (thanks, Jill!) developed a browser-based environment for Processing.js that integrated with a course gallery. Students could save their own work easily and share it trivially. Our early indications show that a lot of students jumped in and tried to do something straight away. (Processing is really good for getting something up, fast, as we know.) We spent a lot of time testing browsers, testing software, and writing code. All of the recorded materials used that development environment (this was important as Processing.js and Processing have some differences) and all of our videos show the environment in action. Again, as little extra cognitive load as possible – no implicit requirement for abstraction or skills transfer. (The AdelaideX team worked so hard to get us over the line – I think we may have eaten some of their brains to save those of our students. Thank you again to the University for selecting us and to Katy and the amazing team.)

The actual student group, about 20,000 people over 176 countries, did not have the “built-in” motivation of the previous group although they would all have their own levels of motivation. We used ‘meet and greet’ activities to drive some group formation (which worked to a degree) and we also had a very high level of staff monitoring of key question areas (which was noted by participants as being very high for EdX courses they’d taken), everyone putting in 30-60 minutes a day on rotation. But, as noted before, the biggest trick to getting everyone engaged at the large scale is to get everyone into groups where they have someone to talk to. This was supposed to be provided by a peer evaluation system that was initially part of the assessment package.

Sadly, the peer assessment system didn’t work as we wanted it to and we were worried that it would form a disincentive, rather than a supporting community, so we switched to a forum-based discussion of the works on the EdX discussion forum. At this point, a lack of integration between our own UoA programming system and gallery and the EdX discussion system allowed too much distance – the close binding we had in the F-6 MOOC wasn’t there. We’re still working on this because everything we know and all evidence we’ve collected before tells us that this is a vital part of the puzzle.

In terms of visible output, the amount of novel and amazing art work that has been generated has blown us all away. The degree of difference is huge: armed with approximately 5 statements, the number of different pieces you can produce is surprisingly large. Add in control statements and repetition? BOOM. Every student can write something that speaks to her or him and show it to other people, encouraging creativity and facilitating engagement.
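(To illustrate the point, hypothetically – again my own sketch, not course material – take roughly those five statements, wrap the drawing in a loop with a little randomness, and every single run is a different piece:)

```processing
// The same handful of drawing statements, plus a control statement:
// fifty translucent circles in random colours and positions,
// so each run of the sketch produces a unique image.
size(200, 200);
background(0);
noStroke();
for (int i = 0; i < 50; i++) {
  fill(random(255), random(255), random(255), 180);
  ellipse(random(width), random(height), 30, 30);
}
```

That jump, from one fixed picture to a generator of pictures, is where the variety in the gallery comes from.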

From the stats side, I don’t have access to the raw stats, so it’s hard for me to give you a statistically sound answer as to who we have or have not reached. This is one of the things with working with a pre-existing platform and, yes, it bugs me a little because I can’t plot this against that unless someone has built it into the platform. But I think I can tell you some things.

I can tell you that roughly 2,000 students attempted quiz problems in the first week of the course and that over 4,000 watched a video in the first week – no real surprises, registrations are an indicator of interest, not a commitment. During that time, 7,000 students were active in the course in some way – including just writing code, discussing it and having fun in the gallery environment. (As it happens, we appear to be plateauing at about 3,000 active students but time will tell. We have a lot of post-course analysis to do.)

It’s a mistake to focus on the “drop” rates because the MOOC model is different. We have no idea if the people who left got what they wanted or not, or why they didn’t do anything. We may never know but we’ll dig into that later.

I can also tell you that only 57% of the students currently enrolled have explicitly declared themselves to be male, and that is the most likely indicator that we are reaching students who might not usually be in a programming course: the remaining 43%, of whom 33% have self-identified as women, is a far higher proportion than we ever see in classes locally. If you want evidence of reach then it begins here, as part of the provision of an environment that is, apparently, more welcoming to ‘non-men’.

We have had a number of student comments that reflect positive reach and, while these are not statistically significant, I think that this also gives you support for the idea of additional reach. Students have been asking how they can save their code beyond the course and this is a good indicator: ownership and a desire to preserve something valuable.

For student comments, however, this is my favourite.

I’m no artist. I’m no computer programmer. But with this class, I see I can be both. #processingjs (Link to student’s work) #code101x .

That’s someone for whom this course had them in the right place in the classroom. After all of this is done, we’ll go looking to see how many more we can find.

I know this is long but I hope it answered your questions. We’re looking forward to doing a detailed write-up of everything after the course closes and we can look at everything.


EduTech AU 2015, Day 2, Higher Ed Leaders, “Change and innovation in the Digital Age: the future is social, mobile and personalised.” #edutechau @timbuckteeth

And heeere’s Steve Wheeler (@timbuckteeth)! Steve is an A/Prof of Learning Technologies at Plymouth in the UK. He and I have been at the same event before (CSEDU, Barcelona) and we seem to agree on a lot. Today’s cognitive bias warning is that I will probably agree with Steve a lot, again. I’ve already quizzed him about his talk and, as I understand it, what he wants to talk about is how our students can have altered expectations without necessarily becoming some sort of different species. (There are no Digital Natives. No, Prensky was wrong. Check out Helsper, 2010, from the LSE.) So, on to the talk and enough of my nonsense!

Steve claims he’s going to recap the previous speaker, but in an English accent. Ah, the Mayflower steps on the quayside in Plymouth, except that they’re not, because the real Mayflower steps are in a ladies’ loo in a pub, 100m back from the quay. The moral? What you expect to be getting is not always what you get. (Tourists think they have the real thing, locals know the truth.)

“Any sufficiently advanced technology is indistinguishable from magic” – Arthur C. Clarke.

Educational institutions are riddled with bad technology purchases where we buy something, don’t understand it, don’t support it and yet we’re stuck with it or, worse, try to teach with it when it doesn’t work.

Predicting the future is hard but, for educators, we can do it better if we look at:

  • Pedagogy first
  • Technology next (that fits the pedagogy)

Steve then plugs his own book with a quote on technology not being a silver bullet.

But who will be our students? What are their expectations of the future? Common answers include collaboration (between students and staff) and more making and doing. They don’t like being talked at. Students today do not have a clear memory of the previous century; their expectations are based on the world that they are living in now, not the world that we grew up in.

Meet Student 2.0!

The average digital birth of children happens at about six months – but they can be on the Internet before they are born, via ultrasound photos. (Anyone who has tried to swipe or pinch-zoom a magazine knows why kids take to it so easily.) Students of today have tools and technology and this is what allows them to create, mash up, and reinvent materials.

What about Game Based Learning? What do children learn from playing games?

Three biggest fears of teachers using technology

  • How do I make this work?
  • How do I avoid looking like an idiot?
  • They will know more about it than I do.

Three biggest fears of students

  • Bad wifi
  • Spinning wheel of death
  • Low battery

The laptops and devices you see in lectures are personal windows on the world, ongoing conversations and learning activities – it’s not purely inattention or anti-learning. Student questions on Twitter can be answered by people all around the world and that’s extending the learning dialogue out a long way beyond the classroom.

One of these is Voltaire, one is Steve Wheeler.

Voltaire said that we were products of our age. Walrick asks: how can we prepare students for the future? Steve showed us a picture of himself as a young boy, who had been turned off asking questions by a mocking teacher. But the last two years of his schooling were in Holland, where he went to the Philips ‘flying saucer’, a technology museum. There, he saw an early video conferencing system and that inspired him with a vision of the future.

Steve wanted to be an astronaut but his career advisor suggested he aim lower, because he wasn’t an American. The point is not that Steve wanted to be an astronaut but that he wanted to be an explorer, the role that he occupies now in education.

Steve shared a quote that education is “about teaching students, not subjects” and he shared the awesome picture of ‘named quadrilaterals’. My favourite is ‘Bob’. We have a very definite idea of what we want students to write as an answer, but we suppress creative answers and we don’t necessarily drive the approach to learning that we want.

Ignorance spreads happily by itself, we shouldn’t be helping it. Our visions of the future are too often our memories of what our time was, transferred into modern systems. Our solution spaces are restricted by our fixations on a specific way of thinking. This prevents us from breaking out of our current mindset and doing something useful.

What will the future be? It was multi-media, it was the web, but where is it going? Mobile devices became the most likely web browsing platform in 2013 and their share is growing.

What will our new technologies be? Things get smaller, faster and lighter as they mature. We have to think about solving problems in new ways.

Here’s a fire hose sip of technologies: artificial intelligence is on the way up, touch surfaces are getting better, wearables are getting smarter, we’re looking at remote presence, immersive environments, 3D printers are changing manufacturing and teaching, gestural computing, mind control of devices, actual physical implants into the body…

From Nova Spivack, we can plot information connectivity against social connectivity, and what we want is growth on both axes – a giant arrow pointing up to the top right. We don’t yet have a Web form that connects information, knowledge and people – i.e. linking intelligence and people. We’re already seeing some of this with recommenders, intelligent filtering, and sentiment tracking. (I’m still waiting for the Semantic Web to deliver; I started doing work on it in my PhD, mumble years ago.)

A possible topology is: infrastructure is distributed and virtualised, our interfaces are 3D and interactive, built onto mobile technology and using ‘intelligent’ systems underneath.

But you cannot assume that your students are all at the same level or have all of the same devices: the digital divide is as real and as damaging as any social divide. Steve alluded to the Personal Learning Network, which you can read about in my previous blog post on him.

How will teaching change? It has to move away from cutting down students into cloned templates. We want students to be self-directed, self-starting, equipped to capture information, collaborative, and oriented towards producing their own things.

Let’s get back to our roots:

  1. We learn by doing (Piaget, 1950)
  2. We learn by making (Papert, 1960)

Just because technology is making some of this doing and making easier doesn’t mean we’re making it worthless; it means that we have time to do other things. Flip the roles, not just the classroom. Let students be the teacher – we do learn by teaching. (Couldn’t agree more.)

Back to Papert: “The best learning takes place when students take control.” Students can reflect in blogging as they present their information to a hidden audience that they are actually writing for. These physical and virtual networks grow, building their personal learning networks as they connect to more people who are connected to more people. (Steve’s a huge fan of Twitter. I’m not quite as connected as he is, but that’s like saying this puddle is smaller than the North Sea.)

Some of our students are strongly connected and they do store their knowledge in groups and friendships, which really reflects how they find things out. This rolls into digital cultural capital and who our groups are.

(Then there was a stream of images at too high a speed for me to capture – go and download the slides; they’re creative commons and a lot of fun.)

Learners will need new competencies and literacies.

Always nice to hear Steve speak and, of course, I still agree with a lot of what he said. I won’t prod him for questions, though.


EduTech AU 2015, Day 2, Higher Ed Leaders, “Innovation + Technology = great change to higher education”, #edutechau

Big session today. We’re starting with Nicholas Negroponte, founder of the MIT Media Lab and the founder of One Laptop Per Child (OLPC), an initiative to create and provide affordable educational devices for children in the developing world. (Nicholas is coming to us via video conference, hooray, 21st Century, so this may or may not work well in translation to blogging. Please bear with me if it’s a little disjointed.)

Nicholas would rather be here but he’s bravely working through his first presentation of this type! It’s going to be a presentation with some radical ideas so he’s hoping for conversation and debate. The presentation is broken into five parts:

  1. Learning learning. (Teaching and learning as separate entities.)
  2. What normal market forces will not do. (No real surprise that standard market forces won’t work well here.)
  3. Education without curricula. (Learning comes from many places and situations. Understanding and establishing credibility.)
  4. Where do new ideas come from? (How do we get them, how do we not get in the way.)
  5. Connectivity as a human right. (Is connectivity a human right or a means to rights such as education and healthcare? Human rights are free, so that raises a lot of issues.)

Nicholas then drilled down into “Learning learning”, starting with a reference to Seymour Papert, and reflected, from a personal perspective, on the sadness of Seymour’s serious accident and its effect on his health. Nicholas referred to Papert’s and Minsky’s work on trying to understand how children and machines learn, respectively. In 1968, Seymour started thinking about this and, on April 9, 1970, he gave a talk on his thoughts. Seymour realised that thinking about programs gave insight into thinking itself, relating to the deconstruction and stepwise solution building (algorithmic thinking) that novice programmers, such as children, have to go through.

These points were up on the screen as Nicholas spoke:

  1. Construction versus instruction
  2. Why reinventing the wheel is good
  3. Coding as thinking about thinking

How do we write code? Write it, see if it works, see which behaviours aren’t working, change the code (in an informed way, with any luck) and try again. (It’s a little more complicated than that, but that’s the core.) We’re now into the area of transferable skills – it appeared that children writing computer programs learned a skill that transferred over into their ability to spell, potentially from the methodical application of debugging techniques.
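(To make that loop concrete in terms of the Processing environment from our own MOOC – my example, not Nicholas’ – here is the write-run-debug cycle in miniature:)

```processing
// First attempt: "draw a circle in the middle of the canvas".
// A beginner might write ellipse(0, 0, 50, 50) and see only a
// quarter-circle in the top-left corner. Observing that behaviour,
// forming a theory (the first two arguments are the *centre* of the
// ellipse) and making an informed change is the debugging step:
size(200, 200);
background(255);
ellipse(width / 2, height / 2, 50, 50);  // centred, as intended
```

The point transfers: run, observe the mismatch, explain it, change one thing, run again.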

Nicholas talked about a spelling bee system where you would focus on the 8 out of 10 words you got right and ignore the 2 you didn’t. The ‘debugging’ kids would talk about the ones that they didn’t get right, because they were analysing their mistakes, as a peer group and through individual reflection.

Nicholas then moved on to the failure of market forces. Why does Finland do so well when they have no tests, no homework, the shortest school day and the fewest school days per year? One reason? No competition between children. No movement of core resources into the private sector (education as a poorly functioning profit machine). Nicholas identified the core difference between the mission and the market, which beautifully summarises my thinking.

The OLPC program started in Cambodia for a variety of reasons, including someone associated with the lab being a friend of the King. OLPC laptops could go into areas where the government wasn’t providing schools for safety reasons, as the land still needed minesweeping. Nicholas’ son came to Cambodia from Italy to connect the school to the Internet. What would the normal market not do? Telecoms would come and get cheaper. Power would come and get cheaper. Laptops? Hmm. The software companies were pushing the hardware companies, so both were caught in a spiral of increasing power consumption for utility. Where was the point where we could build a simple laptop, as a mission of learning, that could have a smaller energy footprint and bring laptops and connectivity to billions of people?

This is one of the reasons why OLPC is a non-profit – you don’t have to sell laptops to support the system, you’re supporting a mission. You didn’t need to sell or push to justify staying in a market, as the production volume was already at a good price. Why did this work well? You can make partnerships that weren’t possible otherwise. It derails the “ah, you need food and shelter first” argument because you can change the “why do we need a laptop?” question to “why do we need education?”, at which point education leads to improved societal conditions. Why laptops? Tablets are more consumer-focused than construction-focused. (Certainly true of how I use my tech.)

(When we launched the first of the Digital Technologies MOOCs, the deal we agreed upon with Google was that it wasn’t a profit-making venture at all. It never will be. Neither we nor Google make money from the support of teachers across Australia so we can have all of the same advantages as they mention above: open partnerships, no profit motive, working for the common good as a mission of learning and collegial respect. Highly recommended approach, if someone is paying you enough to make your rent and eat. The secret truth of academia is that they give you money to keep you housed, clothed and fed while you think.)

Nicholas told a story of kids changing from being scared or bored of school to using an approach that brings kids flocking in. A great measure of success.

Now, onto Education without curricula, starting by talking public versus private. This is a sensitive subject for many people. The biggest problem for public education, in many cases, is the private educational system, dragging caring educators out into a closed system. Remember Finland? There are no private schools and their educational system is astoundingly good. Nicholas’ points were:

  1. Public versus private
  2. Age segregation
  3. Stop testing. (Yay!)

The public sector is losing the imperative of the civic responsibility for education. Nicholas thinks it doesn’t make sense that we still segregate by ages as a hard limit. He thinks we should get away from breaking it into age groups, as it doesn’t clearly reflect where students are at.

Oh, testing. Nicholas correctly labelled the parental complicity in the production of the testing pressure cooker. “You have to get good grades if you’re going to Princeton!” The testing mania is dominating institutions and we do a lot of testing to measure and rank children, rather than determining competency. Oh, so much here. Testing leads to destructive behaviour.

So where do new ideas come from? (A more positive note.) Nicholas is interested in Higher Ed as a source of new ideas. Why does HE exist, especially if we can do things remotely or off campus? What is the role of the Uni in the future? Ha! Apparently, when Nicholas started the MIT Media Lab, he was accused of starting a sissy lab with artists and soft science… oh dear, that’s about as wrong as someone can get. His use of creatives was seen as soft when, of course, using creative users addressed two issues that drive new ideas: a creative approach to thinking, and consulting with the people who use the technology. Who really invented photography? Photographers. Three points from this section:

  1. Children: our most precious natural resource
  2. Incrementalism is the enemy of creativity
  3. Brain drain

On the brain drain: we lose many, many students to other places. Unis are places to solve huge problems rather than small, profit-oriented ones. The entrepreneurial focus leads to small-problem solutions, which is sucking a lot of big thinking out of the system. The app model is leading to a human resource deficit because the start-up phenomenon is ripping away some of our best problem solvers.

Finally, to connectivity as a human right. This is something that Nicholas is very, very passionate about. Not content. Not laptops. Being connected.  Learning, education, and access to these, from early in life to the end of life – connectivity is the end of isolation. Isolation comes in many forms and can be physical, geographical and social. Here are Nicholas’ points:

  1. The end of isolation.
  2. Nationalism is a disease (oh, so much yes.) Nations are the wrong taxonomy for the world.
  3. Fried eggs and omelettes.

Fried eggs and omelettes? In general, the world used to have crisp boundaries: yolk versus white. At work/at home. At school/not at school. We are moving to a more blended, less dichotomous approach because we are mixing our lives together. This is both bad (you’re getting work in my home life) and good (I’m getting learning in my day).

Can we drop kids into a reading environment and hope that they’ll learn to read? Reading is only 3,500 years old, versus our much older language skills, so it has to be learned. But do we have to do it the way that we did it? Hmm. Interesting questions. This is where the tablets were dropped into illiterate villages without any support. (Does this require a seed autodidact in the group? There’s a lot to unpack here.) Nicholas says he made a huge mistake in naming the village in Ethiopia, which has corrupted the experiment, but at least the kids are getting to give press conferences!

Another massive amount of interesting information – sadly, no question time!

 


EduTECH AU 2015, Day 1, Higher Ed Leaders, “Revolutionising the Student Experience: Thinking Boldly” #edutechau

Lucy Schulz, Deakin University, came to speak about initiatives in place at Deakin, including the IBM Watson initiative, which is currently a world-first for a University. How can a University collaborate to achieve success on a project in a short time? (Lucy thinks that this is the more interesting question. It’s not about the tool, it’s how they got there.)

Some brief facts on Deakin: 50,000 students, 11,000 of whom are on-line. Deakin’s question: how can we make the on-line experience as good as, if not better than, face-to-face, and how can on-line make face-to-face better?

Part of Deakin’s Student Experience focus was on delighting the student. I really like this. I made a comment recently that our learning technology design should be “Everything we do is valuable” and I realise now I should have added “and delightful!” The second part of the student strategy is for Deakin to be at the digital frontier, pushing on the leading edge. This includes understanding the drivers of change in the digital sphere: cultural, technological and social.

(An aside: I’m not a big fan of the term disruption. Disruption makes room for something but I’d rather talk about the something than the clearing. Personal bug, feel free to ignore.)

The Deakin Student Journey has a vision to bring students into the centre of Uni thinking, every level and facet – students can be successful and feel supported in everything that they do at Deakin. There is a Deakin personality, an aspirational set of “Brave, Stylish, Accessible, Inspiring and Savvy”.

Not feeling this as much but it’s hard to get a feel for something like this in 30 seconds so moving on.

What do students want in their learning? Easy to find and to use, it works and it’s personalised.

So, on to IBM’s Watson, the machine that won Jeopardy, thus reducing the set of games that humans can win against machines to Thumb Wars and Go. We then saw a video on Watson featuring a lot of keen students who coincidentally had a lot of nice things to say about Deakin and Watson. (Remember, I warned you earlier, I have a bit of a thing about shiny videos but ignore me, I’m a curmudgeon.)

The Watson software is embedded in a student portal that all students can access, which has required a great deal of investigation into how students communicate, structurally and semantically. This forms the questions and guides the answer. I was waiting to see how Watson was being used and it appears to be acting as a student advisor to improve student experience. (Need to look into this more once day is over.)

Ah, yes, it’s on a student home page where they can ask Watson questions about things of importance to students. It doesn’t appear that they are actually programming the underlying system. (I’m a Computer Scientist in a faculty of Engineering, I always want to get my hands metaphorically dirty, or as dirty as you can get with 0s and 1s.) From looking at the demoed screens, one of the shiny student descriptions of Watson as “Siri plus Google” looks very apt.

Oh, it has cheekiness built in. How delightful. (I have a boundless capacity for whimsy and play but an inbuilt resistance to forced humour and mugging, which is regrettably all that the machines are capable of at the moment. I should confess Siri also rubs me the wrong way when it tries to be funny as I have a good memory and the patterns are obvious after a while. I grew up making ELIZA say stupid things – don’t judge me! 🙂 )

Watson has answered 26,000 questions since February, with an 80% accuracy for answers. The most common questions change according to time of semester, which is a nice confirmation of existing data. Watson is still being trained, with two more releases planned for this year and then another project launched around course and career advisors.
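(A side note from me, not the talk: figures like “26,000 questions at 80% accuracy, with the top questions shifting by time of semester” are easy to reproduce over any question log. A toy sketch in Python, with an invented log format that has nothing to do with Deakin’s actual system:)

```python
from collections import Counter

# Hypothetical question log: (week_of_semester, question, answered_correctly)
log = [
    (1, "where is my timetable", True),
    (1, "how do I enrol", True),
    (12, "when are exams", True),
    (12, "when are exams", False),
]

# Overall accuracy across all answered questions
accuracy = sum(ok for _, _, ok in log) / len(log)

# Most common question in a given week, illustrating how the top
# question shifts with the time of semester
def top_question(week):
    counts = Counter(q for w, q, _ in log if w == week)
    return counts.most_common(1)[0][0]

print(f"accuracy: {accuracy:.0%}")  # 75% for this toy log
print(top_question(12))             # "when are exams"
```

The interesting part operationally is the per-week breakdown: it’s exactly the kind of cheap confirmation-of-existing-data that Lucy mentioned.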

What they’ve learned – three things!

  1. Student voice is essential and you have to understand it.
  2. Have to take advantage of collaboration and interdependencies with other Deakin initiatives.
  3. Gained a new perspective on developing and publishing content for students. Short. Clear. Concise.

The challenges of revolution? (Oh, they’re always there.) Trying to prevent students falling through the cracks and making sure that this tool helps students feel valued and stay in contact. The introduction of new technologies has to be recognised in terms of what they change and what they improve.

Collaboration and engagement with your University and student community are essential!

Thanks for a great talk, Lucy. Be interesting to see what happens with Watson in the next generations.


EduTech Australia 2015, Day 1, Session 2, Higher Education IT Leaders #edutechau

I jumped streams (GASP) to attend Mark Gregory’s talk on “Building customer-centric IT services.” Mark is the CIO of my home institution, the University of Adelaide, and I work closely with him on a couple of projects. There’s an old saying that if you really want to know what’s going on in your IT branch, go and watch the CIO give a presentation away from home, which may also explain why I’m here. (Yes, I’m a dreadful cynic.)

Seven years ago, we had the worst customer-centric IT ratings in Australia and New Zealand, now we have some of the highest. That’s pretty impressive and, for what it’s worth, it reflects my experiences inside the institution.

Mark showed a picture of the ENIAC team, noting that the picture had been mocked up a bit, as additional men had been staged in it, which was a bit strange given that the actual ENIAC team was six women to one man. (Yes, this has been going on for a long time.) His point was that we’ve come a long way from the computer attended by acolytes as a central resource to computers everywhere that everyone can access and that we have specifically chosen. Technology is now something that you choose rather than something you put up with.

For Adelaide, on a typical day we see about 56,000 devices on the campus networks, only a quarter of which are University-provided. Over time, the customer requirement for centralised skills is shrinking as their own skills and the availability of outside (often cloud-based) resources increase. By 2020-2025, fewer and fewer people on campus will need centralised IT.

Is ERP important? Mark thinks ‘Meh’ because it’s being skinned with apps and websites, with the actual ERP running in the background. What about networks? Well, everyone’s got them. What about security? That’s seen more as an imposition, and it’s beset by design issues. Security requirements are not a crowd pleaser.

So how will our IT services change over time?

A lot of us are moving from the Standard Operating Environment (SOE) to BYOD, which means saying farewell to the SOE. It’s really not desirable to stay in the SOE-provider role, and the shift also drives a new financial model. We see 56,000 devices for 25,000 people – the mobility ship has sailed. How will we deal with it?

We’re moving from a portal model to an app model. The one stop shop is going and the new model is the build-it-yourself app store model where every device is effectively customised. The new user will not hang out in the portal environment.

Mark thinks we really, really need to increase the level of self help. A year ago, he put up 16 pages of PDFs and discovered that, over the year, 35,000 people went through self help compared to 70,000 on the traditional help desk. (I question whether the average person in the street knows what an IP address is, given most of what I see in movies. 😉 )

The newer operating systems require less help but student self-help use is outnumbered 5 times by staff usage. Students go somewhere else to get help. Interesting. Our approaches to VPN have to change – it’s not like your bank requires one. Our approaches to support have to change – students and staff work 24×7, so why were we only supporting them 8-6? Adelaide now has a contract service outside of those hours to take the 100 important calls that would have been terrible had they not been fixed.
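(Doing the arithmetic on those support figures – a back-of-the-envelope sketch of my own, not anything from Mark’s slides:)

```python
# Figures quoted in the talk: self-help hits vs traditional help-desk contacts
self_help = 35_000
help_desk = 70_000

# Self-help's share of all help interactions
self_help_share = self_help / (self_help + help_desk)
print(f"{self_help_share:.0%} of help interactions were self-service")

# Staff self-help use reportedly outnumbers student use about 5:1,
# so roughly one sixth of self-help traffic is students
student_self_help = self_help / 6
print(f"~{student_self_help:,.0f} student self-help interactions")
```

Which makes the point starkly: even with a 5:1 staff skew, self-service is already a third of the load, and the student share has plenty of room to grow if students stop going elsewhere for help.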

Mark thinks that IDM and access need to be fixed, it makes up 24% of their reported problems: password broken, I can’t get on and so on.

Security used to be on the device that the Uni owned. This has changed. Now it has to be data security, as you can’t guarantee that you own the device. Virtual desktops and virtual apps can offer data containerisation among their other benefits.

Let’s change the thinking from setting a perimeter to the person themselves. The boundaries are shifting and, let’s be honest, the inside of any network with 30,000 people is going to be swampy anyway.

Project management thinking is shifting from traditional to agile, which gets closer to the customer on shorter and smaller projects. But you have to change how you think about projects.

A lot of tools used to get built that worked with data but now people want to make this part of their decision infrastructure. Data quality is now very important.

The massive shift is from “provide and control” to “advise and enable”. (Sorry, auditors.) Another massive shift is from automation of a process that existed to support a business to help in designing the processes that will support the business. This is a driver back into policy space. (Sorry, central admin branch.) At the same time, Mark believes that they’re transitioning from a functional approach to a customer-centric focus. A common services layer will serve the student, L&T, research and admin groups but those common services may not be developed or even hosted inside the institution.

It’s not a surprise to anyone who’s been following what Mark has been doing, but he believes that the role is shifting from IT operations to University strategy.

Some customers are going to struggle. Some people will always need help. But what about those highly capable individuals who could help you? This is where innovation and co-creation can take place, with specific people across the University.

Mark wants Uni IT organisations to disrupt themselves. (The Go8 are rather conservative and are not prone to discussing disruption, let alone disrupting.)

Basically, if customers can do it, make themselves happy and get what they want working, why are you in their way? If they can do it themselves, then get out of the way except for those things where you add value and make the experience better.  We’re helping people who are desperate but we’re not putting as much effort into the innovators and more radical thinkers. Mark’s belief is that investing more effort into collaboration, co-creation and innovation is the way to go.

It looks risky but is it? What happens if you put technology out there? How do you get things happening?

Mark wants us to move beyond Service Level Agreements, which he describes as the bottom bar. No great athlete performs at the top level because of an SLA. This requires a move to meaningful metrics. (Very similar to student assessment, of course! Same problem!) Just because we measure something doesn’t make it meaningful!

We tended to hire skills to provide IT support. Mark believes that we should now be hiring attributes: leaders, drivers, innovators. The customer wants to get somewhere. How can we help them?

Lots to think about – thanks, Mark!


EduTech Australia 2015, Day 1, Session 1, Part 2, Higher Ed Leaders #edutechau

The next talk was a video conference presentation, “Designed to Engage”, from Dr Diane Oblinger, formerly of EDUCAUSE (USA). Diane was joining us by video on her first day of retirement – that’s keen!

Today, technology is not enough, it’s about engagement. Diane believes that the student experience can be a critical differentiator in this. In many institutions, the student will be the differentiator. She asked us to consider three different things:

  1. What would life be like without technology? How does this change our experiences and expectations?
  2. Does it have to be human-or-machine? We often construct a false dichotomy of online versus face-to-face rather than thinking about them as a continuum.
  3. Changes in demography are causing new consumption patterns.

Consider changes in the four key areas:

  • Learning
  • Pathways
  • Credentialing
  • Alternate Models

To speak to learning, Diane wants us to think about learning for now, rather than based on our own experiences. What will happen when classic college meets online?

Diane started from the premise that higher order learning comes from complex challenges – how can we offer this to students? Well, there are game-based, highly experiential activities. They’re complex, interactive, integrative, information-gathering driven, team focused, and failure is part of the process. They also develop tenacity (with enough scaffolding, of course). We also get, almost for free, vast quantities of data to track how students performed their solving activities, which is far more than “right” or “wrong”. Does a complex world need more of these?

The second point for learning environments is that, sometimes, massive and intensive can go hand-in-hand. The Georgia Tech Online Master of Science in Computer Science runs on Udacity, with assignments, TAs, social media engagement and problem-solving. (I need to find out more about this. Paging the usual suspects.)

The second area discussed was pathways. Students lose time, track and credits when they start to make mistakes along the way and this can lead to them getting lost in the system. Cost is a huge issue in the US (and, yes, it’s a growing issue in Australia, hooray.)  Can you reduce cost without reducing learning? Students are benefiting from guided pathways to success. Georgia State and their predictive analytics were mentioned again here – leading students to more successful pathways to get better outcomes for everyone. Greatly increased retention, greatly reduced wasted tuition fees.

We now have a lot more data on what students are doing – the challenge for us is how we integrate this into better decision making. (Ethics, accuracy, privacy are all things that we have to consider.)

Learning needs to not be structured around seat time and credit hours. (I feel dirty even typing that.) Our students learn how to succeed in the environments that we give them. We don’t want to train them into mindless repetition. Once again, competency based learning, strongly formative, reflecting actual knowledge, is the way to go here.

(I really wish that we’d properly investigated the CBL first year. We might have done something visionary. Now we’ll just look derivative if we do it three years from now. Oh, well, time to start my own University – Nickapedia, anyone?)

Credentials raised their ugly head again – it’s one of the things that Unis have had in the bag. What is the new approach to credentials in the digital environment? Certificates and diplomas can be integrated into your on-line identity. (Again, security, privacy, ethics are all issues here but the idea is sound.) Example given was “Degreed”, a standalone credentialing site that can work to bridge recognised credentials from provider to employer.

Alternatives to degrees are being co-created by educators and employers. (I’m not 100% sure I agree with this. I think that some employers have great intentions but, very frequently, it turns into a requirement for highly specific training that might not be what we want to provide.)

Can we reinvent an alternative model that reinvents delivery systems, business models and support models? Can a curriculum be decentralised in a centralised University? What about models like Minerva? (Jeff mentioned this as well.)

(The slides got out of whack with the speaker for a while, apologies if I missed anything.)

(I should note that I get twitchy when people set up education for-profit. We’ve seen that this is a volatile market and we have the tension over where money goes. I have the luxury of working for an entity where its money goes to itself, somehow. There are no shareholders to deal with, beyond the 24,000,000 members of the population, who derive societal and economic benefit from our contribution.)

As noted on the next slide, working learners represent a sizeable opportunity for increased economic growth and mobility. More people in college is actually a good thing. (As an aside, it always astounds me when someone suggests that people are spending too much time in education. It’s like the insult “too clever by half”, you really have to think about what you’re advocating.)

For her closing thoughts, Diane thinks:

  1. The boundaries of the educational system must be re-conceptualised. We can’t ignore what’s going on around us.
  2. The integration of digital and physical experiences are creating new ways to engage. Digital is here and it’s not going away. (Unless we totally destroy ourselves, of course, but that’s a larger problem.)
  3. Can we design a better future for education?

Lots to think about and, despite some technical issues, a great talk.