EduTech Australia 2015, Day 1, Session 1, Part 2, Higher Ed Leaders #edutechau

The next talk was a video conference presentation, “Designed to Engage”, from Dr Diane Oblinger, formerly of EDUCAUSE (USA). Diane was joining us by video on her first day of retirement – that’s keen!

Today, technology on its own is not enough; it’s about engagement. Diane believes that the student experience can be a critical differentiator here – in many institutions, the student experience will be the differentiator. She asked us to consider three things:

  1. What would life be like without technology? How does this change our experiences and expectations?
  2. Does it have to be human-or-machine? We often construct a false dichotomy of online versus face-to-face rather than thinking about them as a continuum.
  3. Changes in demography are causing new consumption patterns.

Consider changes in the four key areas:

  • Learning
  • Pathways
  • Credentialing
  • Alternate Models

To speak to learning, Diane wants us to think about learning as it is now, rather than as we experienced it ourselves. What will happen when classic college meets online?

Diane started from the premise that higher-order learning comes from complex challenges – how can we offer this to students? Well, there are game-based, highly experiential activities. They’re complex, interactive, integrative, driven by information gathering and team focused, and failure is part of the process. They also develop tenacity (with enough scaffolding, of course). We also get, almost for free, vast quantities of data tracking how students went about solving problems, which tells us far more than “right” or “wrong”. Does a complex world need more of these?

The second point for learning environments is that, sometimes, massive and intensive can go hand in hand. The Georgia Tech Online Master of Science in Computer Science, delivered on Udacity, combines assignments, TAs, social media engagement and problem-solving. (I need to find out more about this. Paging the usual suspects.)

The second area discussed was pathways. Students lose time, lose their way and lose credits when they start to make mistakes along the way, and this can lead to them getting lost in the system. Cost is a huge issue in the US (and, yes, it’s a growing issue in Australia, hooray). Can you reduce cost without reducing learning? Students are benefiting from guided pathways to success. Georgia State and their predictive analytics were mentioned again here – leading students to more successful pathways gets better outcomes for everyone: greatly increased retention and greatly reduced wasted tuition fees.

We now have a lot more data on what students are doing – the challenge for us is how we integrate this into better decision making. (Ethics, accuracy, privacy are all things that we have to consider.)

Learning should not be structured around seat time and credit hours. (I feel dirty even typing that.) Our students learn how to succeed in the environments that we give them. We don’t want to train them into mindless repetition. Once again, competency-based learning – strongly formative and reflecting actual knowledge – is the way to go here.

(I really wish that we’d properly investigated the CBL first year. We might have done something visionary. Now we’ll just look derivative if we do it three years from now. Oh, well, time to start my own University – Nickapedia, anyone?)

Credentials raised their ugly head again – it’s one of the things that Unis have had in the bag. What is the new approach to credentials in the digital environment? Certificates and diplomas can be integrated into your on-line identity. (Again, security, privacy and ethics are all issues here but the idea is sound.) The example given was “Degreed”, a standalone credentialing site that can bridge recognised credentials from provider to employer.

Alternatives to degrees are being co-created by educators and employers. (I’m not 100% sure I agree with this. I think that some employers have great intentions but, very frequently, it turns into a requirement for highly specific training that might not be what we want to provide.)

Can we reinvent an alternative model that reinvents delivery systems, business models and support models? Can a curriculum be decentralised in a centralised University? What about models like Minerva? (Jeff mentioned this as well.)

(The slides got out of whack with the speaker for a while, apologies if I missed anything.)

(I should note that I get twitchy when people set up education for-profit. We’ve seen that this is a volatile market and we have the tension over where money goes. I have the luxury of working for an entity where its money goes to itself, somehow. There are no shareholders to deal with, beyond the 24,000,000 members of the population, who derive societal and economic benefit from our contribution.)

As noted on the next slide, working learners represent a sizeable opportunity for increased economic growth and mobility. More people in college is actually a good thing. (As an aside, it always astounds me when someone suggests that people are spending too much time in education. It’s like the insult “too clever by half”, you really have to think about what you’re advocating.)

For her closing thoughts, Diane thinks:

  1. The boundaries of the educational system must be re-conceptualised. We can’t ignore what’s going on around us.
  2. The integration of digital and physical experiences is creating new ways to engage. Digital is here and it’s not going away. (Unless we totally destroy ourselves, of course, but that’s a larger problem.)
  3. Can we design a better future for education?

Lots to think about and, despite some technical issues, a great talk.

 


The driverless car is more than transportation technology.

I’m hoping to write a few pieces on design in the coming days. I’ll warn you now that one of them will be about toilets, so … urm … prepare yourself, I guess? Anyway, back to today’s theme: the driverless car. I wanted to talk about it because it’s a great example of what technology could do, not in terms of just doing something useful but in terms of changing how we think. I’m going to look at some of the changes that might happen. No doubt many of you will have ideas and some of you will disagree so I’ll wait to see what shows up in the comments.

Humans have been around for quite a long time but, surprisingly given how prominent they are in our lives, cars have only been around for 120 years in the form that we know them – gasoline/diesel engines, suspension and smaller-than-buggy wheels. And yet our lives are, in many ways, built around them. Our cities bend and stretch in strange ways to accommodate roads, tunnels, overpasses and underpasses. Ask anyone who has driven through Atlanta, Georgia, where an Interstate of near-infinite width can be found running from Peachtree & Peachtree to Peachtree, Peachtree, Peachtree and beyond!

But what do we think of when we think of cars? We think of transportation. We think of going where we want, when we want. We think of using technology to compress travel time and this, for me, is a classic human technological perspective, because we love to amplify. Cars make us faster. Computers allow us to add up faster. Guns help us to kill better.

So let’s say we get driverless cars and, over time, the majority of cars on the road are driverless. What does this mean? Well, if you look at road safety stats and the WHO reports, you’ll see that up to 40% of traffic fatalities are straight-line accidents (these figures are from the Victorian roads department, 2006-2013). That is, people just drive off a straight road and kill themselves. The leading killers overall are alcohol, fatigue and speed. Driverless cars could, in one go, remove all of these. Worldwide, that’s on the order of a million people per year who just stopped dying.

But it’s not just transportation. In America, commuting to work eats up 35-65 hours of your year. If you live in DC, you spend two weeks every year cursing the Beltway. And it’s not as if you can easily work in your car, so those are lost hours. That’s not enjoyable driving! That’s hours of frustration, wasted fuel, exposure to burning fuel and extra hours you have to work. The fantasy of the car is driving a convertible down the Interstate in the sunshine, listening to rock and singing along. The reality is inching forward with the windows up in a 10-year-old Nissan family car, stuck between FM stations and having to listen to your second iPod because the first one’s out of power. And it’s the joke one that only has Weird Al on it.

Enter the driverless car. Now you can do some work but there’s no way that your commute will be as bad anyway because we can start to do away with traffic lights and keep the traffic moving. You’ll be there for less time but you can do more. Have a sleep if you want. Learn a language. Do a MOOC! Winning!

Why do I think it will be faster? Every traffic light has a period during which no-one is moving. Why? Because human drivers need clear signals and need to know what other drivers are doing. A driverless car can talk to other cars, and together they can weave through intersections without waiting on the signals. Many traffic jams are caused by people hitting the brakes: cars then arrive at the braking point faster than they leave it. There is no need for this traffic jam and, with driverless cars, keeping distance and speed under control is far easier. Right now, cars move like ice through a vending machine. We want them to move like water.
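To make that arrival-versus-departure point concrete, here’s a toy sketch in R with completely made-up rates (they’re illustrative assumptions, not real traffic data): a queue builds at a braking point whenever cars reach it faster than they can leave it, even when both rates sound perfectly reasonable on their own.

    # Toy illustration only: made-up rates, not real traffic data.
    # A queue builds whenever cars arrive at a braking point faster than they leave.
    arrival_rate   <- 30   # cars per minute reaching the slowdown (hypothetical)
    departure_rate <- 25   # cars per minute clearing it (hypothetical)
    minutes        <- 1:20
    queued_cars    <- cumsum(rep(arrival_rate - departure_rate, length(minutes)))

    plot(minutes, queued_cars, type = "s",
         xlab = "Minutes since the first brake tap",
         ylab = "Cars stuck at the braking point")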

How will you work in your car? Why not make every driverless car a wireless access point using mesh networking? Now the more cars you get together, the faster you can all work. The I495 Beltway suddenly becomes a hub of activity rather than a nightmare of frustration. (In a perfect world, aliens come to Earth and take away I495 as their new emperor, leaving us with matter transporters, but I digress.)

But let’s go further. Driverless cars can have package drops in them. The car that picks you up from work has your Amazon parcels in the back. It takes meals to people who can’t get out. It moves books around.

But let’s go further. Make them electric and put some of Elon’s amazing power cells into them, and suddenly we have a power transportation system as well, if we can manage the rapid charge/discharge issues. Your car parks in the city turn into repair and recharge facilities for fleets of driverless cars, charging from rooftop solar and wind, but if there’s a power problem, you can send 1,000 cars to plug into the local grid and provide emergency power.

We still need to work out some key issues of integration: cyclists, existing non-converted cars and pedestrians are the first ones that come to mind. But, in my research group, we have already developed passive localisation that works at a scale which could easily be put onto cars, so a car knows when someone is among the traffic. Combine that with existing sensors and all a cyclist has to do is wear a sensor (non-personalised, general scale and anonymised) that lets intersections know that she is approaching, and the cars can accommodate her. Pedestrians are slow enough that cars can move around them. We know that cars can do this because slow human drivers manage it often enough!

We start from ‘what could we do if we produced a driverless car’ and suddenly we have free time, increased efficiency and the capacity to do many amazing things.

Now, there are going to be protests. There are going to be people demanding their right to drive on the road and who will claim that driverless cars are dangerous. There will be anti-robot protests. There already have been. I expect that the more … freedom-loving states will blow up a few of these cars to make a point. Anyone remember the guy waving a red flag who had to precede every automobile? It’s happened before. It will happen again.

We have to accept that there are going to be deaths related to this technology, even if we plan really hard for it not to happen, and it may be because of the technology or it may be because of opposing human action. But cars are already killing so many people: 1.2 million people died on the road in 2010, 36,000 of them in America. We have to be ready for the fact that driverless cars are a stepping stone to getting people out of the grind of the commute and making much better use of our cities and road spaces. Once we go driverless, we need to look at how many road accidents aren’t happening, and address the issues that still cause accidents in a driverless world.

Understand the problem. Measure what’s happening. Make a change. Measure again. Determine the impact.

When we think about keeping the manually driven cars on the road, we do have a precedent. If you look at air traffic, the NTSB Accidents and Accident Rates by NTSB Classification 1998-2007 report tells us that the most dangerous type of flying is small private planes, which are more than 5 times more likely to have an accident than commercial airliners. Maybe it will be the insurance rates or the training required that will reduce the private fleet? Maybe they’ll have overrides. We have to think about this.

It would be tempting to say “why still have cars” were it not for the increasingly ageing community, those people who have several children and those people who have restricted mobility, because they can’t just necessarily hop on a bike or walk. As someone who has had multiple knee surgeries, I can assure you that 100m is an insurmountable distance sometimes – and I used to run 45km up and down mountains. But what we can do is to design cities that work for people and accommodate the new driverless cars, which we can use in a much quieter, efficient and controlled manner.

Vehicles and people can work together. The Denver area, Bahnhofstrasse in Zurich and Bourke Street Mall in Melbourne are three simple examples where electric trams move through busy pedestrian areas. Driverless cars work like trams – or they can. Predictable, zoned and controlled. Better still, for cyclists, driverless cars can accommodate sharing the road much more easily although, as noted, there may still be some issues for traffic control that will need to be ironed out.

It’s easy to look at the driverless car as just a car, but this misses all of the other things we could be doing. This is just one example of how the replacement of something ubiquitous might just change the world for the better.


Think. Create. Code. Vis! (@edXOnline, @UniofAdelaide, @cserAdelaide, @code101x, #code101x)

I just posted about the massive growth in our new on-line introductory programming course but let’s look at the numbers so we can work out what’s going on and, maybe, what led to that level of success. (Spoilers: central support from EdX helped a huge amount.) So let’s get to the data!

I love visualised data so let’s look at the growth in enrolments over time – this is really simple graphical stuff as we’re spending time getting ready for the course at the moment! We’ve had great support from the EdX team through mail-outs and Twitter and you can see these in the ‘jumps’ in the data that occurred at the beginning, halfway through April and again at the end. Or can you?

Rapid growth in enrolment! But it’s a little hard to see in this data.

Hmm, this is a large number of students, so it’s not all that easy to see the detail at the end. Let’s zoom in and change the layout of the data over to steps so we can see things more easily. (It’s worth noting that I’m using the free R statistical package to do all of this. I can change one line in my R program and regenerate all of my graphs and check my analysis. When you can program, you can really save time on things like this by using tools like R.)

Now you can see where that increase started, and then the big jump around the time that the e-mail advertising started. That large spike at the end is around 1,500 students, which means that we jumped 10% in a day.
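For anyone curious about what that one-line change looks like, here’s a minimal sketch of the plotting workflow in R. The file name and column names ("enrolments.csv", "date", "enrolled") are hypothetical stand-ins, not our actual data files; switching type = "l" to type = "s" is the single-line change between the line plot and the step plot.

    # Minimal sketch of the plotting workflow; the file and column names
    # are hypothetical stand-ins for our real data.
    enrol <- read.csv("enrolments.csv", stringsAsFactors = FALSE)
    enrol$date <- as.Date(enrol$date)

    # Full view of cumulative enrolments over time (a plain line plot).
    plot(enrol$date, enrol$enrolled, type = "l",
         xlab = "Date", ylab = "Cumulative enrolments")

    # Zoomed-in view of the last 30 days, drawn as steps instead of a line.
    recent <- tail(enrol, 30)
    plot(recent$date, recent$enrolled, type = "s",
         xlab = "Date", ylab = "Cumulative enrolments")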

When we started looking at this data, we wanted to get a feeling for how many students we might get. This is another common use of analysis – trying to work out what is going to happen based on what has already happened.

As a quick overview, we tried to predict the future based on three different assumptions:

  1. that the growth from day to day would be roughly the same, which is assuming linear growth.
  2. that the growth would increase more quickly, with the amount of increase doubling every day (this isn’t the same as the total number of students doubling every day).
  3. that the growth would increase even more quickly than that, although not as quickly as if the number of students were doubling every day.

If Assumption 1 was correct, then we would expect the graph to look like a straight line, rising diagonally. It’s not. (As it is, this model predicted that we would only get 11,780 students. We crossed that line about 2 weeks ago.)

So we know that our model must take into account the faster growth, but those leaps in the data are changes caused by things outside of our control – an EdX mail-out appears to cause a jump of roughly 800-1,600 students, and it persists for a couple of days.

Let’s look at what the models predicted. Assumption 2 predicted a final student number around 15,680. Uhh. No. Assumption 3 predicted a final student number around 17,000, with an upper bound of 17,730.

Hmm. Interesting. We’ve just hit 17,571 so it looks like all of our measures need to take into account the “EdX” boost. But, as estimates go, Assumption 3 gave us a workable ballpark and we’ll probably use it again for the next time that we do this.
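If you want to play with this kind of forecasting yourself, here’s a rough R sketch of the general idea: fit a straight line (the Assumption 1 flavour) and an exponential trend (a stand-in for the faster-growth assumptions, not the exact models we used) to cumulative enrolments and extrapolate forward. File and column names are hypothetical, as above.

    # Rough sketch only: a linear fit and an exponential fit to cumulative
    # enrolments, extrapolated forward. Not the exact models from the post.
    enrol <- read.csv("enrolments.csv", stringsAsFactors = FALSE)
    enrol$date <- as.Date(enrol$date)
    enrol$day  <- as.numeric(enrol$date - min(enrol$date))

    linear_fit <- lm(enrolled ~ day, data = enrol)        # Assumption 1 style
    exp_fit    <- lm(log(enrolled) ~ day, data = enrol)   # faster-than-linear

    # Extrapolate both models 14 days ahead (e.g. to the course start date).
    future <- data.frame(day = max(enrol$day) + 14)
    predict(linear_fit, newdata = future)                 # linear projection
    exp(predict(exp_fit, newdata = future))               # exponential projection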

Now let’s look at demographic data. We now have students from 171-172 countries (it varies a little) but how are we going for participation across gender, age and degree status? Giving this information to EdX is totally voluntary but, as long as we take that into account, we can make some interesting discoveries.

Age demographic data from EdX

Our median student age is 25, with roughly 40% under 25 and roughly 40% from 26 to 40. That means roughly 20% are 41 or over. (It’s not surprising that the graph sits to one side like that. If the left tail was the same size as the right tail, we’d be dealing with people who were -50.)

The gender data is a bit harder to display because we have four categories: male, female, other and not saying. In terms of female representation, we have 34% of students who have defined their gender as female. If we look at the declared male numbers, we see that 58% of students have declared themselves to be male. Taking into account all categories, this means that our female participant percentage could be as high as 40% but is at least 34%. That’s much higher than usual participation rates in face-to-face Computer Science and is really good news in terms of getting programming knowledge out there.

We’re currently analysing our growth by all of these groupings to work out which approach is the best for which group. Do people prefer Twitter, mail-outs, community linkage or something else when it comes to getting them into the course?

Anyway, lots more to think about and many more posts to come. But we’re on and going. Come and join us!


Musing on Industrial Time

Now Print, Black, Linocut, (C) Nick Falkner, 2013

I caught up with a good friend recently and we were discussing the nature of time. She had stepped back from her job and was now spending a lot of her time with her new-born son. I have gone to working three days a week, and hence have also stepped back from the five-day grind. It was interesting to talk about how this change to our routines had changed the way that we thought of and used time. She used a term that I wanted to discuss here, industrial time, to describe the clock-watching time of the full-time worker. This is part of the larger area of time discipline, how our society reacts to and uses time, and is really quite interesting. Both of us had stopped worrying about the flow of time in measurable hours on certain days and we just did things until we ran out of day. This is a very different activity from the usual “do X now, do Y in 15 minutes’ time” that often consumes us. In my case, it took me about three months of considered thought and re-training to break the time discipline habits of thirty years. In her case, she has a small child to help her refocus her time sense on the now.

Modern time-sense is so pervasive that we often don’t think about some of the underpinnings of our society. It is easy to understand why we have years and, although they don’t line up properly, months, given that these can be matched to astronomical phenomena that have an effect on our world (seasons and tides, length of day and moonlight, to list a few). Days are simple because that’s one light/dark cycle. But why are there 52 weeks in a year? Why are there 7 days in a week? Why did the 5-day week emerge as a contiguous block of 5 days? What is so special about working 9am to 5pm?

A lot of modern time descends from the struggle of radicals and unionists to protect workers from the excesses of labour, to stop people being worked to death, and the notion of the 8 hour day is an understandable division of a 24 hour day into three even chunks for work, rest and leisure. (Goodness, I sound like I’m trying to sell you chocolate!)

If we start to look, it turns out that the 7 day week is there because it’s there, based on religion and tradition. Interestingly enough, there have been experiments with other week lengths but it appears hard to shift people who are used to a certain routine and, tellingly, making people wait longer for days off appears to be detrimental to adoption.

If we look at seasons and agriculture, then there is a time to sow, to grow, to harvest and to clear, much as there is a time for livestock to breed and to be raised for purpose. If we look to the changing time of sunrise and sunset, there is a time at which natural light is available and when it is not. But, from a time discipline perspective, these time systems are not enough to build a large-scale, industrial and synchronised society upon – we must replace a distributed, loose and collective notion of what time is with one that is centralised, authoritarian and singular. While religious ceremonies linked to seasonal and astronomical events did provide time-keeping on a large scale prior to the industrial revolution, precise time, accurate to hours and minutes, was neither possible nor, generally, required beyond the cues given by nature such as dawn, noon and dusk.

After the industrial revolution, industries and forms of work developed that were heavily separated from any natural linkage – there are no seasons for a coal mine or a steam engine – and the development of the clock and the reinforcement of the calendar of work allowed both the measurement of working hours (for payment) and the setting of deadlines, given that natural forces did not have to be considered to the same degree. Steam engines are completed; they have no need to ripen.

With the notion of fixed and named hours, and enough tools for measuring the flow of time, we can very easily determine whether someone is late. But this is, very much, the notion of time that we use to determine when a task must be completed, rather than an approach that accepts that the task will be completed at some point within a more general span of time.

We still have confusion where our understanding of “real measures” such as days, interact with time discipline. Is midnight on the 3rd of April the second after the last moment of April the 2nd or the second before the first moment of April the 4th? Is midnight 12:00pm or 12:00am? (There are well-defined answers to this but the nature of the intersection is such that definitions have to be made.)
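As a small illustration of how arbitrary but well-defined those answers are, here is what base R’s date-time handling does with the question: “midnight on the 3rd of April” parses as the first instant of the 3rd, that is, the second after the last moment of the 2nd. (The date is just an example, of course.)

    # Base R treats midnight as the first instant of the named day.
    midnight <- as.POSIXct("2015-04-03 00:00:00", tz = "UTC")
    format(midnight, "%d %B %Y %H:%M:%S")   # "03 April 2015 00:00:00"
    midnight - 1                            # one second earlier is still 2 April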

But let’s look at teaching for a moment. One of the great criticisms of educational assessment is that we confuse timeliness – and here we specifically mean adherence to time discipline deadlines – with achievement. Completing the work a crucial hour after it is due can lead to that work not being marked at all, or being rejected. We usually have over-riding reasons for doing this but, sadly, those reasons are as artificial as the deadlines we impose. Why is an Engineering degree a four-year degree? If we changed it to six, would we get better engineers? If we switched to competency-based training, modular learning and life-long learning, would we get more people who were qualified or experienced with engineering? Would we get fewer? What would happen if we switched to a 3/1/2/1 working week? Would things be better or worse? It’s hard to evaluate because the week, and the contiguous working week, are so much a part of our world that I imagine today is the first day that some of you have thought about it.

Back to education: right now, we count time for our students because we have to work out bills and close off accounts at the end of the financial year, which means we have to meet marking and award deadlines, then we have to project our budget, which is yearly, and fit that into accredited degree structures, which have year guidelines…

But I cannot give you a sound, scientific justification for any of what I just wrote. We do all of that because we are caught up in industrial time first and we convince ourselves that building things into that makes sense. Students do have ebb and flow. Students are happier on certain days than others. Transition issues on entry to University are another indicator that students develop and mature at different rates – why are we still applying industrial time from top to bottom when everything we see here says that it’s going to cause issues?

Oh, yes, the “real world” uses it. Except that regular studies of industrial practice show that 40 hour weeks, regular days off, working from home and so on are more productive than the burn-out, everything-late, rush that we consider to be the signs of drive. (If Henry Ford thinks that making people work more than 40 hours a week is bad for business, he’s worth listening to.) And that’s before we factor in the development of machines that will replace vast numbers of human jobs in the next 20 years.

I have a different approach. Why aren’t we looking at students more like we regard our grape vines? We plan, we nurture, we develop, we test, we slowly build them to the point where they can produce great things and then we sustain them for a fruitful and long life. When you plant grape vines, you expect a first reasonable crop level in three years, and commercial levels at five. Tellingly, the investment pattern for grapes is that it takes you 10 years to break even and then you start making money back. I can’t tell you how some of my students will turn out until 15-25 years down the track and it’s insanity to think you can base retrospective funding on that timeframe.

You can’t make your grapes better by telling them to be fruitful in two years. Some vines take longer than others. You can’t even tell them when to fruit (although you can trick them a little). Yet, somehow, we’ve managed to work around this to produce a local wine industry worth around $5 billion. We can work with variation and seasonal issues.

One of the reasons I’m so keen on MOOCs is that these can fit in with the routines of people who can’t dedicate themselves to full-time study at the moment. By placing well-presented, pedagogically-sound materials on-line, we break through the tyranny of the 9-5, 5 day work week and let people study when they are ready to, where they are ready to, for as long as they’re ready to. Like to watch lectures at 1am, hanging upside down? Go for it – as long as you’re learning and not just running the video in the background while you do crunches, of course!

Once you start to question why we have so many days in a week, you quickly start to wonder why we get so caught up on something so artificial. The simple answer is that, much like money, we have it because we have it. Perhaps it’s time to look at our educational system to see if we can do something that would be better suited to developing really good knowledge in our students, instead of making them adept at sliding work under our noses a second before it’s due. We are developing systems and technologies that can allow us to step outside of these structures and this is, I believe, going to be better for everyone in the process.

Conformity isn’t knowledge, and conformity to time just because we’ve always done that is something we should really stop and have a look at.


Is this a dress thing? #thedress

For those who missed it, the Internet recently went crazy over llamas and a dress. (If this is the only thing that survives our civilisation, boy, is that sentence going to confuse future anthropologists.) Llamas are cool (there ain’t no karma drama with a llama) so I’m going to talk about the dress. This dress (with handy RGB codes thrown in, from a Wired article I’m about to link to):

A picture of a dress taken in a way that confounds human colour sense.

Not even going to try to describe the colour.

When I first saw it, and I saw it early on, the poster was asking what colour it was because she’d taken a picture in the store of a blue and black dress and, yet, in the picture she took, it sometimes looked white and gold and it sometimes looked blue and black. The dress itself is not what I’m discussing here today.

Let’s get something out of the way. Here’s the Wired article to explain why two different humans can see this dress as two different colours and both be right. Okay? The fact is that the dress in the picture is a blue and black dress (which is currently selling like hot cakes, by the way) but the picture itself is, accidentally, one that can be interpreted in different ways because of how our visual perception system works.

This isn’t a hoax. There aren’t two images (or more). This isn’t some elaborate Alternative Reality Game prank.

But the reaction to the dress itself was staggering. In between other things, I plunged into a variety of different social fora to observe the reaction. (Other people also noticed this and have written great articles, including this one in The Atlantic. Thanks for the link, Marc!) The reactions included:

  1. Genuine bewilderment on the part of people who had already seen both on the same device at nearly adjacent times and were wondering if they were going mad.
  2. Fierce tribalism from the “white and gold” and “black and blue” camps, within families, across social groups as people were convinced that the other people were wrong.
  3. People who were sure that it was some sort of elaborate hoax with two images. (No doubt, Big Dress was trying to cover something up.)
  4. Bordering-on-smug explanations from people who believed that seeing it a certain way indicated that they had superior “something or other”, where you can put day vision/night vision/visual acuity/colour sense/dressmaking skill/pixel awareness/photoshop knowledge.
  5. People who thought it was interesting and wondered what was happening.
  6. Attention policing from people who wanted all of social media to stop talking about the dress because we should be talking about (insert one or more) llamas, Leonard Nimoy (RIP, LLAP, \\//) or the disturbingly short lifespan of Russian politicians.

The issue to take away, and the reason I’ve put this on my education blog, is that we have just had an incredibly important lesson in human behavioural patterns. The (angry) team formation. The presumption that someone is trying to make us feel stupid, or playing a prank on us. The inability to recognise that human perception is, before we put any actual cognitive biases in place, profoundly affected by the processing shortcuts our perceptual systems take to give us a view of the world.

I want to add a new question to all of our on-line discussion: is this a dress thing?

There are matters that are not the province of simple perceptual confusion. Human rights, equality and murder are just three things that do not fall into the realm of “I don’t quite see what you see”. Some things become true if we hold the belief – if you believe that students from background X won’t do well then, weirdly enough, they don’t do well. But there are areas in education where people can see the same things yet interpret them in different ways because of contextual differences. Education researchers are well aware that a great deal of what we see and remember about school is often not how we learned but how we were taught. Someone who claims that traditional one-to-many lecturing, as the only approach, worked for them will, when prodded, often talk about the hours spent in the library or with study groups to develop their understanding.

When you work in education research, you get used to people effectively calling you a liar to your face, because a great deal of our research says that what we have been doing is actually not a very good way to proceed. But when we talk about improving things, we are not saying that current practitioners suck; we are saying that we believe we have evidence and practice to help everyone get better at creating, and being part of, learning environments. However, many people feel threatened by the promise of better, because it means that they have to accept that their current practice is, therefore, capable of improvement, and this is not a great climate in which to think, even to yourself, “maybe I should have been doing better”. Fear. Frustration. Concern over the future. Worry about being in a job. Constant threats to education. It’s no wonder that the two sides who could be helping each other, educational researchers and educational practitioners, can look at the same situation and take away both a promise of a better future and a threat to their livelihood. This is, most profoundly, a dress thing in the majority of cases.

In this case, the perceptual system of the researchers has been influenced by research on effective practice, collaboration, cognitive biases and the operation of memory and cognitive systems: experiment after experiment, with mountains of very cautious, patient and serious analysis to see what can and can’t be learnt from what has been done. This shows the world in a different colour palette and I will go out on a limb and say that there are additional colours in their palette, not just different shades of existing elements. The perceptual system of other people is shaped by their environment and how they have perceived their workplace, students, student behaviour and the personalisation and cognitive aspects that go with this. But the human mind takes shortcuts. Makes assumptions. Has biases. Fills in gaps to match the existing model and ignores other data. We know about this because research has been done on all of this, too.

You look at the same thing and the way your mind works shapes how you perceive it. Someone else sees it differently. You can’t understand each other. It’s worth asking, before we deploy crushing retorts in electronic media, “is this a dress thing?”

The problem we have is exactly as we saw from the dress: how we address the situation where both sides are convinced that they are right and, from a perceptual and contextual standpoint, they are. We are now in the “post Dress” phase where people are saying things like “Oh God, that dress thing. I never got the big deal” whether they got it or not (because the fad is over and disowning an old fad is as faddish as a fad) and, more reflectively, “Why did people get so angry about this?”

At no point was arguing about the dress colour going to change what people saw until a certain element in their perceptual system changed what it was doing and then, often to their surprise and horror, they saw the other dress! (It’s a bit H.P. Lovecraft, really.) So we then had to work out how we could see the same thing and both be right, and then talk about the colour of the dress that the image actually represented. I guarantee that there are people out in the world still who are convinced that there is a secret white and gold dress out there and that they were shown a picture of that. Once you accept the existence of these people, you start to realise why so many Internet arguments end up descending into the ALL CAPS EXCHANGE OF BALLISTIC SENTENCES: refusing to accept that what we personally perceive as the truth might not be universally perceived is one of the biggest causes of argument. And we’ve all done it. Me included. But I try to stop myself before I do it too often, or at all.

We have just had a small and bloodless war across the Internet. Two teams have seized the same flag and had a fierce conflict based on the fact that the other team just doesn’t get how wrong they are. We don’t want people to be bewildered about which way to go. We don’t want to stay at loggerheads and avoid discussion. We don’t want to baffle people into thinking that they’re being fooled or be condescending.

What we want is for people to recognise when they might be looking at what is, mostly, a perceptual problem and then go “Oh” and see if they can reestablish context. It won’t always work. Some people choose to argue in bad faith. Some people just have a bee in their bonnet about some things.

“Is this a dress thing?”

In amongst the llamas and the Vulcans and the assassination of Russian politicians, something that was probably almost as important happened. We all learned that we can be both wrong and right in our perception, and that it is the way we handle that situation which truly determines whether we are in the right or in the wrong. I’ve decided to take a two-week break from Facebook to let all of the latent anger that this stirred up die down, because I think we’re going to see this venting for some time.

Maybe you disagree with what I’ve written. That’s fine but, first, ask yourself “Is this a dress thing?”

Live long and prosper.


That’s not the smell of success, your brain is on fire.

Would you mind putting out the hippocampus when you have a chance?

I’ve written before about the issues of prolonged human workload leading to ethical problems and the fact that working more than 40 hours a week on a regular basis is downright unproductive, because you get less efficient and error-prone. This is not some 1968 French student revolutionary musing on what benefits the soul of a true human; this is industrial research by Henry Ford and the U.S. Army, neither of whom could be classified as Foucault-worshipping, Situationist, yurt-dwelling flower children, and it shows that there are limits to how long you can work in a sustained weekly pattern and get useful things done, while maintaining your awareness of the world around you.

The myth won’t die, sadly, because physical presence and hours attending work are very easy to measure, while productive outputs and their origins in a useful process on a personal or group basis are much harder to measure. A cynic might note that the people who are around when there is credit to take may end up being the people who (reluctantly, of course) take the credit. But we know that it’s rubbish. And the people who’ve confirmed this are both philosophers and the commercial sector. One day, perhaps.

But anyone who has studied cognitive load issues, the way that the human thinking processes perform as they work and are stressed, will be aware that we have a finite amount of working memory. We can really only track so many things at one time and when we exceed that, we get issues like the helmet fire that I refer to in the first linked piece, where you can’t perform any task efficiently and you lose track of where you are.

So what about multi-tasking?

Ready for this?

We don’t.

There’s a ton of research on this but I’m going to link you to a recent article by Daniel Levitin in the Guardian Q&A. The article covers the fact that what we are really doing is switching quickly from one task to another, dumping one set of information from working memory and loading in another, which of course means that working on two things at once is less efficient than doing two things one after the other.

But it’s more poisonous than that. The sensation of multi-tasking is actually quite rewarding as we get a regular burst of the “oooh, shiny” rewards our brain gives us for finding something new and we enter a heightened state of task readiness (fight or flight) that also can make us feel, for want of a better word, more alive. But we’re burning up the brain’s fuel at a fearsome rate to be less efficient so we’re going to tire more quickly.

Get the idea? Multi-tasking is horribly inefficient task switching that feels good but makes us tired faster and does things less well. But when we achieve tiny tasks in this death spiral of activity, like replying to an e-mail, we get a burst of reward hormones. So if your multi-tasking includes something like checking e-mails when they come in, you’re going to get more and more distracted by that, to the detriment of every other task. But you’re going to keep doing them because multi-tasking.

I regularly get told, by parents, that their children are able to multi-task really well. They can do X, watch TV, do Y and it’s amazing. Well, your children are my students and everything I’ve seen confirms what the research tells me – no, they can’t but they can give a convincing impression when asked. When you dig into what gets produced, it’s a different story. If someone sits down and does the work as a single task, it will take them a shorter time and they will do a better job than if they juggle five things. The five things will take more than five times as long (up to 10, which really blows out time estimation) and will not be done as well, nor will the students learn about the work in the right way. (You can actually sabotage long term storage by multi-tasking in the wrong way.) The most successful study groups around the Uni are small, focused groups that stay on one task until it’s done and then move on. The ones with music and no focus will be sitting there for hours after the others are gone. Fun? Yes. Efficient? No. And most of my students need to be at least reasonably efficient to get everything done. Have some fun but try to get all the work done too – it’s educational, I hear. 🙂

It’s really not a surprise that we haven’t changed humanity in one or two generations. Our brains are just not built in a way that can (yet) provide assistance with the quite large amount of work required to perform multi-tasking.

We can handle multiple tasks, no doubt at all, but we’ve just got to make sure, for our own well-being and overall ability to complete the task, that we don’t fall into the attractive, but deceptive, trap that we are some sort of parallel supercomputer.


We don’t need no… oh, wait. Yes, we do. (@pwc_AU)

The most important thing about having a good idea is not the idea itself, it’s doing something with it. In the case of sharing knowledge, you have to get good at communication or the best ideas in the world are going to be ignored. (Before anyone says anything, please go and review the advertising industry which was worth an estimated 14 billion pounds in 2013 in the UK alone. The way that you communicate ideas matters and has value.)

Knowledge doesn’t leap unaided into most people’s heads. That’s why we have teachers and educational institutions. There are auto-didacts in the world and most people can pull themselves up by their bootstraps to some extent but you still have to learn how to read and the more expertise you can develop under guidance, the faster you’ll be able to develop your expertise later on (because of how your brain works in terms of handling cognitive load in the presence of developed knowledge.)

When I talk about the value of making a commitment to education, I often take it down to two things: ongoing investment and excellent infrastructure. You can’t make bricks without clay and clay doesn’t turn into bricks by itself. But I’m in the education machine – I’m a member of the faculty of a pretty traditional University. I would say that, wouldn’t I?

That’s why it’s so good to see reports coming out of industry sources to confirm that, yes, education is important because it’s one of the many ways to drive an economy and maintain a country’s international standing. Many people don’t really care if University staff are having to play the banjo on darkened street corners to make ends meet (unless the banjo is too loud or out of tune) but they do care about things like collapsing investments and being kicked out of the G20 to be replaced by nations that, until recently, we’ve been able to list as developing.

The current G20 flags. How long will Australia be in there?

PricewaterhouseCoopers (PwC) have recently published a report in which they warn that over-dependence on mining and a lack of investment in science and technology are going to put Australia in a position where it will no longer be one of the world’s 20 largest economies but will be relegated, replaced by Vietnam and Nigeria. In fact, the outlook is bleaker than that, moving Australia back behind Bangladesh and Iran, countries that are currently receiving international support. This is no slur on the countries that are developing rapidly, improving conditions for their citizens and heading up. But it is an interesting reflection on what happens to a developed country when it stops trying to do anything new and gets left behind. Of course, science and technology (STEM) does not leap fully formed from the ground, so this, in turn, means that we’re going to have to make sure that our educational system is sufficiently strong, well-developed and funded to be able to produce the graduates who can then develop the science and technology.

We in the educational community and surrounds have been saying this for years. You can’t have an innovative science and technology culture without strong educational support and you can’t have a culture of innovation without investment and infrastructure. But, as I said in a recent tweet, you don’t have to listen to me bang on about “social contracts”, “general benefit”, “universal equity” and “human rights” to think that investing in education is a good idea. PwC is a multi-national company that’s the second largest professional services company in the world, with annual revenues around $34 billion. And that’s in hard American dollars, which are valuable again compared to the OzD. PwC are serious money people and they think that Australia is running a high risk if we don’t start looking at serious alternatives to mining and get our science and technology engines well-lubricated and running. And running quickly.

The first thing we have to do is stop cutting investment in education. It takes years to train a good educator and even longer, on top of that, to train a good researcher at University. When we cut funding to Universities, we slow our hiring, which stops refreshment, and we tend to offer redundancies to expensive people, like professors. Academic staff are not interchangeable cogs. After 12 years of school, they undertake somewhere in the region of 8-10 years of study to become academics, and then they really get useful about 10 years after that through practice and the accumulation of experience. A Professor is probably 30 years of post-school investment, especially if they have industry experience. A good teacher is 15+. And yet these expensive staff are often targeted by redundancies, because we’re torn between cutting salary costs and the need to have enough warm bodies to put in front of students. So, not only do we need to stop cutting, we need to start spending and then commit to that spending for long enough to make a difference – say 25 years.

The next thing, really at the same time, we need to do is to foster a strong innovation culture in Australia by providing incentives and sound bases for research and development. This is (despite what happened last night in Parliament) not the time to be cutting back, especially when we are subsidising exactly those industries that are not going to keep us economically strong in the future.

But we have to value education. We have to value teachers. We have to make it easier for people to make a living while having a life and teaching. We have to make education a priority and accept the fact that every dollar spent in education is returned to us in so many different ways, but it’s just not easy to write it down on a balance sheet. PwC have made it clear: science and technology are our future. This means that good, solid educational systems from the start of primary to tertiary and beyond are now one of the highest priorities we can have or our country is going to sink backwards. The sheep’s back we’ve been standing on for so long will crush us when it rolls over and dies in a mining pit.

I have many great ethical and social arguments for why we need to have the best education system we can have and how investment is to the benefit of every Australian. PwC have just provided a good financial argument for those among us who don’t always see past a 12-month profit and loss sheet.

Always remember: the buggy whip manufacturer is the last person who will tell you not to invest in buggy whips.