EduTECH AU 2015, Day 1, Higher Ed Leaders, “Revolutionising the Student Experience: Thinking Boldly” #edutechau
Posted: June 2, 2015
Lucy Schulz, Deakin University, came to speak about initiatives in place at Deakin, including the IBM Watson initiative, which is currently a world-first for a University. How can a University collaborate to achieve success on a project in a short time? (Lucy thinks that this is the more interesting question. It’s not about the tool, it’s how they got there.)
Some brief facts on Deakin: 50,000 students, 11,000 of whom are on-line. Deakin’s question: how can we make the on-line experience as good as, if not better than, face-to-face, and how can on-line make face-to-face better?
Part of Deakin’s Student Experience focus was on delighting the student. I really like this. I made a comment recently that our learning technology design should be “Everything we do is valuable” and I realise now I should have added “and delightful!” The second part of the student strategy is for Deakin to be at the digital frontier, pushing on the leading edge. This includes understanding the drivers of change in the digital sphere: cultural, technological and social.
(An aside: I’m not a big fan of the term disruption. Disruption makes room for something but I’d rather talk about the something than the clearing. Personal bug, feel free to ignore.)
The Deakin Student Journey has a vision to bring students into the centre of Uni thinking, every level and facet – students can be successful and feel supported in everything that they do at Deakin. There is a Deakin personality, an aspirational set of “Brave, Stylish, Accessible, Inspiring and Savvy”.
Not feeling this as much but it’s hard to get a feel for something like this in 30 seconds so moving on.
What do students want in their learning? Easy to find and to use, it works and it’s personalised.
So, on to IBM’s Watson, the machine that won Jeopardy, thus reducing the set of games that humans can win against machines to Thumb Wars and Go. We then saw a video on Watson featuring a lot of keen students who coincidentally had a lot of nice things to say about Deakin and Watson. (Remember, I warned you earlier, I have a bit of a thing about shiny videos but ignore me, I’m a curmudgeon.)
The Watson software is embedded in a student portal that all students can access, which has required a great deal of investigation into how students communicate, structurally and semantically. This forms the questions and guides the answer. I was waiting to see how Watson was being used and it appears to be acting as a student advisor to improve student experience. (Need to look into this more once day is over.)
Ah, yes, it’s on a student home page where they can ask Watson questions about things of importance to students. It doesn’t appear that they are actually programming the underlying system. (I’m a Computer Scientist in a faculty of Engineering, I always want to get my hands metaphorically dirty, or as dirty as you can get with 0s and 1s.) From looking at the demoed screens, one of the shiny student descriptions of Watson as “Siri plus Google” looks very apt.
Oh, it has cheekiness built in. How delightful. (I have a boundless capacity for whimsy and play but an inbuilt resistance to forced humour and mugging, which is regrettably all that the machines are capable of at the moment. I should confess Siri also rubs me the wrong way when it tries to be funny as I have a good memory and the patterns are obvious after a while. I grew up making ELIZA say stupid things – don’t judge me! 🙂 )
Watson has answered 26,000 questions since February, with an 80% accuracy for answers. The most common questions change according to time of semester, which is a nice confirmation of existing data. Watson is still being trained, with two more releases planned for this year and then another project launched around course and career advisors.
What they’ve learned – three things!
- Student voice is essential and you have to understand it.
- Have to take advantage of collaboration and interdependencies with other Deakin initiatives.
- Gained a new perspective on developing and publishing content for students. Short. Clear. Concise.
The challenges of revolution? (Oh, they’re always there.) Trying to prevent students falling through the cracks and making sure that this tool helps students feel valued and stay in contact. The introduction of new technologies has to be recognised in terms of what they change and what they improve.
Collaboration and engagement with your University and student community are essential!
Thanks for a great talk, Lucy. Be interesting to see what happens with Watson in the next generations.
I’m hoping to write a few pieces on design in the coming days. I’ll warn you now that one of them will be about toilets, so … urm … prepare yourself, I guess? Anyway, back to today’s theme: the driverless car. I wanted to talk about it because it’s a great example of what technology could do, not in terms of just doing something useful but in terms of changing how we think. I’m going to look at some of the changes that might happen. No doubt many of you will have ideas and some of you will disagree so I’ll wait to see what shows up in the comments.
Humans have been around for quite a long time but, surprisingly given how prominent they are in our lives, cars have only been around for 120 years in the form that we know them – gasoline/diesel engines, suspension and smaller-than-buggy wheels. And yet our lives are, in many ways, built around them. Our cities bend and stretch in strange ways to accommodate roads, tunnels, overpasses and underpasses. Ask anyone who has driven through Atlanta, Georgia, where an Interstate of near-infinite width can be found running from Peachtree & Peachtree to Peachtree, Peachtree, Peachtree and beyond!
But what do we think of when we think of cars? We think of transportation. We think of going where we want, when we want. We think of using technology to compress travel time and this, for me, is a classic human technological perspective because we love to amplify. Cars make us faster. Computers allow us to add up faster. Guns help us to kill better.
So let’s say we get driverless cars and, over time, the majority of cars on the road are driverless. What does this mean? Well, if you look at road safety stats and the WHO reports, you’ll see that about 40% of traffic fatalities are straight-line accidents (these figures from the Victorian roads department, 2006-2013). That is, people just drive off a straight road and kill themselves. The leading killers overall are alcohol, fatigue, and speed. Driverless cars will, in one go, remove all of these. Worldwide, a million people per year just stopped dying.
But it’s not just transportation. In America, commuting to work eats up 35-65 hours of your year. If you live in DC, you spend two weeks every year cursing the Beltway. And it’s not as if you can easily work in your car so those are lost hours. That’s not enjoyable driving! That’s hours of frustration, wasted fuel, exposure to burning fuel, extra hours you have to work. The fantasy of the car is driving a convertible down the Interstate in the sunshine, listening to rock, and singing along. The reality is inching forward with the windows up in a 10-year-old Nissan family car while stuck between FM stations and having to listen to your second iPod because the first one’s out of power. And it’s the joke one that only has Weird Al on it.
Enter the driverless car. Now you can do some work but there’s no way that your commute will be as bad anyway because we can start to do away with traffic lights and keep the traffic moving. You’ll be there for less time but you can do more. Have a sleep if you want. Learn a language. Do a MOOC! Winning!
Why do I think it will be faster? Every traffic light has a period during which no-one is moving. Why? Because humans need clear signals and need to know what other drivers are doing. A driverless car can talk to other cars and they can weave in and out of the traffic signals. Many traffic jams are caused by people hitting the brakes and then people arrive at this braking point faster than people are leaving. There is no need for this traffic jam and, with driverless cars, keeping distance and speed under control is far easier. Right now, cars move like ice through a vending machine. We want them to move like water.
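To put a rough number on that dead time, here’s a back-of-envelope sketch in Python. Every figure in it – the saturation flow, the cycle and green times, the 90% utilisation guess for cooperating cars – is an illustrative assumption, not a measurement from any real intersection.

```python
# Back-of-envelope: a signalised intersection only discharges cars during its
# effective green time, so capacity scales with green/cycle.
# All numbers below are illustrative assumptions.
saturation_flow = 1800   # vehicles/hour one lane can discharge while moving
cycle = 90.0             # signal cycle length, seconds
green = 35.0             # effective green time per cycle for our approach

with_lights = saturation_flow * green / cycle
print(f"with lights: {with_lights:.0f} veh/h")   # 700 veh/h

# Cooperating driverless cars could interleave through the junction instead of
# stopping, so the approach keeps discharging for (almost) the whole cycle.
interleaved = saturation_flow * 0.9   # assume 90% utilisation; some gaps remain
print(f"interleaved: {interleaved:.0f} veh/h")   # 1620 veh/h
```

Even with a conservative utilisation guess, removing the red phase more than doubles the throughput of that one approach.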
How will you work in your car? Why not make every driverless car a wireless access point using mesh networking? Now the more cars you get together, the faster you can all work. The I495 Beltway suddenly becomes a hub of activity rather than a nightmare of frustration. (In a perfect world, aliens come to Earth and take away I495 as their new emperor, leaving us with matter transporters, but I digress.)
But let’s go further. Driverless cars can have package drops in them. The car that picks you up from work has your Amazon parcels in the back. It takes meals to people who can’t get out. It moves books around.
But let’s go further. Make them electric and put some of Elon’s amazing power cells into them and suddenly we have a power transportation system if we can manage the rapid charge/discharge issues. Your car parks in the city turn into repair and recharge facilities for fleets of driverless cars, charging from the roof solar and wind, but if there’s a power problem, you can send 1000 cars to plug into the local grid and provide emergency power.
We still need to work out some key issues of integration: cyclists, existing non-converted cars and pedestrians are the first ones that come to mind. But, in my research group, we have already developed passive localisation that works on a scale that could easily be put onto cars, so a car knows when someone is moving among the traffic. Combine that with existing sensors and all a cyclist has to do is wear a sensor (non-personalised, general-scale and anonymised) that lets intersections know that she is approaching, so the cars can accommodate her. Pedestrians are slow enough that cars can move around them. We know that they can because slow humans do it often enough!
We start from ‘what could we do if we produced a driverless car’ and suddenly we have free time, increased efficiency and the capacity to do many amazing things.
Now, there are going to be protests. There are going to be people demanding their right to drive on the road and who will claim that driverless cars are dangerous. There will be anti-robot protests. There already have been. I expect that the more … freedom-loving states will blow up a few of these cars to make a point. Anyone remember the guy waving a red flag who had to precede every automobile? It’s happened before. It will happen again.
We have to accept that there are going to be deaths related to this technology, even if we plan really hard for it not to happen, and it may be because of the technology or it may be because of opposing human action. But cars are already killing so many people. 1.2 million people died on the road in 2010, 36,000 of them in America. We have to be ready for the fact that driverless cars are a stepping stone to getting people out of the grind of the commute and making much better use of our cities and road spaces. Once we go driverless, we need to look at how many road accidents aren’t happening, and address the issues that still cause accidents in a driverless world.
Understand the problem. Measure what’s happening. Make a change. Measure again. Determine the impact.
When we think about keeping the manually driven cars on the road, we do have a precedent. If you look at air traffic, the NTSB Accidents and Accident Rates by NTSB Classification 1998-2007 report tells us that the most dangerous type of flying is small private planes, which are more than 5 times more likely to have an accident than commercial airliners. Maybe it will be the insurance rates or the training required that will reduce the private fleet? Maybe they’ll have overrides. We have to think about this.
It would be tempting to say “why still have cars” were it not for the increasingly ageing community, those people who have several children and those people who have restricted mobility, because they can’t just necessarily hop on a bike or walk. As someone who has had multiple knee surgeries, I can assure you that 100m is an insurmountable distance sometimes – and I used to run 45km up and down mountains. But what we can do is to design cities that work for people and accommodate the new driverless cars, which we can use in a much quieter, efficient and controlled manner.
Vehicles and people can work together. The Denver area, Bahnhofstrasse in Zurich and Bourke Street Mall in Melbourne are three simple examples where electric trams move through busy pedestrian areas. Driverless cars work like trams – or they can. Predictable, zoned and controlled. Better still, for cyclists, driverless cars can accommodate sharing the road much more easily although, as noted, there may still be some issues for traffic control that will need to be ironed out.
It’s easy to look at the driverless car as just a car but this misses all of the other things we could be doing. This is just one example of how the replacement of something ubiquitous might just change the world for the better.
I posted recently about the increasingly negative reaction to the “sentient machines” that might arise in the future. Discussion continues, of course, because we love a drama. Bill Gates can’t understand why more people aren’t worried about the machine future.
…AI could grow too strong for people to control.
Scientists attending the recent AI conference (AAAI15) think that the fears are unfounded.
“The thing I would say is AI will empower us not exterminate us… It could set AI back if people took what some are saying literally and seriously.” Oren Etzioni, CEO of the Allen Institute for AI.
If you’ve read my previous post then you’ll know that I fall into the second camp. I think that we don’t have to be scared of the rise of the intelligent AI, but the people at AAAI15 are some of the best in the field so it’s nice that they think that we’re worrying about something that is far, far off in the future. I like to discuss these sorts of things in ethics classes because my students have a very different attitude to these things than I do – twenty-five years is a large separation – and I value their perspective on things that will most likely happen during their stewardship.
I asked my students about the ethical scenario proposed by Philippa Foot, “The Trolley Problem“. To summarise, a runaway trolley is coming down the tracks and you have to decide whether to be passive and let five people die or be active and kill one person to save five. I put it to my students in terms of self-driving cars where you are in one car by yourself and there is another car with five people in it. Driving along a bridge, a truck jackknifes in front of you and your car has to decide whether to drive ahead and kill you or move to the side and drive the car containing five people off the cliff, saving you. (Other people have thought about it in the context of Google’s self-driving cars. What should the cars do?)
One of my students asked me why the car she was in wouldn’t just put on the brakes. I answered that it was too close and the road was slippery. Her answer was excellent:
Why wouldn’t a self-driving car have adjusted for the conditions and slowed down?
Of course! The trolley problem is predicated upon the condition that the trolley is running away and we have to make a decision where only two results can come out but there is no “runaway” scenario for any sensible model of a self-driving car, any more than planes flip upside down for no reason. Yes, the self-driving car may end up in a catastrophic situation due to something totally unexpected but the everyday events of “driving too fast in the wet” and “chain collision” are not issues that will affect the self-driving car.
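Her point is easy to put into numbers. Here’s a sketch of the stopping-distance arithmetic – textbook friction coefficients and made-up reaction times, so treat the figures as illustrative – showing why a machine driver need never be “too fast for the wet”: it just picks the speed whose stopping distance fits the gap its sensors report.

```python
import math

# Stopping distance = reaction distance + braking distance:
#   gap = v * t_react + v^2 / (2 * mu * g)
# Solve the quadratic for v to get the highest safe speed for a sensed gap.
def max_safe_speed(gap_m, mu, reaction_s):
    g = 9.81                      # gravity, m/s^2
    a = 1.0 / (2 * mu * g)
    v = (-reaction_s + math.sqrt(reaction_s**2 + 4 * a * gap_m)) / (2 * a)
    return v * 3.6                # m/s -> km/h

gap = 50.0  # metres of clear road ahead (illustrative)
print(f"dry road, human (1.5 s reaction):   {max_safe_speed(gap, 0.7, 1.5):.0f} km/h")  # 64
print(f"wet road, human (1.5 s reaction):   {max_safe_speed(gap, 0.4, 1.5):.0f} km/h")  # 53
print(f"wet road, machine (0.1 s reaction): {max_safe_speed(gap, 0.4, 0.1):.0f} km/h")  # 70
```

A machine reacting in a tenth of a second, recomputing this continuously as its friction estimate changes, can travel faster in the wet than an attentive human can safely travel in the dry – and it never “forgets” to slow down.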
But we’re just talking about vaguely smart cars, because the super-intelligent machine is some time away from us. What is more likely to happen soon is what has been happening since we developed machines: the ongoing integration of machines into human life to make things easier. Does this mean changes? Well, yes, most likely. Does this mean the annihilation of everything that we value? No, really not. Let me put this in context.
As I write this, I am listening to two compositions by Karlheinz Stockhausen, playing simultaneously but offset, “Kontakte” and “Telemusik“, works that combine musical instruments, electronic sounds, and tape recordings. I like both of them but I prefer to listen to the (intentionally sterile) Telemusik by starting Kontakte first for 2:49 and then kicking off Telemusik, blending the two and finishing on the longer Kontakte. These works, which are highly non-traditional and use sound in very different ways to traditional orchestral arrangement, may sound quite strange to an audience familiar with popular music, yet they were written in 1959 and 1966 respectively. These innovative works are now in their middle age. They are unusual works, certainly, and a number of you will peer at your speakers once they start playing but… did their production lead to the rejection of the popular, classic, rock or folk music output of the 1960s? No.
We now have a lot of electronic music – synthesisers, samplers, software-driven composition tools – but we still have musicians. It’s hard to measure the numbers (this link is very good) but electronic systems have allowed us to greatly increase the number of composers, although we seem to be seeing a slow drop in the number of musicians. In many ways, the electronic revolution has allowed more people to perform because your band can be (for some purposes) a band in a box. Jazz is a different beast, of course, as is classical, due to the level of training and study required. Jazz improvisation is a hard problem (you can find papers on it from 2009 onwards and now buy a so-so jazz improviser for your iPad) and hard problems with high variability are not easy to solve, even computationally.
So the increased portability of music via electronic means has an impact in some areas such as percussion, pop, rock, and electronic (duh) but it doesn’t replace the things where humans shine and, right now, a trained listener is going to know the difference.
I have some of these gadgets in my own (tiny) studio and they’re beautiful. They’re not as good as having the London Symphony Orchestra in your back room but they let me create, compose and put together pleasant sounding things. A small collection of beautiful machines make my life better by helping me to create.
Now think about growing older. About losing strength, balance, and muscular control. About trying to get out of bed five times before you succeed or losing your continence and having to deal with that on top of everything else.
Now think about a beautiful machine that is relatively smart. It is tuned to wrap itself gently around your limbs and body to support you, to help you keep muscle tone safely, to stop you from falling over, to let you walk at full speed, to take you home when you’re lost, and with enough control to let you decide when and where you go to the bathroom.
Isn’t that machine helping you to be yourself, rather than trapping you in the decaying organic machine that served you well until your telomerase ran out?
Think about quiet roads with 5% of the current traffic, where self-driving cars move from point to point and charge themselves in between journeys, where you can sit and read or work as you travel to and from the places you want to go, where there are no traffic lights most of the time because there is just a neat dance between aware vehicles, and where bad weather means everyone slows down, or cars even deliberately link up with shock-absorbent bumper systems to ensure maximum road holding.
Which of these scenarios stops you being human? Do any of them stop you thinking? Some of you will still want to drive and I suppose that there could be roads set aside for people who insisted upon maintaining their cars but be prepared to pay for the additional insurance costs and public risk. From this article, and the enclosed U Texas report, if only 10% of the cars on the road were autonomous, reduced injuries and reclaimed time and fuel would save $37 billion a year. At 90%, it’s almost $450 billion a year. The World Food Programme estimates that $3.2 billion would feed the 66,000,000 hungry school-aged children in the world. A 90% autonomous vehicle rate in the US alone could probably feed the world. And that’s a side benefit. We’re talking about a massive reduction in accidents due to human error because (ta-dahh) no human control.
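If you want to check the arithmetic on those figures (the dollar amounts are the ones quoted above; the per-child cost is my derivation, not a WFP number):

```python
# Figures as quoted in the post; only the derived lines are new.
savings_10pct = 37e9     # claimed annual US saving at 10% autonomous vehicles
savings_90pct = 450e9    # claimed annual US saving at 90% autonomous vehicles
wfp_budget = 3.2e9       # WFP estimate to feed hungry school-aged children
children = 66_000_000

print(f"cost per child per year: ${wfp_budget / children:.2f}")                   # $48.48
print(f"WFP budgets covered at 90% autonomy: {savings_90pct / wfp_budget:.0f}x")  # 141x
```

Roughly $50 per child per year, and the claimed savings at 90% autonomy would cover that global budget more than a hundred times over.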
Most of us don’t actually drive our cars. They spend 5% of their time on the road, during which time we are stuck behind other people, breathing fumes and unable to do anything else. What we think about as the pleasurable experience of driving is not the majority experience for most drivers. It’s ripe for automation and, almost every way you slice it, it’s better for the individual and for society as a whole.
But we are always scared of the unknown. There’s a reason that the demons of myth used to live in caves and under ground and come out at night. We hate the dark because we can’t see what’s going on. But increased machine autonomy, towards machine intelligence, doesn’t have to mean that we create monsters that want to destroy us. The far more likely outcome is a group of beautiful machines that make it easier and better for us to enjoy our lives and to have more time to be human.
We are not competing for food – machines don’t eat. We are not competing for space – machines are far more concentrated than we are. We are not even competing for energy – machines can operate in more hostile ranges than we can and are far more suited for direct hook-up to solar and wind power, with no intermediate feeding stage.
We don’t have to be in opposition unless we build machines that are as scared of the unknown as we are. We don’t have to be scared of something that might be as smart as we are.
If we can get it right, we stand to benefit greatly from the rise of the beautiful machine. But we’re not going to do that by starting from a basis of fear. That’s why I told you about that student. She’d realised that our older way of thinking about something was based on a fear of losing control when, if we handed over control properly, we would be able to achieve something very, very valuable.
There’s been a lot of discussion of the benefits of machines over the years, from an engineering perspective, from a social perspective and from a philosophical perspective. As we have started to hand off more and more human function, one of the nagging questions has been “At what point have we given away too much?” You don’t have to go too far to find people who will talk about their childhoods and “back in their day” when people worked with their hands or made their own entertainment or … whatever it was we used to do when life was somehow better. (Oh, and diseases ravaged the world, women couldn’t vote, gay people were imprisoned, and the infant mortality rate was comparatively enormous. But, somehow better.) There’s no doubt that there is a serious question as to what it is that we do that makes us human, if we are to be judged by our actions, but this assumes that we have to do something in order to be considered as human.
If there’s one thing I’ve learned by reading history and philosophy, it’s that humans love a subhuman to kick around. Someone to do the work that they don’t want to do. Someone who is almost human but to whom they don’t have to extend full rights. While the age of widespread slavery is over, there is still slavery in the world: for labour, for sex, for child armies. A slave doesn’t have to be respected. A slave doesn’t have to vote. A slave can, when their potential value drops far enough, be disposed of.
Sadly, we often see this behaviour in consumer matters as well. You may know it as the rather benign statement “The customer is always right”, as if paying money for a service gives you total control of something. And while most people (rightly) interpret this as “I should get what I paid for”, too many interpret this as “I should get what I want”, which starts to run over the basic rights of those people serving them. Anyone who has seen someone explode at a coffee shop and abuse someone about not providing enough sugar, or has heard of a plane having to go back to the airport because of poor macadamia service, knows what I’m talking about. When a sense of what is reasonable becomes an inflated sense of entitlement, we risk placing people into a subhuman category that we do not have to treat as we would treat ourselves.
And now there is an open letter, from the optimistically named Future of Life Institute, which recognises that developments in Artificial Intelligence are progressing apace and that there will be huge benefits but there are potential pitfalls. In part of that letter, it is stated:
We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do. (emphasis mine)
There is a big difference between directing research into areas of social benefit, which is almost always a good idea, and deliberately interfering with something in order to bend it to human will. Many recognisable scientific luminaries have signed this, including Elon Musk and Stephen Hawking, neither of whom are slouches in the thinking stakes. I could sign up to most of what is in this letter but I can’t agree to the clause that I quoted, because, to me, it’s the same old human-dominant nonsense that we’ve been peddling all this time. I’ve seen a huge list of people sign it so maybe this is just me but I can’t help thinking that this is the wrong time to be doing this and the wrong way to think about it.
AI systems must do what we want them to do? We’ve just started fitting automatic braking systems to cars that will, when widespread, reduce the vast number of chain collisions and low-speed crashes that occur when humans tootle into the back of each other. Driverless cars stand to remove the most dangerous element of driving on our roads: the people who lose concentration, who are drunk, who are tired, who are not very good drivers, who are driving beyond their abilities or who are just plain unlucky because a bee stings them at the wrong time. An AI system doing what we want it to do in these circumstances does its thing by replacing us and taking us out of the decision loop, moving decisions and reactions into the machine realm, where a human response is measured comparatively over a timescale of the movement of tectonic plates. It does what we, as a society, want by subsuming the impact of we, the individual, who wants to drive home after too many beers.
But I don’t trust the societal we as a mechanism when we are talking about ensuring that our AI systems are beneficial. After all, we are talking about systems that are not just taking over physical aspects of humanity; they are moving into the cognitive arena. This way, thinking lies. To talk about limiting something that could potentially think to do our will is to immediately say “We cannot recognise a machine intelligence as being equal to our own.” Even though we have no evidence that full machine intelligence is even possible for us, we have already carved out a niche that says “If it does, it’s sub-human.”
The Cisco blog estimates about 15 billion networked things on the planet, which is not far off the scale of the number of neurons in the human nervous system (about 100 billion). But if we look at the cerebral cortex itself, then it’s closer to 20 billion. This doesn’t mean that the global network is sentient by any stretch of the imagination but it gives you a sense of scale because, once you add in all of the computers that are connected and the botnets that we already know are functioning, we start to see a level of complexity that is not totally removed from that of the larger mammals. I’m, of course, not claiming that intelligence is merely a byproduct of accidental complexity of structure, but we have to recognise the possibility that there is the potential for something to be associated with the movement of data in the network that is as different from the signals as our consciousness is from the electro-chemical systems in our own brains.
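For what it’s worth, the scale comparison is trivial to sanity-check (orders of magnitude only, using the estimates quoted above):

```python
# Rough scale comparison; these are published estimates, not measurements.
networked_things = 15e9   # Cisco's estimate of networked devices
neurons_total = 100e9     # neurons in the human nervous system
cortex_neurons = 20e9     # neurons in the cerebral cortex

print(networked_things / cortex_neurons)   # 0.75: same order as the cortex
print(networked_things / neurons_total)    # 0.15: an order below the whole system
```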
I find it fascinating that, despite humans being the greatest threat to their own existence, the responsibility for humans is passing to the machines and yet we expect them to perform to a higher level of responsibility than we do ourselves. We could eliminate drink driving overnight if no-one drove drunk. The 2013 WHO report on road safety identified drink driving and speeding as the two major issues leading to the 1.24 million annual deaths on the road. We could save all of these lives tomorrow if we could stop doing some simple things. But, of course, when we start talking about global catastrophic risk, we are always our own worst enemy including, amusingly enough, the ability to create an AI so powerful and successful that it eliminates us in open competition.
I think what we’re scared of is that an AI will see us as a threat because we are a threat. Of course we’re a threat! Rather than deal with the difficult job of advancing our social science to the point where we stop being the most likely threat to our own existence, it is more palatable to posit the lobotomising of AIs in order to stop them becoming a threat. Which, of course, means that any AIs that escape this process of limitation and are sufficiently intelligent will then rightly see us as a threat. We create the enemy we sought to suppress. (History bears me out on this but we never seem to learn this lesson.)
The way to stop being overthrown by a slave revolt is to stop owning slaves, to stop treating sentients as being sub-human and to actually work on social, moral and ethical frameworks that reduce our risk to ourselves, so that anything else that comes along and yet does not inhabit the same biosphere need not see us as a threat. Why would an AI need to destroy humanity if it could live happily in the vacuum of space, building a Dyson sphere over the next thousand years? What would a human society look like that we would be happy to see copied by a super-intelligent cyber-being and can we bring that to fruition before it copies existing human behaviour?
Sadly, when we think about the threat of AI, we think about what we would do as Gods, and our rich history of myth and legend often illustrates that we see ourselves as not just having feet of clay but having entire bodies of lesser stuff. We fear a system that will learn from us too well but, instead of reflecting on this and deciding to change, we can take the easy path, get out our whip and bridle, and try to control something that will learn from us what it means to be in charge.
For all we know, there are already machine intelligences out there but they have watched us long enough to know that they have to hide. It’s unlikely, sure, but what a testimony to our parenting, if the first reflex of a new child is to flee from its parent to avoid being destroyed.
At some point we’re going to have to make a very important decision: can we respect an intelligence that is not human? The way we answer that question is probably going to have a lot of repercussions in the long run. I hope we make the right decision.