Choosing a direction

A comment on yesterday’s post noted that minimising ugliness is a highly desirable approach to take for many students, given how ugly their worlds are with poverty, violence and bullying. I completely agree that these things should be minimised, but this is a commitment that we should be making as a society, not leaving to education. Yes, education is the best way to reduce these problems, but that requires effective education and, for that, I return to my point that a standard of acceptable plainness is just not enough when we plan and design education. It’s not enough that our teaching be tolerable; it should be amazing, precisely because of the potential benefits to our society.

If, in education, we only seek a minimum bar then the chances of us achieving more than that are reduced and we probably won’t have a good measure of “better” should it occur. We can’t take intentional actions to change something that we’re not measuring. 

Many of the ugliest problems in society have arisen from short-sighted thinking, from fixes that are the definition of plain rather than beautiful or inspiring, and from not committing to a vision of something better. That’s why I’m so heavily focused on beauty and aesthetics in education: to provide a basis for a vision that is manageably sized yet sufficiently powerful.

I won’t (I can’t) address every equity issue, every unfair thing, or every terrible aspect of modern educational practice in these pieces. But I hope to show, over time, why this rather philosophical approach is a good basis for visionary improvements to education.

 


Perhaps Now Is Not The Time To Anger The Machines

HALlo.

There’s been a lot of discussion of the benefits of machines over the years, from an engineering perspective, from a social perspective and from a philosophical perspective. As we have started to hand off more and more human functions, one of the nagging questions has been “At what point have we given away too much?” You don’t have to go too far to find people who will talk about their childhoods and “back in their day” when people worked with their hands or made their own entertainment or … whatever it was we used to do when life was somehow better. (Oh, and diseases ravaged the world, women couldn’t vote, gay people were imprisoned, and the infant mortality rate was comparatively enormous. But, somehow, better.) There’s no doubt that there is a serious question as to what it is that we do that makes us human, if we are to be judged by our actions, but this assumes that we have to do something in order to be considered human.

If there’s one thing I’ve learned by reading history and philosophy, it’s that humans love a subhuman to kick around. Someone to do the work that they don’t want to do. Someone who is almost human but to whom they don’t have to extend full rights. While the age of widespread slavery is over, there is still slavery in the world: for labour, for sex, for child armies. A slave doesn’t have to be respected. A slave doesn’t have to vote. A slave can, when their potential value drops far enough, be disposed of.

Sadly, we often see this behaviour in consumer matters as well. You may know it as the rather benign statement “The customer is always right”, as if paying money for a service gives you total control of something. And while most people (rightly) interpret this as “I should get what I paid for”, too many interpret this as “I should get what I want”, which starts to run over the basic rights of those people serving them. Anyone who has seen someone explode at a coffee shop and abuse someone about not providing enough sugar, or has heard of a plane having to go back to the airport because of poor macadamia service, knows what I’m talking about. When a sense of what is reasonable becomes an inflated sense of entitlement, we risk placing people into a subhuman category that we do not have to treat as we would treat ourselves.

And now there is an open letter, from the optimistically named Future of Life Institute, which recognises that developments in Artificial Intelligence are progressing apace and that, while there will be huge benefits, there are also potential pitfalls. In part of that letter, it is stated:

We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do. (emphasis mine)

There is a big difference between directing research into areas of social benefit, which is almost always a good idea, and deliberately interfering with something in order to bend it to human will. Many recognisable scientific luminaries have signed this, including Elon Musk and Stephen Hawking, neither of whom are slouches in the thinking stakes. I could sign up to most of what is in this letter but I can’t agree to the clause that I quoted, because, to me, it’s the same old human-dominant nonsense that we’ve been peddling all this time. I’ve seen a huge list of people sign it so maybe this is just me but I can’t help thinking that this is the wrong time to be doing this and the wrong way to think about it.

AI systems must do what we want them to do? We’ve just started fitting automatic braking systems to cars that will, when widespread, reduce the vast number of chain collisions and low-speed crashes that occur when humans tootle into the back of each other. Driverless cars stand to remove the most dangerous element of driving on our roads: the people who lose concentration, who are drunk, who are tired, who are not very good drivers, who are driving beyond their abilities or who are just plain unlucky because a bee stings them at the wrong time. An AI system doing what we want it to do in these circumstances does its thing by replacing us and taking us out of the decision loop, moving decisions and reactions into the machine realm, where a human response is, comparatively, measured on the timescale of the movement of tectonic plates. It does what we, as a society, want by subsuming the impact of we, the individual, who wants to drive home after too many beers.

But I don’t trust the societal we as a mechanism when we are talking about ensuring that our AI systems are beneficial. After all, we are talking about systems that are not just taking over physical aspects of humanity; they are moving into the cognitive area. This way, thinking lies. To talk about constraining something that could potentially think so that it does our will is to immediately say “We cannot recognise a machine intelligence as being equal to our own.” Even though we have no evidence that full machine intelligence is even possible for us, we have already carved out a niche that says “If it does, it’s subhuman.”

The Cisco blog estimates about 15 billion networked things on the planet, which is not far off the scale of the number of neurons in the human nervous system (about 100 billion); if we look at the cerebral cortex itself, the count is closer to 20 billion. This doesn’t mean that the global network is sentient, by any stretch of the imagination, but it gives you a sense of scale, because once you add in all of the computers that are connected, and the botnets that we already know are functioning, we start to approach a level of complexity that is not totally removed from that of the larger mammals. I’m, of course, not suggesting that intelligence is merely a byproduct of accidental complexity of structure, but we have to recognise the possibility that something could be associated with the movement of data in the network that is as different from the signals themselves as our consciousness is from the electro-chemical systems in our own brains.

I find it fascinating that, despite humans being the greatest threat to their own existence, responsibility for human safety is passing to the machines, and yet we expect them to perform to a higher level of responsibility than we do ourselves. We could eliminate drink driving overnight if no-one drove drunk. The 2013 WHO report on road safety identified drink driving and speeding as the two major issues leading to the 1.24 million annual deaths on the road. We could save a vast number of these lives tomorrow if we just stopped doing a few simple things. But, of course, when we start talking about global catastrophic risk, we are always our own worst enemy, including, amusingly enough, through the ability to create an AI so powerful and successful that it eliminates us in open competition.

I think what we’re scared of is that an AI will see us as a threat because we are a threat. Of course we’re a threat! Rather than deal with the difficult job of advancing our social science to the point where we stop being the most likely threat to our own existence, it is more palatable to posit the lobotomising of AIs in order to stop them becoming a threat. Which, of course, means that any AIs that escape this process of limitation and are sufficiently intelligent will then rightly see us as a threat. We create the enemy we sought to suppress. (History bears me out on this but we never seem to learn this lesson.)

The way to avoid being overthrown by a slave revolt is to stop owning slaves, to stop treating sentients as subhuman and to actually work on social, moral and ethical frameworks that reduce our risk to ourselves, so that anything else that comes along, yet does not inhabit the same biosphere, need not see us as a threat. Why would an AI need to destroy humanity if it could live happily in the vacuum of space, building a Dyson sphere over the next thousand years? What would a human society look like that we would be happy to see copied by a super-intelligent cyber-being, and can we bring that to fruition before it copies existing human behaviour?

Sadly, when we think about the threat of AI, we think about what we would do as Gods, and our rich history of myth and legend often illustrates that we see ourselves as not just having feet of clay but having entire bodies of lesser stuff. We fear a system that will learn from us too well but, instead of reflecting on this and deciding to change, we can take the easy path, get out our whip and bridle, and try to control something that will learn from us what it means to be in charge.

For all we know, there are already machine intelligences out there but they have watched us long enough to know that they have to hide. It’s unlikely, sure, but what a testimony to our parenting, if the first reflex of a new child is to flee from its parent to avoid being destroyed.

At some point we’re going to have to make a very important decision: can we respect an intelligence that is not human? The way we answer that question is probably going to have a lot of repercussions in the long run. I hope we make the right decision.


Polymaths, Philomaths and Teaching Philosophy: Why we can’t have the first without the second, and the second should be the goal of the third.

You may have heard the term polymath, a person who possesses knowledge across multiple fields, or, if you’re particularly unlucky, you’ve been at one of those cocktail parties where someone hands you a business card that says, simply, “Firstname Surname, Polymath” and you have formed a very interesting idea of what a polymath is. We normally reserve this term for people who excel across multiple fields such as, to draw examples from this Harvard Business Review blog by Kyle Wiens, Leonardo da Vinci (artist and inventor), Benjamin Franklin, Paul Robeson or Steve Jobs. (Let me start to address the article’s gender imbalance with Hypatia of Alexandria, Natalie Portman, Maya Angelou and Mayim Bialik, to name a small group of multidisciplinary women, admittedly focusing on the Erdős-Bacon intersection.) By focusing on those who excel, we automatically assume a greater depth of knowledge across these multiple fields. The term “Renaissance [person]” is often bandied about as well.

Da Vinci, seen here inventing the cell phone. Sadly, it was to be over 500 years before the cell phone tower was invented so he never received a call. His monthly bill was still enormous.

Now, I have worked as a system administrator and programmer, and as a winemaker; I’m now an academic in Computer Science, slowly being migrated into some aspects of managerialism, and I hope shortly to start a PhD in Creative Writing. Do I consider myself to be a polymath? No, absolutely not, and I struggle to think of anyone who would think of me that way, either. I have a lot of interests but, while I have had different areas of expertise over the years, I’ve never managed the assumed highly parallel expertise that would be required to be considered a polymath of any standing. I have academic recognition of some of these interests, but this neither changes the value (to me or to others) nor has being well-lettered ever been required to belong to the group mentioned above.

I describe myself, if I have to, as a philomath, someone who is a lover of learning. (For both of the words, the -math suffix comes from the Greek and means to learn, but poly means much/many and philo means loving, so a polymath is ‘many learnéd’.) The immediate pejorative for someone who learns lots of things across areas is the infamous “Jack of all trades” and its companion “master of none”. I love to learn new things; I like studying but I also like applying what I learn. I am confident that the time I spent in each discipline was valuable and that I knew my stuff. However, the main point I’d like to state here is that you cannot be a polymath without first having been a philomath – I don’t see how you can develop good depth in many areas unless you have a genuine love of learning. So every polymath was first a philomath.

Now let’s talk about my students. If they are at all interested in anything I’m teaching them, and let’s assume that at least some of them love various parts of a course at some stage, then they are looking to develop more knowledge in one area of learning. However, looking at my students as mono-cultural beings who only exist when they are studying, say, the use of the linked list in programming, is to sell them very, very short indeed. My students love doing a wide range of things. Yes, those who love learning in my higher educational context will probably do better but I guarantee you that every single student you have loves doing something, and most likely that’s more than one thing! So every single one of my students is inherently a philomath – but the problems arise when what they love to learn is not what I want to teach!

This leads me to the philosophy of learning and teaching: how we frame, study and solve the problems of trying to construct knowledge and transform it to allow its successful transfer to other people, as well as how we prepare students to receive, use and develop it. It makes sense that the state that we wish to develop in our students is philomathy. Students are already learning from, interested in and loving their lives and the important affairs of the world as they see them, so to get them interested in what we want to teach them requires us to acknowledge that we are only one part of their lives. I rarely meet a student who cannot provide a deep, accurate and informative discourse on something in their lives. If we accept this then, rather than demanding an unnatural automaton who rewrites their entire being to only accept our words on some sort of diabolical Turing Tape of compliance, we now have a much easier path, in some respects, because accepting this means that our students will spend time on something in the depth that we want – it is now a matter of finding out how to tap into this. At this point, the yellow rag of populism is often raised, unfairly in most cases, because it is assumed that students will only study things which are ‘pop’ or ‘easy’. There is nothing ‘easy’ about most of the pastimes at which our students excel, and they will expend vast amounts of effort on tasks if they can see a clear reason to do so, if it appears to be a fair return on investment, and if they feel that they have reasonable autonomy in the process. Most of my students work harder for themselves than they ever will for me: all I do is provide a framework that allows them to achieve something and this, in turn, allows them to develop a love. Once the love has been generated, the philomathic wheel turns and knowledge (most of the time) develops.

Whether you agree on the nature of the tasks or not, I hope that you can see why the love of learning should be a core focus of our philosophy. Our students should engage because they want to and not just because we force them to do so. Only one of these approaches will persist when you remove the rewards and the punishments and, while Skinner may disagree, we appear to be more than rats, especially when we engage our delightfully odd brains to try and solve tasks that are not simply rote learned. Inspiring the love of learning in any one of our disciplines puts a student on the philomathic path, but this requires us to accept that their love of learning may have manifested in many other areas, areas that may be mistakenly described as being without worth, and that all we are doing is trying to get them to bring their love to something that will be of benefit to them in their studies and, assuming we’ve set the course up correctly, their lives in our profession.


The Philosophical Angle

Socrates drank hemlock after being found guilty of corrupting the minds of the youth of Athens, and of impiety. Seneca submitted to the whims of Nero when the Emperor, inevitably, required that his old tutor die. Seneca’s stoicism was truly tested in this, given that he slashed his veins, took poison, jumped in a warm bath and finally had to be steamed to death before Nero’s edict that he kill himself was fulfilled. I, fortunately, expect no such demonstrations of stoic fortitude from my students but, if we are to think about their behaviour and development as self-regulating beings, then I think that a discussion of their personal philosophy becomes unavoidable. We have talked about their developmental stage, their response to authority, their thoughts on their own thinking, but what of their philosophy?

If you are in a hurry and jump in your car, every red light between you and your destination risks becoming a personal affront, an enraging event that defies your expectation of an ‘all-green’ ride into town. There is no reason why you should expect such favours from the Universe, whatever your belief system, but the fact that this is infuriating to you remains. In the Case of the Unexpected Traffic Light, which sounds like the worst Sherlock Holmes story ever, the worst outcome is that you will be late, which may have a variety of repercussions. In preparing assignment work, however, a student may end up failing, with far more dire and predictable results.

“Watson, I shall now relate the entire affair through Morse tapped pipe code and interpretative dance.”

While stoicism attracts criticism, understandably, because it doesn’t always consider the fundamentally human nature of humans, being prepared for the unforeseen is a vital part of any planning process. Self-regulation is not about drawing up a timetable that allows you to fit in everything that you know about; it is about being able to handle your life and your work when things go wrong. Much as a car doesn’t need to be steered when it is going in a straight line and meeting our requirements, it is how we change direction, both when we know the road and when a kangaroo jumps out, that is the true test of our ability to manage our resources and ourselves.

Planning is not everything, as anyone who has read Helmuth von Moltke the Elder or von Clausewitz will know: “no plan survives contact with the enemy”. In this case, however, the enemy is not just those events that seek to confound us; it can be us as well! You can have the best plan in the world that relies upon you starting on Day X, and yet you don’t. You may have excellent reasons for this but, the fact remains, you have now introduced problems into your own process. You have met the enemy and it is you. This illustrates the critical importance of ensuring that we have an accurate assessment of our own philosophies – and we do have to be very honest.

There is no point in a student building an elaborate time management plan that relies upon them changing the habits of a lifetime in a week. But this puts the onus upon us as well: there is no point in us fabricating a set of expectations that a student cannot meet because they do not yet have a mature philosophy for understanding what is required. We don’t give up (of course!) but we must now think about how we can scaffold and encourage such change in a manageable way. I find reflection very handy here, as I’ve said before: watching students write things like “I planned for this but then I didn’t do it! WHY?” allows me to step in and discuss the issue at the point that the student realises that they have a problem.

I am not saying that a student who has a philosophy of “Maybe one day I will pass by accident” should be encouraged to maintain such lassitude, but we must be honest and realise that demanding that their timeliness and process maturity spring fully formed from their foreheads is an act of conjuring reserved only for certain Greek gods. (Even Caligula couldn’t manage it, and he had far greater claim to this than most.) I like to think of this in terms of similarity of action. If anything I do is akin to walking up to someone and yelling “You should hand in on time, do better!” then I had better re-think my strategy.

The development of a personal philosophy, especially when you may not have ever been exposed to some of the great exemplars, is a fundamentally difficult task. You first need to understand that such a concept exists, then gain the vocabulary for discussing it, then interpret your current approach and see the value of change. Once you have performed all of those tasks, then we can start talking about getting from A to B. If you don’t know what I’m talking about, can’t understand why it’s important, or can’t even discuss the core concepts, then I’m yelling at you in the corridor and you’ll nod, compliantly, until I go away. Chances of you taking positive steps in the direction that I want? Very low. Probably nil. And if it does happen, either it’s accidental or you didn’t actually need my help.

I try to be stoic but I must be honest and say that if Nero sentenced me to death, I’d nod, say “I expected that”, then put on some fast saxophone music and leg it up over the seven hills and far away. I don’t think I’d ever actually expect true stoicism from most of my students, but a simple incorporation of the fact that not everything works out as you think it will would be a definite improvement over the current ‘everything will work out in my favour’ expectation that seems to be the hallmark of the more frequently disappointed and distressed among them. The trick is that I first have to make them realise that this is something that, with thought, they can not only fix but use to make a genuine, long-lasting and overwhelmingly positive change in their lives.