Talking Ethics with the Terminator: Using Existing Student Experience to Drive Discussion

One of the big focuses at our University is the Small-Group Discovery Experience, an initiative from our overall strategy document, the Beacon of Enlightenment. You can read all of the details here, but the essence is that a small group of students and an experienced research academic meet regularly to start the students down the path of research, picking up skills in an active learning environment. In our school, I’ve run it twice as part of the professional ethics program. This second time around, I think it’s worth sharing what we did, as it seems to be working well.

Why ethics? Well, this is first year and it’s not easy to do research into Computing without much foundation, but professional skills are part of our degree program, so we chose an exploration of ethics to build that foundation. We cover ethics in more detail in second and third year, but first year gets little more than a quick “and this is ethics” lecture that doesn’t give our students much room to explore the detail. Like many of the more intellectual topics we deal with, ethical understanding comes from contemplation and discussion – unless we just want to jam a badly fitting moral compass onto everyone and be done.

Ethical issues are the best way into the area as an introduction, because much of the formal terminology can be quite intimidating for students who regard themselves as CS majors or Engineers first, and may not even contemplate their role as moral philosophers. But the real-world situations where ethical practice is most illuminating are often quite depressing and, from experience, sessions on medical ethics and similar topics rapidly shut down discussion because they can be very upsetting. We took a different approach.

The essence of any good narrative is the tension generated from the conflict it contains and, in stories that revolve around artificial intelligence, robots and computers, this tension often comes from what are fundamentally ethical issues: the machine kills, the computer goes mad, the AI takes over the world. We decided to ask the students to find two works of fiction – from movies, TV shows, books or games – involving computers, AI or robots, and to look into the ethical situations they contained. We provided them with a short suggested list of 20 books and 20 movies to start from and let them go. Further guidance asked them to look into the active ethical agents in the story: who was doing what, and what were the ethical issues?

I saw the students after they had submitted their two short paragraphs on this and I was absolutely blown out of the water by their informed, passionate and, above all, thoughtful answers to the questions. Debate kept breaking out on subtle points. The potted summary of ethics that I had given them (follow the rules, aim for good outcomes or be a good person – sorry, ethicists) provided enough detail for the students to identify issues in rule-based approaches, utilitarianism and virtue ethics, but I could then introduce terms to label what they had already done, as they were thinking about them.

I had 13 sessions with a total of 250 students and it was the most enjoyable teaching experience I’ve had all year. As follow-up, I asked the students to enter all of their thoughts on their entities of choice by rating their autonomy (freedom to act), responsibility (how much we could hold them to account) and perceived humanity, using a couple of examples to motivate a ranking system of 0-5. A toddler is completely free to act (5) and completely human (5) but can’t really be held responsible for much (0-1, depending on the toddler). An aircraft autopilot has no humanity or responsibility but it is completely autonomous when actually flying the plane – although it will disengage when things get too hard. A soldier obeying orders has an autonomy of around 5. Keanu Reeves in The Matrix has a humanity of 4. At best.
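The three-axis rating scheme above can be sketched as a simple record. To be clear, this is just my illustration of the idea – the class name, field names and validation here are assumptions, not the actual database the students filled in:

```python
from dataclasses import dataclass

@dataclass
class EntityRating:
    """One student rating of a fictional entity, on the 0-5 scales above."""
    entity: str          # the character or machine from the chosen work
    autonomy: int        # freedom to act, 0-5
    responsibility: int  # how much we can hold it to account, 0-5
    humanity: int        # perceived humanity, 0-5

    def __post_init__(self):
        # Keep every rating inside the agreed 0-5 range.
        for name in ("autonomy", "responsibility", "humanity"):
            value = getattr(self, name)
            if not 0 <= value <= 5:
                raise ValueError(f"{name} must be 0-5, got {value}")

# The two worked examples from the text:
toddler = EntityRating("toddler", autonomy=5, responsibility=0, humanity=5)
autopilot = EntityRating("aircraft autopilot", autonomy=5,
                         responsibility=0, humanity=0)
```

Keeping the three axes as separate integers makes the later small-group comparison easy: entities can be sorted or grouped on any one axis without touching the others.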

They’ve now filled the database up with their thoughts and next week we’re going to discuss all of their 0-5 ratings in small groups, then place them on a giant timeline of achievements in literature, technology and AI, also listing major events such as wars, to see if we can explain why authors presented the work that they did. When did we start to regard machines as potentially human, and what did the world seem like then to the people who were there?

This was a lot of fun and, while it took a little setting up, the framework works well because students have already seen quite a lot; the trick is just getting them to think about it through our ethical lens. Highly recommended.

What do you think, Arnold? (Image from moviequotes.me)
