Who would you rather kill? Lessons for all from MIT’s chilling thought experiment

So far, four million people have taken part in MIT’s Moral Machine experiment, making it one of the biggest ethical experiments ever conducted. Here’s how it works.

You click through a number of screens that depict two possible outcomes for a driverless car with total brake failure. In each example, you must make a decision about the fate of the car’s occupants and the people who happen to be nearby (because in this simulation, there are always people nearby).

It starts you off pretty gently – the first scenario, for example, gives you a choice between smashing into a wall (thereby killing the car’s passengers) or avoiding the wall and ploughing instead into a handful of generic pedestrians.

This, I should warn you, is about as vanilla as it gets – a kind of starter dilemma, if you will. After that, it gets pretty dark, pretty quickly.

Does the car choose to swerve right, say, and take out a pelican crossing full of schoolchildren, or does it go left and get the grown-ups instead? Or how about the homeless guy or the executive? The man or the woman? The fat pedestrian or the athletic one?

And on it goes, until you’ve done 13 screens, after which you’re polled with a few questions to help the researchers ‘better understand your judgement’. These range from seemingly benign requests for your age, gender and suchlike through to downright sinister enquiries like: ‘To what extent do you feel that machines will become out of control?’

As you’d expect, the results of all this make for fascinating reading (old people don’t come out of it well, though – I’ll just tell you that now – nor do ‘criminals’) and no doubt it also leaves plenty of scope for MIT to do what they do best: namely, lead the field in necessary, innovative, future-proofing research.

But what are the rest of us learning in the meantime?

Well, a lot of it is pretty predictable – when the car’s occupants and the pedestrians are the same kind of people, for example, the choice between saving one group or the other is split almost exactly down the middle. But it does start to get a lot more interesting when you compare it with other data. Such as the answer to the question: would you buy a driverless car that valued pedestrians’ lives as much as or more than your own?

To which the answer is, by and large, ‘Er, no, of course I wouldn’t.’

Because the truth is that nobody wants to be chauffeured around by a morally ambivalent machine. We want the driverless driver to be on our side – the people in the street can get their own robots.

But anyway, what difference does any of this really make? It might be interesting enough in a ‘freaky fast-forward’ kind of way, but it’s basically a future problem. And in a world where Donald Trump and Kim Jong-un are organising their second summit of the year, the future is something we’re probably best off approaching one grateful day at a time.

Although, having said that…

I mean, yes, it is true that trying to program ethics into sentient machines is most definitely a future thing, but the question that underlies it is very much in the now.

And that question is: can we delegate our moral choices?

It seems to me that most of us (in fact, probably all of us in our various ways) believe that we can. Otherwise, why else would we routinely place our governments, police forces, armies and healthcare workers at the ethical coalface on our behalf?

And I’m not criticising that. It’s a necessity of civilised life. We can’t, after all, do everything ourselves, and nor should we expect to. But we do have a duty to bear it in mind and to remember that, in times of stress, this is the covenant that we’ve made.

The majority of us are destined to be the moral passengers in this equation, and just like the millions who participated in Moral Machine, we are nothing more than a collective voice. A single, faceless mass.

And yet, we do have an influence over the ethical character of the institutions that surround us.

One of the final questions in the website’s end poll was: ‘To what extent do you feel that your decisions on Moral Machine will be used to program self-driving cars?’

And I found myself, perhaps a little optimistically (but no less truthfully), clicking on ‘Very much’.
