We’re back on our Edge series, What scientific concept ought to be better known. Today’s entry is from Daniel Rockmore on the “Trolley Problem.”
This is an over-worked philosophical or ethical “dilemma,” but here it is given fresh life by thinking about how to program automated vehicles to “solve” trolley problems.
“The Trolley Problem” is another thought experiment, one that arose in moral philosophy. There are many versions, but here is one: A trolley is rolling down the tracks and reaches a branch point. To the left, one person is trapped on the tracks, and to the right, five people. You can throw a switch that diverts the trolley from the track with the five to the track with the one. Do you? The trolley can’t brake. What if we know more about the people on the tracks? Maybe the one is a child and the five are elderly? Maybe the one is a parent and the others are single? How do all these different scenarios change things? What matters? What are you valuing, and why?
One might guess that most people wouldn’t figure out how to use the switch in time to implement whatever decision they’d make—if indeed a person could come to a rational decision in so short a time. Who would be able to consider the situation and believe with certainty that the brakes wouldn’t work? Is there enough time, or would people believe there is enough time, to issue some kind of warning? Perhaps that red button over there. So the dilemma in philosophy is a weak one, especially since articulating all the premises one is working with in such a panicked situation will be next to impossible.
But what if you had to write code so that if the trolley’s brakes failed it had to respond in a certain way?
As we increasingly offload our decisions to machines and the software that manages them, developers and engineers will be confronted with having to encode—and thus directly code—important and potentially life-and-death decision making into machines. Decision making always comes with a value system, a “utility function,” whereby we do one thing or another because one pathway reflects a greater value for the outcome than the other.
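To make the idea concrete, here is a minimal sketch of what literally coding a “utility function” might look like. Everything in it—the outcome names, the weights, the structure—is a hypothetical illustration invented for this example, not a real autonomous-vehicle API; the point is only that choosing an action means someone had to write down a value system as code.

```python
# Hypothetical sketch: a hard-coded "utility function" for a brake-failure
# scenario. All names and weights below are invented for illustration.

def utility(outcome):
    """Score a predicted outcome; higher is better.
    The weight of 10 on casualties is an arbitrary, illustrative value judgment."""
    return -outcome["expected_casualties"] * 10 - outcome["property_damage"]

def choose_action(outcomes):
    """Pick the action whose predicted outcome maximizes utility."""
    return max(outcomes, key=lambda name: utility(outcomes[name]))

# Two hypothetical responses to a brake failure at a branch point.
outcomes = {
    "stay_on_course": {"expected_casualties": 5, "property_damage": 0},
    "divert_left":    {"expected_casualties": 1, "property_damage": 2},
}

print(choose_action(outcomes))  # prints "divert_left"
```

The uncomfortable part is not the `max` call but the weights: deciding that a casualty “costs” ten times a unit of property damage is exactly the kind of value judgment the trolley problem asks about, now frozen into source code.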
The question is: how different is a driverless car from a car with a driver? There is already plenty going on in cars with drivers, and there was, too, before the electronic invasion. If your brakes go out and you have the choice to plow into a sea of (soft) civilians or into the deadly sea itself, what would you do? In that split second, you’d regret having to kill the odd stranger—but maybe you figure people will jump out of the way in time?
And the situations with which drivers are confronted are many. Think of the multitude of decisions you make in fast-slow highway traffic, with maniacs jumping lanes to get ahead, idiots who drive slow in the left lane, and those nuts who just have to merge in front of you. Wait! Traffic is stopped! No. It only slowed. Wait! It stopped for real! Now it’s going fast; too fast. Can’t these people see the speed limit signs?
Finally you’re off the highway! The off-ramp is slick in spots since it snowed a few hours ago. Black ice! Now snow drifts. Now a puddle. That damn truck splashed half of it on the windscreen and the computer driver’s sensors! At least there is time to use the wipers to scrape off the gunk now that we’re stopped at this light.
Say, what’s wrong with this damned light? It hasn’t changed. It’s busted! Wait! That guy thinks it’s his turn! It’s clearly mine; I’ve been waiting here behind this old lady. Now this other clown is pulling out, too!
We will build driverless cars and they will come with a moral compass—literally. The same will be true of our robot companions. They’ll have values and will necessarily be moral machines and ethical automata, whose morals and ethics are engineered by us. “The Trolley Problem” is a gedankenexperiment for our age, shining a bright light on the complexities of engineering our new world of humans and machines.
I’m not sure how shiny this light is, nor how bright. Engineers have some job ahead of them.