Self-driving Cars and the Trolley Problem

Self-driving cars are coming; Google's cars have driven over a million miles, and other manufacturers are making noises about autonomous vehicles of their own (both cars and trucks). If left to decide "for themselves," how will these vehicles make what may well be life-or-death decisions when they encounter an emergency?

Science fiction writer Isaac Asimov's famous "three laws of robotics" lay the foundation for artificially intelligent interaction with humans:
  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. 
The three laws were written for (so far) fictional android robots, but what about "intelligent" cars and other vehicles in the real world? There have to be rules, and probably something very like Asimov's three laws. Once you step down the rabbit hole of life-or-death decision making, you encounter some very troubling scenarios, like the ethical dilemma known as the Trolley Problem.

In it, you face a "lesser of two evils" life-or-death scenario: a runaway trolley car can be allowed to crash into a group of people, or be diverted at the last minute so that it crashes into one person. A majority of people would opt for the second (still regrettable) choice, where fewer lives are lost - but how do you program that into a car?
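To see why this is so uncomfortable, consider what the most naive programming of it would look like: a utilitarian rule that scores each available maneuver by expected harm and picks the minimum. The sketch below is purely illustrative - the option names, probabilities, and the idea of reducing people to a head count are all hypothetical assumptions, not anything a real vehicle does.

```python
# Illustrative sketch only: a naive "minimize expected harm" chooser.
# All names and numbers are hypothetical; no real vehicle works this way.

def expected_harm(option):
    """Expected lives lost: chance of impact times people at risk."""
    return option["impact_probability"] * option["people_at_risk"]

def choose_maneuver(options):
    """Pick the maneuver with the lowest expected harm."""
    return min(options, key=expected_harm)

# The trolley dilemma restated: stay on course toward five people,
# or swerve toward one.
options = [
    {"name": "stay_course", "impact_probability": 0.9, "people_at_risk": 5},
    {"name": "swerve",      "impact_probability": 0.9, "people_at_risk": 1},
]

print(choose_maneuver(options)["name"])  # swerve
```

Ten lines of arithmetic "solve" the dilemma - and that is exactly the problem. The code answers the question without ever engaging the ethics: who chose the scoring function, and who answers for its output?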

In the 2004 movie "I, Robot" - loosely based on Asimov's work of the same name - there is an emotional monologue in which the main character describes a terrible crash: a passing robot saved his life but, in doing so, allowed a little girl in a second vehicle to drown.

Although he had a better percentage chance of surviving with the robot's help - and that was what the robot based its rescue decision upon - a human rescuer might have saved the child instead. He laments: "That was somebody's baby!"

Who is going to shoulder that kind of awesome responsibility?