Should Self-Driving Cars Have Ethics?


#1

For instance, the preference for sparing younger people over older ones was much stronger in the Southern cluster (which includes Latin America, as well as France, Hungary, and the Czech Republic) than it was in the Eastern cluster (which includes many Asian and Middle Eastern nations). And the preference for sparing humans over pets was weaker in the Southern cluster than in the Eastern or Western clusters (the latter includes, for instance, the U.S., Canada, Kenya, and much of Europe).

And they found that those variations seemed to correlate with other observed cultural differences. Respondents from collectivistic cultures, which “emphasize the respect that is due to older members of the community,” showed a weaker preference for sparing younger people.
What does this add up to? The paper’s authors argue that if we’re going to let these vehicles on our streets, their operating systems should take moral preferences into account. “Before we allow our cars to make ethical decisions, we need to have a global conversation to express our preferences to the companies that will design moral algorithms, and to the policymakers that will regulate them,” they write.

But let’s just say, for a moment, that a society does have general moral preferences on these scenarios. Should automakers or regulators actually take those into account?

https://www.npr.org/2018/10/26/660775910/should-self-driving-cars-have-ethics


#2

Every self-driving car will have to have a decision tree for emergencies; its ethics will be whatever has been programmed into it. To Asimov’s laws of robotics I would add one: when harm (or an accident) is inevitable, choose the path of least harm. These programming decisions will need to be defended in court.
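The “path of least harm” rule described above can be sketched as a tiny decision function. This is a minimal illustration, not a real driving system: the maneuver names and harm scores are hypothetical placeholders for what an actual car would have to estimate from sensor data.

```python
# Minimal sketch of a "least harm" rule for an emergency decision tree.
# Assumption: each available maneuver already has an estimated harm score
# (lower is better); computing those scores is the hard, contested part.

def least_harm_path(options):
    """Return the maneuver with the lowest estimated harm score.

    `options` maps maneuver name -> estimated harm score.
    """
    return min(options, key=options.get)

# Hypothetical emergency: braking and both swerves all cause some harm.
maneuvers = {
    "brake_straight": 0.8,
    "swerve_left": 0.5,
    "swerve_right": 0.9,
}
print(least_harm_path(maneuvers))  # prints "swerve_left"
```

The entire ethical debate in the article hides inside those harm scores: whether “harm” weights the young against the old, humans against pets, or passengers against pedestrians is exactly what the code cannot decide for us.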


#3

Agree with @oldtimer_11. Defense in court will ultimately be the decider, as the companies that build these vehicles have “skin in the game.” Regulators and ethicists do not have “skin in the game” and will very likely screw up the creation of those regulations.

Having worked in two industries that get sued daily around the world (cars and forklifts), I can tell you there is a LOT of discussion about these topics as automation of features becomes the norm. If your Design Failure Mode Analysis gets called into court, you need to understand how to defend it to 12 of your peers. I can assure you there will be jury blow-back whether you choose to spare the young or spare the old. An algorithm that chooses to kill one or the other will be very hard to defend.

Let those who must defend those decisions in court create and test the algorithms, and develop the regulations through the industry associations: the Society of Automotive Engineers, the Institute of Electrical and Electronics Engineers, the Industrial Truck Association, and others.


#4

Lawmakers will no doubt try to regulate the decision-making process, but it will end up in court anyway. Everything in the US ends up in court.


#5

Made me think of programming to minimize the cost of accidents: avoid the Mercedes, go for the Yugo :slight_smile: