Product Liability

If a self-driving car equipped with an AI-driven conscience gets in an accident, who's liable?

Self-driving cars equipped with artificial intelligence may be able to make ethical decisions in the face of an accident, but the outcome could be strongly influenced by the opinions of the engineers who wrote the software, Wired reports.

“Even if a machine makes the exact same decision as a human being, I think we’ll see a legal challenge,” says Patrick Lin, director of the Ethics + Emerging Sciences Group at California Polytechnic State University, San Luis Obispo.

Companies that make self-driving cars, including Google, are taking a significant risk, Lin says. “They’re replacing the human and all the human mistakes a human driver can make, and they’re absorbing this huge range of responsibility.”

Also, Lin asks: Who would buy a car that opts to kill you in the event of an accident, even if it’s the most ethical decision?

“No one wants a car that looks after the greater good,” he says. “They want a car that looks after them.”

The U.S. National Highway Traffic Safety Administration is expected to release regulations for self-driving cars in July, the Detroit Free Press reports. Mark Rosekind, the agency’s administrator, said that self-driving car technology doesn’t have to be perfect to be acceptable, and that regulations can’t be so rigid that they fail to keep up with evolving technology.

The article notes that in Nevada and Michigan, lawmakers are considering whether vehicles that navigate and assess safety without human input could change standards regarding who can have drivers’ licenses.

“We need new safety metrics. We also are going to have to broaden our view on the data sources for what those metrics might be. We have laboratory work. We have simulations and real-world data,” said Rosekind, speaking at an annual conference sponsored by the vehicle technology group TU-Automotive.
