Artificial Intelligence & Robotics

ChatGPT falsely accuses law prof of sexual harassment; is libel suit possible?

Image from Shutterstock: a screen displaying ChatGPT.

A law professor was surprised to hear that he had been accused of sexual harassment during a class trip to Alaska sponsored by his law school, the Georgetown University Law Center.

But in reality, Jonathan Turley is a professor at the George Washington University Law School, he has never taken students on a class trip to Alaska or anywhere else, and he has never been accused of harassing a student. And the supposed article in the Washington Post reporting on the accusation doesn’t exist.

Turley’s accuser is ChatGPT, according to his story for USA Today and an article in the Washington Post.

“It was quite chilling,” Turley told the Washington Post. “An allegation of this kind is incredibly harmful.”

A spokesperson for ChatGPT creator OpenAI gave the Washington Post this statement: “When users sign up for ChatGPT, we strive to be as transparent as possible that it may not always generate accurate answers. Improving factual accuracy is a significant focus for us, and we are making progress.”

ChatGPT had accused Turley after Eugene Volokh, a professor at the University of California at Los Angeles School of Law, asked it to generate a list of law professors who had sexually harassed someone. Volokh is writing a law review article that considers whether the creators of ChatGPT could be sued for libel, he wrote in a pair of posts for the Volokh Conspiracy blog.

The Washington Post considered the possibility of lawsuits. One issue is whether OpenAI could avoid liability under Section 230 of the Communications Decency Act, which protects online publishers from suits based on third-party content. Another issue is whether a plaintiff could show reputational damage from a false assertion.

In an article for the Wall Street Journal, cartoonist Ted Rall considered whether he could sue after ChatGPT falsely claimed that he had been accused of plagiarism by another cartoonist with whom he had a “contentious” and “complicated” relationship.

Actually, Rall said, the other cartoonist is his friend, their relationship is neither contentious nor complicated, and no one has ever accused him of plagiarism. He spoke with experts about the possibility of a suit.

Laurence Tribe, a professor emeritus at Harvard Law School, told Rall that it shouldn’t matter for purposes of liability whether lies are generated by a human being or a chatbot.

But a defamation claim could be difficult for a public figure, who would have to show actual malice to recover, said RonNell Andersen Jones, a professor at the University of Utah S.J. Quinney College of Law.

“Some scholars have suggested that the remedy here resides more in a product-liability model than in a defamation model,” Jones told Rall.

When Volokh asked for feedback on the libel issue online, many people said ChatGPT’s assertions shouldn’t be treated as factual claims because they are the product of a predictive algorithm.

“I’ve seen analogies to Ouija boards, Boggle, ‘pulling Scrabble tiles from the bag one at a time,’ and a ‘typewriter (with or without an infinite supply of monkeys),’” he wrote.

“But I don’t think that’s right,” Volokh wrote. “In libel cases, the threshold ‘key inquiry is whether the challenged expression, however labeled by defendant, would reasonably appear to state or imply assertions of objective fact.’ OpenAI has touted ChatGPT as a reliable source of assertions of fact, not just as a source of entertaining nonsense. Its current and future business model rests entirely on ChatGPT’s credibility for producing reasonably accurate summaries of the facts.”
