Trials & Litigation

Courts and lawyers struggle with the growing prevalence of deepfakes


Image from Shutterstock.com.

As a child custody battle unfolded behind the closed doors of a British courtroom, a woman said her husband was dangerous and that she had the recording to prove it.

Except, it turned out she didn’t.

The husband’s lawyer revealed that the woman, using widely available software and online tutorials, had doctored the audio to make it sound like his client, a Dubai resident, was making threats.

Byron James, an attorney with the firm Expatriate Law in Dubai, told the United Arab Emirates newspaper the National in February 2020 that his experts, by studying the metadata on the recording, showed that the mother had manipulated it. Under U.K. law, custody proceedings are confidential, but the National reports that the case took place at some point in 2019.

“If we hadn’t been able to challenge this piece of evidence then it would have negatively affected him and portrayed him as a violent and aggressive man,” James told the National. “It raises all sorts of questions about what sort of evidence can you actually rely on. Is there sufficient judicial training around the world to be able to identify evidence that has been manipulated in this manner?”

The audio in James’ case was a “cheapfake,” a less sophisticated variant of the artificial intelligence-generated synthetic videos, text and audio known as deepfakes. The case throws a spotlight on what could be an emerging challenge in trial courts. How will litigators and judges determine whether evidence is real or fake?

Deepfakes have seeped into our culture and politics but are most often found in pornography. According to a Deeptrace Labs report, The State of Deepfakes: Landscape, Threats, and Impact, 96% of deepfake videos online are pornographic.

As the technology grows in complexity, making it more difficult to spot fakes, attorneys and judges will have to decide how to manage deepfake evidence and authenticate it, says Riana Pfefferkorn, associate director of surveillance and cybersecurity at the Center for Internet and Society at Stanford Law School. She warns that deepfakes could erode trust in the justice system.

“My worry is that juries may be primed to come into the courtroom and be less ready to believe what they see and believe what they hear and will be more susceptible to claims that something is fake or not,” says Pfefferkorn.

Determining what’s real

A deepfake video uses machine learning to transform a person’s image so that it resembles someone else’s, and the technology can be used to manipulate people’s words and actions.

For years, Hollywood has been using computer-generated imagery to resurrect dead stars, age actors or make them look more youthful. A viral video morphed Tom Cruise’s likeness over actor and comedian Bill Hader’s face. Actor and director Jordan Peele warned of the dangers of deepfakes by lip-syncing audio and manipulating video of former President Barack Obama. Cheapfakes are less sophisticated but nevertheless just as impactful. A video of House Speaker Nancy Pelosi was slowed down so that she seemed to be slurring her words.

In their paper Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security, Robert Chesney and Danielle Citron describe how the technology could damage the public’s trust in institutions, including the justice system.

“One can readily imagine, in the current climate especially, a fake-but-viral video purporting to show FBI special agents discussing ways to abuse their authority to pursue a Trump family member. Conversely, we might see a fraudulent video of ICE officers speaking with racist language about immigrants or acting cruelly towards a detained child. Particularly where strong narratives of distrust already exist, provocative deep fakes will find a primed audience,” the paper states.

In the past decade, the courts have ruled several times on procedures for authenticating digital evidence, Pfefferkorn says.

In the 2010 case People v. Beckley, the California 2nd District Court of Appeal ruled that a MySpace image purporting to show a defendant’s girlfriend flashing a gang sign should not have been admitted, because neither a witness nor an expert authenticated it.

Ruling in a different criminal case, the California 1st District Court of Appeal disagreed with the approach taken by the 2nd District. The court relied on a California Supreme Court ruling issued four years after Beckley to find that eyewitness or expert testimony may not be necessary to authenticate such evidence.

More recently, a Colorado appeals court ruled that an audio recording of a man’s voicemail to his victim was admissible in a murder case, despite his claims that prosecutors did not authenticate it.

“Developments in computer technology and software enable almost any owner of a personal computer with the necessary knowledge and software to falsely edit recordings,” wrote Judge Michael Berger for the Colorado court. “But, the fact that the falsification of electronic recordings is always possible does not, in our view, justify restrictive rules of authentication that must be applied in every case when there is no colorable claim of alteration.”

Pfefferkorn says courts have shown that they can be “robust against generations and generations of fakes.” The photo-editing software Adobe Photoshop has been around for decades, and long before deepfakes, courts were handling everything from forgeries to doctored photocopies, she says.

“As long as there’s been evidence in court, there’s always been fakes, and the courts have come up with rules to deal with those as they come up, and are aware that there’s always the possibility that somebody is trying to hoodwink them,” Pfefferkorn says.

According to Jason Lichter, director of discovery services at Pepper Hamilton in New York City and member of the ABA’s E-Discovery and Digital Evidence Committee, deepfakes are “in many respects just the next iteration or an evolution of a trend or a risk.”

“That being said, if a picture’s worth a thousand words, a video or audio could be worth a million. Because of how much weight is given to a video by a factfinder, the risks associated with deepfakes are that much more profound,” Lichter says.

Authenticating at capture

As the technology evolves and becomes cheaper and more widespread, it could end up in the hands of anyone with access to a smartphone. Hany Farid, a digital forensics expert and professor at the University of California at Berkeley, admits that the work of detecting deepfakes is a “cat-and-mouse” endeavor.

“I will probably never be able to stop the state-sponsored actors, the Hollywood studios, the highly talented people and forgers from manipulating content,” Farid says. “I’ll absolutely take it out of the hands of somebody who’s downloading some code from the internet and running it on their laptop and then trying to fool the system.”

Farid, who has worked in digital forensics for 20 years, says that there are “passive” and “active” approaches to looking at deepfake evidence.

He has taken the passive approach for much of his career. Among other techniques, he analyzes content and looks for inconsistencies that the average person might not see, such as whether lighting or shadows are consistent across a whole image.
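
One well-known passive check, offered here purely as an illustration and not as a description of Farid’s own toolkit, is error level analysis: re-compress a JPEG at a known quality and look for regions whose compression error differs from the rest of the frame, which can hint that they were pasted in. A minimal sketch using the Pillow library might look like this (the function name and default quality setting are assumptions):

```python
import io

from PIL import Image, ImageChops


def error_level_analysis(path, quality=90):
    """Re-save the image at a known JPEG quality and diff it against the
    original. Spliced or re-compressed regions often show a different
    error level than the rest of the picture."""
    original = Image.open(path).convert("RGB")
    buffer = io.BytesIO()
    original.save(buffer, "JPEG", quality=quality)
    buffer.seek(0)
    resaved = Image.open(buffer)
    diff = ImageChops.difference(original, resaved)
    # Scale the faint per-pixel differences up so they are visible to the eye.
    max_diff = max(band_max for _, band_max in diff.getextrema()) or 1
    return diff.point(lambda value: value * 255.0 / max_diff)


# Example: error_level_analysis("exhibit_photo.jpg").show()
```

A bright, uneven patch in the output is not proof of tampering, only a reason to look closer; genuine images with mixed compression histories can trigger the same signal.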

The second approach involves watermarking recordings at the time they are captured. Farid points to the digital inspection platform Truepic as an example of a company that has adopted the active approach. (He says he is a paid consultant for the company.)

Truepic verifies the data directly from a camera’s image sensors and writes a unique digital fingerprint to the blockchain, a decentralized public ledger. No one, including the company, can alter the signature. Farid says that images authenticated in this way can be investigated to determine if the content is consistent with the digital signature captured at the time of recording. If it’s not, something might be up.
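
In outline, an authenticate-at-capture workflow of this kind amounts to hashing the sensor output at the moment of recording and checking that hash again when the file is offered as evidence. The sketch below is a simplified illustration, not Truepic’s implementation; the function names and record fields are hypothetical, and a real system would also cryptographically sign the record on the device and anchor it to a ledger, as the article describes.

```python
import hashlib
import time


def fingerprint_at_capture(image_bytes: bytes, device_id: str) -> dict:
    """Hash the raw capture and bundle it with minimal provenance data.
    In practice this record would be signed by the capturing device and
    written to an append-only ledger so it cannot be quietly altered."""
    return {
        "sha256": hashlib.sha256(image_bytes).hexdigest(),
        "device_id": device_id,
        "captured_at": time.time(),
    }


def matches_fingerprint(evidence_bytes: bytes, record: dict) -> bool:
    """Re-hash the file offered as evidence and compare it with the
    fingerprint recorded at capture; any edit changes the hash."""
    return hashlib.sha256(evidence_bytes).hexdigest() == record["sha256"]
```

If the hashes disagree, the file presented in court is not bit-for-bit the file the sensor produced, which is exactly the “something might be up” signal Farid describes.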

He adds that the digital tool is not foolproof and that safeguards are needed to deter hackers. He says, however, that stamping a digital signature on a recording would “go a long way to authenticating content.”

“I’m not naive. I don’t think you’re going to get 7 billion people to use this app. But for example, you certainly could have every single body cam do this. You can have police investigators when they photograph a scene or do surveillance do this. You could have CCTV do this. Particularly in controlled ecosystems, you could authenticate at the point of recording,” Farid says.

The liar’s dividend

There is another consequence of deepfake technology that could be overlooked: Its mere existence makes it easier to dismiss genuine evidence as fake. Chesney and Citron call this the “liar’s dividend.”

“Deepfakes will make it easier for liars to deny the truth in distinct ways. A person accused of having said or done something might create doubt about the accusation by using altered video or audio evidence that appears to contradict the claim,” their paper states.

John Simek, vice president of the digital forensics and cybersecurity firm Sensei Enterprises in Fairfax, Virginia, compares the threat of deepfakes to the “CSI effect,” which led jurors to rely heavily on forensic science because of the police procedures they had seen depicted on TV. Oftentimes, jurors expected all of the DNA testing, crime scene reenactments and technology they saw on such shows in actual court proceedings, and if they didn’t get that, they’d acquit. He says the burden could fall on lawyers to prove that video or audio evidence is real. That could drive up the costs of litigation if attorneys have to hire experts or use the latest AI to detect fakes.

“I could surely see jurors sitting there going, ‘The other side says that it’s fake; how come you didn’t bring an expert here to say that it’s authentic?’” Simek says.

Henry Ajder, the co-author of The State of Deepfakes report for the Amsterdam-based Deeptrace Labs, says there’s another problem: There aren’t enough digital forensic experts to go around. Ajder says that cements the need for machine learning tools, like those Deeptrace has developed to detect deepfakes.

“As the technology for generating synthetic media and deepfakes increases and becomes more accessible, the number of human experts who could rule with authority on whether a piece of media is real or not has not,” Ajder says.

A deepfake future

The British child custody case could be a sign of what’s to come. Simek expects more cheapfake and deepfake cases in the family courts.

“It’s fairly easy to fake audio, and especially if you’re in a spousal arrangement, you probably have a lot of sample to start with, and you’re full of emotion,” Simek says. “I think that’s where it’s going to start.”

In an article for the Washington State Bar Association, Pfefferkorn recommends attorneys prepare for deepfake evidence by budgeting for digital forensic experts and witnesses. She says lawyers should know enough about deepfake technology to be able to spot outward signs that the evidence has been tampered with. There are ethical concerns if a client is pushing an attorney to use suspect evidence, she adds.

“You might still need to do your homework and do your due diligence with respect to the evidence that your client brings to you before you bring it to court,” Pfefferkorn says. “If it’s a smoking gun that seems too good to be true, maybe it is.”
