Trials & Litigation

Misuse of AI to create sexually explicit, harassing content prompts workplace lawsuits

Employers are facing workplace harassment lawsuits over misuse of artificial intelligence to create sexually explicit and fake content. (Image from Shutterstock)

A new wave of lawsuits over the misuse of artificial intelligence to generate fake, sexually explicit or harassing content is creating new forms of workplace disputes that could pose significant liability for employers, according to an investigation by Bloomberg Law.

The evolving technology has already led to suits, including cases brought by a Washington State Patrol trooper and a Nashville, Tennessee, TV meteorologist—both of whom claim that they were targeted in demeaning or sexualized AI-generated images that their employers did not adequately address.

“The advent of deepfakes sort of presents employers with a whole new frontier of challenges,” Robert T. Szyba, a partner at Seyfarth Shaw, told Bloomberg Law.

The story notes that doctored images, videos or audio recordings targeting an employee based on gender or other protected traits could give rise to workplace harassment or discrimination claims.