Artificial Intelligence & Robotics

Generative AI creates fake images that bring to life real stories from Australia's offshore detention camps



AI-generated images are shared with the public at an exhibition in Canberra, Australia. Photo courtesy of Maurice Blackburn.

The images of Australia’s offshore detention camps on the island country of Nauru and Manus Island in Papua New Guinea pack a punch. There is a shot of a teenage boy, his mouth sewn shut in protest. A close-up shows a razor blade in a blood-splattered sink. Another depicts self-immolation, a body face down on the gravel, consumed by flames.

“While I was living in the detention centre, I witnessed a man who was also a refugee kill himself by putting petrol on himself and burning himself to death,” a survivor recounts in a witness statement.

The statement is real. The photos are fake.

AI-generated, they are among 134 images Australian law firm Maurice Blackburn commissioned to chronicle what asylum-seekers saw after the Commonwealth of Australia sent them to the islands for processing. The project, “Exhibit A-i: The Refugee Account,” includes 32 witness statements drawn from 300 hours of interviews conducted for a now-defunct class action lawsuit. Both camps stemmed from the government’s controversial “Pacific Solution,” which denied asylum-seekers traveling by boat entry to the mainland. The Manus refugee processing center is now closed. The camp on Nauru remains open should the government need to use it, but in June the camp’s last refugee was flown to Australia.

The asylum-seekers’ litigation became moot after the Supreme Court in Australia ruled against a similar class action. Since court records are usually destroyed after seven years, according to lawyer Jennifer Kanis, leader of Maurice Blackburn’s social justice practice, her firm wanted to preserve the statements for posterity.

“The technology allowed us to bring to life stories that otherwise wouldn’t be brought to life,” Kanis says.

In April, the law firm unveiled the project at the Immigration Museum in Melbourne, along with a website and a hardcover book it hopes will influence policymakers and politicians. While the project has already gained media attention in Australia and has been shown to the country’s minister for immigration and members of Parliament, it also has led to a discussion about the ethical use of generative AI.

The technology has evolved to a point where it’s hard to tell what’s real and what’s fake. In April, German artist Boris Eldagsen fooled judges at the Sony World Photography Awards by submitting an AI-generated portrait of two women that won first prize. Earlier this year, amid rumors that former President Donald Trump was about to be indicted in a New York court, a series of fake images imagining how his arrest would unfold, including one of police officers wrestling him to the ground, went viral on Twitter. The furor sparked questions about the potential of AI images to incite violence or unrest.

The hardcover book of “Exhibit A-i: The Refugee Account.” Image courtesy of Maurice Blackburn.

‘Lasting imprints’

Although Maurice Blackburn is trying to use AI to sharpen its lens on injustice, the ethics of using the technology to illustrate witness statements are murky. People could see the images out of context or believe they are real. But for Kanis and the firm’s creative agency Howatson+Company, the tradeoff was worth it.

Howatson’s executive creative director, Gavin Chimes, knows images can make an outsized impact. After he signed on to the project, his mind turned to the photo of Tiananmen Square in Beijing on June 5, 1989, that shows a protester standing in front of a tank, and to the “Napalm Girl” photo of 9-year-old Phan Thị Kim Phúc, naked, badly burned and fleeing a napalm attack on the Vietnamese village of Trảng Bàng on June 8, 1972.

“Words alone aren’t enough to move hearts and minds,” Chimes says. “Our brains process images faster than words. We encode images into memories, and that’s what creates lasting imprints.”

Based on the witness statements, the creative agency’s designers used the AI platform Midjourney with licensed images from Shutterstock to generate preliminary images. Then they worked closely with survivors to fine-tune the images to make sure they reflected their experiences. They also enlisted Dhaka-born Australian photojournalist and reporter Mridula Amin to consult on the project.
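Midjourney is driven by text prompts and offers no public API, so that first step can only be approximated here. The sketch below uses an open Stable Diffusion checkpoint via the diffusers library as a stand-in; the model name, prompt and settings are illustrative assumptions, not the agency’s actual workflow, and the licensed Shutterstock reference imagery is not part of this toy example.

```python
# Illustrative sketch only: Midjourney has no public API, so this uses an
# open Stable Diffusion checkpoint through Hugging Face's diffusers library
# to show the same text-to-image step. The model, prompt and settings are
# invented examples, not the agency's actual workflow.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # assumed stand-in model
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# A prompt drafted from a witness statement would go here; designers would
# then regenerate and re-prompt until survivors confirmed the image
# matched their account.
prompt = "documentary-style photograph, dimly lit detention camp dormitory"
image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("preliminary_draft.png")
```

In practice, each draft would go back to the survivors for review, with the prompt revised and the image regenerated until it reflected what they remembered.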

“If we had just generated these images and not engaged with the survivors, this campaign would have felt very thin and would have felt less ethical,” Chimes says.

One of those survivors is “Arina,” an alleged victim of sexual assault who was detained on Nauru and now lives in Australia. Because of the nature of her allegations, she asked the Journal to use a pseudonym to protect her identity. Iranian-born and now in her late 30s, Arina spent 15 months on Nauru. The AI images made her story more vivid, but she says working on the project forced her to relive her trauma, and she had trouble sleeping during the consultation process.

“Sometimes I couldn’t even breathe properly,” she says.

The images undoubtedly have an impact that words alone can’t convey. Arina says she cried for two hours after seeing the first rough versions of the images, which were based on what she had witnessed at the camp.

“I remember thinking if this affected me this much, [we] needed to have those images go with this story,” she says.

Liar’s dividend

Hany Farid, a digital forensics expert and professor at the University of California at Berkeley, doesn’t deny the power of using AI imagery to illustrate human rights violations or other atrocities. But he says that anyone considering using generative AI to depict actual events should “tread lightly.” Using the tech to recreate witness accounts could allow detractors to claim real evidence and photographs are fake. Farid and other experts call this phenomenon “the liar’s dividend.”

“I don’t know how to do that in a way that doesn’t lead to downstream confusion and this world where anybody can claim anything is fake because we’re playing fast and loose with the photographic record,” Farid says in an interview.

The exhibit raises another unsettling question. What if officials used AI to depict a distorted and less harrowing version of camp conditions?

“You can see how it can be a powerful technology for storytelling, but the same technology can also be used to perpetuate lies,” Farid writes in an email. “More broadly, [when] generative AI begins to be used in these types of documentary settings, all forms of visual documentation may be called into question. In a world where anything can be manipulated, can anything be trusted?”

Kanis says “Exhibit A-i” makes clear that the images are made by AI. But when the Guardian newspaper republished images from the project in April, its captions credited the “photographs” to Maurice Blackburn and placed camera icons beneath each image, even though the images were identified as AI-generated. Those small details are misleading, according to Farid.

“There’s nothing on the image that shows it’s AI-generated. I think that’s a mistake,” he says.

In June, CNN also published a story about the exhibit and labeled the images as AI-generated. Farid favors that kind of labeling, along with digital watermarking. He notes that image-makers pushed the limits of manipulation with photo editors such as Photoshop until some publications eventually adopted standards for the software’s use, and he would like to see a similar framework applied to generative AI.
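As a rough illustration of the labeling Farid describes, the sketch below uses the Pillow library to stamp a visible “AI-generated” notice onto an image and to write a machine-readable disclosure into the file’s metadata; the file names are hypothetical. Real provenance schemes such as the C2PA content credentials standard go further, embedding cryptographically signed records of how an image was made.

```python
# Minimal sketch of visible and machine-readable AI labeling. File names
# are hypothetical; this is a toy illustration, not a provenance standard.
from PIL import Image, ImageDraw
from PIL.PngImagePlugin import PngInfo

img = Image.open("exhibit_image.png").convert("RGB")
draw = ImageDraw.Draw(img)
# Visible label in the corner, so the disclosure travels with the pixels.
draw.text((10, img.height - 24), "AI-generated image", fill="white")

# Machine-readable tag stored inside the PNG file itself.
meta = PngInfo()
meta.add_text("Disclosure", "AI-generated; not a photograph")
img.save("exhibit_image_labeled.png", pnginfo=meta)
```

A visible stamp can be cropped out and text metadata stripped, which is why Farid and others argue for standards-level watermarking rather than ad hoc labels.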

“In this particular instance, you may come out and say, ‘Look, we think the story is too important. We think we’ve done a reasonable job in ensuring these photos are reasonably accurate representations of what happened.’ But I think every situation is going to be a little bit different,” Farid says.

More than 3,000 people were taken to the detention camps, according to the Human Rights Law Centre. Some were resettled in other countries, including the U.S., but hundreds were repatriated. At least 14 people died at the camps. Although the last refugee left Nauru in June, 70 refugees and asylum-seekers remained in Papua New Guinea, according to an Oct. 7 Guardian article.

“As long as Nauru remains ‘open’ and refugees remain in limbo in PNG, the dark chapter of offshore detention will not be finally closed,” Ian Rintoul of the Refugee Action Coalition said in a June statement.

Kanis says that for the most part, journalists and photographers were barred from visiting the camps to document what was happening. That meant the decision to use AI to create the images was a statement in itself. Beyond that, she hopes the project will spur the government to change its policies.

“What we want to see is a reckoning for what happened. We want to have a permanent record of those stories in a way that is accessible so that no one can say, ‘I didn’t know what happened.’ The details of what happened should be shocking, but they shouldn’t be unknown,” she says.

Chimes, meanwhile, is excited by generative AI’s potential to highlight injustice.

“AI has been used comically. It’s been used flippantly. This campaign shows that AI can do so much more than that. It can be a powerful tool to give a voice to the voiceless and make the invisible visible. If done right, it can change perceptions and maybe policy as well,” he says.
