The National Pulse

Does artificial intelligence discriminate in child neglect case assessments?


Andrew and Lauren Hackney and their dog, Scrappy. AP Photo/Jessi Wardarski.

When Andrew and Lauren Hackney followed their doctor’s advice in caring for their baby, the Pennsylvania parents never dreamed it would lead to losing custody of their 7-month-old daughter—or that their heartbreak would be at the center of a U.S. Department of Justice investigation.

In fall 2021, their daughter started rejecting her bottles. Baby formula was also in short supply at the time, and the parents crossed state lines in search of more, ultimately switching brands.

When the baby still wasn’t eating sufficiently, they called their pediatrician. After trying different suggestions, they called again when they noticed her diapers weren’t wet in the mornings, and she wasn’t producing tears when she cried. The doctor advised them to take her to the emergency room.

After being admitted, the girl resumed eating and grew stronger, and the Hackneys prepared to take her home. To their shock, the Allegheny County Office of Children, Youth and Families instead took her into custody. The Hackneys believe hospital staff contacted child services over the baby’s initial dehydration and weight loss.

Given the extreme nature of the agency’s action and their own diligent parenting, they wondered whether their disabilities were a factor in the decision to remove their child. Lauren Hackney has attention-deficit/hyperactivity disorder that can cause memory loss, and Andrew Hackney has lasting damage from a stroke.

The office filed a dependency case in Allegheny County Family Court in November 2021, and Andrew and Lauren were assigned attorneys. In September 2022, Andrew retained family law expert Robin Frank of Raphael, Ramsden & Behers in Pittsburgh. Their goal, Frank says, was simply to regain custody of their daughter.

AI at issue

They learned the agency employs an artificial intelligence tool called the Allegheny Family Screening Tool, which helps assess child welfare risk in cases of potential neglect. According to the American Civil Liberties Union, family demographics as well as prior involvement with child welfare, the criminal justice system and behavioral health systems are factored into a score, which intake staff use to decide whether to screen the family in for further investigation. The risk being predicted is whether the child would require further involvement by the agency within two years.
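
In rough terms, a tool of this kind works like a weighted scoring function over historical case features. The sketch below is purely illustrative, assuming invented feature names, weights and a screening threshold; it is not the Allegheny Family Screening Tool’s actual model or inputs.

```python
# Purely illustrative sketch of a predictive risk-screening score of the kind
# described above. Feature names, weights and the threshold are invented;
# they are NOT the Allegheny Family Screening Tool's actual inputs or logic.
from dataclasses import dataclass


@dataclass
class Referral:
    prior_child_welfare_referrals: int    # prior involvement with the agency
    prior_criminal_justice_contacts: int  # prior justice-system involvement
    behavioral_health_records: int        # prior behavioral health system use
    children_in_household: int            # a demographic-style feature


# Weights a model of this kind might learn from the agency's past cases.
WEIGHTS = {
    "prior_child_welfare_referrals": 0.9,
    "prior_criminal_justice_contacts": 0.6,
    "behavioral_health_records": 0.5,
    "children_in_household": 0.1,
}


def risk_score(referral: Referral) -> int:
    """Map a referral to a bounded score (the AFST reportedly uses 1 to 20).

    The predicted outcome is future agency involvement within two years,
    not a direct measurement of harm to the child.
    """
    raw = sum(weight * getattr(referral, name) for name, weight in WEIGHTS.items())
    return max(1, min(20, round(raw * 2)))


def screen_in(referral: Referral, threshold: int = 15) -> bool:
    """Intake staff use the score to decide whether to investigate further."""
    return risk_score(referral) >= threshold
```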

Frank says when the Hackneys asked their caseworker what they could do to get their daughter back, “the caseworker said, ‘I’m sorry. Your case is high-risk. The machine has labeled you as high-risk.’ And they were like, ‘What? What are you talking about?’”

As the Hackney case unfolded, scrutiny of state agencies’ use of AI in child welfare cases was growing. Reports by the ACLU, research by a Carnegie Mellon University team and extensive reporting by the Associated Press exposed flaws in the AI tools. Word spread, prompting the Department of Justice to consider a formal investigation.

DOJ attorneys contacted Frank to discuss her experience in cases involving disabled family members. After the discussion, she filed a complaint with the Civil Rights Division on behalf of the Hackneys, alleging unlawful disability discrimination. She filed or assisted with similar claims for two other families in Allegheny County that involved mental health conditions and addiction recovery, both considered disabilities.

According to Frank, the Justice Department officially launched an investigation in February. The DOJ declined to comment on the investigation.

Frank says at the heart of the probe is whether AI tools are leading to discrimination against disabled populations in violation of the Americans with Disabilities Act.

The child welfare system, according to a recent ACLU study, long has been “plagued by inequities based on race, gender, income and disability.”

Arriving in 2016 was what looked like a saving grace: algorithmic AI tools that would assess risk, reduce the negative impact of unnecessary investigations, lessen the burden on agency workers and ostensibly inject an objective analysis.

At the forefront of child welfare screening predictive risk models are Rhema Vaithianathan, director of the Centre for Social Data Analytics at Auckland University of Technology in New Zealand; and Emily Putnam-Hornstein, professor for children in need and director of policy practice at the University of North Carolina at Chapel Hill.

The pair created the most well-known AI in the field: the Allegheny Family Screening Tool, launched in Allegheny County, Pennsylvania. Others soon followed, and now jurisdictions in at least 11 states have adopted predictive algorithms, including cities and counties in Colorado, California, New York and New Jersey.

Faulty data or methods?

Critics such as University of Oklahoma College of Law professor Robyn Powell, a leading authority on the rights of parents with disabilities; and Sarah Lorr, co-director of the Disability and Civil Rights Clinic at Brooklyn Law School, say the crux of the problem lies in the data points used to build the algorithms and the risk they typically assess. The Allegheny Family Screening Tool generates a risk score by checking whether certain characteristics of the agency’s past cases are present in the current maltreatment allegations. Some of those criteria, including disability-related factors, are explicitly disallowed, they say, while long-standing racial biases are implicitly built in.

“The ADA explicitly says that state and local government entities cannot discriminate based on disability, and within that requirement is the idea that you cannot use screening tools or eligibility criteria that would [point to] people with disabilities,” Powell says.

Putnam-Hornstein disagrees. In an email to the ABA Journal, she wrote that “information on parental mental health and substance use are risk factor variables in the tools … We know of no child welfare risk assessment tools—either computerized or manual—that do not consider these as important factors relevant to assessing risk and safety.”

Sarah Morris, a Denver attorney who works with the Colorado Office of Respondent Parents’ Counsel, calls it a feedback loop. “It’s processing its own process: This is how likely this biased system is to remove this child in the future. All this does is launder human biases through the mirage of some kind of transparent nonbiased machine calculation,” she says.
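
Morris’ point can be made concrete with a toy example: when a model is trained on labels that record the agency’s own past actions, a new family’s score reflects how the agency has historically treated similar families. The snippet below is a hypothetical illustration with invented data and a generic logistic regression, not the county’s actual methodology.

```python
# Toy illustration of the feedback loop Morris describes. The data, features
# and model are invented; this is not the county's actual methodology.
from sklearn.linear_model import LogisticRegression

# Features from historical case records (toy values):
# [prior referrals, criminal justice contacts, behavioral health records]
X_history = [
    [3, 1, 2],
    [0, 0, 1],
    [4, 2, 3],
    [1, 0, 0],
]

# Labels record what the AGENCY did (became involved again within two years),
# not what the child actually needed, so any bias in those past decisions
# becomes the pattern the model learns to reproduce.
y_agency_acted_again = [1, 0, 1, 0]

model = LogisticRegression().fit(X_history, y_agency_acted_again)

# A new referral is scored against the agency's own historical behavior,
# feeding past screening patterns, biased or not, forward into new decisions.
new_referral = [[2, 1, 1]]
print(model.predict_proba(new_referral)[0][1])  # estimated "risk" probability
```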

Lorr agrees an algorithm can falsely give an impression of scientifically accurate results. “But in fact, the algorithms use proxies for disability directly and really explicitly and include those as risk factors.”

Lack of transparency

ACLU senior staff attorney Anjana Samant has been investigating government use of algorithms related to women’s and family rights. She says the organization does not know of any agency that tells investigated parents or their counsel what their risk score is.

Richard Wexler, executive director of the National Coalition for Child Protection Reform, says, “They can give you a list of what goes into the data points, but they don’t tell you how much each data point weighs.”

Silicon Valley Law Group attorney Stephen Wu, who chairs the ABA Artificial Intelligence & Robotics National Institute, says judgments about human beings are complicated.

“It’s really hard to capture that all into software,” he says. “Our software will get better over time. But at the moment, there are so many things to consider that having an AI score should not be the beginning and end of an analysis.”

In defense of the tools, Putnam-Hornstein points to human oversight.

“The score is one piece of many in a comprehensive review that a caseworker and their supervisor consider when determining next steps,” she says. “The tool is advisory only.”

Most jurisdictions are open about their adoption of AI tools: Allegheny and Los Angeles counties have materials documenting studies and ongoing review, and Colorado counties have made information available. Oregon, however, dropped its Safety at Screening Tool in 2022, a move Oregon Sen. Ron Wyden applauds.

“I’ve long made the case that the safety and well-being of children and families cannot be left to untested algorithms—especially when racial and other biases can so easily get baked into a system,” Wyden says.

Meanwhile, almost two years after their dependency case began in 2021, the Hackneys still await the return of their child. In July, a judge agreed to continue to allow the Hackneys to see their daughter twice a week, giving them four supervised and three unsupervised hours. “It’s structured so that the unsupervised time is in 30-to-45-minute increments before and after the two-hour block of supervised time,” Frank says.

The next hearing, originally set for October, was continued until January, pending a further evaluation from the court-appointed psychologist, although the Hackneys were granted one additional hour of unsupervised time with their daughter.

“It’s just really tough for them to fathom how they went from raising a newborn for seven months to having those kind of limitations,” Frank says. “They do everything that they can to make each moment count.”

This story was originally published in the December 2023-January 2024 issue of the ABA Journal under the headline: “Holes in the Screen? Lawyers are questioning whether artificial intelligence discriminates in child neglect case assessments.”


Laurel-Ann Dooley is an Atlanta-based freelance journalist who frequently covers the legal field. She is a former practicing attorney who specialized in complex litigation and also served pro bono as a guardian ad litem.
