
Why Oregon stopped using AI to monitor child welfare

  • Louise Matsakis
9/13/2022

And why states across the country are doing the same.

Artificial intelligence is now used to make a wide range of important decisions, from screening job applicants to deciding whether students are cheating on exams. The problem is that many of these programs have been shown to be biased. Because they’re trained on data produced by humans, algorithms often reflect and amplify the structural inequalities already present in society.

The latest example came to light in June, when Oregon officials announced they would stop using an AI that helped determine whether families should be investigated by child protective services for abuse and neglect. After an “extensive analysis,” the Oregon Department of Human Services said it would abandon the algorithm to reduce disparities in how families are flagged. It will be replaced by a new system designed to make more racially equitable decisions.

Oregon officials did not disclose what specific issues, if any, they discovered with the algorithm. But the move to abandon it came just weeks after the Associated Press published a damning investigation into another tool used in Allegheny County, Pennsylvania, which had originally inspired the one in Oregon. The AP reported that it disproportionately flagged Black children for mandatory neglect inquiries.

Pennsylvania and Oregon are not alone. Across the country, local or state welfare agencies in at least 26 states and Washington, D.C. have considered deploying predictive analytics tools, and 11 have already done so, according to a report published by the American Civil Liberties Union last year.

The non-profit civil rights organization found that, despite the growing popularity of the algorithms, “few families or advocates have heard about them, much less provided meaningful input into their formulation, implementation, or oversight.”

In Pennsylvania, the Allegheny Family Screening Tool is used to assess incidents of potential neglect reported to the county’s child protection hotline. It analyzes government data collected from law enforcement agencies, Medicaid, and other sources, and then calculates a score from 1 to 20. The higher the score, the more likely it is that a child will be placed into foster care within the next two years. Social workers can decide whether they agree with the tool’s assessment and can override it.
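To make the workflow concrete: a screening model of this kind produces a numeric risk score, and a human screener then accepts or overrides its recommendation. The sketch below is purely illustrative; the Allegheny tool’s actual features, weights, and thresholds are not public, and every name and number here is hypothetical.

```python
# Illustrative sketch only: the real Allegheny Family Screening Tool is not public.
# Feature names, weights, thresholds, and the cutoff below are hypothetical.

def screening_score(record: dict) -> int:
    """Map a referral record to a 1-20 risk score (toy stand-in for the real model)."""
    risk = 0.0
    # Hypothetical features drawn from linked government data sources.
    risk += 2.0 * record.get("prior_referrals", 0)
    risk += 3.0 if record.get("prior_foster_placement") else 0.0
    risk += 1.5 * record.get("caregiver_age_under_21", 0)
    # Clamp into the tool's 1-20 range.
    return max(1, min(20, round(risk) + 1))

def screen_referral(record: dict, worker_overrides: bool, worker_decision: str) -> str:
    """Combine the algorithmic score with the call screener's judgment."""
    score = screening_score(record)
    algo_decision = "investigate" if score >= 15 else "screen_out"  # hypothetical cutoff
    # Workers can disagree with the tool and override its recommendation.
    return worker_decision if worker_overrides else algo_decision

if __name__ == "__main__":
    referral = {"prior_referrals": 3, "prior_foster_placement": True}
    print(screen_referral(referral, worker_overrides=False, worker_decision="screen_out"))
```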

Proponents of the AI argue that it can help counteract the personal biases of call center workers who field neglect and abuse complaints. But researchers from Carnegie Mellon University found that from August 2016 to May 2018, the tool recommended that 32.5% of Black children reported as neglected be subject to a mandatory investigation, compared to only 20.8% of white children. (For two years, the tool also suffered from a technical glitch that caused some children to receive scores that were lower or higher than they should have been.)

The CMU researchers found that social workers in Allegheny County were able to manually counteract disparities produced by the algorithm. After the AI was deployed, there wasn’t an increase in the number of Black families investigated for child neglect.

Oregon began implementing a similar algorithm in 2018. It estimated the probability of two separate outcomes: the likelihood that a child would show up in another abuse report within the next two years, and the likelihood that they would be removed from their home within the next two years.

The tool generated scores from 1 to 4 using “more than a hundred different variables,” according to a previously unreported document from the Oregon Department of Human Services. The document has since been removed from the agency’s website, but a copy remains preserved on the Internet Archive.

The document noted that a “fairness correction” had been built into the system to account for biases related to race and ethnicity. But despite that safeguard, Oregon officials said they would phase out the AI by the end of June.
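The document does not describe how the “fairness correction” worked. One common technique for this kind of adjustment is to calibrate raw model probabilities into score bands separately for each demographic group; the sketch below shows only that general idea, with hypothetical groups and cut points, not Oregon’s actual method.

```python
# Illustrative only: Oregon has not published how its "fairness correction" worked.
# This sketches one common approach, per-group score calibration, with made-up numbers.

from bisect import bisect_right

# Hypothetical per-group cut points that convert a raw model probability into the
# tool's 1-4 score, chosen so each score band carries similar observed risk across groups.
GROUP_CUTPOINTS = {
    "group_a": [0.10, 0.25, 0.50],
    "group_b": [0.08, 0.20, 0.45],
}

def corrected_score(raw_probability: float, group: str) -> int:
    """Map a raw risk probability to a 1-4 score using group-specific cut points."""
    cuts = GROUP_CUTPOINTS[group]
    return 1 + bisect_right(cuts, raw_probability)

if __name__ == "__main__":
    # The same raw probability can land in different score bands after calibration.
    print(corrected_score(0.22, "group_a"))  # -> 2
    print(corrected_score(0.22, "group_b"))  # -> 3
```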

“Decisions about what should happen to children and families are far too important to be made by untested algorithms,” Oregon Senator Ron Wyden tweeted at the time.