Every year, more than 4 million referrals are made to child protection agencies across the US. Call screening is left to each jurisdiction's local practices and policies, potentially leading to large variation in how referrals are treated across the country. Although access to linked administrative data is increasing, it is difficult for workers to make systematic use of historical information about all the children and adults named in a single referral call. Jurisdictions around the country are thus increasingly turning to predictive modeling to help distill this rich information. The end result is typically a single risk score reflecting the likelihood of a near-term adverse event. Yet the use of predictive analytics in child welfare remains highly contentious. There is concern that some communities, such as those in poverty or from particular racial and ethnic groups, will be disadvantaged by the reliance on government administrative data. In this talk, I will describe some of the work we have done, both in the lab and in the community, to develop, deploy, and evaluate a prediction tool currently in use in the Allegheny County Office of Children, Youth and Families.
● Counterfactual risk assessment, evaluation, and fairness
● Toward algorithmic accountability in public services
● Decisions in the presence of erroneous algorithmic scores
● Excerpt from Virginia Eubanks' Automating Inequality