
A systematic review of sophisticated predictive and prescriptive analytics in child welfare : accuracy, equity and bias / Seventy F. Hall, Melanie Sage, Carol F. Scott and Kenneth Joseph

By: Hall, Seventy F.
Contributor(s): Sage, Melanie | Scott, Carol F. | Joseph, Kenneth.
Material type: Article
Series: Child and Adolescent Social Work Journal
Publisher: Springer, 2023
Subject(s): CHILD ABUSE | CHILD PROTECTION | CHILD WELFARE | DATA ANALYSIS | ETHICS | PREDICTIVE RISK MODELLING | SOCIAL WORK PRACTICE | SYSTEMATIC REVIEWS | INTERNATIONAL | UNITED STATES | THE NETHERLANDS | NEW ZEALAND
Online resources: DOI: 10.1007/s10560-023-00931-2
In: Child and Adolescent Social Work Journal, 2023, First published online, 23 May 2023
Summary: Child welfare agencies increasingly use machine learning models to predict outcomes and inform decisions. These tools are intended to increase accuracy and fairness but can also amplify bias. This systematic review explores how researchers addressed ethics, equity, bias, and model performance in their design and evaluation of predictive and prescriptive algorithms in child welfare. We searched EBSCO databases, Google Scholar, and reference lists for journal articles, conference papers, dissertations, and book chapters published between January 2010 and March 2020. Sources must have reported on the use of algorithms to predict child welfare-related outcomes and either suggested prescriptive responses or applied their models to decision-making contexts. We calculated descriptive statistics and conducted Mann-Whitney U tests and Spearman's rank correlations to summarize and synthesize findings. Of 15 articles, fewer than half considered ethics, equity, or bias or engaged participatory design principles as part of model development/evaluation. Only one-third involved cross-disciplinary teams. Model performance was positively associated with the number of algorithms tested and sample size. No other statistical tests were significant. Interest in algorithmic decision-making in child welfare is growing, yet there remains no gold standard for ameliorating bias, inequity, and other ethics concerns. Our review demonstrates that these efforts are not being reported consistently in the literature and that a uniform reporting protocol may be needed to guide research. In the meantime, computer scientists might collaborate with content experts and stakeholders to ensure they account for the practical implications of using algorithms in child welfare settings. (Authors' abstract). New Zealand projects are included in this review. Record #8199
No physical items for this record
