Individual Submission Summary

Predictive AI in Criminal Justice: Beyond Poor Algorithmic Performance

Fri, September 5, 3:30 to 4:45pm, Deree | Classrooms, DC 607

Abstract

Predictive AI tools can bring about injustice in institutional decision-making, including criminal punishment. Objections to policies that use them are often framed in terms of poor algorithmic performance: how accurately the algorithm predicts outcomes for the population overall, and how it distributes the risk of error across that population. For instance, it is often argued that AI tools render inaccurate decisions for certain groups, often those who are otherwise marginalised, because of unrepresentative or biased data.

Grounded in decolonial theory and critical AI studies, this paper argues that there is more to AI justice than eliminating disparities in predictive performance. In particular, using socio-economic variables to generate the predictions that allocate the burdens of criminal justice can deny those affected by these decisions an adequate chance to choose a path that does not result in criminal punishment. In turn, this can reinforce stigmatising differences in status and perpetuate racial essentialism, leading to the systematic exclusion of historically marginalised and oppressed communities from critical social, economic, and political opportunities. To deliver fair AI decision-making policies, we must exclude socio-economic factors from the predictive variables that inform critical decisions within criminal justice.

Author