We examine documented incidents from the AI Incident Database to assess patterns of algorithmic failures, biases, and unintended consequences in criminal justice applications. We find concerning trends in predictive policing systems, risk assessment tools, and facial recognition technologies deployed across jurisdictions worldwide. Key findings highlight persistent algorithmic biases affecting marginalised communities and those already carrying experiences of trauma, as well as accountability gaps when systems produce harmful outcomes. Notable case studies demonstrate how seemingly technical failures often manifest as structural injustices that reinforce and reproduce existing disparities. The data suggest inadequate validation procedures, minimal community involvement in adoption decisions, and insufficient system transparency to support meaningful oversight. Our research indicates that jurisdictions implementing these technologies often lack robust auditing protocols and clear remediation pathways for affected individuals. We propose a framework for critically assessing AI use in criminal justice systems, emphasising where AI can usefully increase efficiency while also reducing bias and avoiding re-traumatisation. These findings contribute to a more nuanced understanding of the practical challenges of deploying algorithmic systems in contexts where fundamental liberties are at stake.