This paper examines how the Correctional Offender Management Profiling for Alternative Sanctions (COMPAS), an AI-based recidivism risk assessment tool, may inadvertently perpetuate biases in the U.S. criminal justice system, particularly along racial, gender, and socioeconomic lines. Drawing on a comprehensive body of literature spanning criminology, sociology, psychology, and computer science, this review critically analyzes the socially biased data inputs and black-box algorithms underlying COMPAS and explores their influence on judicial decision-making. The literature suggests that historical inequities, over-policing of certain communities, and algorithmic opacity can produce systematically elevated risk scores for marginalized populations. These dynamics not only raise fairness concerns about the use of AI but also undermine trust in legal processes. Building on these multidisciplinary insights, the paper recommends improving data quality, enhancing algorithmic transparency through independent oversight, and clarifying the ethical responsibilities of human decision-makers. By contextualizing COMPAS within broader discussions of justice and technology, the analysis underscores the need for policies that mitigate AI-driven discrimination while leveraging the technology's potential benefits. Ultimately, it suggests that addressing current and future policy implications for equitable COMPAS use, and ensuring equity in AI-assisted legal assessments more broadly, is crucial to preserving public confidence in the criminal justice system.