Individual Submission Summary

Algorithmic Scores as Boundary Objects: Risk Assessments and Professional Authority in US Pretrial Hearings

Mon, August 11, 10:00 to 11:00am, West Tower, Hyatt Regency Chicago, Floor: Ballroom Level/Gold, Regency B

Abstract

Algorithmic scoring systems are decision-making aids that computationally analyze big data from the past to predict future outcomes. These tools are designed to constrain misuse of professional expertise, but we know little about how they shape and are shaped by interactions that span professional boundaries. This article uses interviews to investigate how judges, pretrial officers (PTOs), public defenders, and prosecutors in four large US criminal courts used algorithmic risk assessment scores as boundary objects to navigate professional constraints in the pretrial release decision-making process. I show how these actors drew on situated understandings of their institutional roles to assess the legitimacy of risk scores as valid knowledge forms. Yet they strategically enacted these roles in pretrial hearings by situationally invoking risk scores to justify contradictory knowledge claims across different cases. PTOs were risk score producers who calculated the scores and defended their validity. Attorneys were risk score advocates who leveraged, ignored, and contested the scores to bolster adversarial arguments. Judges were risk score adjudicators who used the scores to anchor and confirm decisions. By positioning algorithmic scores as boundary objects that actors pragmatically use to enact roles and assert professional authority, I reveal how these scores co-constitute collaboration, governance, and punishment processes.

Author