Algorithmic scoring systems are decision-making aids that computationally analyze big data from the past to predict future outcomes. These tools are designed to constrain the misuse of professional expertise, but we know little about how they shape and are shaped by interactions that span professional boundaries. This article uses interviews to investigate how judges, pretrial officers (PTOs), public defenders, and prosecutors in four large US criminal courts used algorithmic risk assessment scores as boundary objects to navigate professional constraints in the pretrial release decision-making process. I show how these actors drew on situated understandings of their institutional roles to assess the legitimacy of risk scores as valid knowledge forms. Yet they strategically enacted these roles in pretrial hearings by situationally invoking risk scores to justify contradictory knowledge claims across different cases. PTOs were risk score producers who calculated them and defended their validity. Attorneys were risk score advocates who leveraged, ignored, and contested them to bolster adversarial arguments. Judges were risk score adjudicators who used them to anchor and confirm decisions. By positioning algorithmic scores as boundary objects that actors pragmatically use to enact roles and assert professional authority, I reveal how these scores co-constitute collaboration, governance, and punishment processes.