Growing calls for accountability in the US criminal legal system have led to the empowerment and enrollment of nonstate civilian data science experts to institutionalize state punishment reforms variously described as "data-driven," "evidence-based," and "smart on crime." In this presentation, I examine how these nontraditional penal actors reproduce inequalities in the punishment process. Drawing on 12 interviews, I show how academic and industry data science experts who produce and evaluate pretrial risk assessments distance themselves from undesirable punishment outcomes. Risk assessment analysts measured their professional success in terms of the accuracy of their predictions, and they dismissed evidence of racism and classism in predictive modeling in two ways. First, they asserted that predictive parity was the only legitimate way to measure whether their tools yielded fair outcomes, thereby obscuring how existing inequalities, such as racist overpolicing, contaminate their models. Second, they positioned risk assessment data as inherently neutral and objective in contrast to police officers, prosecutors, and judges, to whom they attributed primary responsibility for punishment outcomes. These findings suggest that risk assessment analysts are punishment entrepreneurs who neutralize and legitimize the circulation of racist and classist data in the process of punishment.