Artificial intelligence (AI) tools are “prediction machines.” Their supposedly impressive capacities to predict recidivism, creditworthiness, honesty, and fraud are central to their adoption in public and private decision-making. Yet sociolegal scholarship approaches AI's predictions as though they were new to law. They are not. AI is a window into a problem with which many areas of law have struggled for decades or more, a problem this project identifies and names: historical data dilemmas. The law faces a historical data dilemma whenever it uses information about the past to produce probabilistic knowledge about the future, despite the risk that such information may, for one reason or another, be unfair to use as a basis for prediction. The law, it turns out, was a prediction machine long before AI. Drawing on five case studies in which public and private law use different types of historical information to predict the future behavior of individuals or government actors, this project excavates the law’s approach to historical data dilemmas across tort law, evidence law, family law, criminal law, bankruptcy law, election law, and the law of standing, revealing sociolegal lessons for how law can do prediction better, with AI or without it. Among other things, the case studies suggest that the law is engaged in an ongoing, sometimes collaborative, sometimes competitive process of social construction that treats prediction as necessary for the production of knowledge yet as posing vexing sociolegal questions about prejudice. The project's goals are to highlight prediction within the law, create a taxonomy of the law’s approaches to prediction, critique those approaches, and apply the case studies’ lessons to AI decision-making.