Generative AI systems are rapidly moving from experimental prototypes to embedded components of government decision-making infrastructure. For instance, California passed a law mandating that all prosecutors in the state implement a new review procedure called “race-blind charging,” in which prosecutors review case documents with race-related information redacted, allowing them to make a race-blind decision about whether to file or decline a case. To make this intervention feasible, the state encouraged prosecutors to use AI-based redaction, but emphasized that any such system must be validated, citing widespread concerns about factual errors and inconsistent reasoning when AI is used in high-stakes legal contexts. We designed and tested one such system, which uses generative AI to automatically redact race-related information from police reports. Our system is now used in over 60% of California prosecutor offices, covering nearly 18 million people. In the first public validation of generative AI used for race-related redaction, we assessed algorithmic performance on a corpus of ~10,000 police reports collected from 253 jurisdictions across nearly every U.S. state. We present the results of this validation, demonstrating that our algorithm reliably removes the race-related indicators the law requires to be redacted, reduces the ability to predict an arrestee’s race from redacted narratives, and performs at the top of its class among existing alternatives. This work demonstrates the feasibility of race-blind charging at scale, while highlighting the promise of AI tools to support decision-making when they are appropriately validated for high-stakes legal contexts.
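One of the validation criteria described above, reducing the ability to predict an arrestee’s race from redacted narratives, can be illustrated with a simple classifier probe: if redaction works, a model trained to predict race from narrative text should fall toward chance accuracy on redacted reports. The sketch below is a hypothetical illustration of that idea using scikit-learn; it is not the authors’ evaluation code, and the function name, features, and data variables are assumptions.

```python
# Hypothetical sketch of a prediction-based redaction check.
# Not the authors' pipeline: corpus, features, and classifier choices
# here are illustrative assumptions only.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline


def race_predictability(narratives: list[str], race_labels: list[str]) -> float:
    """Cross-validated accuracy of predicting race from narrative text."""
    model = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2), min_df=2),  # word/bigram features
        LogisticRegression(max_iter=1000),
    )
    return cross_val_score(model, narratives, race_labels, cv=5).mean()


# Hypothetical usage: compare predictability before and after redaction.
# A drop toward chance (1 / number of race categories) suggests the
# redaction removed race-identifying signal from the narratives.
# acc_raw = race_predictability(raw_narratives, labels)
# acc_redacted = race_predictability(redacted_narratives, labels)
```

A probe like this complements the rule-based check that every legally required indicator was removed: even when explicit indicators are gone, residual text could still leak race indirectly, and a predictability gap between raw and redacted narratives is one way to quantify that leakage.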