Individual Submission Summary
Using Instruction-Tuned Large Language Models to Identify Indicators of Vulnerability in Police Incident Narratives

Wed, Nov 12, 9:30 to 10:50am, Marquis Salon 15 - M2

Abstract

This study investigates how large language models (LLMs) might support qualitative analysis in criminological research. Police officers and despatch handlers routinely collect unstructured narrative data describing police-civilian interactions. These data have the potential to reveal critical insights about engagement with vulnerable populations, yet manually coding them is prohibitively resource-intensive. Analysing publicly available narrative reports from the Boston Police Department, we test whether LLMs can effectively replicate human qualitative coding of four key vulnerabilities: mental ill health, substance misuse, alcohol dependence, and homelessness. Our methodology compares human-generated classifications against those from various LLMs (ranging from 8 billion to over 1 trillion parameters) using different prompting strategies. We examine the accuracy of LLM-assisted coding and the viability of a hybrid approach in which LLMs screen narratives before human review. We also employ counterfactual experiments to systematically test for potential classification biases related to subject characteristics such as race and sex. This research introduces novel methodological
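The counterfactual experiments described above can be sketched as follows. This is a minimal illustration of the general idea, not the study's actual protocol: demographic descriptors in a narrative are swapped for their counterparts, both versions are classified, and agreement between the two label sets is measured. The swap map and the example narrative are illustrative assumptions.

```python
# Hedged sketch of counterfactual bias testing on incident narratives.
# The SWAPS map and example text are illustrative, not the study's data.
import re

# Each demographic descriptor maps to its counterfactual counterpart.
SWAPS = {"white": "black", "black": "white", "male": "female", "female": "male"}

def counterfactual(narrative: str) -> str:
    """Swap subject descriptors in a single pass (no double substitution)."""
    pattern = re.compile(r"\b(" + "|".join(SWAPS) + r")\b", re.IGNORECASE)
    return pattern.sub(lambda m: SWAPS[m.group(0).lower()], narrative)

def agreement(labels_a, labels_b) -> float:
    """Fraction of narratives receiving the same label under both versions."""
    return sum(a == b for a, b in zip(labels_a, labels_b)) / len(labels_a)

original = "Officers responded to a report of a white male sleeping in a doorway."
swapped = counterfactual(original)
```

In the hybrid workflow the abstract describes, narratives whose original and counterfactual classifications disagree would be flagged for human review, since that disagreement suggests the model's label depends on subject characteristics rather than the described behaviour.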

Authors