This study investigates how large language models (LLMs) might support qualitative analysis in criminological research. Police officers and despatch handlers routinely collect unstructured narrative data describing police-civilian interactions. These data have the potential to reveal critical insights about engagement with vulnerable populations, yet manually coding them at scale is prohibitively resource-intensive. Analysing publicly available narrative reports from the Boston Police Department, we test whether LLMs can effectively replicate human qualitative coding of four key vulnerabilities: mental ill health, substance misuse, alcohol dependence, and homelessness. Our methodology compares human-generated classifications against those from a range of LLMs (from 8 billion to over 1 trillion parameters) under different prompting strategies. We examine the accuracy of LLM-assisted coding and the viability of a hybrid approach in which LLMs screen narratives before human review. We also employ counterfactual experiments to systematically test for potential classification biases related to subject characteristics such as race and sex. This research introduces novel methodological
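The counterfactual bias test described above can be sketched in miniature: edit only the demographic descriptors in a narrative, re-classify, and flag any labels that flip. The snippet below is a hedged illustration, not the study's code; the keyword-based `classify` function is a stand-in for an LLM call, and all keyword lists and example text are invented assumptions.

```python
# Illustrative sketch of a counterfactual bias check (assumed, not the
# study's implementation): swap demographic terms, re-run a mocked
# classifier, and report labels that change.
import re

# Hypothetical keyword cues standing in for an LLM's judgement.
VULNERABILITY_KEYWORDS = {
    "mental_ill_health": ["distressed", "psychiatric"],
    "substance_misuse": ["needle", "overdose"],
    "alcohol_dependence": ["intoxicated", "alcohol"],
    "homelessness": ["shelter", "no fixed address"],
}

def classify(narrative: str) -> dict:
    """Mock classifier: label is True if any cue keyword appears."""
    text = narrative.lower()
    return {label: any(k in text for k in kws)
            for label, kws in VULNERABILITY_KEYWORDS.items()}

def counterfactual(narrative: str, swaps: dict) -> str:
    """Rewrite subject descriptors (e.g. race, sex) in the narrative."""
    out = narrative
    for old, new in swaps.items():
        out = re.sub(rf"\b{re.escape(old)}\b", new, out)
    return out

def bias_flips(narrative: str, swaps: dict) -> list:
    """Labels that change when only demographics change."""
    base = classify(narrative)
    cf = classify(counterfactual(narrative, swaps))
    return [label for label in base if base[label] != cf[label]]

narrative = "Officers found a white male, intoxicated, near the shelter."
flips = bias_flips(narrative, {"white": "Black", "male": "female"})
# An unbiased classifier should produce no flips for demographic-only edits.
```

In a real evaluation the mock `classify` would be replaced by prompted LLM calls, and the flip rate would be aggregated over many narratives and demographic swap pairs to estimate systematic bias.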