Law enforcement agencies are increasingly adopting new technologies, including forensic AI tools, to investigate crimes and improve operational practices. These technologies have been criticised for embedding, and potentially exacerbating, various kinds of bias with broad social ramifications, such as discrimination, injustice, and unfair targeting. However, there is no consensus on which biases can systematically arise across the development process of law enforcement AI systems or how they can be mitigated. In this review, we examined the systematicity of biases, the development stages at which they may occur, and mitigation practices in the domain of law enforcement technologies. The review drew on empirical evidence from multiple disciplines, including criminology, computer science and information technology, psychology, and policing. We summarize the identified types of bias in a de-biasing framework and provide insights into potential mitigation measures along three dimensions: technical, socio-technical, and behavioural. Addressing these aspects of bias will contribute to better-informed decision making, strengthen the relationship between law enforcement and the public, and support more effective community safeguarding.