Individual Submission Summary
The Human-(in-on-out)-the-Loop paradigm: Can AI in criminal justice be socially accepted?

Sat, September 6, 8:00 to 9:15am, Communications Building (CN), CN 3104

Abstract

The increasing integration of artificial intelligence (AI) into judicial and penitentiary decision-making raises pressing legal and ethical concerns, particularly regarding due process, fundamental rights, and algorithmic accountability. In Spain, risk assessment tools such as VioGén and RisCanvi assist in criminal justice decisions, yet questions about their legitimacy, fairness, and public trust persist. This study empirically explores the social acceptance of AI in judicial decision-making, with a particular focus on human-in-the-loop (HITL) models as a mechanism for fostering trust and legitimacy.

A 5×2 factorial experimental study was conducted with a sample of 1,100 Spanish participants. The study manipulated two critical variables: (1) the degree of AI autonomy in decision-making, ranging from fully autonomous AI to human-only decisions, and (2) the congruence of AI-imposed sanctions with legal expectations.

Preliminary results indicate that acceptance is primarily driven by the congruence of AI-imposed sanctions rather than the identity of the decision-maker (AI vs. human). While fully autonomous AI remains controversial, HITL models—where AI supports but does not replace human judgment—are perceived as more legitimate and trustworthy. Public skepticism is significantly reduced when AI systems are transparent and embedded within human decision-making structures.

These findings reinforce the importance of human-in-the-loop approaches in AI governance for criminal justice, ensuring that AI systems enhance legal proportionality, procedural fairness, and public confidence. Understanding these dynamics is essential for developing ethical AI applications that align with societal expectations and justice system requirements.
