Individual Submission Summary
Judicial Decisions, Automation Bias and Algorithmic Discrimination - CANCELLED

Wed, Nov 13, 8:00 to 9:20am, Salon 2 - Lower B2 Level

Abstract

The increasing use of AI tools to support judicial decisions raises significant ethical, legal, and technical challenges. This paper focuses on one of them: algorithmic discrimination. The use of AI is often presented as a way to minimize judges' cognitive biases in pursuit of more objective justice. Paradoxically, there is also concern that automation may exacerbate historical biases and create new, more intersectional and opaque, forms of discrimination, while serving to justify the judge's own prejudices. The paper is structured around two main questions: 1) Which tools, technical and normative, are best suited to address algorithmic discrimination? 2) How can automation bias, which pushes the adjudicator to rule in line with the algorithmic outcome, be countered? On the first question, the main conclusion is that technical-solutionist approaches based on statistical parity are insufficient, and that a socio-technical approach is needed instead, focused on detecting, preventing, and responding to discriminatory results. On the second, two aspects emerge as key: training judges on the limitations of statistics, and requiring that the relative weight given to the algorithmic result be explicitly justified in the final decision.

Author