Individual Submission Summary

Justifying Decisions supported by Artificial Intelligence: Approaches and Limitations of the Accountability Settings foreseen by the European Union’s “Artificial Intelligence Act”

Thu, September 12, 1:00 to 2:15pm, Faculty of Law, University of Bucharest, Floor: Ground floor, Amphitheater 2 „Nicolae Titulescu”

Abstract

Artificial Intelligence (AI) is often described as a black box. The limited explainability of decisions supported by AI tools is therefore a major challenge for the justification of these decisions. In law enforcement, erroneous decisions can have a far-reaching negative impact on the fundamental rights of those concerned. In December 2023, the European Parliament, the Council and the European Commission reached a compromise in the trilogue negotiations on a Regulation on Artificial Intelligence (the “Artificial Intelligence Act”), based on a proposal published by the Commission in 2021 (COM(2021) 206 final). This paper analyses the accountability mechanisms that the AI Regulation introduces and discusses to what extent these mechanisms can contribute to the accountable use of AI in a democratic rule-of-law context, particularly with regard to the requirements of fairness, transparency, explainability and the justification of decisions. The paper also examines the relationship between these additional accountability settings for AI and well-established accountability forums such as the European Data Protection Supervisor, the European Data Protection Board and the European Ombudsman.

Authors