Individual Submission Summary
Reflections on Regulating Artificial Intelligence Through Data Protection Principles: Strengthening and Guiding International Collaboration with Criminological Insights

Thu, September 4, 9:30 to 10:45am, Communications Building (CN), CN 2113

Abstract

Artificial Intelligence (AI) has emerged as an integral component of modern technology, presenting challenges related to ethics, governance, and data protection. This study explores how foundational principles of personal data protection can serve as guidelines for the ethical and regulatory frameworks necessary to govern AI. It highlights the need for robust mechanisms to ensure fairness, transparency, and accountability in AI systems, particularly in safeguarding fundamental rights.
Criminological insights further enrich this exploration by examining how AI technologies, when improperly regulated, may exacerbate systemic biases, enable discriminatory profiling, or facilitate new forms of cybercrime. These challenges underscore the urgency of embedding criminological perspectives into the design and regulation of AI in order to mitigate the risks of misuse and enhance societal trust in these technologies.
Additionally, the study emphasizes the importance of international collaboration through codes of ethics, which can bridge legal and cultural divergences by leveraging shared principles. It advocates for a multidisciplinary approach, integrating criminology, technology, and law to establish mechanisms that ensure the security, integrity, and accountability of AI systems.
This presentation seeks to provoke a dialogue on the creation of equitable and transparent AI practices, with a specific focus on safeguarding societal order and preventing the criminal misuse of emerging technologies.

Keywords: AI and Criminology, Ethical AI Regulation, Data Protection and Crime Prevention, Bias and Discrimination in AI, Cybercrime and AI Accountability