Individual Submission Summary
P094. Regulatory efforts and legislative challenges in criminalizing AI-generated CSAM

Thu, September 4, 6:45 to 8:00pm, Other Venues, Poster Venue

Abstract

This poster presentation provides an overview of current legislative developments regarding AI-generated CSAM offenses across the member countries of the INHOPE network, as part of the research activities of SafeLine (the Greek internet hotline for illegal online content). This work was supported by the Special Action PreventCSA@EU (MIS 6002910), 2024–2026.
According to INHOPE’s Universal Classification Schema, AI-generated media depicting CSAM belongs to the category of realistic media, which contains either a real human being or a person indistinguishable from a real human being to the observer. Despite this broad definition, which could serve as a springboard for a coordinated response, criminalizing AI-generated CSAM presents challenges, especially regarding the attribution of criminal liability, as AI lacks “intent” and is often self-learning. The question of whether the programmer, the system owner, or the user should bear legal responsibility becomes crucial in this context.
Within the European Union’s legal order, the recast of Directive 2011/93 aims to ensure that the definition of CSAM covers AI-generated material. The AI Act, however, has been criticized for not addressing this issue, as it does not classify AI systems used to generate deepfakes as "high-risk."
Among INHOPE countries outside the EU, the approach remains fragmented. Under US federal law, for instance, virtual CSAM is criminalized only where the content is "virtually indistinguishable" from a real child. Many states, however, have already revised their CSAM-related laws to bring AI-generated material within the legal definition. By contrast, countries such as Mexico and Cambodia do not criminalize virtual CSAM at all. A different approach is observed in countries like Australia and South Korea, which criminalize non-consensual sexually explicit deepfake content even though they do not explicitly criminalize AI-generated CSAM.

Authors