The expansion of artificial intelligence into illicit contexts has driven the development of technologies that can create realistic nude or sexualised fake imagery of a person, generally without their knowledge or approval. Often described as ‘deepfakes’ or ‘deepnudes’, the behaviours involved may include: editing or altering an image or video using AI, digital tools or software to create a sexual image or video of another person, whether known (e.g. a friend or colleague) or unknown (e.g. a celebrity or stranger), without their knowledge and/or approval; distributing such an edited or altered image or video without the depicted person’s knowledge and/or approval; threatening to create and/or distribute such an image or video without the depicted person’s approval; and/or sending someone, without their approval, an image or video (such as a ‘dick pic’) that has been digitally edited or altered in a sexual way using AI, digital tools or software.
Research data on rates of deepfake victimisation and perpetration is relatively sparse, as is data on the motivations and contexts of perpetration (such as perpetrator characteristics, drivers, tools used and people targeted). In this paper, we present findings from qualitative interviews conducted with adults in Australia, the United Kingdom and the United States who have engaged in deepfake perpetration. We focus on the motivations, drivers and characteristics described by perpetrators, and consider what might be needed to intervene in, prevent and respond to this growing and problematic form of sexual violence.