Individual Submission Summary

Artificial Seduction: The Use of LLMs by Fraudsters in Romance Fraud

Fri, September 5, 5:00 to 6:15pm, Deree | Auditorium, Floor: 6, 6th Level Auditorium

Abstract

Artificial intelligence has made remarkable strides in recent years, especially with the advent of large language models (LLMs) designed for the general public. Online fraudsters are capitalizing on these advances by increasingly integrating AI into their operations. This study examines the use of LLMs by scammers engaged in romance fraud. It is based on an analysis of over 80,000 textual exchanges between 172 identified romance scammers and two automated conversational agents (chatbots) simulating potential victims. The data was collected by ForenSwiss, a Swiss start-up specializing in detecting and sharing information on financial fraud.
The scammers featured in this study were manually identified on social media and online dating platforms, through expert review and reports from communities dedicated to flagging fraudulent profiles. Once a fraudulent identity was objectively confirmed, initial contact was established using fictitious victim profiles. Each conversation was subsequently moved to Telegram, at the initiative of either the scammer or the fictitious victim. On Telegram, interactions between the fraudsters and the chatbots continued, enabling automation and standardization of the analysis while maintaining a diverse range of interactive scenarios. Telegram was chosen for this study because it accommodates fully autonomous chatbots.
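To make the setup concrete: the abstract does not describe the authors' implementation, but the following is a minimal sketch, assuming the python-telegram-bot library, of how a fully autonomous Telegram chatbot could answer incoming messages with replies produced by a language model. The generate_victim_reply function and the bot token are hypothetical placeholders; note also that a Bot API bot can only respond once the other party writes first, so victim-initiated contact would require Telegram's user-account API instead.

```python
# Minimal sketch of an autonomous Telegram chatbot posing as a potential
# victim. Hypothetical: generate_victim_reply stands in for the persona
# logic; the abstract does not describe the authors' actual implementation.
# Requires: pip install python-telegram-bot
from telegram import Update
from telegram.ext import Application, ContextTypes, MessageHandler, filters


def generate_victim_reply(history: list[str]) -> str:
    """Hypothetical placeholder: pass the conversation history and a
    'potential victim' persona prompt to an LLM and return its reply."""
    return "That sounds lovely, tell me more about yourself!"


async def on_message(update: Update, context: ContextTypes.DEFAULT_TYPE) -> None:
    # Keep a per-chat transcript so every exchange is logged for later analysis.
    history = context.chat_data.setdefault("history", [])
    history.append(update.message.text)
    reply = generate_victim_reply(history)
    history.append(reply)
    await update.message.reply_text(reply)


def main() -> None:
    app = Application.builder().token("BOT_TOKEN_HERE").build()
    # Respond to every plain text message (commands excluded).
    app.add_handler(MessageHandler(filters.TEXT & ~filters.COMMAND, on_message))
    app.run_polling()  # Runs until interrupted: fully unattended operation.


if __name__ == "__main__":
    main()
```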
A textual analysis of the collected exchanges was then conducted to identify indicators of AI use. The criteria for this analysis draw on the scientific literature, the professional experience of chatbot developers, and the research team's expertise in romance scams. Using specific examples from the sample, the presentation will illustrate both the presence and the limitations of these indicators, and the challenges involved in detecting the use of LLMs.
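The abstract does not enumerate the criteria themselves. Purely as a hypothetical illustration of what machine-readable indicators of this kind might look like, the sketch below scores a conversation on a few surface features often discussed in connection with LLM-generated text: stock assistant phrasing, unusually uniform message length, and the near-absence of informal shorthand. None of these example criteria should be read as the study's actual indicator list.

```python
# Hypothetical illustration of surface-level indicators of LLM-generated
# text. These example criteria are NOT the study's actual indicator list.
import re
import statistics

# Stock phrases frequently associated with assistant-style LLM output.
STOCK_PHRASES = [
    "as an ai language model",
    "i'm here to help",
    "it's important to note",
    "i hope this message finds you well",
]


def llm_indicator_scores(messages: list[str]) -> dict[str, float]:
    """Return a few crude per-conversation indicator scores in [0, 1]."""
    texts = [m.lower() for m in messages]
    # 1. Share of messages containing a stock assistant phrase.
    stock = sum(any(p in t for p in STOCK_PHRASES) for t in texts) / len(texts)
    # 2. Uniformity of message length (LLM replies tend to be evenly sized).
    lengths = [len(t) for t in texts]
    spread = statistics.pstdev(lengths) / (statistics.mean(lengths) or 1)
    uniformity = max(0.0, 1.0 - spread)
    # 3. Near-absence of informal shorthand common in human chat.
    shorthand = re.compile(r"\b(u|ur|plz|thx|lol|omg)\b")
    formality = sum(not shorthand.search(t) for t in texts) / len(texts)
    return {
        "stock_phrases": stock,
        "length_uniformity": uniformity,
        "formality": formality,
    }


if __name__ == "__main__":
    sample = [
        "It's important to note that trust takes time to build.",
        "I hope this message finds you well, my dearest friend.",
    ]
    print(llm_indicator_scores(sample))
```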

Authors