Recent advances in AI have raised security concerns, as AI tools can now convincingly mimic human behaviour and creativity in cyberattack scenarios. Consequently, the need for effective mitigation approaches grows steadily as malicious artificial agents become more powerful and adaptive (e.g., self-replicating AI).
Deception in Human-AI Interaction (HAII) takes different forms, depending on both the capabilities of AI tools and human vulnerabilities. To date, there has been no systematic effort, grounded in real-world scenarios, to identify the range of such deceptions and the mitigation strategies effective against them. The goal of this article is to take first steps towards an interdisciplinary taxonomy of the most common deceptions in order to guide appropriate mitigation strategies.
“AI deception” is a fruitful terminological starting point, as it relates to a wide range of concepts such as trickery, illusion, and disinformation, as well as treachery, scams, and fraud, which are connected to key AI risks (e.g., deepfakes, disinformation campaigns, AI-powered social engineering attacks). At least two “faces” of deceptive AI emerge here: one in the context of malicious intent, as in cybercrime, and one in the context of benevolent intent, as in commercial AI engineering, where unintended detrimental or harmful side effects can easily occur.
Following an interdisciplinary approach, we map critical human dispositions (anthropomorphism, ignorance, gullibility, technology enthusiasm, and social-engineering-related vulnerabilities), AI engineering approaches (e.g., explainable, anthropomorphic, or empathic AI), and cybersecurity mitigation measures (e.g., attack and defence role models, training of human competence to detect deceptive AI, red-teaming exercises) onto attack scenarios in emerging wireless technologies. The resulting taxonomy supports AI tool engineers, security concept designers, heads of cyber defence centres (CDCs), and rescue and analysis teams in improving appropriate cybersecurity countermeasures.
Martin Griesbacher, Research Group for Industrial Software (INSO), Vienna University of Technology
Philipp Harms, ETH Zürich, Department of Mathematics
Matej Kosco, Research Group for Industrial Software (INSO), Vienna University of Technology
Thomas Stipsits, Research Group for Industrial Software (INSO), Vienna University of Technology
Thomas Grechenig, Research Group for Industrial Software (INSO), Vienna University of Technology