The term deepfake has become established in recent years for video and audio content generated by Artificial Intelligence (AI) that imitates real people. Deepfakes may be used for legitimate purposes, for example by the film industry or for assistive applications, but they can also be used for criminal purposes such as fraud and political disinformation. With the increasing performance of AI, deepfakes have become more sophisticated and more difficult to detect. The European Union's AI Act, adopted in 2024 as Regulation (EU) 2024/1689, establishes certain transparency rules for the use of deepfakes.
This paper is the result of a cooperation within the framework of the EU COST Action BEING-WISE (CA22104, 2023-2027) on Behavioural Next Generation in Wireless Networks for Cyber Security and of a research project carried out in Germany. It seeks to answer the question of how risks related to deepfakes evolve in newly emerging wireless networks such as 6G. For example, transparency rules may be more difficult to implement in fast-responding wireless networks that could be used in everyday life by Internet of Things (IoT) applications, and new variants of fraud may emerge in the context of connected medical devices that process sensitive personal data.
The aim of this paper is to provide an overview of the evolving risks associated with deepfake technology in the context of new wireless networks, particularly 6G. It highlights the legitimate and malicious uses of deepfakes, the increasing difficulty of detection due to advances in AI, and the regulatory response through the EU AI Act. It also presents the research focus of the EU COST Action BEING-WISE initiative, which investigates how deepfake-related threats can manifest in next-generation wireless environments, including IoT applications and connected medical devices.