Individual Submission Summary
Exploring College Students’ Views on Deepfake Technology

Wed, Nov 13, 5:00 to 6:20pm, Nob Hill B - Lower B2 Level

Abstract

Trust in digital media is being eroded by deepfake technology, which can generate or manipulate audio and video to frighteningly realistic degrees, making people appear to say or do things they never did. Individuals with malicious intent can use this technology to convincingly impersonate others for their own benefit, whether through extortion, fraud, or misinformation. Because of the power of this technology, research has focused on developing deepfake detection technology. However, detection technology consistently remains a step behind generation technology. Furthermore, while detection technology continues to improve, most people do not have access to it. Instead, they are left to their own devices to determine whether the media they are viewing or hearing is genuine or deepfaked. This makes humans' ability to detect deepfakes undoubtedly important, and research has begun to examine how well humans can detect deepfakes. What remains unclear, however, is how the detection decision-making process unfolds as individuals consume such content. To fill this gap in the literature, the current study asks the primary research question: How do people, specifically college students, determine whether media is a deepfake? This study will attempt to answer this question and build a theoretical framework of the decision-making process by taking a qualitative, exploratory approach, with grounded theory as the method of theory development and individual in-depth interviews as the data collection method. Interviews will explore this question in the context of participants viewing a media set containing a mix of authentic and deepfake media. The study will further examine college students' perception and awareness of deepfake technology and the dangers it poses.
Findings associated with this theoretical framework of college students' deepfake detection decision-making processes will have theoretical, empirical, and policy implications. In particular, they will shed light on the need for stronger regulations on social media platforms and publicly accessible deepfake generation applications to prevent user deception. This research will also serve as a starting point for interventions intended to improve human deepfake detection.