Individual Submission Summary

Poster #149 - The Threat of AI-Generated Misinformation in the Election: Social Media, Policy Challenges, and Detection Gaps

Friday, November 14, 5:00 to 6:30pm, Hyatt Regency Seattle, 7th Floor, Room 710 - Regency Ballroom

Abstract

The 2024 U.S. presidential and congressional elections marked a turning point in the role of AI-generated misinformation. Unlike in previous elections, AI-driven speech synthesis, deepfake videos, and automated bot networks created false narratives at an unprecedented scale. A study by the Center for Countering Digital Hate (2024) found that AI-generated misinformation spread three times faster than fact-checked content, while the Stanford Internet Observatory (2024) reported that 40% of Americans struggled to distinguish AI-generated political ads from real ones.

With over 70% of U.S. adults consuming political news via social media (Pew Research Center, 2024), social media platforms have become key battlegrounds for AI-driven misinformation. X's engagement-driven algorithm boosted misleading political content, with AI-generated tweets receiving 60% more interactions than verified, fact-checked content (Brookings Institution, 2024). On Instagram, short-form video and deepfake content blurred the line between reality and fabrication, making it nearly impossible for users to verify authenticity.

This study applies a literature review and public policy analysis to assess the effectiveness of AI-detection tools and regulatory responses. Existing efforts, including the EU AI Act and the UK Online Safety Act, offer only limited oversight of how AI-generated misinformation spreads. Proposed U.S. legislation, such as the REAL Political Ads Act and the Protecting Americans from Foreign Adversary Controlled Applications Act, aims to introduce disclosure requirements for AI-generated political content but lacks enforcement mechanisms. Meanwhile, Section 230 of the Communications Decency Act continues to shield platforms from liability, making regulatory intervention challenging.

Findings indicate that existing AI-detection tools fail to accurately identify synthetic content, with deepfake detection models achieving only 70% accuracy (MIT Technology Review, 2023). A hybrid regulatory approach is needed, combining real-time AI fact-checking, algorithmic transparency mandates, and independent oversight boards to enforce compliance.
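
To make the 70% figure concrete, the sketch below shows how deepfake-detection accuracy is conventionally measured: as binary classification of authentic versus synthetic media, scored against labeled ground truth. Everything here is a hypothetical placeholder; the simulated labels and detector scores stand in for a real model's per-item output, illustrating the evaluation procedure rather than any tool cited in this abstract.

# Minimal sketch: measuring a deepfake detector's accuracy as binary
# classification. Labels and scores are simulated placeholders, not
# outputs of any real detection model.
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical ground truth: 1 = AI-generated, 0 = authentic.
y_true = rng.integers(0, 2, size=1000)

# Hypothetical per-item detector scores ("probability synthetic");
# overlapping score distributions model an imperfect detector.
scores = np.clip(0.35 * y_true + rng.normal(0.35, 0.25, size=1000), 0.0, 1.0)
y_pred = (scores >= 0.5).astype(int)  # decision threshold at 0.5

print(f"accuracy: {accuracy_score(y_true, y_pred):.2f}")  # roughly 0.7-0.8
print(f"ROC AUC:  {roc_auc_score(y_true, scores):.2f}")

A detector at this operating point misclassifies roughly a quarter of items, which is why the abstract argues that detection alone cannot carry the regulatory burden.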

The 2024 election was not an isolated case; it signals the beginning of a new era of election influence, in which AI-driven disinformation will define the political landscape unless comprehensive regulation and advanced detection technologies are implemented.

Author