Democratic governance depends on the election of public officials by citizens, who serve as principals in the political process. This process fundamentally relies on constructive dialogue and informed public discourse. While vast amounts of information are produced and disseminated before and during elections, artificial intelligence (AI) has significantly transformed how that information is generated, distributed, and consumed.

In this context, AI operates as a double-edged sword. On one hand, malicious actors can exploit AI to generate and disseminate misinformation and disinformation with alarming speed and sophistication. On the other hand, governments can leverage AI to detect, counter, and mitigate these threats more effectively than ever before.

AI-generated content poses a serious risk of misleading or confusing voters, particularly on issues essential to civic participation such as voting procedures, logistics, and policy positions. These risks create urgent challenges for election officials, especially at the state level, who must respond to AI-driven disinformation while safeguarding the integrity and credibility of election administration.

At the same time, advanced AI tools present new opportunities to combat the spread of misinformation. For instance, Truformation, an AI platform developed by the startup Brinker, is designed to detect and quantify misinformation and to generate timely reports that support government-led public messaging and counter-disinformation campaigns.

Unlike countries such as South Korea, where the National Election Commission centrally publishes official candidate manifestos, election administration in the United States is highly decentralized and varies significantly across states. Despite growing concerns about the role of AI in generating disinformation, little is known about how this content is created and disseminated by malicious actors, or about how state governments are responding within the context of existing election laws and regulations.

To address these gaps, this study investigates the extent to which U.S. state governments were vulnerable to AI-generated misinformation and disinformation during the 2024 election season and how they responded, particularly through the use of AI tools. The study leverages the Deep Research capabilities of ChatGPT and Gemini to identify state-level policies, detect sources of misinformation (including fake websites), and examine the role of AI in producing misleading electoral content.

After compiling a large body of data with these AI tools, the research team will manually cross-check and validate the information for accuracy. The study then employs both qualitative thematic analysis and quantitative content analysis to identify patterns in AI-generated disinformation and to evaluate how state governments responded to these challenges.

Findings from this study aim to contribute both theoretical and practical insights toward enhancing cybersecurity and strengthening election administration in the era of artificial intelligence.