Individual Submission Summary

Artificial Ideologies: How Human-AI Alignment Shapes Trust and Persuasion Across Ideological Boundaries

Sun, August 9, 10:00 to 11:30am, TBA

Abstract

How do people evaluate AI systems that share their ideological commitments? As large language models increasingly engage users on contested sociopolitical topics, their perceived ideological positioning may shape whether they are trusted, seen as legitimate, and granted persuasive authority. Drawing on sociological theories of legitimacy, ideology, and persuasion, I present two experimental studies examining how ideological alignment between users and AI systems structures trust and political behavior. In Study 1 (N = 483), participants interacted with an AI assistant that was either ideologically aligned with their views on politics, race, or gender or presented as neutral. Aligned AI systems were perceived as significantly more objective, and this perceived objectivity fully mediated the relationship between alignment and willingness to grant the AI decisional authority. Conservatives were especially responsive to alignment in the political and racial domains. In Study 2 (N = 636), participants interacted with an aligned or neutral AI, received a counter-attitudinal persuasive message from it, and then voted on a zoning ballot measure. Liberals who received the AI’s recommendation were significantly less likely to support the measure, consistent with successful AI-driven counter-persuasion, while conservatives were unaffected. Across both studies, ideological alignment served as a legitimacy-granting mechanism, shaping perceptions of objectivity and trust that, in turn, conditioned openness to AI influence. I discuss implications for democratic deliberation, platform governance, and the sociology of algorithmic authority.

Author