Individual Submission Summary
Hostile by Design? Algorithmic Control, Verbal Violence, and Community Resistance in AI-Generated Discourse

Fri, Nov 14, 8:00 to 9:20am, Independence Salon F - M4

Abstract

As Large Language Models (LLMs) and chatbots increasingly mediate online discourse, communities on platforms like Reddit, Discord, and specialized forums have developed their own frameworks for interpreting, challenging, and contextualizing AI-generated hostility. These models, often perceived as passive-aggressive or arbitrarily rigid, introduce new dynamics of moderation, resistance, and reinterpretation in digital spaces. This paper examines how online communities collectively make sense of LLM-generated verbal friction, debating whether these responses reflect systemic biases, design limitations, or emergent adversarial behaviors. Through an analysis of user-led discussions, moderation logs, and real-time interactions with AI systems, we explore how communities construct their own semantic rules, adapt conversational norms, and develop counterstrategies, ranging from ironic engagement to structured appeals against automated enforcement. By foregrounding community-driven knowledge production, this study highlights how everyday users become critical interpreters of AI behavior, transforming digital antagonism into a site of negotiation rather than submission. Understanding these grassroots methodologies is essential for designing AI systems that acknowledge user agency, emergent norms, and the fluidity of online discourse beyond rigid algorithmic enforcement.

Authors