Individual Submission Summary
A Sociological Turing Test: Social Identity Cues Shape Who is Classified as Human Online

Tue, August 11, 8:00 to 9:30am, TBA

Abstract

The rise of generative artificial intelligence systems complicates our ability to distinguish real people from machine imitators, or “bots.” Drawing upon social psychology, symbolic interactionism, and the sociology of science and technology, we develop a theoretical framework for a “Sociological Turing Test” to explain how social identity cues influence how people differentiate AI from humans. To evaluate our hypotheses, we present a pre-registered online experiment with 1,165 respondents who were exposed to social media accounts with varied visual cues about their race, gender, and partisanship. We find that people from dominant social groups are more likely to classify members of minority groups as AI bots; however, our results also show that members of some minority groups are more likely to classify each other as AI bots. These findings suggest that existing social hierarchies extend into human-AI interaction, creating a new burden for marginalized groups in online settings and complicating the processes of social identity construction more broadly. Our research thus contributes to an emerging sociology of generative artificial intelligence by revealing how technological disruption challenges ontology and inter-group relations, and by introducing new methods to study this process in online settings.

Authors