The rise of generative AI has sparked troubling developments. Consider Meta's January 2025 debut of a “sassy black queer” AI assistant called Liv: once the character was exposed as “digital blackface,” the company quickly retired it [1]. While the effort aimed to address racial bias, it unintentionally invoked the fraught history of racial transformation and deceit, whether through acts of passing, minstrelsy, or mockery. Most synthetic data is problematic not because it is artificial but because its mode of creation echoes historically familiar forms of power and exploitation. These echoes are, however, more complex than linear histories connecting AI to eugenics would have us understand. This article takes a sociological lens to questions of knowledge and power around synthetic data. As a sociologist of science and technology, I study progressively intended missteps such as the one above to articulate the particular challenges that images pose for computational analysis and meaning-making. In doing so, I draw together the literature on moral entrepreneurship in AI ethics [2], work on agency in machine learning [3], the STS literature on translation in the construction of scientific facts [4-6], and the economic sociology literature on performativity, especially as it concerns the categorization and classification of persons [7-9]. The article has two aims: first, to analyze synthetic data, especially image data, through an STS framework that treats it as part of a broader system of generated scientific facts; and second, to examine how power and ethics manifest in generative AI, drawing on examples of progressive initiatives that failed in unintended ways.