As artificial intelligence becomes embedded in hiring, healthcare, criminal justice, and education, questions of who bears its risks and who reaps its rewards have become increasingly urgent and contested. Drawing on a computational analysis of 85,480 arXiv papers from 2018 to 2025, this paper examines AI ethics discourse as a site of hegemonic knowledge production. I argue that AI ethics discourse is not a neutral, pluralistic conversation but a structured field in which those with the most institutional and economic capital define what counts as a legitimate ethical concern, a structure systematically reproduced through academia functioning as civil society. Using keyword framing analysis, TF-IDF vocabulary comparison, and co-occurrence analysis, I find that: (1) technical and governance framing (“safety”, “risk”, “transparency”) has grown substantially faster over the study period than justice and structural framing (“race”, “labor”, “power”); (2) even when justice-adjacent vocabulary such as “fairness” and “bias” appears, it co-occurs almost exclusively with technical vocabulary, suggesting these terms have been absorbed and operationalized in technical contexts; and (3) race and gender concerns appear in fewer than 4% of papers, and when they do, they are framed as technical variables to be corrected rather than structural conditions to be addressed. On the basis of these preliminary findings, I argue that the technical framing of ethics reflects not industry coercion but naturalized consent: a hegemonic process through which a narrow definition of ethics has become common sense.
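The two corpus measures named above can be illustrated with a minimal sketch. This is not the paper's actual pipeline; the toy documents and all function names here are hypothetical stand-ins for the 85,480-paper arXiv corpus, shown only to make the TF-IDF weighting and within-document co-occurrence counting concrete.

```python
import math
from collections import Counter
from itertools import combinations

# Hypothetical mini-corpus standing in for paper abstracts
# (the real study analyzes 85,480 arXiv papers, not reproduced here).
docs = [
    "safety risk transparency alignment",
    "fairness bias accuracy benchmark",
    "fairness bias mitigation metric",
    "labor power race justice",
]

def tf_idf(docs):
    """Per-document TF-IDF weights (raw term frequency, log idf + 1)."""
    n = len(docs)
    tokenized = [d.split() for d in docs]
    df = Counter(t for toks in tokenized for t in set(toks))
    idf = {t: math.log(n / df[t]) + 1.0 for t in df}
    return [{t: c * idf[t] for t, c in Counter(toks).items()}
            for toks in tokenized]

def cooccurrence(docs):
    """Count unordered term pairs appearing in the same document."""
    pairs = Counter()
    for d in docs:
        toks = sorted(set(d.split()))      # sort so pairs are canonical
        pairs.update(combinations(toks, 2))
    return pairs

weights = tf_idf(docs)
pairs = cooccurrence(docs)
# In this toy corpus, "fairness" co-occurs only with technical terms
# such as "bias" and "metric", mirroring finding (2) in miniature.
print(pairs[("bias", "fairness")])  # -> 2
```

Comparing top-weighted TF-IDF terms between two time slices of a corpus, and inspecting which neighbors a term like “fairness” accumulates in the co-occurrence counts, is the general shape of the vocabulary and framing comparisons described above.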