Paper Summary

Artificial Intelligence Is Not Immune to Sociopolitical Failures

Fri, April 10, 1:45 to 3:15pm PDT, Los Angeles Convention Center, Floor: Level Two, Room 404AB

Abstract

1. Objectives or Purposes: This work explores how generative AI, notably ChatGPT and similar large language models, intersects with historical sociopolitical systems of oppression. It seeks to surface how intent and deeply embedded bias shape AI technologies and to reframe them not as neutral tools but as inheritors of colonial, quantification-based logics. Its purpose is twofold: to critique the political and racial dimensions built into AI systems and to suggest how AI might instead become a vehicle for liberation and for the empowerment of marginalized communities.

2. Perspective(s) or Theoretical Framework: The piece draws from critical race theory, scholarship on the colonial and eugenic roots of modern quantification, and critical media studies. It foregrounds the idea that sociotechnical systems—such as measurement regimes, standardized testing, and algorithmic classification—are extensions of colonial rationality that racialize subjects. Thus, the author positions AI within a historical lineage of measurement and oppression, arguing that generative AI reflects longstanding social hierarchies, not a break from them.

3. Methods, Techniques, or Modes of Inquiry: This work relies on historical analysis and conceptual critique. It weaves together the scholar's reflections, institutional affiliations, and the legacy of quantification in education and measurement, tying today's AI to earlier eugenic regimes of measurement. Through critical narrative, it identifies parallels between past systems of racialized quantification and current algorithmic classification, and makes a theoretical argument for intentional, politically aware deployment of AI.

4. Data Sources, Evidence, Objects, or Materials: Evidence includes historical references to the eugenics movement and past uses of statistics to justify subjugation, combined with contemporary commentary on generative AI’s capabilities and risks in educational contexts. Rather than citing survey or dataset findings, the author draws on institutional projects (e.g., work at an urban minority education institute), grant‑supported initiatives, and the broader academic literature on race, quantification, and technology.

5. Results and/or Substantiated Conclusions or Warrants for Arguments/Point of View: The article concludes that AI systems are neither inherently neutral nor emancipatory. Instead, they replicate the dispositions of past measurement systems that have historically racialized and disenfranchised people. Without intentional intervention, generative AI may reinforce existing power imbalances, extend surveillance, and entrench inequality under a veneer of objectivity. At the same time, the author argues, AI could be intentionally repurposed, by embracing its "alienness," as a tool for expression, meaning-making, and liberation. The conclusion warrants a call for politicized and culturally aware design rather than uncritical adoption of AI in education or governance.

6. Scientific or Scholarly Significance of the Study or Work: This article is significant in that it repositions AI within systems of colonial measurement and racial sorting, deepening scholarly conversations on algorithmic bias, critical AI studies, and race-and-technology literatures. By weaving together historical and contemporary critique, it pushes back against dominant techno-solutionist narratives and opens liberatory possibilities in AI design. It contributes to critical educational theory, digital justice, and AI ethics, suggesting that meaningful oversight must attend to historical structures, not just downstream fairness interventions.

Author