Paper Summary

Emancipatory Data Science & Artificial Intelligence: Historicizing Data for Equitable AI Futures

Fri, April 10, 1:45 to 3:15pm PDT, Los Angeles Convention Center, Floor: Level Two, Room 404AB

Abstract

Objectives or Purposes: Advance Emancipatory Artificial Intelligence (EAI) as a framework that counters ahistorical futurism (e.g., Levandowski, 2018) by centering how racial hierarchies and power asymmetries shape AI and education. Specify EAI’s four tenets—recognition, refusal, repair, reflection—and outline a research, curriculum, and policy agenda that historicizes harm and advances liberatory futures.
Perspective(s) or Theoretical Framework: EAI is rooted in emancipation (Equal Justice Initiative, 2020) and Wright’s “emancipatory social science” (2010, p. 7), rejecting AI hype (Bender & Hanna, 2025) and presumed neutrality (D’Ignazio & Klein, 2023). Drawing on intersectionality and critical quantitative/computational theory (Zuberi & Bonilla-Silva, 2008; Lee & Soep, 2019), it situates AI within socio-historical processes, showing how design and deployment can exacerbate marginalization (Floridi & Cowls, 2019; Metcalf & Crawford, 2016).
Methods, Techniques, or Modes of Inquiry: Conceptual synthesis and critical historiography operationalized through the four tenets: (1) Recognition of algorithmic harms (e.g., misogynoir; Bailey, 2021; Buolamwini & Gebru, 2018); (2) Refusal via counter-archival analyses of data-driven racial stratification; (3) Repair through emancipatory exemplars (Du Bois, 1899; Buolamwini, 2024); and (4) Reflection engaging Afrofuturism to reimagine equitable AI futures (McGee & White, 2021; Gutiérrez et al., 2017). Methodologically, EAI traces actors/institutions that legitimize hierarchy (Gebru & Torres, 2024) and those that resist (Monroe-White, 2021).
Data Sources, Evidence, Objects, or Materials: Archival and counter-archival materials (Wells, 1895; Du Bois, 1899); foundational statistical texts (Galton, 1892; Pearson, 1901; Fisher, 1914); empirical studies of algorithmic harm (Bailey, 2021; Buolamwini & Gebru, 2018; Obermeyer et al., 2019); policy/curricular analyses in AI education (Williamson & Eynon, 2020; Adams & McIntyre, 2020); and contemporary critiques of AI governance and ethics (Floridi & Cowls, 2019; Metcalf & Crawford, 2016).
Results and/or Substantiated Conclusions or Warrants for Arguments/Point of View: History gives magnitude and direction to racialized AI outputs. EAI demonstrates that without structural transformation, AI education risks perpetuating “cold-blooded” bias. Applying the tenets surfaces concrete harms, builds counter-archives, mobilizes reparative exemplars, and cultivates reflective, community-centered design. The framework yields actionable guidance to recognize harms, refuse ahistorical tools, prioritize repair, and embed reflective practices across research, curricula, and policy.
Scientific or Scholarly Significance of the Study or Work: EAI synthesizes emancipatory social science with critical data/computation to reframe AI from technical debiasing toward historical, structural transformation in the generative AI era. It contributes a coherent, justice-oriented agenda for AI scholarship and education, clarifies mechanisms linking racial ideology to AI practice, and offers scalable, pedagogically grounded pathways for liberatory AI ecosystems.