Paper Summary

Emancipatory Artificial Intelligence: Unforgetting AI Histories to Reimagine Liberatory Futures

Fri, April 10, 1:45 to 3:15pm PDT, Los Angeles Convention Center, Floor: Level Two, Room 404AB

Abstract

This honorary presidential session convenes scholars at the intersection of educational equity and algorithmic justice to advance the 2026 AERA theme, “Unforgetting Histories and Imagining Futures.” Extending the Emancipatory Data Science framework, the session situates algorithmic harms within centuries of racial classification, surveillance, and communicentric bias. Anchored by Edmund W. Gordon’s work on communicentric bias, equitable assessment, and supplementary education, speakers call for decentering dominant perspectives and designing assessments and AI systems that uplift marginalized learners. Panelists examine AI surveillance in schools, platform governance across educational ecosystems, and biases within psychometrics and generative language models. They imagine futures where computational methods, AI literacy, equity ethics, and supplementary supports enable data sovereignty and reimagine AI education for liberation and flourishing.

Objectives: The objectives of this session are to 1) advance the 2026 AERA theme—Unforgetting Histories and Imagining Futures—by extending Emancipatory Artificial Intelligence (EAI) as a lens for education research; 2) center Edmund W. Gordon’s work on communicentric bias, equitable assessment, and supplementary education to decenter dominant perspectives; 3) develop a research agenda that exposes algorithmic harms, embraces data sovereignty, and reimagines AI education for liberation and human flourishing; and 4) build interdisciplinary bridges across divisions and SIGs focused on learning, measurement, social context, policy, technology, and race-critical inquiry.

Overview: This honorary presidential session convenes scholars working at the intersection of educational equity and algorithmic justice to elaborate EAI, which situates contemporary algorithmic harms within centuries of racial classification, surveillance, and communicentric bias. Panelists will “unforget” racialized histories by examining AI surveillance in schools, platform governance across educational ecosystems, and biases within psychometrics and generative language models. They will also imagine futures in which critical computational methods, AI literacy, equity ethics, and supplementary education empower marginalized communities through community-controlled data practices and democratic participation.

Significance: The session advances theory and method by linking emancipatory data science and EAI to Gordon’s program of equitable assessment and supplementary education. It interrogates how quantification, psychometrics, and platform infrastructures mediate power among commercial, technical, and educational actors, and proposes governance models that promote data sovereignty. By consolidating evidence that AI-driven surveillance intensifies racial disparities and that psychometrics must contend with emergent AI ethics, the session constructs a coherent agenda for measurement, learning sciences, and policy. Cross-divisional relevance includes Divisions C, D, G, I, and L; pertinent SIGs include Design & Technology, Instructional Technology, TICL, TACTL, Computer and Internet Applications in Education, Critical Examination of Race/Ethnicity/Class/Gender, and Research Focus on Black Education.

Structure: A roundtable format will foster inclusive, culturally aligned engagement. Attendees rotate among panelists for small-group dialogues focused on: (1) unforgetting racialized histories of AI and assessment; (2) imagining models of safety beyond carceral surveillance; (3) re-envisioning quantification and psychometrics through critical and cybernetic lenses; (4) platform governance and data sovereignty; (5) cultivating anti-racist AI literacy and equity ethics; and (6) integrating supplementary education with AI-driven learning. The session reconvenes for a moderated plenary to synthesize insights into a shared research and policy agenda, articulate design principles for equitable AI systems and assessments, and identify actionable collaborations.
