Paper Summary

Data is Power: Positioning K-12 Youth as Participatory, Sociotechnical AI Justice Researchers

Thu, April 9, 7:45 to 9:15am PDT (7:45 to 9:15am PDT), JW Marriott Los Angeles L.A. LIVE, Floor: 2nd Floor, Platinum I

Abstract

The breakneck adoption of generative AI tools like ChatGPT has led many of YDSL’s K-12 students to become increasingly anxious about what AI means for their futures. While teaching AI literacy, we discovered new forms of race, gender, and sexuality bias in the most widely used large language models (LLMs), which prompted a year-long journey of collaborative education and research with our students. This culminated in a community-driven AI ethics research study demonstrating that LLMs amplify patterns of erasure, subordination, and stereotypes by over three orders of magnitude in creative writing settings (Author, 2024). Arising from data colonialism (Couldry & Mejias, 2019), these harms advance techno-cultural imperialism (Author, 2025), resulting in psychosocial harms for learners (Author, 2024). Together, these harms contribute to a pernicious cycle of oppression, further impacting communities already minoritized and underrepresented in traditional STEM pathways and the AI industry (Author, Year). Any AI literacy initiative aiming to break this cycle must empower minoritized students to address sociotechnical issues through a critical quantitative and historical lens.

In this study, we present the results of a pilot that engages over 150 urban students from minoritized communities. Our work draws upon frameworks of culturally relevant pedagogy (Ladson-Billings, 2008), emancipatory data science (Author, Year), participatory action research (Cammarota & Fine, 2008), and critical computing (Lee & Soep, 2016) to enable minoritized learners to conduct participatory AI ethics research as an application of Common Core Math, ELA, NGSS, and CSTA content and practices.

Data and Methods

We describe lessons from YDSL’s Data is Power program, which consists of four AI ethics modules spanning several domains (justice, surveillance, labor, history, and environmental systems). Topics were identified via participatory co-design sessions with over 60 urban educators and AI ethics researchers. Retrospective pre-post data from three high school classrooms (Chicago, Phoenix, Miami) and one elementary classroom (Oakland) revealed significant gains in teacher self-efficacy in teaching AI content.

Results and Significance

K-12 students and educators are uniquely positioned to benefit from emancipatory AI curricula (Author, Year) in classrooms where they simultaneously learn about and engage in AI research for their empowerment. Data is Power participants conducted peer-reviewed research on topics ranging from environmental issues to how academic pressure may drive AI adoption and impede learning. Participants presented their work at the first K-12 workshop of the ACM Conference on Fairness, Accountability, and Transparency (FAccT) (Author, 2025). This event opened participants' eyes to the necessity of critical AI ethics research at a time when defunding of education is rampant and unchecked AI adoption is widespread.
Educators reported significant gains in students' critical thinking about how AI tools are built, who is included or excluded, the biases these tools generate, and how AI reinforces existing inequalities, suggesting that critical AI ethics education may increase student engagement, curiosity, and confidence by elevating knowledge from minoritized communities. We also identify areas for improvement, including time constraints, students' readiness to engage with AI concepts, and teachers' desire for earlier planning and alignment.
