Paper Summary

Data is Power: Elevating K-12 Youth as Multidisciplinary Sociotechnical AI Justice Researchers

Thu, April 9, 4:15 to 5:45pm PDT, Los Angeles Convention Center, Floor: Level Two, Room 515B

Abstract

Overview: With the breakneck adoption of generative AI tools like ChatGPT, many of our K-12 students are increasingly anxious about what AI will mean for their futures. While teaching AI literacy, we discovered new forms of race, gender, and sexuality bias in the most widely used large language models, which led us to embark on a year-long journey of collaborative education and research with our students. The resulting community-driven AI ethics research showed that generative language models amplify patterns of erasure, subordination, and harmful stereotypes by over three orders of magnitude in creative writing settings.
We find that such biases are associated with psychosocial harms for learners. This creates a vicious cycle that further oppresses communities already minoritized by traditional STEM pathways and underrepresented in an AI industry that continually overlooks discriminatory harms. Any AI literacy effort aiming to break this cycle must empower minoritized students to address sociotechnical issues through a critical quantitative and historical lens.
In this study, we present the results of a pilot that engaged over 150 urban K-12 students from minoritized communities. Our work draws upon frameworks of culturally relevant pedagogy, emancipatory data science, participatory action research, and critical computing to enable minoritized learners to conduct participatory AI ethics research as an application of Common Core Math, ELA, NGSS, and CSTA content and practices.

Data and Methods: We describe findings from [blinded]'s Data is Power program, which consists of four modules teaching AI ethics and justice research across several domains (justice, surveillance, labor, history, and environmental systems). Topics were chosen via participatory design in over 60 interviews with urban educators and AI ethics researchers. Retrospective pre-post data from three high school classrooms (Chicago, Phoenix, Miami) and one elementary classroom (Oakland) revealed significant gains in teacher self-efficacy in teaching AI content.

Results and Significance: We find that K-12 students and educators are uniquely positioned to benefit from emancipatory AI curricula in classrooms where they simultaneously learn about and engage in AI research for their own empowerment. Data is Power participants conducted original peer-reviewed research on a range of topics, from environmental issues to how academic pressure may drive AI adoption and impede learning, and presented their work at the 2025 ACM Conference on Fairness, Accountability, and Transparency (FAccT) in its first K-12 workshop. This experience opened participants' eyes to the possibility and necessity of critical AI research at a time when defunding of education is rampant and unchecked AI adoption is widespread.
Summative feedback from educators indicated significant gains in students' critical thinking about how AI tools are built, who is included and excluded, the biases they generate, and how AI can reinforce existing inequalities. Evidence suggests that critical AI ethics education may increase student engagement, curiosity, and confidence by elevating knowledge from minoritized communities. We also identify areas for improvement, including time constraints, student readiness to digest AI concepts, and teachers' desire for earlier planning and alignment.

Authors