Objectives
As the AI landscape continues to evolve rapidly, tech companies, advocacy groups, legislators, and individual citizens grapple with how to effectively minimize risk, mitigate harm, and harness this powerful technology for the good of society. Yet the critical examination of the technologies that are ubiquitous in students’ and educators’ lives, technologies that shape education, housing, employment, criminal justice, and democracy and that have disproportionate impacts on marginalized communities, is currently only a peripheral part of their formal educational experiences. While it is critical to delineate guidance for how educators, students, and schools use AI tools to advance education, it is at least equally important that students and educators interrogate ethics, equity, and justice in the creation, deployment, and use of AI and all technologies as a core component of a robust K-12 education. We argue that the critical interrogation of AI’s development and impact must be a core component of K-12 computing education, and that the examination of all technologies must intentionally center racial and social justice.
Theoretical Framework
The theoretical framing driving this work is our (redacted name) center’s commitment to equitable computer science. Our commitments align with Madkins and colleagues’ (2020) work on equity pedagogies in computer science, which positions “teaching and learning as inseparable from pursuing justice while attending to students’ access to rigorous instruction and equitable outcomes” (Madkins et al., 2020, p. 3).
This positioning requires that educators address systemic racism within education and CS. To work toward a full socio-cultural consciousness of power, educators need to develop an awareness of historic and current oppression in both education and CS. Further, it requires that educators practice pedagogies that connect with students and teach toward justice.
Methods
We drew on the expertise of critical technology scholars, including Noble (2019), who demonstrated that racism is entwined with capitalism in machine-learning algorithms; Bender and colleagues (2021), who cautioned against the implications of creating massive large language models (LLMs); Gilliard (2017), who critiqued surveillance technology; and other computer science and education technology scholars (not named, as they are also authors, discussants, and chairs in this symposium), to develop a series of guidelines for K-12 educators working to equitably implement AI in their classrooms.
Results and Significance
Responsible AI and Tech Justice is a robust and comprehensive course of study that applies an explicit racial and social justice lens to equip all students with the knowledge and resources to critically interrogate the ethical and equitable development, deployment, and impacts of AI, while simultaneously challenging, disrupting, and remedying the harms that these technologies can cause in individuals’ lives, communities, and society at large.
The guide is organized around six core components:
Examine the AI technology creation ecosystem
Interrogate the complex relationship between technology and human beings
Explore the impacts and implications of AI on society
Interrogate personal usage of AI technologies
Build a critical lens in the collection, usage, analysis, interpretation, and reporting of data
Minimize, mitigate, and eliminate harm