Conceptualizations of “AI literacy” increasingly emphasize ethical engagement and analysis of social impact (Ng et al., 2021). In this paper, we explore one promising pedagogical tool for AI literacy: an AI ethics-focused card game for middle school students. Gameplay often affords emotionally safe ways of exploring diverse perspectives and ideas in complex settings, ranging from interpersonal ethics to systemic phenomena (Simkins & Steinkuehler, 2008). Especially when combined with role play, our game’s design affords a particularly valuable way of engaging with ethical thinking as an actively negotiated, authentically applied process rather than a fixed set of externally imposed rules (Schrier, 2014).
We discuss AI Audit (Author et al., 2023), a card game aimed at surfacing critical conversations about possible risks of AI systems and businesses, and about how to mitigate or respond to them. Players role-play as owners of AI-driven businesses and as “civilians” accusing other players’ businesses of causing harms (e.g., over-policing neighborhoods, taking over existing human jobs). Accused players defend their businesses either by arguing that their implementation avoids the alleged harm or by playing one of many possible suggested feature cards. This dialogue-based approach invites a diverse family of resources and reasoning around ethics and accountability from students (and facilitators) about the validity (or not) of holding AI-driven “businesses” to account for different kinds of harms.
In our analysis, we sought to understand how students engaged in argumentation around the harms and benefits of AI within the structure of the AI Audit game. We coded the argumentative strategies and resources one group of students (12 seventh and eighth graders in a STEM enrichment program) used as they played the game for approximately 40 minutes. For example, in one turn of gameplay centered on “face filter technologies,” students’ argumentative appeals included drawing on personal experience (e.g., “I have anxiety” or “like the dog filter in Snapchat”), fiction (e.g., reading “serial killer books”), news and current events (e.g., objects being mistaken for guns), broad ideas about human nature (e.g., kids will inevitably cyberbully each other), and understanding of laws and policy (e.g., terms of service). Additional argumentative resources included bending the rules and objectives of the game itself, making the game an ethical playground in its own right; students also drew on a combination of social and informational argumentative strategies to “win,” from forming alliances to citing facts about the racial makeup of the prison population.
Through such projects, we want to equip students with experience contesting the gap between what is (legal or allowed) and what should be (the responsibility of different stakeholders) – a critical necessity in the ongoing battle of imagination and worldmaking for the future (Benjamin, 2024). Games like AI Audit can provide openings for youth to develop the novel argumentative practices and logics needed to respond to the diverse emerging roles that technological and social systems will play in the future.