Objectives: The widespread adoption of artificial intelligence and machine learning (AI/ML) creates an urgent need to support teenagers in developing AI/ML literacies. Young people must gain knowledge not only about how these technologies function but also about their potential societal impacts. Most critically, teenagers need skills to identify algorithmic biases and take action when these systems cause harm. While research has shown that teenagers are able to identify algorithmic biases and harms (Solyst et al., 2023; Solyst et al., 2025; Morales-Navarro et al., 2024), translating these insights into approaches that support systematic, empirical examinations of AI/ML systems remains an open challenge. This study investigates how teens engaged in AI auditing to identify and understand biases in generative AI/ML TikTok filters.
Theoretical framework: Within the literature on computational empowerment, Iversen et al. (2018) define deconstruction as the process of “critically examining and understanding how meaning and intentionality are encoded into digital artifacts and practices that impact upon our everyday lives.” Deconstruction involves describing, evaluating, and reflecting on the values and intentions embedded in sociotechnical systems used in everyday life, as well as considering their possible implications (Dindler et al., 2020; Schaper et al., 2022). In this poster, we introduce AI auditing as a five-step method that supports young people in deconstructing everyday AI/ML algorithms. Auditing is an established strategy in algorithmic accountability and human-centered computing research for systematically investigating, from the outside in, how AI/ML systems behave (Morales-Navarro et al., 2025). Algorithm auditing is systematic in that it involves “repeatedly querying an algorithm and observing its output to draw conclusions about the algorithm's opaque inner workings and possible external impact” (Metaxa et al., 2021).
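To make the "repeatedly querying and observing" move concrete, the following is a minimal sketch of an audit loop. It is purely illustrative: apply_filter is a hypothetical stand-in for the generative filter being audited (the workshop queried TikTok's actual filter), and the input descriptors are not the categories participants chose.

```python
import csv
import itertools

# Hypothetical stand-in for the generative filter under audit. Its toy
# behavior (the same "beautifying" edit for every input) exists only so
# the sketch runs end to end; it is not the real model.
def apply_filter(description: str) -> str:
    return f"smoothed skin, lightened tone ({description})"

# A small, illustrative grid of input descriptors.
ages = ["teen", "adult", "older"]
genders = ["woman", "man", "nonbinary person"]

with open("audit_log.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["input", "output"])
    # Repeatedly query the system and record every input/output pair --
    # the core move of an algorithm audit (Metaxa et al., 2021).
    for age, gender in itertools.product(ages, genders):
        prompt = f"a {age} {gender}"
        writer.writerow([prompt, apply_filter(prompt)])
```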
Methods: We conducted a participatory design workshop with a group of 14 teens (ages 14–15) in a two-week summer program. Workshop activities were designed to support participants in systematically investigating potentially harmful algorithmic biases. Based on screen recordings of participants’ work and the files of the collaborative audit (a spreadsheet containing the inputs and outputs of 1,200 tests), we present a descriptive case study (Yin, 2018) of how teenagers engaged in auditing the generative AI model that powers TikTok filters. In addition to describing participants' experiences throughout the auditing process, we triangulated their findings to examine whether the workshop design supported participants in identifying potentially harmful algorithmic biases, as sketched below.
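The sketch below shows one simple way such triangulation can be operationalized over an audit spreadsheet: tallying how often a suspected harm appears per input group so that a claimed bias rests on repeated tests rather than a single anecdote. The rows, column names, and values are assumptions for illustration, not the participants' actual data.

```python
from collections import Counter

# Illustrative rows in the shape of the collaborative audit spreadsheet:
# each test pairs an input descriptor with whether the group judged the
# filter's output to show the suspected harm (hypothetical columns).
rows = [
    {"input_group": "teen girl", "harm_flagged": True},
    {"input_group": "teen girl", "harm_flagged": True},
    {"input_group": "teen boy", "harm_flagged": False},
    {"input_group": "older woman", "harm_flagged": True},
    {"input_group": "older man", "harm_flagged": False},
]

# Count total tests and flagged tests per input group, then report the
# rate at which the suspected harm recurred for each group.
totals = Counter(r["input_group"] for r in rows)
flagged = Counter(r["input_group"] for r in rows if r["harm_flagged"])

for group in totals:
    rate = flagged[group] / totals[group]
    print(f"{group}: {flagged[group]}/{totals[group]} tests flagged ({rate:.0%})")
```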
Results: Participants drew on their everyday experiences and understandings of bias, selecting inputs based on the social dynamics they observed in their communities (related to race and gender) and contributing ideas about age-related biases that are uncommon in professional audits but were particularly salient to them. By triangulating participants' findings, we confirmed that the workshop design may support youth in conducting audits that reach evidence-based, credible conclusions.
Significance: This study highlights the potential for auditing to inspire learning activities that foster AI literacies, empower teenagers to critically examine AI systems, and contribute fresh perspectives to the study of algorithmic harms.