Paper Summary

“A Path to Activism”: CS Teachers Developing Critical Agency with Algorithmic Systems through Algorithm Auditing

Thu, April 9, 4:15 to 5:45pm PDT, Los Angeles Convention Center, Floor: Level Two, Room 515B

Abstract

Objectives
Today more than ever, computer science (CS) teachers and students need to understand the foundations of artificial intelligence and machine learning (AI/ML) applications to responsibly interact with and critically evaluate those systems (Long & Magerko, 2020). Because AI/ML systems are “black-boxed” behind proprietary code and massive datasets, critiquing them can seem out of reach; even their designers have difficulty understanding them (Smith et al., 2023). Still, a majority of CS teachers do not “see the importance of covering computing’s role in perpetuating biases… and other inequities in the classroom” (Koshy et al., 2021). CS teachers are vital to empowering students to critically engage with AI/ML systems, yet little is known about how teachers themselves grow in their understanding of and engagement with these systems.

Theoretical framework
In this paper, we draw on the framework of computational empowerment, which focuses on developing the skills and reflexivity to understand and engage “critically, curiously, and constructively” with digital technology in everyday life and society at large (Smith et al., 2023). In the context of AI/ML, everyday algorithm auditing, in which people systematically and empirically query algorithmic systems to “draw conclusions about the algorithm’s opaque inner workings and possible external impact” (Metaxa et al., 2021), can foster computational empowerment.

We engaged five experienced CS teachers in one year of participatory design focused on introducing algorithm auditing: learning what it is, collaboratively designing a set of lessons for high school CS classrooms, and implementing those lessons. Here, we ask: How do CS teachers grow in their critical understanding of and engagement with AI/ML systems over the course of a year designing and implementing algorithm auditing in their classrooms?

Methods and Data
Our research used iterative, thematic qualitative data collection and analysis informed by grounded theory (Charmaz, 2000). Data included three sets of teacher interviews: individual pre-interviews, post-interviews after implementation, and a focus group to member check and illuminate findings. Analysis involved three phases of iterative open coding and refinement with both researcher and teacher members.

Results
Three major themes stood out across the data, summarized only briefly here. First, teachers’ experiences of algorithmic justice were rooted in their situated personal and teaching experiences, filtered through their roles as teachers (e.g., what would engage students) and the specific communities of students with whom they worked (e.g., students’ interests, race and ethnicity, educational background). Second, teachers became more critical of AI/ML systems over the course of the year, with this criticality carrying over into everyday life (e.g., conversations with family) as well as classroom practice. Third, algorithm auditing became a source of hope and agency, a means of engaging with AI/ML systems. As one teacher put it, “you can actually do something.”

Significance
This research offers insights into the situated nature of teachers’ entry into AI/ML, and how they cultivate critical agency for themselves and their students amidst a society shaped by powerful and pervasive AI/ML systems.
