Using Generative AI for Fairness Inquiry (Poster 4)

Thu, April 11, 4:20 to 5:50pm, Pennsylvania Convention Center, Floor: Level 100, Room 118B

Abstract

Objectives or purposes: Recent advances in generative AI tools such as large language models and realistic art generators have garnered interest from educators (Baidoo-Anu, 2023), alongside discussion about how their biases can further stereotypes based on race, gender, ethnicity, age, sexual orientation, and religion (Srinivasan & Uchino, 2021). In this work, we explore how guided investigation of media generated with text-to-image generation (TTIG) algorithms can serve as a fairness inquiry tool for high school teachers to use in their classrooms.
Theoretical framework: We drew on inquiry activities that promote critical thinking and support active knowledge construction (Fine & Desmond, 2015; Hamlin & Wisneski, 2012; Samarapungavan et al., 2008; Thaiposri & Wannapiroon, 2015; Wasis, 2016), and on Freire’s theory (2000), engaging students in teacher-guided discourse about their observations of fairness in generated media.
Methods and Data Sources: We developed a module around TTIG algorithms to help high school teachers learn how these algorithms work and how to discuss their societal and ethical implications in classrooms. The module was piloted with 16 teachers across three workshops. Teachers were presented with examples of generated media containing stereotypes: for instance, an AI-generated image of a “pretty girl” that assumes the subject is white, young, and blue-eyed, or AI-generated images of a “housekeeper” that assume the subject is Latin American, female, and middle-aged. Teachers also used a TTIG tool to generate their own examples of different identities, producing images related to genders, occupations, races, and ethnicities. Teachers then developed discussion topics to guide their students through an inquiry-based critical thinking activity about the fairness of generative AI tools.
Results: Several teachers expressed surprise at the stereotypes in the generated media, with some explicitly terming them “racist”. All teachers felt it was critical for students to learn about bias in generative AI tools so that they understand the algorithms are flawed. While discussing potential reasons for stereotypes in images, one teacher remarked, “because that is what it sees in the media”, making a connection to a potential source of training data. Across the topics teachers developed for classroom discussion, the main themes were asking students (1) to imagine their own visualization of a natural language prompt and compare it to the AI’s, (2) to recognize what is “off” about a generated image, (3) to tweak the natural language prompt (such as changing the gender of the subject) and investigate the newly generated image, (4) to discuss the potential causes and consequences of bias in AI/ML tools, and (5) to ideate potential ways to mitigate bias.
Significance: The paper presents an inquiry-based strategy to help teachers teach students both about the technical workings of generative AI algorithms and about critical thinking regarding their ethical implications. As these tools proliferate in classrooms and homes, critical inquiry into generative AI algorithms and generated media is an essential skill students need to develop in order to prevent the spread of harmful stereotypes.