Paper Summary

Middle School Students’ Perceptions of Ethical Implications of Artificial Intelligence/Machine Learning (Poster 1)

Fri, April 12, 9:35 to 11:05am, Pennsylvania Convention Center, Floor: Level 100, Room 115B

Abstract

Objectives: The rapid expansion of artificial intelligence/machine learning (AIML) has brought unprecedented impacts and unintended consequences to society. To thrive in the era of AIML, people need to learn to become alert to the potential harms of AIML technologies. Educators have recognized this need and have incorporated ethics into their AIML education curricula; for example, Forsyth et al. (2021) developed an AIML ethics curriculum using stories. Yet research on how students learn AIML ethics remains sparse (Williams et al., 2022). Such research is critical for providing evidence of the efficacy of ethics instruction and informing the design of AIML education. We present a study investigating students’ perceptions of AIML ethics issues, focusing on the research question “What ideas and skills related to AIML ethics were middle school students able to develop after learning through an AIML literacy curriculum?”
Theoretical framework: Students learned ethics through the “Developing AI Literacy (DAILy)” curriculum, whose activities were designed to address three categories of learning objectives for effective ethics instruction: emotional engagement, intellectual engagement, and particular knowledge (Harris et al., 1996; Newberry, 2004). To ensure that students possessed adequate technical knowledge to make sense of ethics issues, each ethics lesson followed lessons on the related technical concepts. DAILy incorporated a series of case studies engaging students in learning about how AIML can impact people unfairly (e.g., making wrong inferences about job applicants). To prepare students to make ethical decisions, DAILy engaged them in working with flawed AIML models that make faulty predictions and in experimenting with ways to mitigate bias.
Methods & Data Sources: This study involved 58 students (grades 6-8) in online summer camps (3 hours per day for 10 days). Students were recruited from districts with a large percentage of students from historically marginalized groups. We focus on exit interview data from 32 students, analyzing questions related to students’ views of AIML-related ethics issues and their experiences of learning from the AIML ethics lessons. Example interview questions included, “Could you tell me about your experience with the ‘Investigating Bias’ activity?”, “What do you think are the potential benefits of AI? What are the potential harms of AI?”, and “If you are going to build an AI system, what would you do to ensure it’s fair?” Using a grounded theory approach (Birks & Mills, 2015), we analyzed the interview transcripts and categorized emergent themes around student experiences and perceptions of bias.
Results & Significance: Our preliminary analysis showed that, overall, middle school students were able to develop a foundational understanding of AIML ethics. Most students identified causes of AIML bias and articulated solutions, such as using diverse and balanced datasets to mitigate potential bias. Female students of color were particularly engaged in the AIML ethics lessons and were active in brainstorming and experimenting with solutions to minimize AIML bias. The findings point to the success of integrating AIML ethics into the learning of AIML technical concepts: the ethics lessons contextualized the technical concepts and highlighted the interrelatedness of technological tools with their human impacts and societal implications.