Paper Summary

A Socially Relevant AI/ML (Artificial Intelligence/Machine Learning) Curricular Module for High School Students (Poster 11)

Fri, April 12, 9:35 to 11:05am, Pennsylvania Convention Center, Floor: Level 100, Room 115B

Abstract

Objective: The Computer Science Frontiers (CSF) project introduces teachers and high school students to emergent CS topics, including artificial intelligence and machine learning (AIML), through curricular modules designed to engage minoritized students, especially females (who currently comprise only 19% of CS undergraduates). Drawing on the literature on strategies for engaging women in computing, the pedagogy uses project-based learning (Blumenfeld et al., 1991), pair programming (Werner et al., 2004), societal implication discussions (Khan et al., 2016), and programming activities that connect relevant real-world and social issues to technologies (Fisher & Margolis, 2003). Here we describe the curriculum design and pilot of the AIML module, which aims to raise high school students' interest in and awareness of AIML, especially issues of algorithmic justice and bias.
Theoretical Framework: The AIML curriculum draws on the AI4K12 framework (Touretzky et al., 2019). Early in the curriculum, students are introduced to interrogations of bias and ethics in AIML, which then serve as a throughline across all the activities. In the AI & Drawing activity, students explore Google’s Quick, Draw! app. They examine the datasets and watch a video that explains how gathering millions of drawings helped Google train its ML models. Students examine how gendered biases crept into the training of Quick, Draw! by watching a video and discussing how such bias could be mitigated or avoided. Students also interrogate the ethics of creating an app like Quick, Draw! that collects data from unsuspecting users, and they watch and discuss The Truth About Algorithms. In the ‘Ethics and Real-World Applications’ unit, students examine specific real-life examples of algorithmic (in)justice in facial recognition, targeted advertising, and the role of AI in the criminal justice system.
Methods and Data: Six female (2 Black, 3 White, 1 Asian) and two male (1 Black, 1 White) high school teachers participated in teacher professional development (PD) preceding a summer camp for high school students. Twenty-five students (10 identified as female, 15 as male) from high schools across Tennessee and North Carolina participated in the camp. We administered pre- and post-surveys with 52 Likert-scale questions in the following subcategories: confidence, perception, interest in computing, interest in computing careers, and curriculum content knowledge. We conducted t-tests on the pre- and post-camp surveys to compare female and male participant responses before and after attending the summer camp.
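For illustration only, a minimal sketch of how such pre/post and between-group comparisons might be computed in Python with SciPy; the subscale scores below are placeholders, not the study's data, and this is not the authors' actual analysis code:

    import numpy as np
    from scipy import stats

    # Placeholder pre/post subscale means for female participants (assumed data)
    pre_female = np.array([3.1, 2.8, 3.4, 2.9, 3.0])
    post_female = np.array([3.9, 3.5, 4.1, 3.6, 3.8])

    # Paired t-test: did female participants' scores change from pre to post?
    t_stat, p_value = stats.ttest_rel(pre_female, post_female)
    print(f"Female pre/post: t = {t_stat:.2f}, p = {p_value:.3f}")

    # Between-group comparison of gain scores (post - pre), female vs. male,
    # using Welch's t-test; male gains below are likewise placeholders
    gain_female = post_female - pre_female
    gain_male = np.array([0.2, 0.4, 0.1, 0.3, 0.2])
    t_stat, p_value = stats.ttest_ind(gain_female, gain_male, equal_var=False)
    print(f"Female vs. male gains: t = {t_stat:.2f}, p = {p_value:.3f}")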
Results & Significance: After piloting the materials, we found a significant positive increase in female participants' confidence in AIML content, self-efficacy in computer science, and career identity after attending the AIML camp. Compared to male students, female students exhibited statistically significant and stronger gains in self-efficacy in computing, confidence in communicating with computer scientists, and computing content knowledge. Open-ended responses about takeaways from the camp reflected its impact on students' views of AIML and, especially, on the ethical issues related to AIML.
