Objectives: Prior work has explored youths’ perceptions of algorithmic bias in the context of AIML (Druga et al., 2017; Long et al., 2020). However, these studies provide limited guidance on how youth can participate in discourse around algorithmic fairness in a way that both draws on their existing knowledge and experiences and safeguards their well-being. We chose the term ‘fairness’ instead of ‘bias’ because prior work (Druga et al., 2021; Lee et al., 2022) showed that ‘bias’ may not be in youths’ vocabulary and that they have limited comprehension of the word itself. Engaging youth in learning experiences around algorithmic fairness that balance agency and safety will support the growing movement to educate them about the social impacts of AIML and computing.
Theoretical framework: We draw from funds of knowledge and sensemaking theories. The funds of knowledge approach posits that learners already possess knowledge from their lives that is de-legitimized because of asymmetrical power relationships in education (Gonzalez et al., 2006; Moll et al., 1992). Sensemaking theory proposes that individuals actively process information from various sources to achieve understanding, rather than reaching an arbitrary pinnacle of knowledge (Dervin, 1998). Because our goal is to investigate how children may engage in conversations around algorithmic fairness that draw from their funds of knowledge, we ground our methods in sensemaking theory to make space for different paths to understanding.
Methods & Data Sources: Drawing from the Slow Reveal Graphs instructional routine from math and data science classroom practice (Laib, 2022), we developed group sensemaking discussions and activities based on scenarios of algorithmic fairness, employing these discussions as both instruments for discovery and blueprints for future classroom practice. We investigated these discussions and activities with two sets of participants, 16 children (ages 8-12) and 15 teenagers (ages 15-17), and conducted inductive thematic analyses on them (Guest et al., 2011).
Results: Our child participants drew primarily from their own individual experiences and perspectives in making sense of algorithmic fairness. In contrast, our teen participants mostly considered larger ecosystems, such as community and society, when reasoning about sources of bias and impacts of unfairness. Additionally, our child participants offered more specific characterizations of users, while our teen participants tended to design for hypothetical “average users” rather than considering the nuances of user populations. Finally, our child participants were especially attuned to gender, race/ethnicity, country of origin, and age, while our teen participants were most attuned to race/ethnicity and economic status, reflecting the different backgrounds of the participants in each study.
Significance: We make two important contributions through this study. First, we contribute to a deeper understanding of youths’ situated knowledge around algorithmic bias in computing more broadly, not only in AIML. This suggests potential entryways for critical engagement with technology. Second, we contribute a blueprint for engaging youth in scaffolded reasoning around algorithmic fairness that balances agency and safety. This informs the design of tools, curricula, and other learning experiences in the movement to educate youth about technology's social and ethical impacts.