Objectives: Recent and rapid innovation in generative AIML has led to increasing interest from youth. In particular, youth are gaining more exposure to generative AIML for entertainment, creative, and educational purposes. Given the changing landscape and ubiquity of generative AIML, we aim to understand how youth reason about generative AIML – its outputs, its limitations, and its potential roles in the future.
Theoretical framework: This research builds on prior work exploring children's perceptions of AIML (Druga et al., 2017), as well as their discernment of misinformation, i.e., information that is incorrect but not intentionally harmful (Wardle & Derakhshan, 2017). This work investigates youths' ideas about how generative AIML works, as well as how they reason about correct and incorrect outputs from generative AIML.
Methods and Data Sources: We analyzed data from youth (N = 12, ages 11-14) who participated in a 90-minute educational workshop about generative AIML. The workshop was one session in a series held by a student-run university organization aimed at increasing gender diversity in STEM. The workshop included group discussions about generative AIML to gauge participants' perceptions, a guessing game and follow-up discussion about whether outputs from a text-based generative AIML were correct, and an individual booklet activity to scaffold thinking about how generative AIML may be part of the future. We used a consensus-based, thematic analysis approach (Hammer & Berland, 2014; Braun & Clarke, 2012) on the learners' artifacts and our research notes.
Results: We observed that trust in generative AIML was high in the guessing game, as many learners guessed that most outputs from the text-based generative AIML were correct, even when they were not. Learners' reasoning about generative AIML content and its correctness related to transparency (whether the output showed its reasoning), expected format (the visual appearance and layout of the output), other seemingly correct information (e.g., additional facts or details that appeared accurate), their own prior knowledge (i.e., whether they already knew certain information about the topic at hand), and their expectations of what AIML technology was capable of. When we revealed that a number of the outputs were incorrect, learners were disillusioned and concerned about how the technology could provide misinformation. Some reasoned that the generative AIML was like a chatbot, while others considered it an advanced search engine. Despite growing awareness of its current limitations, learners remained highly optimistic when considering futures with AIML and suggested that it could play a key role in solving justice- and health-focused challenges.
Significance: This work reveals the ways in which youth understand generative AIML and may initially be overly trusting of it. Our findings give insight into how learners' understandings of the potential benefits and drawbacks of generative AIML change with exposure to its limitations, as well as into their mental models of how generative AIML works and its envisioned roles in the future. We additionally discuss potential implications for the use of generative AIML in educational settings.