Establishing a Need for AI-Based Formative Assessment for Open-Ended Simulation-Based Science Learning Tasks (Poster 1)

Thu, April 11, 10:50am to 12:20pm, Pennsylvania Convention Center, Floor: Level 100, Room 115B

Abstract

Objectives
Students may not fully explore the computational models underlying complex systems that create natural hazards. This study was carried out to establish instructional and pedagogical needs for AI-based formative assessment that can facilitate students’ conceptual understanding when simulations ground their learning experience.
Theoretical Framework
Experiential Learning Theory (Falloon, 2019; Kolb, 1984) describes a cycle of concrete experience, observation of that experience, deeper sensemaking and the development of explanations, and further experimentation. We consider students’ use of a simulation the source of their concrete experience, observation, explanation, and further experimentation. We investigated the extent to which students’ simulation experience contributed to their learning of wildfire concepts after controlling for teacher variation and student variables.
Methods, data sources, and analysis
This study involved 1,040 middle and high school students who used an interactive computer simulation, the Wildfire Explorer, in 10 simulation tasks embedded in an online wildfire module. Students took the wildfire test before and after the module and were taught by 18 teachers in rural, urban, and suburban schools across the U.S. We applied a mixed-effects generalized linear model to the wildfire posttest score. The wildfire test (Cronbach’s alpha = 0.86) consisted of 16 multiple-choice items scored 0 (incorrect) or 1 (correct) and 13 open-ended items scored from 0 to 4 depending upon the number of scientific features, concepts, or data included. We entered four student demographic variables as fixed effects (school level, race, English, and gender) and three covariates: module completion percentage, pretest score, and total Simulation Experience (SE) score (the sum of SE scores across the 10 simulation tasks). Each SE score was based on a rubric assessing how closely the student set up and ran the simulation with the parameters needed to produce and display the phenomenon required by the task. The teacher variable was entered as a random effect. We also examined teachers’ post-survey responses related to demographics, teaching backgrounds, and teaching strategies with the module.
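The analysis above can be sketched with statsmodels' linear mixed-effects model: student-level predictors enter as fixed effects and teachers as a random intercept. This is a minimal illustration on synthetic data; the column names (`se_total`, `completion`, `pretest`, `teacher`) and the simulated effect sizes are assumptions for demonstration, not values from the study.

```python
# Minimal sketch of a mixed-effects model with teacher as a random effect.
# All variable names and simulated coefficients are illustrative assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n, n_teachers = 300, 18
df = pd.DataFrame({
    "teacher": rng.integers(0, n_teachers, n),       # grouping variable
    "school_level": rng.choice(["middle", "high"], n),
    "gender": rng.choice(["female", "male"], n),
    "completion": rng.uniform(0.5, 1.0, n),          # module completion %
    "pretest": rng.normal(20.0, 5.0, n),
    "se_total": rng.normal(30.0, 8.0, n),            # total SE score
})
# Simulate a posttest score driven by pretest and SE plus teacher variation.
teacher_effect = rng.normal(0.0, 2.0, n_teachers)
df["posttest"] = (0.5 * df["pretest"] + 0.3 * df["se_total"]
                  + teacher_effect[df["teacher"]] + rng.normal(0.0, 3.0, n))

# Fixed effects for student variables and covariates; random intercept
# per teacher via the groups argument.
model = smf.mixedlm(
    "posttest ~ school_level + gender + completion + pretest + se_total",
    df, groups=df["teacher"])
result = model.fit()
se_coef = result.params["se_total"]   # fixed-effect estimate for SE score
print(result.summary())
```

With real data one would inspect the fixed-effect table for significance and the group variance for teacher variation, mirroring the effects reported in the Results.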
Results
We identified five significant effects, in order of magnitude: total SE score, teacher variation, module completion, pretest score, and gender (favoring female/non-binary students). Teacher variation was explained by differences in teaching strategies used during the wildfire module: (1) eliciting students’ ideas prior to instruction; (2) asking students to make predictions and test ideas using the Wildfire Explorer; (3) pulling students’ ideas together through a whole-class discussion; (4) asking students to participate in small-group discussions to help them work through concepts; and (5) having students use climate data to think about future risks and impacts.
Significance
This study illustrates the importance of designing open-ended simulation-based learning tasks that support conceptual development about natural hazards by guiding initial exploration, prompting descriptive or reflective observations, providing opportunities for sensemaking and explanation, and accommodating further experimentation. While individual students’ use of simulations is important, teachers’ active facilitation of the module is also critical. An effective AI-based formative assessment system can build on these results to coordinate student learning and teacher classroom management.