Paper Summary

Gathering and Interpreting Process Data From Interactive Simulations Guided by Cognitive Science Theory and Methods

Sat, April 9, 2:15 to 3:45pm, Marriott Marquis, Floor: Level Two, Marquis Salon 12

Abstract

Educational simulations are widely used for learning, but using them to make inferences about students in large-scale assessments remains an open challenge. Interactive STEM simulations facilitate reasoning about scientific phenomena, allowing users to manipulate variables, observe outcomes, and draw inferences about underlying processes. They can represent a variety of problems, including threshold discovery, theory generation, and hypothesis testing. Moreover, they can target both scientific inquiry practices and phenomenon-specific knowledge, consistent with NGSS goals. In this project, we explored the potential of simulations for gathering rich evidence about cognition in the form of process data, captured in log files of students’ interactive behaviors.
A PhET simulation originally created for learning about the concentration of solutions was selected for the study. In adapting the simulation for assessment, we targeted specific aspects of science cognition and made users’ intentionality more evident. For example, the original interface involved shaking a container of drink mix into water and adjusting flow from a faucet while observing changes in concentration. Instead, we introduced discrete trials in which students had to plan a priori how much to add (both solute and solvent) and press a button when they were ready to run a test and collect data. To examine the effects of interactive affordances, we experimentally manipulated the degrees of freedom available (e.g., continuous versus categorical input variables). The simulation was also embedded within assessment content, including test questions and output tables. We enhanced back-end data capture to generate log files of user and system events, creating an interpretable external trace of the actions students took while using the simulation to answer questions. Specifically, we defined a grain size and log file structure for captured events that facilitates meaningful, theory-driven, construct-relevant inferences from the process data.
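
To make the grain-size idea concrete, the sketch below shows one plausible way such a log could be structured and grouped into trials. It is a minimal illustration only: the event names and fields are hypothetical, not the actual schema used in the study.

```python
import json

# Hypothetical JSON Lines log: one event per line, at the grain size of
# a deliberate user action or system response (not raw mouse movements).
sample_log = """\
{"user_id": "u042", "trial": 1, "time_ms": 5312, "actor": "user", "event": "set_solute_amount", "value": 0.25}
{"user_id": "u042", "trial": 1, "time_ms": 7104, "actor": "user", "event": "set_solvent_volume", "value": 0.50}
{"user_id": "u042", "trial": 1, "time_ms": 9880, "actor": "user", "event": "run_trial"}
{"user_id": "u042", "trial": 1, "time_ms": 9955, "actor": "system", "event": "report_concentration", "value": 0.5}
"""

# Group events into the discrete trials described above, so each planned
# test becomes one interpretable unit of analysis.
trials = {}
for line in sample_log.splitlines():
    event = json.loads(line)
    trials.setdefault((event["user_id"], event["trial"]), []).append(event)

for key, events in trials.items():
    print(key, [e["event"] for e in events])
```
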
A study was run with about 300 users on Amazon Mechanical Turk, comparing two simulation versions (high versus low degrees of freedom, between subjects). One research question focused on how the different user interface characteristics affected students’ interactions with the system and their resulting scores. A second, related research question focused on how to analyze the log files of student behaviors in ways that revealed differences in student cognition. We sought characterization methods that would reveal patterns of behavior representing construct-relevant differences among students, consistent with our a priori predictions and with previous literature on scientific thinking and performance.
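
The first question implies a simple between-subjects contrast on scores. As a hedged illustration of that contrast (the score arrays below are synthetic placeholders, not the study’s data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic placeholder scores for the two interface conditions.
scores_high_dof = rng.normal(0.62, 0.15, 150)  # continuous inputs
scores_low_dof = rng.normal(0.58, 0.15, 150)   # categorical inputs

# Welch's t-test avoids assuming equal variances across the conditions.
t_stat, p_value = stats.ttest_ind(scores_high_dof, scores_low_dof, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```
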
To answer these questions, we took a data characterization approach that has been successful in previous research: it is grounded in Evidence-Centered Design for assessment (Mislevy, Almond, & Lukas, 2003) while applying, and interpreting results from, Educational Data Mining and traditional statistical techniques. Methods include exploratory cluster analyses (e.g., K-means) of specific performance sequences and modeling of related behaviors in relation to performance, as sketched below. Data and analyses from this work will be presented, along with recommendations and implications for these new evidence types.
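
As a rough sketch of the exploratory clustering step, assuming per-student features have already been extracted from the log files (the feature definitions here are illustrative, not the study’s actual evidence identification rules):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)
# Hypothetical per-student features: number of trials run, mean planning
# time per trial (s), proportion of trials varying one variable at a time.
features = rng.random((300, 3)) * np.array([20.0, 60.0, 1.0])

# Standardize so no single feature dominates the distance metric.
X = StandardScaler().fit_transform(features)

# K is chosen arbitrarily here; in practice it would be selected using
# fit indices (e.g., silhouette scores) and cluster interpretability.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print(np.bincount(kmeans.labels_))
```
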
