Coordinating Empirical and Simulation Evidence During Model Building in Elementary Science

Thu, April 21, 8:00 to 9:30am PDT, Marriott Marquis San Diego Marina, Floor: North Building, Lobby Level, Marriott Grand Ballroom 12

Abstract

Theoretical Framework and Objectives

According to the Grasp of Evidence framework (Duncan et al., 2018), an important dimension of skillful reasoning with evidence is integrating larger bodies of evidence (moving beyond considering just one or two studies). This, in turn, includes learning to coordinate and integrate different types of evidence. In contemporary science, two prominent types of evidence are (1) empirical evidence from experiments, observational studies, etc., and (2) evidence from simulation models of phenomena (Lenhard, 2019). In public controversies, such as those surrounding climate change and epidemics, citizens must make sense of these two types of evidence. However, little is known about how students reason using these two types of evidence. Accordingly, the purpose of this study was to investigate elementary science students’ grasp of the relationship between simulation evidence and empirical evidence.

Methods

Participants were 50 fifth graders from a predominantly white public charter school in the northeastern USA. Using a computer-based modeling tool, students engaged in a five-week modeling unit incorporating both empirical evidence and simulation evidence to explain fish deaths in a local pond. Data sources included video-recorded class discussions and group interactions during modeling, written responses to prompts about simulation evidence and empirical evidence, and post-intervention interviews with 17 dyads.
In this brief proposal, we focus on interview results. The dyads viewed a Venn diagram with two overlapping circles labeled “Simulations” and “Studies,” along with nine different claims about evidence (Table 1). Using the diagram, students classified each claim as (1) true for simulations; (2) true for studies; or (3) true for both. They discussed their classifications.

Results

Table 1 displays the classification results. Most dyads viewed simulations as representing hypothetical situations, whereas they viewed studies as showing “what actually happens.” Most viewed simulations as themselves based on data. A corresponding theme in some conversations was that empirical evidence was seen as providing one grounding for simulations. For example, one dyad stressed that simulations rely on “programming of what happened,” in contrast with studies that rely on data “from exactly that moment.” Further, those who program simulations can base their programs on empirical data. Model building activities indicate that many groups treated the two types of evidence as having equivalent weight in warranting model changes.
Most dyads appreciated that simulations are not always accurate, but they mistakenly believed that empirical studies are always accurate. Relatedly, all students viewed simulations as repeatable, but they disagreed about whether studies are repeatable, suggesting that further attention to reproducibility in science may be warranted, in addition to a focus on the challenges of methodological error.

Significance

This study provides an initial portrait of some strengths in students’ grasp of how to coordinate and integrate empirical and simulation evidence. Yet there is room for growth in reasoning about conflicts between the two types of evidence and in understanding the role of error and reproducibility in both evidence types. With a better understanding of how students reason about these two types of evidence, educators can design learning environments to better support students’ integrative reasoning.

Authors