Paper Summary
Scale Reasoning in Immersive Virtual Reality: Capturing Middle School Students’ Learning

Sat, April 13, 3:05 to 4:35pm, Pennsylvania Convention Center, Floor: Level 100, Room 117

Abstract

Assessing applications of immersive virtual reality (VR) within learning spaces hinges on the ability to capture student learning. However, research on VR in K-12 education is limited (Luo et al., 2021), with no consensus on its impact on science learning outcomes (Matovu et al., 2022). For our NSF-funded project, we utilized VR to address learners' difficulty in discerning the size and scale of objects referenced in science education standards (e.g., Author et al., 2022; Author et al., 2015) by providing virtual experiences with objects beyond everyday experience. We discuss methods for assessing learning during the implementation of [name blinded] VR, an environment where students grow and shrink to the size of entities ranging from 10⁻¹⁰ m to 10⁹ m (Figure 5.1).

Background
Embodied cognition posits that there is an inextricable relationship between the mind and body (Wilson, 2002). VR can elicit embodied experiences with objects beyond everyday experience, which may support the development of accurate conceptions of scale. Magaña et al.'s (2012) framework to characterize and scaffold size and scale cognition (FS2C), which identifies five fundamental ways of thinking about size and scale, guided the development and functionality of [name blinded] VR (i.e., the ability to grow and shrink).

Methods
Using head-mounted displays, students (n = 32) at a middle school primarily serving underrepresented minorities utilized [name blinded] VR during a co-designed lesson on energy production. Students collaboratively researched an energy production type (e.g., hydropower), created a scale with [name blinded] VR, established local connections, and analyzed misconceptions. At baseline and conclusion, students completed an adaptation of the [name blinded] (Author et al., 2023a,b) aligned with the FS2C and validated through expert and target population review, with good reliability (Cronbach's alpha = 0.83). Observations were conducted holistically and through the Reformed Teaching Observation Protocol (RTOP; Sawada et al., 2000) during both non-VR and VR instructional periods. Following the VR lesson, we conducted student (n = 10) and collaborating teacher (n = 3) interviews. Our multi-pronged approach targeted student affective and cognitive outcomes, teachers' perspectives, and classroom structures and interactions.

Results
Students' time to complete the [name blinded] was as expected (~25 min). Students appeared to understand the test language, which was provided in English and Spanish and supported by a glossary. Some students were observed answering at random, and analysis of their responses revealed unintended processes (e.g., not labeling groups by size). These results indicated the need for detailed interaction metrics (e.g., response time). Interview data illuminated [name blinded] results (e.g., by explaining a newly observed misconception) and affective aspects of the VR experience. RTOP observations detailed teachers' instructional patterns and established differences between classrooms and across VR and non-VR lessons. The holistic observations complemented the RTOP data and revealed student-student interactions.

Significance
Capturing learning during VR interventions is dynamic, requiring multi-pronged techniques. Our findings indicate that future research should capture student-student interactions, include follow-up interviews to contextualize administered assessments, and collect detailed interaction metrics to build a holistic view of learning.
