Paper Summary

Understanding Student Learning in Virtual Reality: Potential of Gesture Speed and Gaze Data

Sat, April 13, 3:05 to 4:35pm, Pennsylvania Convention Center, Floor: Level 100, Room 117

Abstract

Virtual Reality (VR) environments allow students to interact directly with virtual objects. These technology-rich environments support gesture-based science reasoning, akin to other advanced learning environments (Plummer, 2009; Richards, 2012). VR also generates extensive data from various sources: visual (e.g., eye-motion tracking), auditory (e.g., intensity of environmental noise), haptic (e.g., movement, force), and network (e.g., timestamps; Christopoulos et al., 2020). Analyzing these data sources can provide insights into the strengths and weaknesses of educational approaches used in VR, thereby informing future decisions (Siemens, 2013; Slade & Prinsloo, 2013). However, more empirical research is needed on how data gathered in VR learning environments help us understand students' learning outcomes. Most current analyses focus on event-based log data, such as the number of correct or incorrect attempts (e.g., Santamaría-Bonfil et al., 2020). In this pilot study, we therefore explore how students' bodily movements (e.g., gestures and eye gaze) in a VR simulation about cell division could serve as predictive factors for their learning outcomes.
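As a concrete illustration of the kind of analysis we have in mind, the sketch below computes a per-play gesture speed from timestamped 3-D controller positions. The log format and field names are assumptions made for illustration; actual VR platforms export pose samples in platform-specific schemas.

```python
import numpy as np

def gesture_speed(timestamps, positions):
    """Mean hand speed (m/s) for one play-through, given timestamped
    3-D controller positions. The per-frame pose log assumed here is
    hypothetical; real VR exports use platform-specific schemas."""
    t = np.asarray(timestamps, dtype=float)   # seconds
    p = np.asarray(positions, dtype=float)    # shape (n, 3), metres
    dist = np.linalg.norm(np.diff(p, axis=0), axis=1)  # per-frame displacement
    dt = np.diff(t)
    dt[dt <= 0] = np.nan                      # guard against duplicate/bad frames
    return float(np.nanmean(dist / dt))
```

Computing this value once per play-through would yield a per-student trend of the kind discussed below.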
We collected data from seven undergraduates and made two key observations. First, we observed that bodily movements may hint at learning outcomes. Figure 6.1 shows that Chris, who already reported very high self-efficacy in biology (25 out of 25), increased his gesture speed as he played the simulation repeatedly. This change coincided with high learning achievement (pretest score: 2 -> posttest score: 6). Conversely, Joy, who reported lower self-efficacy in biology (11 out of 25), exhibited little change in gesture speed across repeated plays, and her learning achievement was among the lowest of the participants (pretest score: 2 -> posttest score: 3).

Second, we expect that analyzing multiple types of log data against multiple hypotheses can yield a more accurate picture of the learning process and achievement. For example, it is not clear whether Chris increased his gesture speed because he fully understood the learning content, or whether he simply repeated the gesture quickly to clear the simulation without understanding it. We therefore further examined students' learning outcomes by analyzing gaze data. Figure 6.2 visualizes the gaze data of the two students. Joy looked at the hints on the information panel for quite a long time even on her third play of the simulation, suggesting that she had not fully grasped the learning content. In other words, we expect to gain more accurate insight into students' learning achievement when we integrate the results of analyzing these various layers of log data.
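The gaze analysis can be sketched in the same spirit. Assuming a time-ordered log of (timestamp, fixated-object) samples, a minimal dwell-time aggregation looks like the following; the "InfoPanel" label and log schema are illustrative, not the study's actual format.

```python
from collections import defaultdict

def dwell_times(gaze_log):
    """Total dwell time (s) per fixated object, from a time-ordered list
    of (timestamp_seconds, object_name) gaze-hit samples. The schema is
    illustrative; eye trackers export fixations in varying formats."""
    totals = defaultdict(float)
    for (t0, obj), (t1, _) in zip(gaze_log, gaze_log[1:]):
        totals[obj] += t1 - t0  # credit each interval to the object fixated at its start
    return dict(totals)

# e.g., dwell_times(log).get("InfoPanel", 0.0) -> seconds spent on the hint panel
```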
While these data suggest a correlation between gesture speed and learning outcomes, we should be careful about inferring direct causality. Thus, there is a need to collect data from more participants and to examine closely whether an increase in students' gesture speed, together with their interactions with specific objects at those moments, enhances their understanding of the related content.
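With a larger sample, the suspected relationship could be tested directly, for instance with a rank correlation that tolerates small, non-normal samples. The values below are placeholders, not the study's data:

```python
from scipy.stats import spearmanr

# Placeholder values only, NOT the study's data: per-student change in mean
# gesture speed across plays (m/s) and pre-to-post test score gain.
speed_change = [0.12, 0.01, 0.08, 0.15, 0.03, 0.10, 0.05]
score_gain = [4, 1, 3, 5, 1, 3, 2]

# Spearman's rank correlation makes no normality assumption, which suits
# small pilot samples; causal claims would still need a controlled design.
rho, p = spearmanr(speed_change, score_gain)
print(f"Spearman rho = {rho:.2f}, p = {p:.3f}")
```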
