Paper Summary

How Does Content Difficulty Impact Physiological Responses and Performance During Learning With Advanced Learning Technologies?

Sun, April 19, 8:15 to 9:45am, Virtual Room


Objectives and Theoretical Framework
Self-regulated learning (SRL) is a multicomponential construct consisting of the planning, monitoring, and regulating processes students can engage in to be active participants during learning (Azevedo et al., 2018, 2019). Studies have shown that students often have difficulty deploying effective self-regulatory processes during learning (Azevedo et al., 2018, 2019). As such, advanced learning technologies have been developed to foster the use of these processes (Taub & Azevedo, 2019).

According to the information processing theory of SRL (Winne, 2018), students engage in four cyclical and iterative phases during learning, using different SRL processes in each phase. During the “using learning strategies” phase, students deploy cognitive learning strategies to ensure they understand the material they are learning. One such strategy is coordinating informational sources (Greene & Azevedo, 2009), in which students integrate information from both text content and images. Research on multimedia learning (Mayer, 2014) and eye tracking (Scheiter et al., 2019) has demonstrated that students process information from text differently than from images, and that they fixate longer on more difficult content. It is less clear, however, how students’ physiological responses vary with levels of content difficulty.

The goal of this study was to investigate how college students coordinate informational sources and respond physiologically while examining text and images during learning with an advanced learning technology, and how these processes affect the accuracy of their responses to content questions about human body systems.

Methods and Preliminary Results
Participants were 89 undergraduate students (72% female) from a large university in the United States. Students learned about nine body systems using MetaTutorIVH (Intelligent Virtual Human), which presented content text and images alongside questions about the function or malfunction of these systems (see Figure 1). Once students felt they had reviewed enough of the content, they responded by selecting the most appropriate answer. Students progressed through 18 trials (2 question types [function vs. malfunction] × 9 body systems). As they did so, we collected eye-tracking data and electrodermal activity.
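The fully crossed trial structure can be sketched in a few lines; the system labels below are placeholders, since this summary does not list the nine body systems individually:

```python
from itertools import product

# Two question types crossed with nine body systems (placeholder labels,
# not the actual systems used in the study).
question_types = ["function", "malfunction"]
body_systems = [f"system_{i}" for i in range(1, 10)]

# The full crossing yields the 18 trials each student completed.
trials = list(product(question_types, body_systems))
print(len(trials))  # 18
```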

Paired-samples t-tests revealed that total fixations on content text or images differed significantly between some function and malfunction questions (see Table 1). Additionally, a multilevel modeling analysis (see Table 2) revealed a significant interaction between text content fixations and image fixations for malfunction questions (see Figure 2), such that students with both high text content fixations and high image fixations had the lowest performance. Based on these preliminary findings, we expect an increase in skin conductance responses for malfunction questions because they require coordinating informational sources from the text and image to answer the multiple-choice question correctly. Results will be presented during the symposium.
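As a rough illustration of the paired-samples comparison described above, the sketch below computes a paired t statistic from within-student difference scores. The fixation counts and variable names are invented for illustration only, not data from the study:

```python
import math
from statistics import mean, stdev

# Hypothetical per-student total fixation counts (one value per
# question type per student; not real study data).
function_fixations = [10, 12, 14, 16]
malfunction_fixations = [12, 15, 17, 20]

# A paired-samples t-test operates on within-student difference scores.
diffs = [m - f for f, m in zip(function_fixations, malfunction_fixations)]
n = len(diffs)
t = mean(diffs) / (stdev(diffs) / math.sqrt(n))  # t statistic with n - 1 df

print(round(t, 2))  # 7.35
```

In practice an analysis like this would be run per body system with library routines (and the reported interaction would come from a mixed model with students as a grouping factor); this sketch only shows the arithmetic behind the paired comparison.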

Results have implications for designing advanced learning technologies that provide real-time, intelligent, adaptive scaffolding based on students’ physiological responses to specific content. If students struggle with both text content and images during difficult questions, the system can provide pedagogical guidance prompting them to engage in metacognitive monitoring and select the most relevant material, ensuring optimal learning outcomes.