Paper Summary

Automatically Detecting Emotions during Game-based Learning using Large Language Models and Concurrent Verbal Protocols

Wed, April 8, 11:45am to 1:15pm PDT, JW Marriott Los Angeles L.A. LIVE, Floor: Gold Level, Gold 3

Abstract

This study presents a method for detecting learner emotions during game-based learning (GBL) using concurrent verbalizations. Large Language Models (LLMs) were trained on human-coded data from 54 adults who showed significant learning gains. Confusion, enjoyment, and anxiety were common, but only disgust, anxiety, and disengagement correlated positively with learning. LLM detectors, tested across six prompt strategies that differed in the level of detail drawn from a theoretical emotional coding scheme, achieved above-chance accuracy when given descriptions, examples, and reasoning steps. For frequent emotions, LLM performance was comparable to human coders. Findings highlight the potential of audio-based emotion detection for real-time scaffolding, though capturing nuanced emotions may require integrating additional data sources.
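The abstract's central technical idea, prompting an LLM with progressively more detail (label names only, then label descriptions, then examples, then explicit reasoning steps) to classify emotions in transcribed verbalizations, can be sketched roughly as below. This is a minimal illustration in Python under stated assumptions: the emotion labels, descriptions, example, and the call_llm helper are hypothetical placeholders, not the study's actual coding scheme, prompts, or pipeline.

# Sketch of prompt-based emotion detection from a transcribed utterance.
# All labels, descriptions, and the call_llm stub are illustrative only.

EMOTIONS = ["confusion", "enjoyment", "anxiety", "disgust", "disengagement"]

DESCRIPTIONS = {
    "confusion": "uncertainty about how to proceed or what the game means",
    "enjoyment": "expressions of interest, fun, or satisfaction",
    "anxiety": "worry or nervousness about performance or outcomes",
    "disgust": "strong aversion or distaste toward content or events",
    "disengagement": "boredom, withdrawal, or loss of attention",
}

def build_prompt(utterance: str, level: int) -> str:
    """Assemble a classification prompt; higher levels add more detail,
    mirroring prompt strategies that vary in how much of a coding
    scheme (descriptions, examples, reasoning steps) is included."""
    parts = [f"Classify the learner utterance into one of: {', '.join(EMOTIONS)}."]
    if level >= 2:  # add label descriptions
        parts += [f"- {name}: {desc}" for name, desc in DESCRIPTIONS.items()]
    if level >= 3:  # add a worked example (hypothetical)
        parts.append('Example: "I have no idea what this lever does." -> confusion')
    if level >= 4:  # request explicit reasoning before the final label
        parts.append("Briefly explain your reasoning, then give the final label.")
    parts.append(f'Utterance: "{utterance}"')
    return "\n".join(parts)

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for any chat-completion API call."""
    raise NotImplementedError

if __name__ == "__main__":
    # Print the most detailed prompt variant for a sample utterance.
    print(build_prompt("Why did my plant die again? This makes no sense.", level=4))

In practice, each prompt variant would be sent to the model via call_llm and the returned labels compared against the human-coded labels, which is one plausible way to operationalize the comparison of prompt strategies described in the abstract.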

Authors