Paper Summary

Pedagogical Agents for Game-Based Learning

Sat, April 14, 2:15 to 3:45pm, Millennium Broadway New York Times Square, Sixth Floor, Room 6.01

Abstract

Research on game-based learning has demonstrated the considerable promise of using game technologies to create learning experiences that are both effective and engaging. Game-based learning environments can provide students with challenging problem-solving episodes that yield positive learning outcomes across a range of subjects, student populations, and learning contexts. Despite this promise, it is widely recognized that game-based learning environments must better support learners, particularly given the growing cognitive demands imposed by complex in-game problem-solving scenarios.

Pedagogical agents are emerging as a promising approach to supporting students as they interact with game-based learning environments. Artfully embedded in the narratives of game-based learning environments, pedagogical agents serving as non-player characters can directly support student problem solving while simultaneously playing integral roles in story-based learning interactions. For more than two decades, research on pedagogical agents has explored how specific design decisions affect learning outcomes, and the field now has a solid foundation for exploring how recent advances in AI and virtual reality can be used to create the next generation of pedagogical agents that most effectively support game-based learning.

The time is ripe to investigate how pedagogical agents for game-based learning can leverage advances in three families of technologies: natural language processing, intelligent multimodal interfaces, and virtual reality. First, recent years have seen significant advances in natural language understanding, natural language dialogue management, and natural language generation. These advances can directly support rich communication between pedagogical agents and students through context-sensitive tutorial dialogue; coupled with advances in speech recognition and speech synthesis, they will soon enable both text-based and speech-based tutorial conversations with pedagogical agents that provide many forms of game-embedded scaffolding. Second, intelligent multimodal interface capabilities are emerging that, when integrated with game technologies, will support robust facial expression recognition, gesture recognition, and gaze tracking of students during gameplay. Drawing on these signals, either alone or in conjunction with biometric data and affective computing, pedagogical agents will be able to provide not only adaptive cognitive support but also adaptive affective support. Third, rapid advances in virtual reality will create opportunities to design game-based learning experiences that are deeply immersive and that afford rich student-agent interactions within games’ virtual storyworlds.
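To make this concrete, the following is a minimal sketch in Python of a single agent turn that combines these capabilities. All names are illustrative assumptions, not drawn from any particular system: the student utterance would arrive from a speech recognizer or typed input, the affect label from a multimodal classifier over facial expression, gaze, and biometric signals, and the agent's reply would be rendered on screen or passed to speech synthesis.

```python
from dataclasses import dataclass
from enum import Enum

class Affect(Enum):
    ENGAGED = "engaged"
    CONFUSED = "confused"
    FRUSTRATED = "frustrated"

@dataclass
class StudentTurn:
    utterance: str   # text from typed input or speech recognition
    affect: Affect   # estimate fused from face, gaze, and biometric signals

@dataclass
class AgentTurn:
    utterance: str   # text to render or pass to speech synthesis
    scaffold: str    # cognitive support selected: prompt, hint, or worked example

def plan_agent_turn(turn: StudentTurn, dialogue_state: dict) -> AgentTurn:
    """One tutorial-dialogue cycle: interpret the student's move, update
    dialogue state, and select cognitive and affective support."""
    dialogue_state["history"].append(turn.utterance)

    # Affective support: adapt the agent's tone before choosing content.
    if turn.affect is Affect.FRUSTRATED:
        preamble = "This puzzle is tough. Let's take it one step at a time. "
    elif turn.affect is Affect.CONFUSED:
        preamble = "Good question. "
    else:
        preamble = ""

    # Cognitive support: escalate scaffolding with repeated help requests.
    help_requests = sum("help" in u.lower() for u in dialogue_state["history"])
    if help_requests >= 2:
        scaffold, body = "worked example", "Watch how I solve a similar problem first."
    elif help_requests == 1:
        scaffold, body = "hint", "Look again at the clue near the lab door."
    else:
        scaffold, body = "prompt", "What do you think the next step should be?"

    return AgentTurn(utterance=preamble + body, scaffold=scaffold)

# Example: a confused student asks for help mid-quest.
state = {"history": []}
reply = plan_agent_turn(
    StudentTurn("Can you help me open the lab door?", Affect.CONFUSED), state
)
print(reply.scaffold, "->", reply.utterance)
```

In a deployed agent, the hand-written rules would give way to trained dialogue-management and language generation models; the sketch only illustrates where adaptive cognitive and affective support enter the turn cycle.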

Together, these advances will enable us to investigate a wide range of questions about how best to design pedagogical agents for game-based learning environments. For example, we will be able to study how different types of agents (e.g., classic pedagogical agents, teachable agents, learning companions) can most effectively support game-based learning across different subject areas, and how they can be designed to have the strongest motivating effects. We will also be able to study the long-term effects of interacting with one's own pedagogical agent over weeks, months, or even years. As a result, we will be well positioned to explore how pedagogical agents can interact with students to promote lifelong learning through context-sensitive scaffolding that is developmentally appropriate and supports deeply personalized learning.

Author