Paper Summary

Multimodal Participation Analysis in CSCL Through AI-Enhanced Activity Mapping and Screen Recording Video Analysis Systems

Wed, April 8, 11:45am to 1:15pm PDT, Los Angeles Convention Center, Floor: Level Two, Poster Hall - Exhibit Hall A

Abstract

This study presents a novel mixed-methods approach to analyzing multimodal participation in Computer-Supported Collaborative Learning (CSCL). Focusing on typing as a key form of object-based participation, we introduce two tools: an AI-based Activity Mapping Video Analysis system that analyzes video of collaborative group interaction using neural network models, and a Screen Recording Video Analysis system that leverages Optical Character Recognition. Applied to a collaborative robotics programming session among middle school students, these systems enabled detection of who typed, when, and what was typed. Integrating these outputs with qualitative video-based interaction analysis reveals dynamic shifts in participation, tool access, and role negotiation. This approach demonstrates how AI can augment, rather than replace, human interpretation to support scalable, context-sensitive insights into real-world CSCL processes.
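
To illustrate the general idea of OCR-based detection of typed text from a screen recording (the paper's actual pipeline is not described here), the following is a minimal sketch. It samples frames from a recording, runs OCR on each sampled frame, and reports words that newly appear; the use of OpenCV and pytesseract, the file name, the sampling interval, and the word-diffing heuristic are all illustrative assumptions rather than the authors' implementation.

import cv2
import pytesseract

def extract_typed_text(video_path, sample_every_s=2.0):
    """Yield (timestamp_s, newly_appearing_text) pairs from a screen recording.

    Illustrative heuristic: OCR each sampled frame and report words that
    were not present in the previous sample, as a rough proxy for typing.
    """
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    step = max(1, int(fps * sample_every_s))  # frames between OCR samples
    prev_words = set()
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % step == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            text = pytesseract.image_to_string(gray)
            words = set(text.split())
            new_words = words - prev_words
            if new_words:
                yield frame_idx / fps, " ".join(sorted(new_words))
            prev_words = words
        frame_idx += 1
    cap.release()

if __name__ == "__main__":
    # Hypothetical file name for illustration only.
    for t, new_text in extract_typed_text("group_screen_recording.mp4"):
        print(f"{t:7.1f}s  {new_text}")

In practice, such output would still need to be aligned with the group-interaction video to attribute typing to individual students, which is where the AI-based Activity Mapping system and qualitative interaction analysis described in the abstract come in.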

Authors