Paper Summary
Multi-agent Large Language Model systems for Analyzing Elementary Students’ Constructed Response

Sun, April 12, 11:45am to 1:15pm PDT, InterContinental Los Angeles Downtown, Floor: 5th Floor, Hancock Park West

Abstract

In this paper, we propose a multi-agent Large Language Model system to analyze elementary students’ written responses to science tasks. The system analyzes student performance, identifies sources of uncertainty in the AI analysis process, and provides scores with scoring rationales as feedback for teachers and students. Applied to a dataset of elementary student responses, the system achieved a scoring accuracy of 90%, using human scores as the baseline. This strong alignment between AI-generated and human-assigned scores demonstrates the potential of the multi-agent system to enhance both the efficiency and quality of instruction. Furthermore, the model offers clear and consistent rationales that can support student learning and promote transparency in assessment.

Authors