Paper Summary

An AI Framework for Identifying Uncertainty and Weaknesses in Written Responses to Usable Knowledge Tasks

Fri, April 10, 7:45 to 9:15am PDT, JW Marriott Los Angeles L.A. LIVE, Floor: Ground Floor, Gold 4

Abstract

This study develops and tests a Large Language Model (LLM)-based assessment framework that analyzes written responses and identifies response uncertainty to support learning. We conducted two case studies, with 837 middle school students and 90 elementary school students, to apply and evaluate the framework. The AI models achieved an average overall accuracy of 86% and successfully detected uncertainty in student responses. The framework explains its scoring decisions, flags uncertainty, and highlights weaknesses aligned with learning goals to improve usable knowledge. It is especially effective with vague or inconsistent responses, demonstrating the potential of LLMs in classroom assessment. By providing AI-generated scoring rationales and uncertainty data, the framework enables more targeted, timely feedback to support teaching and learning across diverse student backgrounds and classroom contexts.
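The abstract does not specify the models, prompts, or rubrics used, but a scoring call of the kind it describes (score, rationale, uncertainty flag, and weaknesses returned per response) might look like the following minimal Python sketch. The model name, rubric text, learning goal, and JSON schema here are all illustrative assumptions, not the authors' actual setup; it assumes an OpenAI-style chat API.

```python
import json
from openai import OpenAI  # assumes the openai Python package is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical rubric and learning goal; the study's actual rubrics are not given.
RUBRIC = """Score 0-2: 0 = off-topic, 1 = partially correct, 2 = fully correct.
Also report uncertainty (low/medium/high) and any weaknesses relative to
the learning goal: explaining energy transfer in a food web."""

def score_response(student_answer: str) -> dict:
    """Ask the model for a score, a rationale, an uncertainty flag, and weaknesses."""
    prompt = (
        f"Rubric:\n{RUBRIC}\n\n"
        f"Student response:\n{student_answer}\n\n"
        "Reply as JSON with keys: score, reason, uncertainty, weaknesses."
    )
    reply = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; the study's model is not named
        messages=[{"role": "user", "content": prompt}],
        response_format={"type": "json_object"},  # request well-formed JSON
    )
    return json.loads(reply.choices[0].message.content)

if __name__ == "__main__":
    result = score_response("Plants get energy, then animals eat plants I think?")
    print(result["score"], result["uncertainty"], result["weaknesses"])
```

In a classroom pipeline such structured output could be aggregated per student or per item, with high-uncertainty responses routed to the teacher for review rather than auto-scored.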

Authors