Paper Summary

GenAI-Enhanced Automated Scoring of Scientific Explanation Open-Ended Responses Using Learning Progression Rubrics

Fri, April 10, 7:45 to 9:15am PDT, JW Marriott Los Angeles L.A. LIVE, Floor: Ground Floor, Gold 4

Abstract

This study investigates the effectiveness of embedding learning progression-based rubrics into large language model (LLM) prompts for the automated scoring of open-ended scientific explanation responses. Using a two-factor design (rubric type × prompt strategy), preliminary results on one item show substantial agreement (QWK > 0.65) for holistic rubrics with few-shot learning, while analytic rubrics exhibit varied performance across explanation components. Findings suggest that LLMs may inherently grasp learning progression-based rubrics and may even capture the underlying learning progression framework for scientific explanations. The study provides preliminary validation that LLMs can implement automated scoring based on learning progression rubrics with satisfactory performance. Future work will extend this investigation to additional items and conduct in-depth qualitative analyses.
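As context for the setup described above, the sketch below illustrates what rubric-embedded few-shot scoring and a QWK agreement check can look like in code. The rubric levels, few-shot examples, and scores are hypothetical placeholders (the study's actual rubric and data are not reproduced here); only the quadratic weighted kappa computation uses a real library call, scikit-learn's cohen_kappa_score.

```python
# Minimal sketch, assuming a holistic learning-progression rubric and a
# few-shot prompt strategy. All rubric text, example responses, and scores
# below are illustrative placeholders, not the study's materials.
from sklearn.metrics import cohen_kappa_score

HOLISTIC_RUBRIC = """\
Level 0: No claim, evidence, or reasoning.
Level 1: Claim only, without supporting evidence.
Level 2: Claim with evidence, but no reasoning linking them.
Level 3: Complete explanation: claim, evidence, and reasoning.
"""  # hypothetical levels standing in for the learning progression rubric

FEW_SHOT_EXAMPLES = [
    ("The ice melted because heat moved from the warm air to the ice.", 3),
    ("The ice melted.", 1),
]  # hypothetical scored responses used as in-context examples

def build_prompt(response: str) -> str:
    """Embed the rubric and few-shot examples into a single scoring prompt."""
    shots = "\n".join(f"Response: {r}\nScore: {s}" for r, s in FEW_SHOT_EXAMPLES)
    return (
        "Score the student response using this rubric.\n\n"
        f"{HOLISTIC_RUBRIC}\n{shots}\n\nResponse: {response}\nScore:"
    )

# Suppose the prompt is sent to an LLM that returns an integer score (call
# not shown). Agreement with human raters is then summarized with QWK:
human_scores = [3, 1, 2, 0, 3, 2]   # illustrative human ratings
model_scores = [3, 1, 2, 1, 3, 2]   # illustrative LLM ratings

qwk = cohen_kappa_score(human_scores, model_scores, weights="quadratic")
print(f"QWK = {qwk:.2f}")  # the abstract treats QWK > 0.65 as substantial agreement
```

QWK is a natural choice here because rubric scores are ordinal: its quadratic weighting penalizes a disagreement of two levels more heavily than a disagreement of one.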

Authors