This study investigates the effectiveness of embedding learning progression-based rubrics into large language model (LLM) prompts for the automated scoring of scientific explanation responses. Under a two-factor design (rubric type × prompt strategy), preliminary results on one item show substantial human-machine agreement (QWK > 0.65) for holistic rubrics combined with few-shot learning, while analytic rubrics exhibit varied performance across explanation components. These findings suggest that LLMs may inherently grasp learning progression-based rubrics and may even capture the underlying learning progression framework for scientific explanations. The study thus provides preliminary validation that LLMs can perform automated scoring based on learning progression rubrics with satisfactory performance. Future work will extend this investigation to additional items and conduct in-depth qualitative analyses.
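As a concrete illustration of the approach described above, the sketch below shows how a holistic rubric and few-shot examples might be embedded into a scoring prompt, and how agreement between human and LLM scores can be quantified with quadratic weighted kappa (QWK). The rubric text, example responses, scores, and helper names are hypothetical placeholders, not the authors' actual materials; only the QWK computation via scikit-learn reflects a standard, established usage.

```python
# A minimal sketch of the scoring pipeline, under assumed materials:
# the rubric, few-shot examples, and data below are illustrative only.
from sklearn.metrics import cohen_kappa_score

# Hypothetical learning progression-based holistic rubric (0-3 levels).
HOLISTIC_RUBRIC = """\
Score 0: No explanation or off-topic response.
Score 1: Claim only, without supporting evidence.
Score 2: Claim supported by evidence, but without reasoning.
Score 3: Claim, evidence, and reasoning linked to the scientific principle."""

# Hypothetical few-shot examples pairing responses with human scores.
FEW_SHOT_EXAMPLES = [
    ("The ice melted because it got warm.", 1),
    ("The ice melted because heat from the air transferred to it, "
     "increasing molecular motion until the solid changed phase.", 3),
]

def build_prompt(response: str) -> str:
    """Embed the rubric and few-shot examples into one scoring prompt."""
    shots = "\n".join(f"Response: {r}\nScore: {s}"
                      for r, s in FEW_SHOT_EXAMPLES)
    return (
        f"Score the student explanation using this rubric:\n"
        f"{HOLISTIC_RUBRIC}\n\nExamples:\n{shots}\n\n"
        f"Response: {response}\nScore:"
    )

# Agreement between human and LLM scores on the same responses,
# measured with quadratic weighted kappa; values above 0.65 are
# conventionally interpreted as substantial agreement.
human_scores = [0, 1, 2, 3, 2, 1, 3, 0]   # illustrative data
llm_scores   = [0, 1, 2, 2, 2, 1, 3, 1]   # illustrative data
qwk = cohen_kappa_score(human_scores, llm_scores, weights="quadratic")
print(f"QWK = {qwk:.3f}")
```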