Paper Summary

Modeling Rater Judgments on AI-Aided Rubrics: A Rasch-Based Mixed Methods Study

Sat, April 11, 11:45am to 1:15pm PDT, InterContinental Los Angeles Downtown, 6th Floor, Mission

Abstract

This study used Many-Facet Rasch Measurement (MFRM) to evaluate the quality of analytic rubrics developed with and without Artificial Intelligence (AI) support. Sixty rubrics (30 AI-assisted, 30 non-AI-assisted) were each rated by ten expert raters across five domains. MFRM revealed substantial differences in perceived rubric quality and in rater behavior across conditions. To support interpretation, qualitative reflections from rubric authors were analyzed; these aligned with the model findings and offered insight into how AI influenced rubric structure and clarity. This embedded mixed methods design highlights the value of MFRM for detecting nuanced differences in the quality of performance task design.

Authors