Paper Summary

Evaluating the Construct Validity of an Automated Writing Evaluation System With Manipulations of Masterwork Narratives (Poster 18)

Sat, April 23, 2:30 to 4:00pm PDT, San Diego Convention Center, Floor: Upper Level, Sails Pavilion

Abstract

This study evaluated the construct validity of six scoring traits of an automated writing evaluation (AWE) system called MI Write. Masterwork narratives (N = 14) were selected to control for errors typical of student writers and to ensure high-quality writing with regard to the traits of interest. A paragraph randomization algorithm was used to assess whether the high-level traits of idea development and organization were sensitive to such a manipulation. Each text was randomized 35 times, and the randomized iterations (n = 490) were compared to the control text across all traits. Randomizations did not significantly impact high-level trait scores, indicating a disconnect between MI Write's formative feedback and its underlying constructs. Implications for consumers and developers of AWE systems are discussed.
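To make the manipulation concrete, the sketch below shows one way the paragraph randomization described in the abstract could be implemented in Python. The function and variable names are illustrative assumptions; the study's actual code and the MI Write scoring interface are not part of this abstract.

```python
import random

# Hypothetical sketch of the paragraph-randomization manipulation described
# in the abstract. Names here are illustrative assumptions; the study's
# actual implementation and the MI Write scoring API are not shown.

N_ITERATIONS = 35  # randomized versions generated per masterwork narrative


def shuffle_paragraphs(text: str, rng: random.Random) -> str:
    """Return a copy of `text` with its paragraph order randomized."""
    paragraphs = [p for p in text.split("\n\n") if p.strip()]
    shuffled = paragraphs[:]
    rng.shuffle(shuffled)
    return "\n\n".join(shuffled)


def generate_randomized_versions(narratives: dict[str, str], seed: int = 0) -> dict[str, list[str]]:
    """Produce N_ITERATIONS paragraph-shuffled versions of each control narrative."""
    rng = random.Random(seed)
    return {
        title: [shuffle_paragraphs(text, rng) for _ in range(N_ITERATIONS)]
        for title, text in narratives.items()
    }


# With 14 narratives x 35 iterations each, this yields the 490 randomized
# texts that would then be scored and compared to each control text on all
# six traits.
```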

Authors