This study examined the ability of generative AI (i.e., ChatGPT) to provide formative feedback, a key instructional practice for writing development. We compared the quality of human and AI feedback by deductively coding feedback provided on secondary student essays (n=200) on five measures of quality: criteria-based, clear directions for improvement, accuracy, prioritization of essential features, and supportive tone. We also examined whether heterogeneity in feedback was related to essay quality and English learner (EL) status. Results showed that human raters were slightly better than AI at providing high-quality feedback to students. Feedback quality did not vary by language status for either humans or AI, but it did differ based on essay quality. Implications for AI as an educational tool are discussed.
Jacob Steiss, University of Missouri - St. Louis
Tamara Powell Tate, University of California - Irvine
Jazmin T. Cruz, WestEd
Steve Graham, Arizona State University
Jiali Wang, University of California - Irvine
Youngsun Moon, Stanford University
Waverly Tseng, University of California - Irvine
Mark Warschauer, University of California - Irvine
Michael Hebert, University of California - Irvine