Paper Summary

A Meta-Analysis of the Effects of LLM-Generated Feedback on Assessed Writing Outcomes in L2 Contexts

Sat, April 11, 3:45 to 5:15pm PDT, Los Angeles Convention Center, Floor: Level Two, Poster Hall - Exhibit Hall A

Abstract

Large language models (LLMs) are increasingly used to provide automated feedback on students’ writing. Compared with traditional automated writing evaluation (AWE) tools, LLM-based feedback has been described as more flexible and context-sensitive in its responses to student writing. However, empirical findings on whether LLM-generated feedback improves second language (L2) learners’ writing performance remain fragmented and inconsistent across studies. This study conducts a quantitative meta-analysis to synthesize the empirical evidence on the effects of LLM-generated feedback on assessed L2 writing outcomes. Through a systematic literature search of major education and psychology databases, it identifies empirical studies that report effect size data, or data transformable into effect sizes, on writing performance following LLM-based feedback interventions.
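For readers less familiar with the method, the sketch below illustrates how effect sizes of this kind are commonly computed and pooled: a standardized mean difference (Hedges' g) per study, combined under a standard random-effects (DerSimonian-Laird) model. This is a generic Python illustration under those assumptions, not the authors' actual analysis pipeline; the function names and the example numbers in the comments are hypothetical.

import math

def hedges_g(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Standardized mean difference (Hedges' g) with small-sample correction."""
    sd_pooled = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2)
                          / (n_t + n_c - 2))
    d = (mean_t - mean_c) / sd_pooled
    j = 1 - 3 / (4 * (n_t + n_c) - 9)   # Hedges' correction factor
    g = j * d
    # Approximate sampling variance of g
    var = (n_t + n_c) / (n_t * n_c) + g**2 / (2 * (n_t + n_c))
    return g, var

def random_effects_pool(effects):
    """DerSimonian-Laird random-effects pooling.

    `effects` is a list of (g, variance) pairs, one per primary study.
    Returns the pooled effect, its standard error, and tau^2.
    """
    g = [e for e, _ in effects]
    w = [1 / v for _, v in effects]
    k = len(effects)
    # Fixed-effect estimate and Q statistic for between-study heterogeneity
    g_fixed = sum(wi * gi for wi, gi in zip(w, g)) / sum(w)
    q = sum(wi * (gi - g_fixed) ** 2 for wi, gi in zip(w, g))
    # Between-study variance tau^2 (DerSimonian-Laird estimator)
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)
    # Re-weight each study by 1 / (within-study variance + tau^2) and pool
    w_star = [1 / (v + tau2) for _, v in effects]
    g_pooled = sum(wi * gi for wi, gi in zip(w_star, g)) / sum(w_star)
    se = math.sqrt(1 / sum(w_star))
    return g_pooled, se, tau2

# Example with hypothetical (g, variance) pairs from three studies:
# g_pooled, se, tau2 = random_effects_pool([(0.40, 0.04), (0.25, 0.09), (0.60, 0.05)])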

Authors