Objective
Our goal in this experimental study was to investigate students' improvement on a writing task following teacher comments or artificial intelligence (AI) feedback. We were also interested in whether student ratings of the helpfulness, clarity, and utility of the feedback differed between teacher comments and AI feedback, and whether feedback from those sources elicited different emotional responses in students.
Theoretical Framework
Writing is an important skill that predicts academic and professional success (Graham & Perin, 2007). However, it is a complex skill that requires time and repeated opportunities for practice to develop (Harris & McKeown, 2022). Feedback is considered a powerful tool for guiding improvements in writing performance (Fleckenstein et al., 2023; Graham et al., 2015). Lipnevich and Smith (2022) defined instructional feedback as information from any source that can be used to enhance one's learning and performance. Unfortunately, because providing individualized feedback is time-consuming (Wu & Schunn, 2021), teachers often reduce the number of writing opportunities offered to students (Graham et al., 2014). Artificial intelligence tools such as ChatGPT hold great promise as feedback generators (Steiss et al., 2024). However, we need to ensure that students understand AI-generated messages and that the lack of individualization does not lead to negative affective responses.
Methods and Data Source
This experimental study involved n = 240 students in grades 6-12 at a private school in Brazil. Students completed a writing assignment and were randomly assigned to one of three experimental conditions (a minimal assignment sketch follows the list):
1. Teacher comments: students received comments from an instructor following the assessment criteria for that task.
2. AI Feedback: students received feedback produced by ChatGPT.
3. Simulated Teacher Feedback: students received feedback from ChatGPT, but they were told the feedback was from the teacher. The goal of including this condition was to control for potential bias in students' assessments of feedback from different sources.
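As a concrete illustration of the assignment procedure, a minimal sketch in Python appears below. It is not the study's actual code; the function name, student identifiers, and condition labels are hypothetical, and it simply shuffles students and deals them round-robin into the three conditions for near-equal group sizes.

    import random

    CONDITIONS = ["teacher_comments", "ai_feedback", "simulated_teacher_feedback"]

    def assign_conditions(student_ids, seed=42):
        """Randomly assign each student to one of the three conditions."""
        rng = random.Random(seed)  # fixed seed makes the assignment reproducible
        ids = list(student_ids)
        rng.shuffle(ids)
        # Deal shuffled students round-robin so group sizes differ by at most one.
        return {sid: CONDITIONS[i % len(CONDITIONS)] for i, sid in enumerate(ids)}

    assignment = assign_conditions(range(240))  # n = 240 students, as reported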
After receiving feedback, students were asked to answer a short survey using a 5-point Likert scale. They were asked to identify the source of the feedback (which was stated alongside the feedback) and to evaluate its utility, clarity, and helpfulness, along with their emotional experiences and engagement with the feedback. Lastly, participants were asked to revise their work using the feedback.
Results
A repeated-measures ANOVA showed a significant performance improvement from the first to the final draft (F(1, 244) = 44.82, p < .001). However, no significant differences were found between the different sources of feedback (F(2, 244) = 1.24, p = .291). Moreover, multivariate regression results indicated no significant differences in students' appraisals of the utility (p = .310), helpfulness (p = .394), or clarity (p = .102) of the feedback, nor in their reported emotional experiences.
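For readers interested in the shape of such an analysis, the sketch below shows how a mixed-design ANOVA of this kind could be run in Python with pandas and pingouin. It is illustrative only, under assumed names: the file writing_scores.csv and the columns student, condition, draft, and score are hypothetical, not the authors' data or code.

    import pandas as pd
    import pingouin as pg

    # Long-format data: one row per student per draft (first, final).
    df = pd.read_csv("writing_scores.csv")  # hypothetical file name

    # Mixed-design ANOVA: 'draft' is the within-subject factor,
    # 'condition' (teacher / AI / simulated teacher) the between-subject factor.
    aov = pg.mixed_anova(
        data=df,
        dv="score",           # rubric score on the writing task
        within="draft",       # repeated measure: first vs. final draft
        subject="student",    # identifies repeated observations per student
        between="condition",  # feedback source
    )
    print(aov.round(3))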
Significance
This study has important implications for educational practice, suggesting that AI tools such as ChatGPT can serve as effective resources for feedback provision, easing the burden on teachers by providing quick, helpful feedback and potentially allowing for more frequent writing opportunities.