Objective
This research examines how the type of feedback provider (large language model, LLM, vs. educator) and the presence of provider-related information influence learners' perceived feedback effectiveness.
Theoretical Background
Feedback providers and information about them, e.g., their experience or status, can affect feedback effectiveness (Winstone et al., 2017). With the rise of LLMs capable of providing written feedback on higher education students’ texts, it is important to explore how LLMs can function as effective feedback providers. Beyond cognitive aspects, feedback effectiveness also encompasses non-cognitive aspects such as learners’ perceptions of the feedback provider and the feedback message (Henderson et al., 2019). In this vein, LLMs have been found to be perceived as more trustworthy feedback providers than educators (Authors, year). However, given the social nature of feedback interactions, when feedback comes from an unknown educator or LLM, information about the provider’s qualifications may improve students’ perceptions of the human or LLM feedback (see also explainable AI; Vössing et al., 2022).
This study investigates how educators vs. LLMs as feedback providers and information about them influence learners’ perceived feedback effectiveness. We hypothesized that (1) the provision of provider-related information and (2) LLMs as feedback providers would lead to more positive perceptions of the feedback message and provider. Moreover, we hypothesized that when provider-related information was present, students would perceive educators and their feedback messages more positively than LLMs.
Methods
This experimental 2×2 between-subjects study with 168 German undergraduate teacher students (Mage = 24.85, SDage = 6.74) examined the impact of the feedback provider label (educator vs. LLM) and the presence of provider-related information (no vs. yes) on perceived feedback effectiveness. Students received a screenshot of a feedback interaction in a learning management system for argumentative writing, which was framed as involving either an LLM or a human educator as feedback provider, and which either provided information on the feedback provider (i.e., its expertise, status, and experience) or not. Students completed the Feedback Perceptions Questionnaire (α = .94; Strijbos et al., 2010; message perceptions) and the Muenster Epistemic Trustworthiness Inventory (α = .93; Hendriks et al., 2015; provider perceptions). Further control variables were assessed (e.g., content knowledge, attitudes towards AI in education).
Results and Significance
Saturated path analyses confirmed that provider information improved feedback message perceptions (β = 0.533, p < .05) and that LLMs as feedback providers were perceived more positively than educators (β = 0.583, p < .001). Moderator analyses showed that without provider information, students attributed more expertise to the LLM than to the educator (t = -4.46, p < .001), yet rated its feedback message as less fair (t = -2.52, p < .05). These differences between the feedback providers vanished when comparing the groups with provider information (p > .05). This study highlights the importance of research on LLMs as feedback providers by showing that, even with identical feedback messages, feedback perceptions can be affected by the provider when no provider information is present. Moreover, when designing computer-based feedback for higher education students, this study illustrates the importance of providing sufficient information on the feedback provider, as this can benefit students’ feedback perceptions.