Objectives and Framework
Mindset interventions, which shift targeted beliefs and perspectives about school and learning, can improve students’ outcomes (Walton & Wilson, 2018). Furthermore, a growing body of research indicates that these interventions are more effective when aspects of the learning environment support and reinforce the targeted mindset (see Walton & Yeager, 2020). For instance, researchers randomly assigned students to complete a “synergistic mindsets” intervention—which teaches the belief that the physiological stress response (e.g., a racing heart) can be an asset to learning and performance—or a control activity (see Yeager et al., 2022). They found that the intervention was more effective at changing students’ beliefs about stress and at promoting challenge-seeking behavior when students also received messages from their instructor that explicitly supported this view of stress, as compared to neutral messages (Hecht et al., 2023).
Mindset-supportive messages can require time and expertise to write effectively. Here we tested whether large language models (LLMs) could generate mindset-supportive language that is comparable to that written by mindset researchers.
Methods and Materials
This study was conducted in an introductory psychology course with 1,833 students. Students were randomly assigned to a 2 (intervention: synergistic mindsets vs. control) × 3 (messages: LLM-written supportive vs. researcher-written supportive vs. neutral) design. At the start of the semester, students completed the synergistic mindsets intervention or a control activity, depending on condition. Four times throughout the semester, students received messages from the instructor. In the supportive-message conditions, these messages (written by researchers or by an LLM, depending on condition) described how aspects of the course (e.g., weekly quizzes) were designed to give students the chance to practice harnessing their stress response. In the neutral-message condition, the messages did not discuss the role of stress in learning. We measured students’ appraisals of stress after a quiz each week with four items (α = .77–.80, depending on week).
We tested the effects of condition using a multilevel linear model with a random intercept for each student, nesting weekly observations within students. The model included fixed effects for condition and week. Condition was dummy-coded with the Control + Neutral Messages condition as the reference group. We report p-values from this model and use Cohen’s ds to quantify effect sizes.
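As a rough sketch of that specification (the indices, variable names, and normality assumptions below are illustrative and are not stated in the abstract), the model can be written as:

\text{Stress}_{it} = \beta_0 + \sum_{j=1}^{5} \beta_j \,\text{Condition}_{ij} + \sum_{w} \gamma_w \,\text{Week}_{wt} + u_i + \varepsilon_{it}, \qquad u_i \sim N(0, \tau^2), \; \varepsilon_{it} \sim N(0, \sigma^2),

where i indexes students, t indexes weekly quiz observations, the five Condition dummies code the non-reference cells of the 2 × 3 design (Control + Neutral Messages as the reference group), the Week dummies are the fixed effects of week, and u_i is the student-level random intercept.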
Results
Consistent with previous findings, intervention effects were larger when paired with researcher-written supportive messages (d = .50, p < .001) than when paired with neutral messages (d = .33, p < .001). Stress appraisals significantly differed between these conditions (p = .003). When paired with LLM-written supportive messages, intervention effects were only slightly smaller than when paired with researcher-written supportive messages (d = .46, p < .001). Stress appraisals did not significantly differ between these conditions (p = .512).
Scholarly Significance
Results of this study suggest that LLMs can learn to generate language that effectively supports a nuanced intervention message. Though future research is needed to expand beyond this narrow use case (e.g., to different mindsets and different courses), LLMs may eventually enable scalable tools that help teachers better support their students psychologically.
Margarett Clapper, University of Texas at Austin
Cameron Hecht, University of Rochester
Samuel D. Gosling, University of Texas at Austin
Christopher Bryan, University of Texas at Austin
Jeremy Jamieson, University of Rochester
Jared S. Murray, University of Texas at Austin
David Yeager, University of Texas at Austin