Background
Scalable, cost-effective social-psychological interventions have improved self-regulation and raised academic achievement (19). But how effective are these interventions beyond strictly controlled, randomized trials, when scientists trade experimental control for greater reach and impact, and when students choose when and how to self-administer them? If social-psychological interventions are to fulfill their potential of reaching and benefiting people at scale, researchers have to go beyond efficacy trials to also scale and track real-world intervention uptake, use, effectiveness, and heterogeneity. Complementing randomized controlled trials (RCTs) with real-world translational effectiveness studies is crucial for achieving scalable, sustainable impact (see Figure 6).
Aims & Methods
We started with randomized controlled trials to establish the efficacy of an “Exam Playbook” intervention, integrated it into a university’s learning system for students to self-administer, and then conducted large-scale evaluations of its effectiveness. The Exam Playbook is a strategic resource use intervention that guides students through the metacognitive process of purposefully selecting and planning how they would make effective use of available learning resources in exam preparation (20, 21). In earlier RCTs, this intervention demonstrated a moderate impact on students’ exam performance in college statistics classes (d’s = 0.33–0.37; Chen et al., 2017). By integrating it with the ECoach technological infrastructure (22), we scaled and tracked its use across the university. This enabled us to address two questions: How do students engage with the Exam Playbook in more versus less effective ways? And how might its effectiveness differ based on who uses it and under what conditions?
We made the Exam Playbook freely available in 76 large STEM classes at a large Midwestern university and studied how 53,299 enrolled students interacted with it across nine semesters, combining their behavioral engagement data with their performance metrics.
Results
A mixed-effects meta-regression model showed that students who used the Exam Playbook scored an average of 2.67 percentage points higher than non-users (95% CI [2.00, 3.33], p < .001; Figure 7), even when controlling for college entrance exam scores.
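As a minimal sketch of how such a clustered comparison could be specified, the Python code below fits a mixed-effects model with random intercepts per class. All column names (exam_pct, used_playbook, entrance_score, class_id) and the input file are hypothetical illustrations; the authors’ actual model may differ.

# Sketch: mixed-effects model of Playbook use and exam performance,
# with random intercepts for each class to account for clustering.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("playbook_data.csv")  # hypothetical file: one row per student

model = smf.mixedlm(
    "exam_pct ~ used_playbook + entrance_score",  # fixed effects
    data=df,
    groups="class_id",  # random intercept per class
)
result = model.fit()
print(result.summary())  # coefficient on used_playbook ~ percentage-point gain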
In students’ open-ended text responses, anticipating the exam format (b = 1.31, p < .001), articulating how each resource builds mastery (b = 1.25, p < .001), and considering one’s own strengths and weaknesses (b = 0.93, p < .001) each significantly predicted higher exam scores, controlling for word count.
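A hedged sketch of this text-feature analysis follows: assuming each open-ended response has already been coded (by hand or by a classifier) into binary indicators, exam scores are regressed on those features with word count as a covariate. Variable names here are assumptions for illustration, not the authors’ own.

# Sketch: regress exam scores on coded text features, net of response length.
import pandas as pd
import statsmodels.formula.api as smf

responses = pd.read_csv("playbook_responses.csv")  # hypothetical file
responses["word_count"] = responses["response_text"].str.split().str.len()

ols = smf.ols(
    "exam_pct ~ anticipates_format + explains_mastery"
    " + weighs_strengths + word_count",
    data=responses,
).fit()
print(ols.params)  # b's on each coded feature, controlling for word count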
Moreover, the intervention was 63% more effective during Covid than before it (b = 1.38, 95% CI [0.70, 2.06], p < .001; d = 0.26; Figure 8). Self-regulated learning was perhaps even more important, and harder to do effectively, during Covid, so using the Exam Playbook may have been especially beneficial amidst those challenges. These differences were not driven by students’ motivation, prior performance, or socioeconomic status.
The intervention offered greater benefits to lower-performing students, and it reduced achievement gaps by gender (by 47.5%) and first-generation status (by 13.6%), as well as some racial achievement gaps.
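The heterogeneity analyses above (Covid timing and demographic gaps) are typically tested with interaction terms; one possible specification is sketched below. The moderator columns (covid_period, first_gen) are assumed for illustration only.

# Sketch: interaction terms probe whether the Playbook effect varies
# by period (pre-Covid vs. Covid) and by first-generation status.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("playbook_data.csv")  # hypothetical file

fit = smf.mixedlm(
    "exam_pct ~ used_playbook * covid_period"
    " + used_playbook * first_gen + entrance_score",
    data=df,
    groups="class_id",
).fit()
# A positive used_playbook:covid_period coefficient indicates a larger
# benefit during Covid; used_playbook:first_gen probes gap reduction.
print(fit.summary())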
Scientific or Scholarly Significance
Our research pushes the boundaries of social-psychological interventions for self-regulated learning by tracking uptake, analyzing student engagement through text analyses, and investigating effect heterogeneity across time and demographics at scale. Beyond theory development, such continuous evaluations are crucial for sustained real-world impact.
Patricia Chen, University of Texas at Austin
Luke D. Rutten, University of Texas at Austin
Nathaniel Woznicki, New York University
Yang-Hsin Fan, University of Texas at Austin
Holly A. Derry, University of Michigan
Benjamin T. Hayward, University of Michigan
Erin Murray, University of Michigan
Rebecca L. Matz, University of Michigan
Desmond C. Ong, University of Texas at Austin