Student engagement critically affects the accuracy and validity of educational assessments. In low-stakes contexts, a misalignment between expected and actual effort can introduce systematic bias. This study examines the “effort gap” by comparing students’ expected effort (if the test were graded) with their self-reported effort during the 2022 PISA mathematics assessment. Using data from 209,909 students across 35 OECD countries and 232 math items, we applied three differential item functioning (DIF) detection methods (difMH, difGMH, GRDIF) within an item response theory framework, accounting for country effects. Two items were consistently flagged by all three methods, and 150 items were flagged by at least one. Findings emphasize the need for engagement-sensitive approaches to ensure validity in large-scale assessments.
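
difMH and difGMH are the names of DIF-detection functions in the R package difR, which suggests the analysis was run in R. Under that assumption, the sketch below illustrates the flagging step for a two-level effort-gap grouping and a multi-group comparison; it is illustrative only, not the study’s actual pipeline. The sample sizes, group labels, and simulated responses are hypothetical, and the residual-based GRDIF step is omitted.

```r
# A minimal sketch, assuming binary-scored item responses and an
# effort-gap grouping variable; all names and sizes are hypothetical.
library(difR)

set.seed(1)
n_students <- 500   # illustrative; the study uses 209,909 students
n_items    <- 20    # illustrative; the study uses 232 math items

# Simulate binary item responses (1 = correct) and an effort-gap group:
# "gap" = expected effort exceeds self-reported effort, "nogap" otherwise.
responses <- matrix(rbinom(n_students * n_items, 1, 0.6),
                    nrow = n_students,
                    dimnames = list(NULL, paste0("item", 1:n_items)))
effort_gap <- sample(c("gap", "nogap"), n_students, replace = TRUE)

# Mantel-Haenszel DIF (difMH): matches students on total score and tests
# each item for DIF, treating the effort-gap students as the focal group.
mh_res <- difMH(Data = responses, group = effort_gap, focal.name = "gap",
                purify = TRUE)
print(mh_res)  # lists the items flagged at the default alpha = 0.05

# Generalized Mantel-Haenszel (difGMH): handles several focal groups at
# once, e.g. when country membership is part of the grouping structure.
country <- sample(c("A", "B", "C"), n_students, replace = TRUE)
gmh_res <- difGMH(Data = responses, group = country,
                  focal.names = c("B", "C"))
print(gmh_res)
```

In this sketch, items flagged by difMH, difGMH, or both can then be cross-tabulated to identify items flagged consistently across methods versus those flagged by only one, mirroring the two-item and 150-item counts reported above.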