This study addresses a persistent gap in fairness analysis by moving beyond post-hoc flagging of Differential Item Functioning (DIF) to predicting which computer-based test (CBT) items are likely to exhibit bias. Using data from the PISA 2022 mathematics and reading assessments, the analysis focuses on a stratified subsample of approximately 12,000 15-year-old students (about 4,000 each from the United States, Germany, and Canada). Logistic regression models first detect DIF; machine-learning classifiers then predict DIF status from item-level features together with student-level variables, including gender, socioeconomic status, and information and communication technology (ICT) access. Preliminary results suggest that interactive and linguistically complex items disadvantage students with limited technological access, underscoring the need for equitable CBT item design and fairness auditing.
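
To make the detection stage concrete, the sketch below illustrates the standard logistic-regression DIF screen (nested models compared with likelihood-ratio tests, following Swaminathan and Rogers, 1990) that the abstract describes. The column names, the rest-score matching criterion, and the binary grouping variable are illustrative assumptions, not the authors' specification.

```python
# Minimal sketch of logistic-regression DIF screening (illustrative only;
# column names and model details are assumptions, not the study's code).
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import chi2

def dif_flags(df, item_cols, group_col="gender", alpha=0.05):
    """Flag uniform and non-uniform DIF per item via nested logistic models.

    `df` holds 0/1 item scores plus a binary group column; the rest score
    (total over the other items) serves as the matching/ability criterion.
    """
    rows = []
    for item in item_cols:
        d = df.dropna(subset=[item, group_col]).copy()
        # Rest score, so the studied item does not contaminate its own
        # matching criterion.
        others = [c for c in item_cols if c != item]
        d["theta"] = d[others].sum(axis=1)
        m1 = smf.logit(f"{item} ~ theta", data=d).fit(disp=0)
        m2 = smf.logit(f"{item} ~ theta + C({group_col})", data=d).fit(disp=0)
        m3 = smf.logit(f"{item} ~ theta * C({group_col})", data=d).fit(disp=0)
        # Likelihood-ratio tests: M2 vs M1 tests uniform DIF,
        # M3 vs M2 tests non-uniform DIF (1 df each for a binary group).
        p_uni = chi2.sf(2 * (m2.llf - m1.llf), df=1)
        p_non = chi2.sf(2 * (m3.llf - m2.llf), df=1)
        rows.append({"item": item,
                     "uniform_dif": p_uni < alpha,
                     "nonuniform_dif": p_non < alpha,
                     "p_uniform": p_uni,
                     "p_nonuniform": p_non})
    return pd.DataFrame(rows)
```

The resulting per-item flags could then serve as labels for the second stage the abstract mentions, in which a classifier predicts DIF status from item-level features such as interactivity and linguistic complexity.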