This meta-analysis evaluated the criterion-related validity and classification accuracy of computer adaptive tests (CATs) in reading assessment. Results synthesizing 32 studies showed an overall criterion-related validity for CATs of r = .67, 95% CI [.64, .70] (concurrent validity r = .66, 95% CI [.61, .71]; predictive validity r = .68, 95% CI [.64, .71]). A bivariate meta-analysis accounting for the correlation between sensitivity and specificity yielded an overall sensitivity of .70 (95% CI [.66, .74]) and specificity of .76 (95% CI [.72, .80]). These values fall below recommended benchmarks for universal screeners (e.g., sensitivity ≥ .80). Moderator analyses revealed that validity and accuracy varied by grade level, test duration, item response theory (IRT) framework, and cut-score selection method.
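For readers unfamiliar with how a pooled validity coefficient and its confidence interval are obtained, the sketch below shows one standard approach: random-effects pooling of correlations on the Fisher-z scale with a DerSimonian-Laird estimate of between-study variance. The per-study values and sample sizes are hypothetical, and the abstract does not state which estimator the authors actually used; this is only an illustration of the general technique.

```python
import numpy as np
from scipy import stats

def pool_correlations_dl(r_values, n_values):
    """Random-effects pooling of correlations via Fisher's z
    (DerSimonian-Laird). Returns pooled r and a 95% CI."""
    r = np.asarray(r_values, dtype=float)
    n = np.asarray(n_values, dtype=float)

    # Fisher z transform; sampling variance of z is 1 / (n - 3)
    z = np.arctanh(r)
    v = 1.0 / (n - 3.0)
    w = 1.0 / v  # fixed-effect (inverse-variance) weights

    # DerSimonian-Laird between-study variance (tau^2)
    z_fixed = np.sum(w * z) / np.sum(w)
    q = np.sum(w * (z - z_fixed) ** 2)
    df = len(r) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)

    # Random-effects weights and pooled estimate on the z scale
    w_re = 1.0 / (v + tau2)
    z_pooled = np.sum(w_re * z) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    z_crit = stats.norm.ppf(0.975)
    lo, hi = z_pooled - z_crit * se, z_pooled + z_crit * se

    # Back-transform from the z scale to the correlation metric
    return np.tanh(z_pooled), (np.tanh(lo), np.tanh(hi))

# Hypothetical per-study validity coefficients and sample sizes
r_pooled, ci = pool_correlations_dl([0.62, 0.71, 0.65, 0.69],
                                    [180, 250, 320, 140])
print(f"pooled r = {r_pooled:.2f}, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}]")
```

The bivariate analysis of sensitivity and specificity reported above requires a different model (jointly estimating both quantities and their correlation, e.g., a Reitsma-type model) rather than pooling each index separately as done here.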