Paper Summary
The Criterion-Related Validity and Classification Accuracy of Computer Adaptive Testing on Reading: A Meta-Analysis

Wed, April 8, 3:45 to 5:15pm PDT (3:45 to 5:15pm PDT), JW Marriott Los Angeles L.A. LIVE, Floor: Gold Level, Gold 3

Abstract

This meta-analysis evaluated the criterion-related validity and classification accuracy of computer adaptive tests (CATs) in reading assessment. Results synthesizing 32 studies showed that the overall criterion-related validity of CATs was r = .67, 95% CI [.64, .70] (concurrent validity r = .66, 95% CI [.61, .71]; predictive validity r = .68, 95% CI [.64, .71]). A bivariate meta-analysis accounting for the correlation between sensitivity and specificity yielded an overall sensitivity of .70 (95% CI [.66, .74]) and specificity of .76 (95% CI [.72, .80]). These values fall below recommended benchmarks for universal screeners (e.g., sensitivity ≥ .80). Moderator analyses revealed that validity and classification accuracy varied by grade level, test duration, item response theory (IRT) framework, and cut-score selection method.

Authors