Paper Summary

Using Student Behavior and Scores to Model Not Reached Items Due to Time Limits

Sat, April 11, 3:45 to 5:15pm PDT, Los Angeles Convention Center, Floor: Level Two, Poster Hall - Exhibit Hall A

Abstract

Missing item responses are common in timed educational assessments, yet most scoring methods ignore why students skip items. Using data from the ICILS computational thinking assessment, we developed random forest classifiers to predict performance on unattempted final questions. We compared a single-stage model against a hierarchical (two-stage) model for predicting whether students would skip an item, answer it incorrectly, or earn credit. The hierarchical approach consistently outperformed the single-stage model (75-93% accuracy). Feature importance analyses revealed that recent performance and behavioral indicators (response time, resets, commands) were stronger predictors than early-test performance, suggesting that late-test missing responses reflect fatigue rather than overall ability and supporting model-based imputation for this non-random missingness.
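The hierarchical (two-stage) setup described above can be sketched as follows. This is an illustrative reconstruction only, not the authors' implementation: the features are synthetic stand-ins for the behavioral indicators named in the abstract, the class labels (skip / incorrect / credit) and all variable names are assumptions, and scikit-learn's random forest is used as a generic choice of classifier.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 600
# Synthetic stand-ins for behavioral and performance features
# (e.g. response time, resets, commands, recent-item scores).
X = rng.normal(size=(n, 5))
# Outcome classes: 0 = skipped, 1 = answered incorrectly, 2 = earned credit.
y = rng.integers(0, 3, size=n)

# Stage 1: predict whether the student skips the item at all.
skip = (y == 0).astype(int)
stage1 = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, skip)

# Stage 2: among attempted items only, predict incorrect vs. credit.
attempted = y != 0
stage2 = RandomForestClassifier(n_estimators=100, random_state=0).fit(
    X[attempted], y[attempted])

def predict_hierarchical(X_new):
    """Route each case through stage 1, then stage 2 if predicted to attempt."""
    out = np.zeros(len(X_new), dtype=int)      # default: skipped (class 0)
    attempt_mask = stage1.predict(X_new) == 0  # stage 1 says "will attempt"
    if attempt_mask.any():
        out[attempt_mask] = stage2.predict(X_new[attempt_mask])
    return out

preds = predict_hierarchical(X)
```

In contrast, a single-stage model would fit one classifier directly on the three classes; the two-stage decomposition lets each forest specialize on a simpler binary decision, which is one plausible reason for the accuracy gap the abstract reports.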

Authors