Automated scoring engines (ASEs) have gained popularity in recent years, and researchers have focused on gathering evidence to support the use of ASEs, or their integration with human raters, in scoring procedures. The purpose of this study is to explore combining an ASE with human raters to detect changes in rater severity (rater drift) across multiple administrations. We used simulated data to explore how measurement models can incorporate an ASE into rater drift analyses. Results indicated that an ASE can be efficiently integrated with human raters to detect rater drift using a concurrent calibration approach with measurement models. Our results also suggested that including the ASE in the estimation procedure enhanced the accuracy of drift detection for human raters.
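To make the drift-detection idea concrete, below is a minimal simulation sketch. It assumes a Rasch-style logistic rating model for data generation, a two-administration design, three hypothetical human raters (R1–R3) plus a stable ASE, and a simplified ASE-anchored severity comparison in place of the full concurrent calibration with measurement models described in the abstract; the sample sizes, drift magnitude, and rater names are illustrative assumptions, not the authors' actual design.

```python
# Minimal sketch (assumptions): dichotomous scores are simulated from a
# Rasch-style model with a rater-severity facet; the study's actual
# measurement model, sample sizes, and estimation software are not
# specified in the abstract.
import numpy as np

rng = np.random.default_rng(7)

n_examinees = 500              # examinees per administration (assumed)
n_admins = 2                   # two test administrations
human_raters = ["R1", "R2", "R3"]

# True rater severities per administration; R2 drifts harsher in administration 2.
true_severity = {
    ("R1", 0): 0.0,  ("R1", 1): 0.0,
    ("R2", 0): 0.0,  ("R2", 1): 0.8,   # simulated rater drift
    ("R3", 0): -0.2, ("R3", 1): -0.2,
    ("ASE", 0): 0.0, ("ASE", 1): 0.0,  # engine assumed stable across administrations
}

def simulate(admin):
    """Simulate scores for one administration: P(score = 1) = logistic(theta - severity)."""
    theta = rng.normal(0.0, 1.0, n_examinees)   # examinee abilities
    data = {}
    for rater in human_raters + ["ASE"]:
        sev = true_severity[(rater, admin)]
        p = 1.0 / (1.0 + np.exp(-(theta - sev)))
        data[rater] = rng.binomial(1, p)
    return data

for admin in range(n_admins):
    data = simulate(admin)
    ase_mean = data["ASE"].mean()
    print(f"Administration {admin + 1}")
    for rater in human_raters:
        # ASE-anchored severity proxy: how much lower this rater scores the same
        # examinees than the (stable) engine does. Because both score identical
        # examinees, ability differences cancel; a jump in the proxy across
        # administrations flags possible drift for that rater.
        proxy = ase_mean - data[rater].mean()
        print(f"  {rater}: severity proxy = {proxy:+.3f}")
```

In this sketch the ASE plays the role of the stable link across administrations, which is the intuition behind using it as an anchor in a concurrent calibration; a full analysis would instead estimate examinee, item, and rater parameters jointly with a measurement model (e.g., a many-facet Rasch model) rather than relying on mean-score comparisons.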