Science researchers and educators have called for the assessment community to adopt constructed-response items that measure in-depth learning and understanding. However, the use of constructed-response items has been limited by the cost and complexity of scoring them. Automated scoring, when applied appropriately, has great potential to mitigate these scoring limitations and thereby enable scaled-up implementation of this item type. This study explored the use of c-rater-ML, an automated scoring engine developed by the Educational Testing Service, in scoring the content of eight complex science inquiry items. In general, automated scoring showed satisfactory agreement with human scoring. In addition, examinations of the scoring across subgroups confirmed the consistency between automated and human scoring.
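The abstract does not name the agreement statistic used, but studies comparing automated and human scores commonly report quadratic weighted kappa (QWK), which penalizes disagreements by the squared distance between score points. As an illustrative sketch only (the score data below are made up), QWK can be computed from two raters' integer scores like this:

```python
from collections import Counter

def quadratic_weighted_kappa(human, auto, min_rating=None, max_rating=None):
    """Quadratic weighted kappa between two raters' integer scores.

    Returns 1.0 for perfect agreement, 0.0 for chance-level agreement.
    """
    assert len(human) == len(auto) and len(human) > 0
    if min_rating is None:
        min_rating = min(min(human), min(auto))
    if max_rating is None:
        max_rating = max(max(human), max(auto))
    n = max_rating - min_rating + 1  # number of score points
    total = len(human)

    # Observed joint counts: rows = human score, cols = automated score.
    observed = [[0.0] * n for _ in range(n)]
    for h, a in zip(human, auto):
        observed[h - min_rating][a - min_rating] += 1

    # Marginal histograms, used to form expected counts under independence.
    hist_h = Counter(h - min_rating for h in human)
    hist_a = Counter(a - min_rating for a in auto)

    num = 0.0  # weighted observed disagreement
    den = 0.0  # weighted expected disagreement
    for i in range(n):
        for j in range(n):
            # Quadratic weight: 0 on the diagonal, 1 at maximum distance.
            w = ((i - j) ** 2) / ((n - 1) ** 2) if n > 1 else 0.0
            expected = hist_h[i] * hist_a[j] / total
            num += w * observed[i][j]
            den += w * expected
    return 1.0 - num / den if den else 1.0

# Hypothetical scores on a 0-2 rubric for eight responses.
human_scores = [0, 1, 2, 2, 1, 0, 2, 1]
auto_scores  = [0, 1, 2, 1, 1, 0, 2, 2]
print(round(quadratic_weighted_kappa(human_scores, auto_scores), 3))  # → 0.795
```

A QWK around 0.7 or higher is often treated as acceptable human-machine agreement in automated-scoring research, which is the kind of threshold a claim of "satisfactory agreement" typically refers to.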