Session Summary

Using Assessment Engineering and Item Complexity Modeling to Enhance Score Interpretation

Fri, April 12, 3:05–4:35 PM, Convention Center, First Floor, 121C

Session Type: Coordinated Paper Session

Abstract

Subject matter experts (SMEs) used assessment engineering techniques to define complexity design layers (CDLs) and develop complexity scoring protocols (CSPs) for MAP Growth math and reading assessments. Psychometricians used the ratings and derived complexity covariates (DCCs) to predict item difficulty. The purposes of the work include developing resources to facilitate score interpretation and automatic item generation.

Math and reading SMEs followed a similar process to define the CDLs and CSPs. They drew on existing literature and content standards to draft the initial versions. The SMEs then completed two rounds of rating a sample of 50 items and revising the protocols. After a final round of revisions, they rated a sample of 100 items in each subject.

Psychometricians analyzed the ratings, exploring a variety of machine learning methods for producing DCCs and predicting item difficulty. R-squared was 0.79 for the math analysis and 0.69 for the reading analysis.
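As a rough illustration of how such a difficulty-modeling analysis might be set up (the abstract does not specify the authors' code or tooling), the Python sketch below predicts calibrated item difficulty from SME complexity ratings and reports cross-validated R-squared for a linear baseline and one of many possible machine learning methods. The file name rated_items.csv, the column names, and the random forest model are assumptions for illustration, not details from the papers.

# A minimal sketch, assuming hypothetical data layout: predicting
# item difficulty from SME complexity ratings / derived covariates.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import cross_val_predict

items = pd.read_csv("rated_items.csv")   # hypothetical file of rated items
X = items.filter(like="rating_")         # SME complexity ratings / DCCs
y = items["difficulty"]                  # calibrated item difficulty estimates

# With only ~100 rated items per subject, cross-validated predictions
# give a fairer R-squared than in-sample fit.
for model in (LinearRegression(), RandomForestRegressor(random_state=0)):
    preds = cross_val_predict(model, X, y, cv=5)
    print(type(model).__name__, round(r2_score(y, preds), 2))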

Two papers in this coordinated session describe work by the SMEs to define and refine the CDLs and CSPs in math and reading. The other two papers describe item difficulty modeling results using regression and a variety of machine learning methods.
