Paper Summary

Does Sparseness Matter? Comparing Generalizability Theory and Many-Facet Rasch Measurement in Sparse Rating Designs

Fri, April 9, 4:10 to 5:40pm EDT, SIG Sessions, SIG-Rasch Measurement Paper and Symposium Sessions

Abstract

Sparse rating designs, in which each examinee's performance is scored by only a small proportion of the raters, are prevalent in practical performance assessments. However, relatively little research has examined the degree to which different analytic techniques alert researchers to rater effects in such designs. We used a simulation study to compare the information provided by two popular approaches: generalizability theory (G theory) and Many-Facet Rasch (MFR) measurement. In previous comparisons, researchers used complete data that were not simulated, which limited their ability to manipulate characteristics such as rater effects. Both approaches provided information about rating quality in sparse designs, but the MFR approach highlighted individual rater-level measurement concerns more readily than G theory. We discuss implications in the full paper.
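The full paper details the simulation conditions; as a rough illustration only (not the authors' code), the sketch below shows one way a sparse rating design with a manipulated rater severity effect might be generated. The model, sample sizes, and variance values are all hypothetical assumptions made for the example.

```python
# Minimal sketch of a sparse rating design with simulated rater severity,
# assuming a simple continuous rating model: rating = ability - severity + noise.
import numpy as np

rng = np.random.default_rng(0)

n_examinees = 200
n_raters = 20
raters_per_examinee = 2   # each performance scored by a small subset of raters

ability = rng.normal(0.0, 1.0, n_examinees)   # examinee proficiency (hypothetical scale)
severity = rng.normal(0.0, 0.5, n_raters)     # manipulated rater effect (hypothetical scale)

records = []
for i in range(n_examinees):
    assigned = rng.choice(n_raters, size=raters_per_examinee, replace=False)
    for r in assigned:
        score = ability[i] - severity[r] + rng.normal(0.0, 0.3)
        records.append((i, r, score))

# Arrange the (examinee, rater, score) triples as a matrix: most examinee-rater
# combinations are missing by design, which is what makes the design sparse.
ratings = np.full((n_examinees, n_raters), np.nan)
for i, r, s in records:
    ratings[i, r] = s
print(f"Observed cells: {np.isfinite(ratings).mean():.1%} of the full design")
```

Data generated this way could then be analyzed with both G theory variance-component estimation and an MFR model to compare how readily each approach flags the simulated rater effects.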

Authors