Sparse rating designs, in which each examinee's performance is scored by only a small proportion of raters, are prevalent in practical performance assessments. However, relatively little research has examined the degree to which different analytic techniques alert researchers to rater effects in such designs. We used a simulation study to compare the information provided by two popular approaches: generalizability theory (G theory) and Many-Facet Rasch (MFR) measurement. In previous comparisons, researchers used complete data that were not simulated, which limited their ability to manipulate characteristics such as rater effects. Both approaches provided information about rating quality in sparse designs, but the MFR approach highlighted individual rater-level measurement concerns more readily than G theory. We discuss implications in the full paper.