
Diagnostic Assessment Results: Instructional Uses and Potential Pitfalls

Sat, April 6, 12:20 to 1:50pm, Fairmont Royal York Hotel, Floor: Mezzanine Level, Nova Scotia

Abstract

There is a limited body of research on how educators use assessment results to inform instruction. Yeh (2006) found that 56 of 61 interviewees (92%) were concerned that score reports from a federally mandated assessment provided inadequate diagnostic information about student knowledge, skills, and understandings. Interviewees also noted that because results were from the prior year, they were less informative for the current year's instruction. A similar study surveying teachers on their use of summative score reports found that teachers most frequently evaluated aggregated student results by examining the mean or mode, and less frequently disaggregated results for student subgroups or by content standard (Hoover & Abrams, 2013). These findings indicate that teachers did not use results in ways likely to provide strong support for instructional practice, such as planning specific interventions or enrichment for individual students or forming instructional groups of students with similar results.
Diagnostic assessments can address some of these shortcomings by providing fine-grained profiles of student mastery, as shown by the example in Figure 1. Rather than reporting an overall performance level or a single raw or scale score, mastery profiles summarize the skills a student has mastered based on probability values obtained from a diagnostic scoring model (e.g., diagnostic classification modeling; Bradshaw, 2017; Rupp et al., 2012). However, because they differ from traditional reporting methods that focus on overall performance in a subject, they may also be prone to misunderstanding or misinterpretation. This presentation will introduce diagnostic assessment systems, including their unique scoring and reporting considerations, in light of the intended interpretations and uses of results and the guidance provided in professional standards (AERA et al., 2014). Example contexts will be shared from an operational, large-scale diagnostic assessment system used in 18 states, the Dynamic Learning Maps Alternate Assessment System. Further, implications will be discussed for other diagnostic assessment contexts, including potential pitfalls of reporting results as probabilities of mastery and overall mastery profiles rather than as raw or scale scores.
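
To make the idea of a mastery profile concrete, the brief Python sketch below shows one way posterior probabilities of skill mastery from a diagnostic scoring model could be converted into a reported profile. The skill names, probability values, and 0.8 cutoff are illustrative assumptions only, not the operational scoring rules of the Dynamic Learning Maps system or any other assessment program.

# Illustrative sketch: turning assumed posterior mastery probabilities
# from a diagnostic classification model into a reported mastery profile.
MASTERY_CUTOFF = 0.8  # assumed reporting threshold, chosen for illustration

def mastery_profile(posterior_probs: dict[str, float]) -> dict[str, str]:
    """Label each skill as mastered or not mastered based on its
    posterior probability of mastery."""
    return {
        skill: "mastered" if p >= MASTERY_CUTOFF else "not mastered"
        for skill, p in posterior_probs.items()
    }

# Hypothetical results for one student on three skills
student = {"compare fractions": 0.92, "add fractions": 0.55, "order decimals": 0.81}
print(mastery_profile(student))
# {'compare fractions': 'mastered', 'add fractions': 'not mastered', 'order decimals': 'mastered'}

A threshold-based summary of this kind is what makes the profile easy to read at a glance, but it also illustrates the reporting pitfall noted above: two students with probabilities of 0.79 and 0.81 receive different mastery labels despite nearly identical evidence.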

Authors