In Event: Connecting Research and Practice: Regional Educational Laboratory Partnerships to Improve Data Use in Education
Objective or Purpose
REL Southeast tested the extent to which South Carolina's measures of school performance could be consolidated into a single, reliable index for rating schools under the state's Elementary and Secondary Education Act waiver. At the time of this study, South Carolina rated school performance using three separate indices, which produced very different rankings of the same schools. In addition, schools were compared across demographic profiles to identify those performing better than expected on the composite index.
School accountability is grounded, at least in part, in the idea that identifying poorly performing schools will drive innovations that improve student outcomes. For this to work, however, the accountability system must accurately identify struggling schools. Moreover, the accountability system should provide schools with consistent feedback across multiple accountability ratings. In South Carolina, the existence of three different ratings threatens the functioning of the accountability system as a whole. Although the conceptual distinctions between the rating systems can be explained, communicating results to educators is challenging when the rankings of schools are inconsistent across the indices. This inconsistency also makes it difficult for schools to implement reforms that address identified weaknesses.
Confirmatory factor analysis was used to identify the measurement model that best explained the covariances among the observed measures. Four factor models were tested: (1) a one-factor model, (2) a two-factor model, (3) a three-factor model, and (4) a bi-factor model. Latent profile analysis was used to create demographic profiles of the state's schools.
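To make the bi-factor structure concrete, the sketch below (not the study's code; all loadings, counts, and variances are hypothetical) builds the model-implied covariance matrix for a bi-factor model in which every observed measure loads on one general factor and each measure additionally loads on one of three group factors, echoing South Carolina's three indices:

```python
import numpy as np

# Hypothetical bi-factor setup: 9 observed measures, one general factor,
# three group factors with 3 measures apiece. All numbers are illustrative.
n_measures = 9
general = np.full((n_measures, 1), 0.6)     # loadings on the general factor

# Block-diagonal group-factor loadings: measures 0-2 on group 1, 3-5 on
# group 2, 6-8 on group 3 (each measure loads on exactly one group factor).
group = np.zeros((n_measures, 3))
for g in range(3):
    group[3 * g:3 * (g + 1), g] = 0.4

loadings = np.hstack([general, group])      # 9 x 4 loading matrix
psi = np.diag(np.full(n_measures, 0.3))     # unique (residual) variances

# Implied covariance among the observed measures, with factors assumed
# orthogonal and standardized (the usual bi-factor identification):
# Sigma = Lambda Lambda' + Psi
sigma = loadings @ loadings.T + psi

# Measures sharing a group factor covary through both factors
# (0.6*0.6 + 0.4*0.4 = 0.52); measures in different groups covary only
# through the general factor (0.36).
print(sigma[0, 1], sigma[0, 3])
```

In a confirmatory analysis, an implied covariance matrix like `sigma` is compared against the observed covariances, and the competing one-, two-, three-, and bi-factor specifications are judged by how well each reproduces them.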
This study used data from the South Carolina Department of Education on public elementary schools (grades 3–5), middle schools (grades 6–8), and high schools (grades 9–12) for 2012/13. The data include the individual scores for each school on each measure for all three accountability systems.
The study found that the measures that make up the three indices currently used in South Carolina to rate schools can be combined into an overall, reliable alternative index of school performance using a bi-factor model. The general factor scores estimated from the bi-factor model served as the outcome in a subsequent regression analysis to identify which schools' performance scores on this alternative index were better than expected (that is, which schools were beating the odds) after controlling for school demographic characteristics. Approximately 3 percent of elementary schools, 2 percent of middle schools, and 3 percent of high schools were identified as statistically exceeding their expected performance given the characteristics of the school.
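The "beating the odds" step described above can be sketched as a residual analysis: regress schools' general-factor scores on demographic characteristics, then flag schools whose standardized residuals are significantly positive. The sketch below uses simulated data and hypothetical predictors; it is an illustration of the general technique, not the study's actual specification.

```python
import numpy as np

rng = np.random.default_rng(0)
n_schools = 200

# Hypothetical demographic predictors (e.g., percent in poverty, enrollment),
# plus an intercept column.
X = rng.normal(size=(n_schools, 2))
X = np.hstack([np.ones((n_schools, 1)), X])

# Simulated general-factor scores driven by demographics plus noise.
beta_true = np.array([0.0, -0.5, 0.2])
scores = X @ beta_true + rng.normal(scale=0.5, size=n_schools)

# OLS fit: expected performance given demographics, and the residuals.
beta_hat, *_ = np.linalg.lstsq(X, scores, rcond=None)
resid = scores - X @ beta_hat
z = resid / resid.std(ddof=X.shape[1])   # standardized residuals

# Schools exceeding expectations at roughly the one-tailed 5 percent level.
beating_odds = np.flatnonzero(z > 1.645)
print(f"{len(beating_odds)} of {n_schools} schools beat the odds")
```

With a stricter cutoff (as a study would use to claim statistical significance), the flagged share shrinks toward the 2-3 percent range reported above.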
Many states have multiple accountability systems, often because a state system was created separately from federal accountability requirements. All such systems use multiple measures, many of which may be uncorrelated or even inversely correlated with one another. This study sheds light on one state's system but also on more general ways for states to evaluate the congruence of their accountability measures.