Identification and Selection of "Odds-Beating" Elementary Schools: Comparisons of Relative School Performance on Common Core Assessments

Sat, April 18, 2:45 to 4:15pm, Swissotel, Floor: Event Centre First Level, Zurich AB

Abstract

Purpose

The purpose of this paper is to explain how this study’s schools were sampled. Of special interest were schools whose students exceeded performance expectations on the 2012-13 New York State assessments, the state’s first to be explicitly aligned with the Common Core Learning Standards (CCLS). In addition, the paper explores how schools’ accountability status under the state’s earlier assessments related to their 2012-13 results.

Theory

Beginning with the Coleman Report in the 1960s, research has documented a strong relationship between schools’ demographic composition and their academic performance. However, systematic variation in performance can be found even among schools serving similar populations. Higher-than-predicted student learning on standardized assessments suggests that some schools are making more progress than comparison schools. Our team calls these schools “odds beating.” Our sampling method enables contrasts with comparable “typically performing” schools.

Methods

Six comparisons of relative student performance, covering two subjects (mathematics and English language arts) at each of three grade levels (third through fifth), were used to identify “odds-beating” and “typically performing” schools. This paper presents salient details about the multiple analyses and comparisons made to ensure that schools identified as “odds beating” were distinctive in their performance on the CCLS assessments.

Data Sources and Analytic Procedures

The regression analyses used publicly available data. These analyses predicted schools’ average scores in a given subject at a given grade level as a function of two student-population characteristics: the percentage of students who were economically disadvantaged and the percentage who were limited English proficient.
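To make the modeling concrete, here is a minimal sketch of this kind of regression in Python. It is not the authors’ code, and the column names (mean_score, pct_econ_disadv, pct_lep) are hypothetical placeholders for fields in the public school-level data:

    # Minimal sketch, not the study's actual code. Column names are
    # hypothetical placeholders for the publicly available data.
    import pandas as pd
    import statsmodels.api as sm

    def fit_expected_scores(schools: pd.DataFrame) -> pd.Series:
        """Regress schools' mean scores for one subject-grade combination
        on two demographic predictors, returning the 'expected' scores."""
        X = sm.add_constant(schools[["pct_econ_disadv", "pct_lep"]])
        fitted = sm.OLS(schools["mean_score"], X).fit()
        return fitted.predict(X)

Each school’s actual-expected gap for that subject-grade combination is then simply schools["mean_score"] minus the predicted values returned above.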

The number of students in a school who took a given exam was used to adjust for the statistical reliability of the mean score estimates. The six actual-expected performance gaps were then used to classify schools according to whether (a) a school’s average gap was at least one standard deviation greater than the mean gap statewide and/or (b) at least three of its gaps were statistically significant.
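The classification rule can be summarized in a short sketch, again hypothetical rather than the authors’ code. It assumes gaps is a schools-by-six DataFrame of actual-expected gaps (one column per subject-grade combination) and sig is a matching boolean DataFrame flagging gaps that are statistically significant once mean-score reliability is taken into account:

    import numpy as np
    import pandas as pd

    def classify_schools(gaps: pd.DataFrame, sig: pd.DataFrame) -> pd.Series:
        mean_gap = gaps.mean(axis=1)              # each school's average gap
        state_mean, state_sd = mean_gap.mean(), mean_gap.std()
        # Criterion (a): average gap at least one SD above the statewide mean
        crit_a = mean_gap >= state_mean + state_sd
        # Criterion (b): at least three of the six gaps significant (read
        # here as significant *and* positive, an assumption of this sketch)
        crit_b = (sig & (gaps > 0)).sum(axis=1) >= 3
        return pd.Series(
            np.where(crit_a | crit_b, "odds beating", "typically performing"),
            index=gaps.index,
        )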

Findings

Student demographics accounted for over two thirds of the variation in average scores on the CCLS assessments. Graphical displays in a viewer-friendly format will show a downward-facing S-shaped curve that summarizes the relationship between average scores and rates of economic disadvantage for all subjects and grades. “Typically performing” schools will be indicated along an “expected” line through the center of the curve; “odds-beating” schools will be shown along the top border of the curve, indicating significantly positive actual-expected gaps.
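As an illustration of how such a display could be drawn, here is a sketch only, reusing the hypothetical column names from the earlier snippets plus an assumed group column from the classification step; it is not the authors’ figure code:

    import matplotlib.pyplot as plt
    import pandas as pd

    def plot_performance_curve(schools: pd.DataFrame):
        """Scatter schools' mean scores against economic disadvantage,
        distinguishing the two classification groups."""
        fig, ax = plt.subplots()
        for group, marker in [("typically performing", "o"),
                              ("odds beating", "^")]:
            sub = schools[schools["group"] == group]
            ax.scatter(sub["pct_econ_disadv"], sub["mean_score"],
                       marker=marker, label=group)
        ax.set_xlabel("Percent economically disadvantaged")
        ax.set_ylabel("School mean score")
        ax.legend()
        return fig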

Displays will also show the congruence between schools’ accountability status based on the 2010-12 assessments and their performance on the 2012-13 CCLS assessments. For example, just over 30% of “reward” elementary schools (recognized by the state for high student growth or for closing subgroup performance gaps on the earlier assessments) were classified as “odds beating,” compared with only 9% of “priority” or “focus” schools (those classified by the state among the bottom 5% of schools on the earlier assessments).

Significance

The fact that our odds-beating schools tended to perform relatively well on the assessments that preceded the 2012-13 CCLS-aligned tests suggests that these schools have enduring characteristics, and that those characteristics are instrumental in their adoption and implementation of the CCLS and the state’s Annual Professional Performance Review (APPR) requirements.
