Paper Summary
Developing a System of Practical Measures, Routines, and Representations to Inform and Enhance Instructional Improvement Initiatives

Sun, April 7, 8:00 to 9:30am, Metro Toronto Convention Centre, Floor: 800 Level, Room 801A

Abstract

Purpose:
This poster focuses on a key aspect of improvement research: the development and use of practical measures to inform improvement efforts (Bryk, Gomez, Grunow, & LeMahieu, 2015). Specifically, we report on an ongoing effort to develop practical measures of 1) aspects of high-quality mathematics instruction linked to student learning, and 2) supports for teachers to improve the quality of instruction and students' learning.

Perspective:
We first describe criteria for effective practical measures that differentiate them from research and accountability measures, including that they should fit with rather than disrupt practitioners' current practices and can thus be used frequently to provide ongoing feedback. We also give an overview of the envisioned system of measures for supporting instructional improvement efforts, together with the associated routines and representations that five research-practice partnerships (RPPs) are developing collaboratively.

Method and Data:
We illustrate our process for developing practical measures and investigating their use by focusing on a measure of the quality of whole-class discussions that takes the form of a short student survey. We describe how we drew on existing research to develop initial items and how we iteratively tested and revised the items until students’ responses (aggregated at the classroom level) matched expert assessment of the quality of the corresponding aspects of discussions.

Results:
We report analyses of how the whole-class discussion measure has been used to inform instructional improvement initiatives in two RPPs.

In one RPP, mathematics specialists used the measure to inform a curriculum writing initiative aimed at improving the rigor of lessons and units. As part of the effort, six teachers (grades 6-8) routinely piloted newly written lessons and administered the survey measure to their students (56 administrations in total). The resulting data provided feedback to the mathematics specialists regarding the quality of discussions associated with specific lessons. Researchers audio-recorded sessions in which the mathematics specialists discussed the data and revised lessons, and coded the rigor of the written lessons using the Instructional Quality Assessment (Boston, 2012). Analysis indicates that the data provided by the measure supported productive revisions of the written lessons.

In a second RPP, the measure has been used to enhance coaches' work with teachers. We are investigating coaches' use of the measure as they conduct coaching cycles with individual teachers by observing and live coding focal lessons, by audio-recording the coaches' and teachers' initial planning conversations and subsequent debriefing conversations, and by audio-recording semi-structured interviews with coaches and teachers. Analysis of the resulting data clarifies the specific ways in which the coaches' use of the surveys enhanced their efforts to support teachers' learning, with the surveys thereby serving as supports for, as well as measures of, improvement.

As part of the investigation of each of these uses of the measure, we seek to clarify necessary conditions for the measure to be used productively to enhance rather than impair instructional improvement efforts.

Significance:
These results illustrate the contribution that practical measures can make to instructional improvement efforts in core content areas by serving as levers for (and measures of) improvement.
