Paper Summary

Capturing the "Context": Learning With and From the Development of a Survey for "Urban" Settings

Mon, April 16, 2:15 to 3:45pm, New York Hilton Midtown, Floor: Concourse Level, Concourse A Room

Abstract

Objective
As the field moves toward context-specific rather than more generic approaches to teacher preparation, we face the dilemma of developing instruments that can surface what is relevant to a particular setting while also capturing important contextual differences. In this paper, we report on the collective effort of one research consortium to develop a survey about teacher preparation for “urban” settings for use across our multiple programs. We discuss our approach to survey development, share key findings from our pilot, and consider implications for ongoing improvement.

Theoretical Framework
We consider this broad call for improved instruments through the lens of context-specific teacher education, defined as a targeted form of preparation that helps aspiring teachers learn in ways that are grounded in practice while developing a nuanced understanding of the specific students and contexts in which they will be teaching (Authors, 2014; Authors, 2015). Given that the programs we represent include many “context-specific” programs, we aimed to develop a survey instrument that surfaces, rather than masks, the targeted nature of the work within each setting.

Methods and Data Sources
Our group’s survey development efforts were guided by the question: How do our candidates learn to be “urban” educators? We initially searched for existing surveys about preservice teaching, as well as those addressing issues of race/ethnicity, class, language, and identity, all pertinent to the preparation of teachers entering schools in underserved settings.

The survey we ultimately developed was administered electronically in June 2016 across the seven teacher preparation programs in our sample (n = 233). Because no responses were forced, responses per item ranged from 130 to 150, and per-program responses per item ranged from 18 to 46 candidates. We ran a non-parametric test of variance (Kruskal-Wallis) on each item to identify statistically significant differences between programs.
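
To make the analytic step concrete, the sketch below shows what a per-item Kruskal-Wallis comparison can look like in Python. It is a minimal illustration under assumed conditions, not the consortium’s actual code: the data layout (one row per candidate, a “program” column, one column per item), the column names, and the function name are all hypothetical.

```python
# Minimal sketch of a per-item Kruskal-Wallis analysis.
# Assumes a hypothetical data frame with one row per candidate,
# a "program" column, and one numeric column per survey item.
import pandas as pd
from scipy.stats import kruskal

def kruskal_by_item(df: pd.DataFrame, item_cols: list) -> pd.DataFrame:
    """Test each item for differences across programs (Kruskal-Wallis)."""
    results = []
    for item in item_cols:
        # Drop missing responses: with no forced responses, items were skippable.
        groups = [g[item].dropna().to_numpy() for _, g in df.groupby("program")]
        groups = [g for g in groups if len(g) > 0]
        stat, p = kruskal(*groups)
        results.append({"item": item, "H": stat, "p": p})
    return pd.DataFrame(results)
```

In this setup, each program’s distribution of responses to a given item forms one group, and a small p-value flags items on which programs differ significantly, which is the kind of between-program difference reported below.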

Results
We were surprised by the dearth of existing surveys that queried candidates about issues of race, class, language, and culture. When such items were present, they often took the form of attitudinal scales rather than measures of “opportunities to learn” or “perceived preparedness.” Some items also raised concerns about their potential to elicit socially desirable responses.

Our results revealed surprising variations as well as some reassuring consistencies. For instance, while candidates varied significantly in the degree to which they felt their programs were committed to social justice and equity (unexpected, given how foundational these ideas were to the participating programs), candidates across all programs reported strong intentions to teach in underserved schools. The structure of a shared survey provided a rich built-in comparison group, affording each program an opportunity to consider its candidates’ responses relative to the mean of the entire group on the same item.

Significance
The research group’s experience in developing a shared survey reveals some of the challenges and opportunities inherent to developing quality instruments across multiple context-specific programs. By learning with and from relevant data, programs can collaboratively engage in informed program and instrument improvement.
