Paper Summary

Impact Evaluation of a Science Teacher Professional Learning Intervention Grounded in the Program's Logic Model

Mon, April 20, 4:05 to 6:05pm, Virtual Room

Abstract

Objectives: We report impacts from a two-year randomized controlled trial of a teacher professional learning (PL) intervention, including impacts on teacher content knowledge (TCK) and on students’ science achievement and non-academic outcomes. We also demonstrate a collaborative, iterative process between evaluators and program developers that informs exploratory analysis and supports ongoing development of the program’s logic model (LM).

Perspective(s): A program’s LM is a visual representation of its theoretical underpinnings. It is critical both for setting a course and prioritizing analyses at the outset and for interpreting results. We stress this latter application: an LM is a falsifiable system of conjectures to be empirically tested and refined. It is essential that evaluators, program developers, and other stakeholders bring expertise from their respective arenas and collaborate in “backward mapping” the LM in light of empirical results, showing where it is supported or challenged.

Methods: Sixty schools were randomly assigned to the intervention or to a business-as-usual control condition. Impacts on intermediate and final outcomes were assessed after two years using standard methods for analyzing cluster randomized trials.
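
To make the analytic approach concrete, the sketch below estimates a treatment impact from a cluster randomized design with a school-level random intercept. This is a minimal illustration only: the variable names (outcome, treat, pretest, school_id), the data file, and the use of statsmodels are assumptions, not the study's actual analysis code.

```python
# Minimal sketch of a cluster-randomized impact estimate: a two-level model
# with a random intercept for school, the unit of random assignment.
# All column names and the file name are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("analysis_file.csv")  # hypothetical analysis file

# Treatment indicator plus a baseline covariate; the school random intercept
# accounts for clustering of teachers (or students) within schools.
model = smf.mixedlm("outcome ~ treat + pretest", data=df, groups=df["school_id"])
result = model.fit()
print(result.summary())

# One common standardization: the adjusted treatment coefficient divided by
# the unadjusted standard deviation of the outcome in the control group.
es = result.params["treat"] / df.loc[df["treat"] == 0, "outcome"].std()
print(f"Standardized effect size: {es:.3f}")
```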

Data: We use data from a student science assessment, a TCK assessment, and administrator, teacher, and student surveys, covering more than 100 upper elementary school teachers and 2,000 students across the 60 schools in two states.

Results: Preliminary results (currently available) include an impact on TCK of 0.292 standard deviations (p = .10) and no impacts on student-reported non-academic outcomes: self-efficacy in science learning (ES = -0.063, p = .44), quality of the science learning environment (ES = -0.001, p = .99), quality of science instruction (ES = 0.007, p = .94), science activities (ES = 0.067, p = .59), and cognitive demand of science lessons (ES = 0.086, p = .43) (all standardized effect sizes). There was a small negative impact on student aspirations in science learning and careers (ES = -0.124, p = .05) (Tables 2.1 and 2.2). The remaining results will be available by the conference, as the funder report is due in 2019.
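
The abstract does not specify how the standardized effect sizes above were computed; a common convention in cluster randomized evaluations, shown here only as an illustration, divides the model-adjusted impact estimate by an unadjusted pooled standard deviation of the outcome:

```latex
% Illustrative definition only; the study's exact estimator is not stated.
% \hat{\gamma}_{\text{treatment}} is the adjusted treatment coefficient;
% S_T, S_C and n_T, n_C are the treatment/control standard deviations and counts.
\[
  ES = \frac{\hat{\gamma}_{\text{treatment}}}{S_{\text{pooled}}},
  \qquad
  S_{\text{pooled}} = \sqrt{\frac{(n_T - 1) S_T^2 + (n_C - 1) S_C^2}{n_T + n_C - 2}}
\]
```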

At the conference, we will backward map impact findings further by reporting impacts on student science achievement and on potential mediators, including teacher instructional practices and philosophies, student attitudes and beliefs, opportunity to learn, school climate, and outcomes related to NGSS implementation. For instance, we are interested in whether impacts on distal outcomes, such as achievement, are mediated by opportunity to learn and school climate, both theoretically critical to moving the needle on final outcomes, and whether these mediators are in turn associated with school leadership and teacher PL. We seek to explain results whether or not impacts are observed: if impacts are found, what facilitated them? If not, what were the potential impediments? We will also examine fidelity of implementation. The final product will be an emergent picture of the LM in which impacts have been empirically tested.
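
To illustrate the kind of mediation question described above, the sketch below uses a simple product-of-coefficients approach. The mediator, outcome, and column names are hypothetical, the study's actual mediation strategy is not described in the abstract, and a full analysis would model the multilevel structure more carefully than this illustration does.

```python
# Simplified product-of-coefficients mediation sketch: does treatment shift a
# mediator (e.g., opportunity to learn), and does that mediator predict the
# distal outcome (e.g., science achievement) net of treatment?
# Column names and the file are hypothetical; cluster-robust standard errors
# are a rough stand-in for a fuller multilevel specification.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("analysis_file.csv")  # hypothetical analysis file

# Path a: treatment -> mediator
med_model = smf.ols("opportunity_to_learn ~ treat", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["school_id"]}
)

# Paths b and c': mediator and treatment -> outcome
out_model = smf.ols("achievement ~ treat + opportunity_to_learn", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["school_id"]}
)

# Indirect (mediated) effect as the product of the two path coefficients.
indirect = med_model.params["treat"] * out_model.params["opportunity_to_learn"]
print(f"Indirect effect via opportunity to learn: {indirect:.3f}")
print(f"Direct effect of treatment: {out_model.params['treat']:.3f}")
```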

Significance: Aligned with the AERA 2020 Conference’s theme, our session shows how collaboration among researchers and organizational stakeholders around evaluation results supports ongoing work to strengthen a program’s LM.
