Much attention has been given recently to the importance of measuring the quality of implementation of research- and evidence-based programs. Identifying a program's core elements gives rise to measures that allow monitoring of fidelity of structure. Equally important is identifying the core facilitation skills that must be present to be confident in fidelity of process (Mowbray et al., 2003). By measuring both major components of fidelity, we are better able to understand participants' experience of the programs being provided. This information is vital not only for program improvement and ongoing training but also for understanding participant outcomes.
This paper reports on the fidelity measures collected during delivery of the Cultivating Awareness and Resilience in Education (CARE) for Teachers professional development program for an IES-funded efficacy trial. Two cohorts participated in the study during the 2012-2014 school years, and a total of 226 teachers from 36 schools were recruited. Teachers were randomly assigned within schools to intervention or waitlist control. Intervention teachers received CARE at the beginning of Year 1, and control teachers received the training at the end of Year 2. For these analyses, data were used from 125 teachers in three intervention-group trainings and one control-group training. The CARE program is delivered across five days, over a span of two to three months.
Implementation quality was measured in two ways. Fidelity of structure was measured by observers rating the extent to which the participant objectives for each core component were met. To address fidelity of process, observers rated facilitators on the extent to which they engaged in desired and undesired facilitation skills during each session. A combined rating of process and structure was computed to examine overall implementation quality. Participant outcomes were assessed through daily knowledge assessments and participant ratings of usefulness, completed at the end of each session and averaged at the training level. Additionally, teachers completed a participant evaluation on the last day of training that asked about the effects of CARE on them personally and the effects they were seeing in the classroom.
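The weighting used to form the combined rating is not specified here; one plausible construction, assuming equal weights and components placed on a common scale, is:

Overall implementation quality = (fidelity-of-structure rating + fidelity-of-process rating) / 2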
Preliminary analyses indicate that overall implementation quality significantly predicted increases at the end of training in teachers' ratings of feeling in control of their time (β = .31, t(88) = 2.94, p < .01) and teachers' ratings of student prosocial behavior in the classroom (β = .34, t(88) = 3.25, p < .01). There was a marginally significant effect of fidelity of process on participants' average ratings of the usefulness of the training (β = .18, t(88) = 1.83, p = .07); that is, when facilitators were rated as having better facilitation skills, participants rated the trainings as more useful. However, no relationship was found between fidelity of structure and participant ratings. Finally, no relationship was found between either fidelity of structure or fidelity of process and participants' knowledge assessment scores. Data analyses are currently underway with follow-up ratings from intervention teachers to determine whether effects observed at the end of the training were still present in the following school year.
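The reported coefficients are consistent with a standardized linear regression of each training-end outcome on the fidelity predictor; the exact covariate set is not reported here, so the single-predictor form below is an assumption:

Outcome_i = β₀ + β₁ · Fidelity_i + ε_i

where β₁ is the standardized coefficient reported above, tested with t(88).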
Sebrina Doyle, The Pennsylvania State University
Patricia A. Jennings, University of Virginia
Anna DeWeese, Fordham University
Jennifer L. Frank, The Pennsylvania State University
Joshua L. Brown, Fordham University
Regin Tanler, Fordham University
Damira S. Rasheed, Fordham University
Mark T. Greenberg, The Pennsylvania State University