Individual Submission Summary

Developing a Preschool Observational Measure to Assess Group Time Intervention Success

Sat, March 23, 12:45 to 2:15pm, Baltimore Convention Center, Floor: Level 3, Room 330

Integrative Statement

Self-regulation is an important predictor of short- and long-term social and academic success (McClelland et al., 2013). Because many young children enter school struggling with self-regulation, numerous preschool intervention programs target these skills (e.g., McClelland, Tominey, Schmitt, & Duncan, 2017). Given that effect sizes for many early childhood interventions are modest (McClelland et al., 2017), understanding the factors related to implementation effectiveness is critical.
Red Light, Purple Light (RLPL) is an intervention, delivered twice a week for eight weeks, that uses music and movement games to target children’s self-regulation development. Three randomized controlled trials of RLPL have been conducted, with significant intervention effects for subgroups of children on self-regulation and early achievement outcomes (e.g., literacy and math; Duncan et al., 2018; Tominey & McClelland, 2011; Schmitt et al., 2015). Another randomized controlled trial is being conducted to refine the intervention.
As part of an iterative development process, we developed an observational measure to examine quality, adherence, and responsiveness (Berkel et al., 2011), key factors associated with intervention implementation. Minimal training on the measure was required: observers attended a 2-hour training and coded master-coded videos using a detailed observation rubric. To use the measure, observers watch and take notes while teachers deliver intervention sessions during group time, then spend 5 minutes coding ten items (e.g., teacher behaviors, classroom management, availability of additional support, and child engagement). The measure was piloted with fourteen early childhood teachers across two years. Data are presented from N = 34 group time sessions and paired with evaluation surveys, which asked teachers to rate aspects of each intervention session, including its length and difficulty, whether they and the children liked the games, whether the training manual was helpful, and the time required for preparation.
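For illustration only, the sketch below shows one way such paired session-level data could be organized for analysis; the column names and values are hypothetical, since the rubric items and survey questions are described above only by example.

```python
import pandas as pd

# Hypothetical layout: one row per observed group time session.
# Item and survey column names are illustrative, not the actual rubric wording.
observations = pd.DataFrame({
    "session_id": [1, 2, 3],
    "teacher_modeled_behavior": [3, 3, 2],   # observer-coded, rated 1-3
    "incorporated_child_ideas": [2, 1, 2],   # observer-coded, rated 1-3
    "checked_understanding": [3, 2, 3],      # observer-coded, rated 1-3
    "children_engaged": [3, 2, 3],           # observer-coded, rated 1-3
})

surveys = pd.DataFrame({
    "session_id": [1, 2, 3],
    "manual_helpful": [3, 2, 3],             # teacher-reported
    "teacher_liked_games": [3, 2, 3],        # teacher-reported
    "children_liked_games": [3, 3, 2],       # teacher-reported
    "session_length_ok": [3, 2, 3],          # teacher-reported
    "session_difficulty": [2, 3, 1],         # teacher-reported
})

# Pair observer codes with the teacher's evaluation of the same session.
paired = observations.merge(surveys, on="session_id")
print(paired.describe())
```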
Observer report. Teachers were rated relatively highly on all ten items (Ms = 2.15-2.88; SDs = .41-.89; range: 1-3), with the lowest mean for “Teacher incorporated children’s ideas” and the highest for “Teacher modeled appropriate behavior.” The child-related item on the measure (children engaged) was correlated with “Teacher checks for children’s understanding” (r = .30, p = .08), but not with other items.
Comparisons with intervention evaluation surveys. Observer ratings were significantly related to teacher evaluation surveys. Teacher reports of finding the manual helpful were associated with teachers and children liking the games (r = .76 and r = .71, p < .001), optimal session length (r = .24, p < .001), and optimal difficulty (r = .19, p < .01). Teachers reporting that they (and children) liked a session were associated with teacher ratings that the session took the appropriate amount of time (r = .39 and .37, respectively). Teacher and observer ratings of the appropriateness of session length were correlated (r = .36, p = .06). In addition, teacher reports of session difficulty were associated with the frequency of teachers checking for children’s understanding, with greater difficulty related to more frequent checking for understanding (r = -.33, p = .08). Overall, results suggest that this short, practical observational coding system may be a feasible way to assess fidelity of implementation and is related to teacher-report measures. Future studies will investigate associations between the measure and intervention effects.
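As a minimal sketch of the kind of bivariate analysis reported above, the snippet below computes a Pearson correlation between an observer-coded item and a teacher survey item; the variable names and values are hypothetical, not study data.

```python
from scipy.stats import pearsonr

# Hypothetical paired ratings for the same sessions (observer vs. teacher survey).
checked_understanding = [3, 2, 3, 1, 2, 3]   # observer item, rated 1-3
session_difficulty    = [2, 3, 1, 3, 3, 1]   # teacher-reported difficulty

# Pearson correlation between the observer-coded item and the survey item,
# analogous to the checking-for-understanding vs. difficulty association above.
r, p = pearsonr(checked_understanding, session_difficulty)
print(f"r = {r:.2f}, p = {p:.3f}")
```

The same call can be repeated for any pair of observer and survey columns, such as the session-length ratings from both reporters.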
