Individual Submission Summary

Mind the gap: A cross-national look at the contextualization and functioning of a teacher observation tool

Mon, March 26, 3:00 to 4:30pm, Hilton Reforma, Floor: 2nd Floor, Don Diego 3

Proposal

Background:
The quality of program implementation matters: practitioners know this anecdotally, and research has confirmed that it is one of the strongest predictors of participants' outcomes. Yet despite both empirical evidence and intuitive understanding that who delivers an intervention, how often, and with what quality are critical factors influencing its effectiveness, systematic guidance and agreement on the definition of program implementation and how to measure it do not yet exist. As such, a major aim of the Education in Emergencies: Evidence for Action (3EA) project has been to design, adapt, and develop tools and systems that enable practitioners on the ground to better monitor the implementation and quality of education programs while simultaneously allowing researchers to measure that implementation and quality.

One such field-developed tool is the Teacher Classroom Observation (TCO) measure, created by IRC staff in Lebanon to assess teacher implementation of both behavioral ("the teacher stops and asks questions to check understanding") and global ("the teacher treats all students in the class equally") aspects of the Learning in a Healing Classroom program.

In this presentation, we will discuss the adaptation of the TCO to assess critical elements of the 3EA initiative (e.g., Mindfulness and Brain Games activities; see intervention, below) as well as its contextualization in each 3EA program country (Lebanon, Niger, Sierra Leone). We will also present preliminary cross-country validation of the tool, drawing on evidence of internal consistency, inter-rater reliability, face validity, and predictive validity.

Intervention:
Working in three countries affected by conflict or crisis (Lebanon, Niger, and Sierra Leone), the 3EA initiative is implementing a set of contextually appropriate, low-intensity SEL interventions (Mindfulness practices and executive-functioning Brain Games) targeted at reducing children's stress and improving their executive functioning and basic literacy skills in emergency contexts. The SEL interventions are implemented on the foundation of IRC's Learning in a Healing Classroom (LIHC) program.

Data and Methods:
The adaptation and contextualization of the TCO were completed at in-country design workshops through a collaborative, iterative process between field teams and researchers. TCO data collection procedures varied by country but involved trained IRC staff or ministry affiliates observing each teacher for a minimum of two lessons within each four-month program period ("cycle"); lesson content was limited to math, reading, and SEL. Each teacher practice in the TCO was rated on a 1-4 scale.

For quantitative analyses, results are pooled across teachers (Niger n=150, Lebanon n=150, Sierra Leone n=60), within country, to the level of teacher practice (e.g., "the teacher uses a variety of questioning techniques"). To investigate internal consistency, practices are grouped into theorized constructs (e.g., "Reading Pedagogy"). Inter-rater reliability is investigated via a subset of TCO observations completed by two trained raters; both exact-match agreement and Cohen's kappa are used.
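
For illustration only, the reliability statistics named above could be computed along the following lines. This is a minimal sketch, not the 3EA analysis code; it assumes ratings are stored as 1-4 integers per practice, and the data and variable names (rater_a, rater_b) are hypothetical.

    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """Internal consistency for one construct; rows = teachers, columns = practices."""
        item_vars = items.var(axis=0, ddof=1)        # variance of each practice
        total_var = items.sum(axis=1).var(ddof=1)    # variance of the construct total
        k = items.shape[1]
        return k / (k - 1) * (1 - item_vars.sum() / total_var)

    def exact_match_agreement(r1: np.ndarray, r2: np.ndarray) -> float:
        """Share of double-coded observations on which both raters gave the same score."""
        return float(np.mean(r1 == r2))

    def cohens_kappa(r1: np.ndarray, r2: np.ndarray) -> float:
        """Chance-corrected agreement between two raters on the 1-4 scale."""
        categories = np.union1d(r1, r2)
        p_o = np.mean(r1 == r2)                       # observed agreement
        p_e = sum(np.mean(r1 == c) * np.mean(r2 == c) # agreement expected by chance
                  for c in categories)
        return (p_o - p_e) / (1 - p_e)

    # Hypothetical double-coded ratings for a single practice:
    rater_a = np.array([3, 4, 2, 3, 1, 4])
    rater_b = np.array([3, 4, 2, 2, 1, 4])
    print(exact_match_agreement(rater_a, rater_b), cohens_kappa(rater_a, rater_b))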

Information regarding teachers' and learning coaches' acceptance of the tool, including perceptions of its alignment with program-specific teaching practices and its congruence with their teaching ideologies, is examined via a teacher survey and focus group in Sierra Leone.

Selected Preliminary Results:
Face validity of the tool was found to be generally high among both teachers and coaches, particularly following the contextualization of the practices.

Results indicate that the psychometric functioning of the TCO varies across countries and constructs. Cronbach's alphas are, on average, in the acceptable-to-good range (0.6-0.8), demonstrating internal consistency across constructs, with the exception of constructs with very few items (two). Evidence of predictive validity was mixed, with EGRA outcomes best predicted by classroom-management practices.

Inter-rater reliability was high in Lebanon, where the tool was developed, and lower in Niger and Sierra Leone.

Conclusion:
Given that the field of implementation measurement is only just emerging, particularly in education, this cross-country validation of a field-generated implementation tool may provide some of the first rigorous evidence on such a tool in developing contexts. Because of the study's cross-national sample, we are able to draw on the expertise of our in-country partners in the global south and deploy that knowledge across contexts, where appropriate, to ensure high-quality implementation of programming and, ultimately, to support children's academic learning and well-being.
