Play-based learning promotes children's holistic development and has been found to support a wide range of early learning outcomes, including early math, language, and social-emotional skills (Hirsh-Pasek et al., 2020; Nores et al., 2019; Storksen et al., 2023). While advances have been made in developing and implementing models of play-based learning, the mechanisms by which these models work remain poorly understood. Critically, the overlap between engaged learning and play has been recognized as a key juncture where measurement approaches could be strengthened (Wolf et al., 2024). Measures must not only be rigorous and evidence-based, but also flexible enough to respond to the myriad cultural and linguistic contexts that may benefit from them. In early childhood education, particularly in low- and middle-income countries, supporting children's engagement has largely been absent from discussions of process quality (Chen & Wolf, 2021). In this paper, we discuss steps taken to adapt and validate an early childhood classroom observation tool in three countries: Colombia, Bangladesh, and South Africa.
Through the Playful Learning Across the Years (PLAY) project, RTI and NYU, with support from the LEGO Foundation, developed a globally relevant, contextually adapted, and psychometrically tested suite of tools built around four hypothesized constructs that support children's engagement in learning: support for exploration, agency, personal and social connection, and emotional climate (Wolf et al., 2024). In the ongoing second phase of the project, the tools have been refined based on initial findings from Colombia, Ghana, Kenya, and Jordan, and are now being implemented in a second set of countries in collaboration with local partners. In addition to collecting the data, implementing partners helped contextualize the tools for their respective settings.
We will first present our contextualization process (which is included in the guidelines for implementing the PLAY tools in other contexts), illustrated with examples from the three countries and with a particular focus on the classroom observation tool. In each country, we followed a general process of collaborating with local partners (BRAC in Bangladesh, AeioTU in Colombia, and JET in South Africa) to review the tools, identify contextual challenges and gaps, and modify items accordingly. To preserve the central meaning of the measure, adaptations to each item were limited to: minor terminology changes better suited to a given language or setting; adjustments to the item metric (e.g., raising the number of instances of a behavior required to reach the "high" level of the item); or adding, removing, or otherwise changing the examples provided for a given item. Following initial contextualization, research partners conducted remote trainings (using a train-the-trainer model) with a team of six from each implementing partner. These organizations then led trainings with enumerators for pilot data collection, the results of which informed further adaptations to items in each context. Pilot trainings lasted a minimum of five days, and participants were required to pass calibration with 70% accuracy to move forward as trainers. As part of this presentation, we will share pilot data from each country, presenting item distributions and the adaptations made in response. We will discuss the role of working across varied sub-contexts within each country (in Bangladesh, our sample includes children in both Rohingya refugee camps and host settings; in Colombia, it spans center-based and home-based early learning settings), and offer implications for other contexts, including other countries serving displaced populations.
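The 70% calibration gate described above amounts to a simple item-level agreement check between a trainee's codes and a master coder's codes for the same observation. A minimal sketch, assuming per-item categorical scores; the function name and data here are hypothetical, not part of the PLAY toolkit:

```python
def calibration_accuracy(trainee_codes, master_codes):
    """Share of items on which a trainee's codes match the master coder's.

    Used as a pass/fail gate: trainees advance only at >= 70% agreement.
    Both arguments are equal-length lists of categorical item scores.
    """
    assert len(trainee_codes) == len(master_codes)
    matches = sum(t == m for t, m in zip(trainee_codes, master_codes))
    return matches / len(master_codes)

# Hypothetical calibration round on a 20-item observation video:
# the trainee disagrees with the master coder on 5 of 20 items.
master = ["high", "mid", "low", "mid"] * 5
trainee = ["high", "mid", "mid", "mid"] * 5
acc = calibration_accuracy(trainee, master)
print(f"{acc:.0%}", "PASS" if acc >= 0.70 else "RETRAIN")  # prints "75% PASS"
```

Note that raw percent agreement is easy to compute during trainings but does not correct for chance agreement, which matters more in the formal reliability analysis.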
Second, we will present preliminary findings on the reliability and validity of the tools in these contexts, providing evidence for whether the tools, and the processes we undertook to contextualize them, validly measure the quality of support for children's engagement in learning in these settings. Prior to full-scale data collection (and following the contextualization and pilot processes), research partners delivered in-person trainings on the finalized versions of each contextualized tool. Trainings for this main data collection varied in form across contexts. In Colombia, we used a hybrid model: we collaborated with AeioTU to first co-lead a training in Bogota (with master coders identified from the first pilot training); these trainers then led the subsequent trainings in the other cities in our sample (Medellin and Cartagena). In Bangladesh, a train-the-trainer model was used, and in South Africa we employed a direct in-person training model with program staff. Again, all participants were required to pass a 70% accuracy threshold on video calibration. PLAY data collection will include two rounds of classroom observations. In addition, at the beginning and end of each school year, partners collect child learning outcomes data using direct child assessments: the International Development and Early Learning Assessment (IDELA) in Bangladesh and Colombia, and the Early Learning Outcomes Measure (ELOM), a direct assessment validated in South Africa for 4- to 6-year-old children, in the South Africa sample.
Following completion of all data collection, we will assess the performance of these tools in each country (n = 150-375 classrooms and 900-4,259 children). We will present results on inter-rater and test-retest reliability, factor analysis, and concurrent validity with other classroom features and teacher characteristics. Furthermore, we will examine associations with child outcomes, as directly measured in children at the beginning (baseline) and end (endline) of the relevant school year.
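Inter-rater reliability for categorical observation items is commonly summarized with a chance-corrected statistic such as Cohen's kappa, which discounts the agreement two raters would reach by scoring at random from their own marginal distributions. A minimal stdlib sketch for illustration only; the scores below are invented, and the analysis plan above does not commit to this particular statistic:

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Chance-corrected inter-rater agreement (Cohen's kappa).

    rater_a, rater_b: equal-length lists of categorical item scores,
    e.g. the low/mid/high levels of an observation item.
    """
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of exact agreement.
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal distribution.
    ca, cb = Counter(rater_a), Counter(rater_b)
    pe = sum(ca[k] * cb[k] for k in ca) / (n * n)
    return (po - pe) / (1 - pe)

# Two enumerators scoring the same 10 classroom segments (invented data).
a = ["low", "mid", "mid", "high", "low", "mid", "high", "high", "mid", "low"]
b = ["low", "mid", "low", "high", "low", "mid", "high", "mid", "mid", "low"]
print(round(cohen_kappa(a, b), 3))  # → 0.697
```

Here the raters agree on 8 of 10 segments (80%), but after subtracting the 34% agreement expected by chance, kappa is about 0.70, which is why chance-corrected indices are preferred over raw percent agreement when reporting reliability.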
By investigating the implementation and performance of these tools across Colombia, Bangladesh, and South Africa, we take further steps toward a technical toolkit that is both responsive to context and validly measures how well early learning environments support children's engagement in learning. These results will also add to the evidence base on the mechanisms by which adult-child interactions support playful learning, how those mechanisms may manifest differently across contexts, and the steps that researchers and programs alike can take to ensure the cultural and contextual relevance of tools that capture these processes.