Measurement of early childhood development is crucial to determine whether curricula, teaching practices, programs, and policies are effective for young children. The International Development and Early Learning Assessment (IDELA) has proven a valuable tool, with strong validity evidence from a range of settings and countries. It has been used internationally both for research on early childhood development and as an outcome measure in evaluations of ECD programs and policies designed to promote children’s development.
Since 2013, Save the Children Rwanda, in collaboration with the Rwanda Education Board, has adapted and deployed IDELA in the Rwandan context as part of the “Early Literacy & Maths Initiative (ELMI)”. To further address the country’s limited access to real-time ECD data, USAID Schools and System started the process of institutionalizing IDELA so that pre-primary education has a standard tool for measuring ECD and for determining the effectiveness of curricula, teaching practices, programs, and policies for young children.
While scores calculated according to standardized IDELA methodology are useful for hypothesis testing and group mean comparison, they carry no intrinsic value-laden meaning. This makes using IDELA for monitoring purposes challenging: average scores can be compared over time, but without the ability to attach labels to ranges of scores, it is difficult to communicate the practical meaning of changes in score distributions.
This presentation describes the efforts to integrate IDELA into systems-level ECD monitoring in Rwanda and outlines a methodology for deriving contextually relevant and culturally sensitive benchmarks. These benchmarks allow users to attach meaningful labels to different scoring levels on IDELA and facilitate communicating the results of monitoring and evaluation efforts.
The proposed methodology modifies the Angoff and bookmark standard-setting methods, following several steps to derive benchmarks in conjunction with a group of subject matter experts (SMEs) in a given setting:
1. Identify the desired score levels (e.g., on-track vs. off-track, or struggling, basic, and proficient);
2. Identify the relevant age-group populations;
3. Write policy-level descriptions for each age group and each score level, and calibrate performance expectations across SMEs;
4. Examine the score distribution of a representative population;
5. Estimate the probability that a “minimally proficient” child at each score level will answer each item correctly;
6. Collate the crowd-sourced probabilities into aggregate cut points and examine their effects on the empirical score distributions;
7. Iterate until the SMEs reach consensus on appropriate cut points.
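The aggregation in steps 5 and 6 can be sketched in code. The snippet below is an illustrative simplification, not the authors’ implementation: all SME ratings, cut points, and labels are hypothetical, and it assumes the classic Angoff rule that the cut score is the sum, across items, of the mean SME-estimated probability that a minimally proficient child answers each item correctly.

```python
from statistics import mean

def angoff_cut_score(ratings):
    """ratings: one list per SME, each holding a probability per item
    that a 'minimally proficient' child answers that item correctly.
    Cut score = sum over items of the mean rating across SMEs."""
    n_items = len(ratings[0])
    return sum(mean(sme[i] for sme in ratings) for i in range(n_items))

def classify(score, cut_points, labels):
    """Assign a performance label given ascending cut points.
    len(labels) must be len(cut_points) + 1."""
    for cut, label in zip(cut_points, labels):
        if score < cut:
            return label
    return labels[-1]

# Hypothetical ratings from three SMEs on a five-item domain
ratings = [
    [0.4, 0.6, 0.5, 0.7, 0.3],
    [0.5, 0.5, 0.6, 0.6, 0.4],
    [0.3, 0.7, 0.4, 0.8, 0.5],
]
cut = angoff_cut_score(ratings)  # expected number correct at the threshold
print(round(cut, 2))  # 2.6

labels = ["struggling", "basic", "proficient"]
# Second cut point (4.0) is likewise hypothetical
print(classify(2.0, [cut, 4.0], labels))  # struggling
```

In practice, the resulting cut points would be applied to the empirical score distribution (step 6) so the SMEs can see what share of children falls into each level before iterating toward consensus.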
This presentation illustrates a generalizable methodology that can be implemented in any setting with IDELA data and a group of subject matter experts. To support the presentation, the practical experience of convening SMEs in Rwanda and creating national benchmarks from a nationally representative sample of 2,738 children is discussed, along with a reflection on the challenges and usefulness of deriving performance benchmarks for ECD.