Individual Submission Summary

Approaches and Considerations for Short Forming Internationally Used Direct Assessments of Early Childhood Development

Fri, April 9, 1:10 to 2:40pm EDT, Virtual

Abstract

Well-administered direct assessments of child developmental outcomes are widely regarded as a rich source of data on early child development (National Research Council, 2008). By assessing children directly and observing their competencies across a range of developmental domains through games and activities, these assessments avoid the reporting bias and inaccurate recall that can affect caregiver reports. As a result, direct child assessments designed specifically for use in low- and middle-income countries have become increasingly popular tools for program evaluations and research studies (Fernald et al., 2017; Rubio-Codina et al., 2016).

Despite their increasing use in research and evaluation, most large-scale data collection efforts and nationally representative studies with early childhood development indicators rely on parent- or caregiver-reported measures of child development. Instruments such as the Early Child Development Index (and the forthcoming ECDI 2030) are quick and easy to administer and can yield imperfect but valuable data on child development from an international perspective (McCoy, 2016; Cuartas, 2019).

Short forms of internationally used child development assessments may provide a valuable compromise that is practical for large-scale data collection in low-resource settings. Such short forms may allow more comparable data to be collected across countries by standardizing assessment administration and minimizing recall and social desirability bias in caregiver reports.

This paper examines a large cross-country dataset of more than 30,000 children from 23 countries assessed with the International Development and Early Learning Assessment (IDELA) to understand the possibility of creating a short-form IDELA. Previous research has found that the IDELA is a valuable tool for measuring school readiness, with adequate fit as both a unidimensional and a multidimensional construct (Wolf et al., 2017); that the multidimensional factor structure of the IDELA holds across multiple countries (Halpin et al., 2019); and that a small subset of subtasks can explain a large share of the variance in total scores (Seiden et al., 2019). Expanding on this research, this paper analyzes the dataset using predominantly two-parameter logistic (2PL) Item Response Theory (IRT) models (a brief illustrative sketch follows the questions below). In doing so, it attempts to understand:

1) What IDELA items (and subtasks) are most informative for understanding children’s overall developmental status?
2) What is the minimum number of items/subtasks required for a reliable short form?
3) Is it possible to preserve information about different developmental domains in a short form?
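
To make the item-level analysis concrete, the sketch below illustrates how a 2PL model links item parameters to Fisher information, the quantity that drives item selection in this kind of analysis. It is a minimal illustration only: the discrimination (a) and difficulty (b) values are invented for the example and are not estimates from the IDELA dataset.

    import numpy as np

    def prob_correct(theta, a, b):
        """2PL probability of a correct response at ability theta."""
        return 1.0 / (1.0 + np.exp(-a * (theta - b)))

    def item_information(theta, a, b):
        """Fisher information of a 2PL item: a^2 * P * (1 - P)."""
        p = prob_correct(theta, a, b)
        return a ** 2 * p * (1.0 - p)

    # Hypothetical item parameters for a handful of items (not IDELA estimates).
    a_params = np.array([1.8, 0.9, 1.2, 2.1])
    b_params = np.array([-0.5, 0.0, 0.8, 1.5])

    theta_grid = np.linspace(-3, 3, 121)
    info = np.array([item_information(theta_grid, a, b)
                     for a, b in zip(a_params, b_params)])

    # Test information is the sum of item information; items that carry the
    # most information near theta = 0 are natural candidates for a short form
    # targeting the middle of the ability distribution.
    test_info = info.sum(axis=0)
    rank_at_zero = np.argsort(info[:, theta_grid.size // 2])[::-1]
    print(rank_at_zero)
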

IRT gives relatively straightforward answers about the information captured by individual items in an assessment, but it does not account for the relative costs of administration (e.g., administration time, materials used, or difficulty of training and scoring). As such, this paper builds on its quantitative assessment of items and subtasks with a qualitative examination of the tradeoffs short-form assessment designers must make when selecting items. The paper concludes with a brief overview of two potential configurations of a short-form IDELA that maximize validity for divergent uses, framing short-form item selection as a general process applicable to any direct child development assessment.
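
As one hypothetical way to operationalize that tradeoff, the sketch below greedily selects items by information per unit of administration cost under a time budget. The information values, per-item costs, and budget are invented for illustration; the paper itself weighs these considerations qualitatively rather than through this exact procedure.

    import numpy as np

    def greedy_short_form(info_at_target, costs, budget):
        """Pick items by information-per-cost until the budget is exhausted."""
        order = np.argsort(info_at_target / costs)[::-1]
        selected, spent = [], 0.0
        for idx in order:
            if spent + costs[idx] <= budget:
                selected.append(int(idx))
                spent += costs[idx]
        return selected

    # Hypothetical inputs: item information at the ability level of interest
    # and administration minutes per item.
    info_at_target = np.array([0.81, 0.20, 0.36, 0.55])
    costs = np.array([2.0, 1.0, 3.0, 5.0])
    print(greedy_short_form(info_at_target, costs, budget=6.0))

A designer could swap the cost vector for materials or training burden, or constrain the selection to retain at least one item per developmental domain, echoing question 3 above.
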
