Individual Submission Summary
Cross-country psychometrics and framing of ReAL

Mon, March 11, 8:00 to 9:30am, Hyatt Regency Miami, Floor: Third Level, Johnson 1

Proposal

Over the last two decades, assessments such as the Early Grade Reading Assessment (EGRA) and the Annual Status of Education Report (ASER) have been used widely in low- and middle-income countries (LMICs) to help policy makers and educators understand children's progress toward literacy skill acquisition. As education systems have shifted their orientation from access to achievement, the use of learning assessments has grown rapidly. Given their wide reach and ease of administration, orally administered learning assessments are now used more frequently than other learning assessments to inform education policy and practice in LMICs.

However, in 2020, at the height of the COVID-19 global pandemic, education actors in LMICs faced the challenge of assessing academic and non-academic outcomes when traditional face-to-face administration was not possible. Distance learning programs were designed and implemented, but educators and service providers lacked the usual assessment tools to evaluate the effectiveness of these programs.

Save the Children sought to fill this gap by developing and testing an instrument to assess literacy, numeracy, and psychosocial outcomes that could be administered remotely. To enable rapid development and to keep assessment items within a structure already familiar to country teams, Save the Children drew on existing assessments such as the Holistic Assessment of Learning and Development Outcomes (HALDO), the International Development and Early Learning Assessment (IDELA), the International Social-emotional Learning Assessment (ISELA), the Literacy Boost Reading Assessment (LBRA), and the Numeracy Boost Assessment (NBA) to develop the pilot instrument known as the Remote Assessment of Learning (ReAL).

In this paper, we provide global psychometric evidence for the reliability and validity of the ReAL tool using data from Cambodia, El Salvador, Mozambique, Niger, the Occupied Palestinian Territories, the Philippines, and Sudan. Different modalities of the ReAL tool have been used in these countries, including a high-access version in which items were presented using a smartphone or printed materials, a low-access version in which a conventional/basic phone was used along with common household items, and a caregiver-report version in which the child was not assessed directly. Within each country, we will assess inter-rater and test-retest reliability; structural validity, using confirmatory factor analysis and item response theory; measurement invariance by age and gender; and criterion validity, by examining the correlations between our domains of learning and the EGRA, the Early Grade Mathematics Assessment (EGMA), and a measure of social-emotional development. Findings will describe the extent to which the ReAL tool is valid across age groups, gender, and country context, and therefore whether it can serve as a reliable and valid measure of literacy, numeracy, and psychosocial outcomes in settings that rely on distance learning. We will also describe the adaptation process that institutional actors undertook in each country.
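
For illustration only, the sketch below shows how a confirmatory factor model of the kind described above could be specified and fit in Python with the open-source semopy package. The three-domain structure, the item names (lit1-lit3, num1-num3, sel1-sel3), and the input file are hypothetical placeholders, not the actual ReAL item set or the authors' analysis pipeline; measurement invariance by age and gender would then be examined by comparing multi-group models with progressively constrained loadings and intercepts (configural, metric, scalar).

    import pandas as pd
    import semopy

    # Hypothetical item-level scores, one row per assessed child (placeholder file name).
    data = pd.read_csv("real_pilot_scores.csv")

    # Three correlated latent domains mirroring the structure described above;
    # the item names are illustrative placeholders, not the actual ReAL items.
    model_desc = """
    literacy     =~ lit1 + lit2 + lit3
    numeracy     =~ num1 + num2 + num3
    psychosocial =~ sel1 + sel2 + sel3
    """

    cfa = semopy.Model(model_desc)
    cfa.fit(data)

    print(cfa.inspect())           # factor loadings, variances, and covariances
    print(semopy.calc_stats(cfa))  # global fit indices (CFI, TLI, RMSEA, ...)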

Authors