Group Submission Type: Formal Panel Session
The United Nations (UN) estimates that by 2030, 300 million students will lack the basic numeracy and literacy skills essential for full participation in today's world (United Nations, 2023). Importantly, global stakeholders increasingly converge on the importance of investing in field-feasible, contextually appropriate, and psychometrically sound measurement tools to support achieving quality education in crisis contexts. These tools can provide accurate and timely data — what is often referred to as the "lifeblood" of the SDGs (Sachs, 2012, p. 2210) — about critical dimensions of children's learning and holistic development to support evidence-based decision-making within education systems. In the last two decades, widespread use of orally administered assessments of learning for pre-primary and primary school-aged children in low- and middle-income countries (LMICs) has helped policymakers and educators understand children's progress in reading, numeracy, and, increasingly, social-emotional learning (SEL) (Montoya et al., 2016; Mulligan & Ayoub, 2023; Sowa et al., 2021). The development of sound assessments is both an important scientific inquiry and a moral imperative to safeguard children's right to quality education. Such culturally relevant and scalable assessments are needed to better understand children's learning gaps — a need underscored by the high economic cost of the lack of formal education (UNESCO, 2024).
While there has been growth in the development and testing of learning assessments administered face-to-face, there is limited evidence for learning assessments administered remotely or using new technologies and frameworks. There is an urgent need to develop and test such assessments, as educators, officials, and humanitarian actors in crisis-affected settings have consistently faced the challenge of how to assess children's academic outcomes and social-emotional skills when lockdowns, school closures, and other unexpected crises prevent the use of face-to-face assessment tools. The COVID-19 pandemic exacerbated and highlighted this challenge: 214 million students from pre-primary to upper secondary education in 23 countries missed at least three quarters of classroom instruction time over one academic year due to school closures (UNICEF, 2021). Distance learning programs were implemented, but practitioners lacked valid, reliable, relevant, and feasible remote assessment tools to evaluate students' growth in academic and social-emotional skills while engaging with these programs. New measurement and data collection frameworks for developing longitudinal datasets are also needed so that researchers can access and further explore variability among target populations.
In Paper 1, the authors will present the psychometric properties of the High Access modality of the Remote Assessment of Learning (ReAL) tool, with a focus on inter-rater reliability, factor structure, item difficulty, criterion validity, and test-retest reliability. The authors will show that the results provide moderate evidence that ReAL is a valid and reliable measure for the literacy and numeracy sub-domains, while the evidence is less robust for the social-emotional sub-domains. The authors will discuss revisions to the tool to better align literacy and numeracy items with the skill levels of 5-14-year-old children, along with results of additional testing. The authors will also discuss the implications for future tech-enabled assessments of learning and development.
In Paper 2, the authors will share practical experiences and findings from three years of rigorous research and development of two free assessments, known respectively as the Self-Administered Early Grade Reading Assessment (SA-EGRA) and the Self-Administered Early Grade Mathematics Assessment (SA-EGMA), in English, Chichewa, and Kiswahili. As self-administered assessments, children complete them independently in response to instructions and stimuli embedded in the tablet-based software, RTI's open-source data collection platform, Tangerine. The authors will share findings from a wide range of reliability and validity analyses, as well as implementation tips and tricks.
In Paper 3, the authors will present the development of the Learning Variability Network Exchange (LEVANTE) core tasks, which are designed to assess literacy, numeracy, and a number of core cognitive skills (reasoning, executive function, spatial cognition, social cognition, and language), as well as social constructs related to the home and school environment. The presentation will focus on the design, psychometric properties, and factor structure of the instruments in an initial sample of Colombian, German, and Canadian children, 5-12 years of age. The authors will discuss the challenges of choosing, translating, and revising the instruments, and the promise of transforming them into adaptive assessments to collect a global longitudinal dataset that captures variability in children's development.
Remotely Assessing Foundational Skills of 5-14-Year-Old Children: A Six-Country Psychometric Evaluation of the Remote Assessment of Learning (ReAL) - Elizabeth Hentschel, Abt Global; Sascha Hein, Free University Berlin; Julia Taladay, Save the Children US; Gillian Valentine, Save the Children International; Liliana Angelica Ponguta, Yale University; Allyson Krupar, Save the Children US
Tablet-based Self-administered EGRA/EGMA Development and Practical Lessons - Abraham Bahlibi, Imagine Worldwide; Jennifer Ryan, RTI International