Individual Submission Summary

Technical barriers to the use of learning assessment data: Availability of Data

Thu, April 18, 3:15 to 4:45pm, Hyatt Regency, Floor: Street (Level 0), Regency B

Proposal

Many countries around the globe still lack reliable and comparable data on learning outcomes, especially in the developing world. According to the UNESCO Institute for Statistics (UIS), approximately 100 countries still do not systematically assess learning outcomes. The Education Commission estimates that only half of developing countries conduct national learning assessments consistently at the primary level; at the lower-secondary level, this figure falls to only 7% of low-income countries and 26% of lower-middle-income countries. The participation of developing countries in international or regional assessments is also low, at only 30%. Consequently, many countries do not possess data on basic reading and math competencies: of the 121 countries examined in an assessment of capacity to monitor progress toward the Sustainable Development Goals (SDGs), a third lacked the data needed to report on the reading and mathematics proficiency of children at the end of primary school, while around half lacked these data for lower-secondary education.

When assessment systems are in place, they frequently suffer from quality issues in assessment design and administration. This is a key problem: if learning assessments provide unreliable data, or are perceived to do so, they can be discredited as a basis for policy-making. Flaws in instrumentation, sampling, and analysis can raise questions about the validity of the data, causing potential users to hesitate before acting on findings or to ignore them altogether. Non-comparability across assessment cycles, together with the difficulty of comparing results among countries, is another important issue. It prevents countries from drawing meaningful conclusions about the performance of their education systems, and such data cannot guide policy formulation or suggest system-level solutions.

Beyond technical elements, the relevance of the information provided by learning assessments largely determines its use in policy. Countries participating in international and regional assessments commonly criticize such assessments as irrelevant to the needs and goals of their specific education systems, and therefore often approach this evidence with caution and reservation.

These technical aspects substantially determine the extent to which learning data can meaningfully inform education policies. This panel presentation will therefore focus on the following question:

• To what extent can a lack of learning data, as well as its methodological shortcomings, explain its limited use in policy formulation in developing countries?

This presentation will draw on the UIS's experience of working with countries to report on learning-related SDG 4 indicators and of directly observing gaps in national databases. It will help estimate the weight of technical factors in the broader explanation of why learning data often remain underused.
