Proportion of children and young people: (a) in grades 2/3; (b) at the end of primary; and (c) at the end of lower secondary achieving at least a minimum proficiency level in (i) reading and (ii) mathematics, by sex
Rationale
The indicator aims to measure the percentage of children and young people who have achieved the minimum learning outcomes in reading and mathematics during or at the end of the relevant stages of education.
The higher the figure, the higher the proportion of children and/or young people reaching at least minimum proficiency in the respective domain (reading or mathematics), subject to the limitations indicated in the “Comments and limitations” section.
The indicator is also a direct measure of the learning outcomes achieved in the two subject areas at the end of the relevant stages of education. The three measurement points each have their own established minimum standard. A single threshold divides students into above and below minimum:
Below minimum refers to the proportion or percentage of students who do not achieve the minimum standard set by countries in line with the globally-defined minimum competencies.
Above minimum refers to the proportion or percentage of students who have achieved the minimum standard. Due to the heterogeneity of performance levels set by national and cross-national assessments, these performance levels have to be mapped to the globally-defined minimum performance levels. Once the performance levels are mapped, the global education community will be able to identify, for each country, the proportion or percentage of children who achieved minimum standards.
Concepts
(a) Minimum proficiency level (MPL) is the benchmark of basic knowledge in a domain (mathematics, reading, etc.) measured through learning assessments. In September 2018, an agreement was reached on a verbal definition of the global minimum proficiency level of reference for each of the areas and domains of Indicator 4.1.1 as described in the document entitled: Minimum Proficiency Levels (MPLs): Outcomes of the consensus building meeting (http://gaml.uis.unesco.org/wp-content/uploads/sites/2/2019/02/MPLs_revised_doc_20190204.docx).
Minimum proficiency levels (MPLs) are defined by each learning assessment. To ensure comparability across learning assessments, a verbal definition of the MPL for each domain, and the correspondence of levels between cross-national assessments (CNAs), were established by analysing the performance level descriptors (the descriptions of the knowledge and skills required to achieve each performance level in each domain) of cross-national, regional and community-led tests in reading and mathematics. The analysis was led and completed by the UIS, and experts reached a consensus that the proposed methodology was adequate and pragmatic.
The global MPL definitions for the domains of reading and mathematics are presented here (insert link)
The Programme for International Student Assessment (PISA) reading test has six proficiency levels, of which Level 2 is described as the minimum proficiency level. In Trends in International Mathematics and Science Study (TIMSS) and Progress in International Reading Literacy Study (PIRLS), there are four proficiency levels: Low, Intermediate, High and Advanced. Students reaching the Intermediate benchmark are able to apply basic knowledge in a variety of situations, similar to the idea of minimum proficiency. Currently, there are no common standards validated by the international community or countries. The indicator shows data published by each of the agencies and organizations specialised in cross-national learning assessments.
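The mapping idea described above can be sketched in a few lines of Python. This is a minimal illustration only: the cut-off table and function names are invented, and follow the PISA Level 2 and TIMSS Intermediate cut-offs described in the preceding paragraph rather than any official UIS mapping.

```python
# Hypothetical cut-off table mapping each assessment's own performance
# scale to the globally-defined minimum proficiency level (MPL).
# The entries are illustrative, not official UIS mappings.
MPL_CUTOFF = {
    "PISA": 2,                 # PISA reading Level 2 is described as the MPL
    "TIMSS": "Intermediate",   # Intermediate benchmark ~ minimum proficiency
}

def above_minimum(assessment, level):
    """Return True if a student's reported level meets the global MPL."""
    cutoff = MPL_CUTOFF[assessment]
    if isinstance(cutoff, int):
        return level >= cutoff
    # Ordinal benchmark scales (TIMSS/PIRLS: Low < Intermediate < High < Advanced)
    order = ["Low", "Intermediate", "High", "Advanced"]
    return order.index(level) >= order.index(cutoff)
```

Once each student record carries such a binary above/below flag, proportions become directly comparable across assessments, which is the purpose of the mapping exercise.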
Limitations
Learning outcomes from cross-national learning assessments are directly comparable for all countries that participated in the same assessment. However, these outcomes are not comparable across different cross-national learning assessments or with national learning assessments. A degree of comparability of learning outcomes across assessments could be achieved by using different methodologies, each with varying standard errors. Work during 2020-2021 will shed light on the size of the standard errors for these methodologies.
Comparability of learning outcomes over time has additional complications, which ideally require the design and implementation of a set of comparable items as anchors in advance. Methodological developments are underway to address the comparability of assessment outcomes over time.
While data from many national assessments are available now, every country sets its own standards, so performance levels might not be comparable. One option is to link existing regional assessments based on a common framework. Furthermore, assessments are typically administered within school systems, so the current indicators cover only children in school, and the proportion of the target population that is in school may vary from country to country because of differing out-of-school populations. Assessing the competencies of children and young people who are out of school would require household-based surveys. Assessing children in households is under consideration, but it may be very costly and difficult to administer, and is unlikely to be available at the scale needed within the next 3-5 years.

Finally, the calculation of this indicator requires specific information on the ages of children participating in assessments to create globally-comparable data. Ages reported by the head of the household might not be consistent and reliable, which could make the calculation of the indicator even more challenging. Given the difficulty of assessing out-of-school children and the main focus on improving education systems, the UIS is taking a stepping-stone approach: it will concentrate on assessing children in school in the medium term, where much data are available, and then develop a more coherent implementation plan to assess out-of-school children in the longer term.
Computation Method
(a) The number of children and/or young people at the relevant stage of education n in year t achieving at least the pre-defined proficiency level in subject s, expressed as a percentage of the number of children and/or young people at stage of education n, in year t, at any proficiency level in subject s.
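The computation above can be sketched as follows, assuming student-level proficiency levels on a numeric scale and a numeric MPL cut-off (the function name and the data are illustrative, not real assessment records):

```python
def minimum_proficiency_rate(levels, mpl_cutoff):
    """Percentage of assessed students whose proficiency level is at or
    above the minimum proficiency level (MPL) cut-off."""
    at_or_above = sum(1 for level in levels if level >= mpl_cutoff)
    return 100.0 * at_or_above / len(levels)

# Example: PISA-style reading levels for ten assessed students, MPL = Level 2.
levels = [1, 2, 3, 2, 1, 4, 2, 5, 1, 3]
print(minimum_proficiency_rate(levels, mpl_cutoff=2))  # 70.0
```

Note that the denominator is all assessed students at that stage, so the figure describes the in-school population only, as discussed under the limitations above.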
Harmonize various data sources
To address the challenges posed by the limited capacity of some countries to implement cross-national, regional and national assessments, actions have been taken by the UIS and its partners. The strategies are applied according to their level of precision, following a reporting protocol (http://gaml.uis.unesco.org/wp-content/uploads/sites/2/2019/05/GAML6-WD-2-Protocol-for-reporting-4.1.1_v1.pdf) that includes national assessments under specific circumstances.
Out-of-school children
In 2016, 263 million children, adolescents and youth were out of school, representing nearly one-fifth of the global population of this age group. Of the total, 63 million (24%) are children of primary school age (typically 6 to 11 years old); 61 million (23%) are adolescents of lower secondary school age (typically 12 to 14 years old); and 139 million (53%) are youth of upper secondary school age (about 15 to 17 years old). Not all these children will remain outside school permanently: some will re-join the educational system and eventually complete late, while others will enter late. The number varies by country and region and demands some adjustment in the estimate of Indicator 4.1.1. There is currently a discussion on how to implement these adjustments to reflect the whole population. In 2017, the UIS proposed to make adjustments using the out-of-school rates and the completion rates (http://uis.unesco.org/en/blog/helping-countries-improve-their-data-out-school-children).
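One possible form such an adjustment could take is shown below, purely for illustration. This is not the UIS methodology; it relies on the strong assumption that out-of-school children have not reached minimum proficiency, and only shows why the out-of-school share matters for the indicator.

```python
def adjusted_proficiency(in_school_rate_pct, out_of_school_share):
    """Scale the in-school proficiency rate to the whole age cohort,
    assuming (strongly) that out-of-school children are below minimum."""
    return in_school_rate_pct * (1.0 - out_of_school_share)

# Example: 80% of assessed (in-school) children reach the MPL, but 20%
# of the age group is out of school.
print(adjusted_proficiency(80.0, 0.20))  # 64.0
```

In practice the UIS proposal combines out-of-school and completion data rather than assuming all out-of-school children are below minimum, which is why the design of these adjustments is still under discussion.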
Disaggregation
(a) The indicator is published disaggregated by sex. Other disaggregations, such as location, socio-economic status, immigrant status, ethnicity and language of the test spoken at home, are based on data produced by the international organizations administering cross-national learning assessments, detailed in the expanded metadata document and validated by countries. Parity indexes are estimated in the reporting of Indicator 4.5.1. Information on the disaggregation variables for Indicator 4.1.1 is presented in the following tables.
(b) and (c) By age or age-group of students, sex, location, socio-economic status, migrant status and ethnicity. Disability status is not currently available in most national and cross-national learning assessments but could be considered for future assessments.
Missing Values Country
(a) Missing values are not imputed.
(b) and (c) None by data compiler.
Missing Values Global
(a) Missing values are not imputed.
(b) and (c) None by data compiler.
Regional aggregates
(a) Not yet applicable. Data are reported at the national level only. Population weighted average by region to be reported in 2020.
(b) and (c) Regional and global aggregates are not currently available for this indicator.
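The population-weighted regional average mentioned under (a) could be sketched as follows. The country figures and population weights are invented for illustration only.

```python
def weighted_regional_average(rates_pct, populations):
    """Population-weighted mean of national proficiency rates."""
    total = sum(populations)
    return sum(r * p for r, p in zip(rates_pct, populations)) / total

# Example: three hypothetical countries in one region.
rates = [60.0, 80.0, 50.0]                  # % reaching at least the MPL
pops = [2_000_000, 1_000_000, 1_000_000]    # relevant age-group populations
print(weighted_regional_average(rates, pops))  # 62.5
```

Weighting by the relevant age-group population prevents small countries from counting as much as large ones in the regional figure.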
Sources of discrepancies
(a) Not yet applicable. Data are reported at the national level only.
Data Availability Description
(b) and (c) 79 countries
Data Availability Time Series
(a) Data available since 2000. The indicator will be reported annually.
(b) and (c) Latest year available in the period 2010-2015.
Data Sources Description
(a) Type of data sources: in-school and population-based learning assessments.
See table 2 (image attached)
(b) and (c) Various cross-national learning assessments, including: Programme d’analyse des systèmes éducatifs de la CONFEMEN (PASEC), Progress in International Reading Literacy Study (PIRLS), Programme for International Student Assessment (PISA), Southern and Eastern Africa Consortium for Monitoring Educational Quality (SACMEQ), Tercer Estudio Regional Comparativo y Explicativo (TERCE) and Trends in International Mathematics and Science Study (TIMSS).
Short-term strategy: use national large-scale representative assessment data from cross-national assessments even though the performance levels may not be directly comparable.
Medium-term strategy: use a global reporting scale based on either a new test or the statistical linking of national, regional and cross-national assessments.
Data Sources - Collection Process
(a) Information not available.
(b) and (c) For cross-national learning assessments, data were provided by the respective organizations responsible for each assessment.
Calendar – Data Description
(a) Data collection is ongoing.
(b) and (c) Various. Each learning assessment has its own data collection cycle.
Calendar – Data Release
(a) February 2020
(b) and (c) July 2016
Data Providers – Description
(a) School-Based assessments
• International large-scale assessments are reported to the UIS by cross-national organisations (LLECE, PASEC, TIMSS and PIRLS). Typically, cross-national large-scale assessments, whether regional or international, define various performance levels and also report the mean and standard deviation. They also choose one level as the cut-off point that defines whether children/youth are below or above the minimum.
• Regional assessments: PASEC, SACMEQ, ERCE, PILNA, SEAMEO.
• National large-scale assessments, either sample- or census-based. Countries should report the proportion of students at each level of competency for each domain, also indicating the minimum proficiency level when it is defined by the national assessment. EGRA and EGMA are reported by USAID or individual countries.
Household-Based surveys
• MICS6: reported to the UIS by UNICEF
• Pal Network: reported to the UIS by Pal Network
(b) and (c) Bodies responsible for conducting learning assessments (including Ministries of Education, National Statistical Offices and other data providers). For cross-national assessments, the data providers are the International Association for the Evaluation of Educational Achievement (IEA), Laboratorio Latinoamericano de Evaluación de la Calidad de la Educación (LLECE), the Organisation for Economic Co-operation and Development (OECD), Programme d’Analyse des Systèmes Educatifs de la CONFEMEN (PASEC) and Southern and Eastern Africa Consortium for Monitoring Educational Quality (SACMEQ).
References
(a) Minimum Proficiency Levels http://gaml.uis.unesco.org/wp-content/uploads/sites/2/2019/07/MPLs_revised_doc_20190506_v2.pdf
Costs and Benefits of Different Approaches to Measuring the Learning Proficiency of Students (SDG Indicator 4.1.1) http://uis.unesco.org/sites/default/files/documents/ip53-costs-benefits-approaches-measuring-proficiency-2019-en.pdf
Protocol for Reporting on SDG Global Indicator 4.1.1
http://gaml.uis.unesco.org/wp-content/uploads/sites/2/2019/05/GAML6-WD-2-Protocol-for-reporting-4.1.1_v1.pdf
Global Proficiency Framework for Reading and Mathematics – Grade 2 to 6
http://gaml.uis.unesco.org/wp-content/uploads/sites/2/2019/05/Global-Proficiency-Framework-18Oct2019_KD.pdf
(b) and (c) http://www.uis.unesco.org/Pages/default.aspx
Programme d’analyse des systèmes éducatifs de la CONFEMEN (PASEC):
Progress in International Reading Literacy Study (PIRLS): http://www.iea.nl/pirls_2016.html
Programme for International Student Assessment (PISA): https://www.oecd.org/pisa/aboutpisa/
The Southern and Eastern Africa Consortium for Monitoring Educational Quality (SACMEQ): http://www.sacmeq.org/?q=sacmeq-projects/sacmeq-iv
Tercer Estudio Regional Comparativo y Explicativo (TERCE): http://www.unesco.org/new/es/santiago/education/education-assessment-llece/third-regional-comparative-and-explanatory-study-terce/
Trends in International Mathematics and Science Study (TIMSS): http://www.iea.nl/timss_2015.html