Individual Submission Summary

Towards an indicator of higher education and the public good? Some critical reflections

Wed, March 28, 8:00 to 9:30am, Hilton Reforma, Floor: 2nd Floor, Don Américo

Proposal

This paper engages with the debate about indicators and the power of numbers used to assess and evaluate social policy. Drawing on questions posed in Unterhalter’s (2017) discussion of how to measure the unmeasurable in education, it considers both the case for developing an indicator of higher education and the public good for use in Africa and some of the critiques made of such initiatives. The paper reviews the range of fields that an indicator of higher education and the public good would need to encompass and evaluates some of the contemporary data sources available to build it.

While higher education is now broadly recognised as being central to development, there is a severe lack of evidence relating to its impact on society, particularly in low-income countries (Oketch, McCowan & Schendel 2014). Collecting and disseminating information on the different ways higher education can contribute to the public good is important both for justifying public expenditure on the sector and for understanding which areas need strengthening. However, there are some obvious dangers in such a process, particularly given the inevitable tensions and contradictions between generating a contextualised understanding of public good (based on countries’ distinctive histories and current landscapes) and creating an indicator that – in its requirement for comparability – necessarily transcends context.

The best-known indicators of higher education are those used in the international rankings of universities, e.g. Shanghai, Times Higher Education and QS. These rankings draw on a range of indicators, but privilege measures of research excellence, including publications and Nobel Laureates. There are also some more specific rankings, such as the QS graduate employability ranking. While garnering significant interest and influencing the actions of institutions and governments, these rankings are the subject of intense criticism, whether on account of the weightings and calculations involved, the choice of indicators, the areas omitted, or the resulting distortions of institutional and individual behaviour. An alternative is the ranking put forward by Universitas 21: instead of ranking institutions, it ranks national systems, thereby allowing for positive synergies and complementarities between universities within a system. The possibilities of a systemic rather than institutional ranking are considered here in relation to the African countries in question.

This paper discusses two possible ways of constructing the indicator. The first would use the conventional industrial frame of inputs, processes and outputs. Input indicators would include enrolment and completion rates (disaggregated by gender, ethnicity/race and other background factors, as well as by disciplinary area and institutional type), staffing and expenditure. In relation to access, ideally the indicator would be sensitive to horizontality within the system (the extent to which stratification may confine less advantaged students to less prestigious institutions), in addition to the availability of places and accessibility (McCowan 2016b). Process indicators would include academic freedom, participation and representation (e.g. on governance boards) and student-lecturer ratios. Output indicators would include graduate employment (rates and types of employment), tax contributions, research and publications, and knowledge exchange activities. Alternatively, proxies could be developed around the ideas of intrinsic and instrumental manifestations of public good in higher education, as explored in the first paper in this symposium.
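By way of a minimal illustration, and not a scheme the paper itself commits to, a composite of this kind is conventionally built by min-max normalising each subcomponent and aggregating with a weighted arithmetic mean, as in instruments such as the Human Development Index; for subcomponent i and country or institution j:

\[
\tilde{x}_{ij} = \frac{x_{ij} - \min_{j} x_{ij}}{\max_{j} x_{ij} - \min_{j} x_{ij}},
\qquad
I_{j} = \sum_{i=1}^{n} w_{i}\,\tilde{x}_{ij},
\qquad
\sum_{i=1}^{n} w_{i} = 1.
\]

The weights w_i here are purely illustrative assumptions; the choice of weights is precisely the contested step in the ranking debates noted above.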

The possible subcomponents of the indicator outlined above leave some significant gaps: community engagement, pedagogical approaches, curriculum, deliberative spaces, the formation of values and impact on poverty reduction, amongst others, are hard to address. As highlighted by the capability approach, counterfactuals are essential to understanding the impact of higher education on individuals’ lives, and these are hard to bring into an indicator (Unterhalter 2017). Even in relation to the more restrictive indicators outlined, there is a range of significant challenges. First, reliable information is hard to gather given the low resource base and administrative capacity of many universities and ministries. Second, a number of the crucial dimensions of the public good can only be assessed in a qualitative manner, making them hard to combine into a composite quantitative indicator. Third, the transnational nature of higher education and employment (with many students studying and working outside their countries of origin) is not acknowledged.

Despite these significant challenges, this paper argues that engaging constructively with an indicator is an important task, given the dominance of current metrics of research intensity and employability, which fail to capture both what is valuable and what is problematic in contemporary African higher education institutions.
