Individual Submission Summary
Education governance and the challenge of AI: A proposal for collaborative policymaking

Tue, March 25, 2:45 to 4:00pm, Palmer House, Floor: 7th Floor, LaSalle 5

Proposal

This paper responds to the question of how we can adapt to rapid developments driven by the growth of artificial intelligence (AI) and datafication in education. Specifically, the paper focuses on the challenges that AI presents for education governance. Our research questions are: (1) How do we govern the technical basis and social impact of AI in education?; and (2) How can decision making about the use of AI in education be more inclusive of a broad range of stakeholders in schools, school systems and the EdTech industry?

The governance of AI in education is complex. We follow Rhodes (1997, p. 15) in understanding governance to describe ‘a change in the meaning of government referring to new processes of governing’ through ‘self-organising, inter-organisational networks characterised by interdependence, resource exchange, rules of the game and significant autonomy from the state’. The use of AI in education is extending these forms of ‘network governance’ (Bevir and Rhodes, 2007), in which the state still plays an important role, but in new relationships with non-state actors such as technology companies, think tanks and consultancy firms (Ball, Junemann, and Santori, 2017). Because AI is now ubiquitously integrated into platforms and software that are widely used in education, including personal devices, a broad perspective on governance is necessary, one that encompasses policy, legislation and regulation, guidelines and frameworks, and practices within organisations such as procurement. The governance of AI in education must address both how AI is developed and what it does (i.e. its technical basis) and how people use AI, including the power relations involved in this use and associated issues of equity and sustainability (i.e. its social impact). We therefore employ a sociotechnical framework to understand how governance can respond to and anticipate the issues arising from AI in education (Jasanoff and Kim, 2009).

The rapid development and widespread use of AI, especially generative AI (GenAI), poses novel challenges for governing education across teaching, learning and administration. The rise of GenAI has quickly raised issues of trust, transparency and accountability that are exacerbated in high-stakes contexts such as education. We are now confronted by two questions: (1) how to govern the use and impact of GenAI; and (2) how GenAI affects governing practices themselves, including by blurring the line between human and machine agency in the decision-making processes that are central to governance. Key challenges include the need for context-specific governance within school jurisdictions and nation-states, while recognising that the technical, commercial and legal dimensions of these technologies cut across systems and nations. Policy makers and stakeholders therefore also need to draw on and learn from experiences elsewhere. Evidence of the negative effects of AI and automated decision making in other sectors, such as welfare and policing, demonstrates that building sociotechnical expertise is necessary to limit the deleterious effects of AI by developing iterative and robust processes to respond to and shape its development, use and oversight.

Given the pervasive technical basis of AI in education and its diverse social impacts, it is clear that governing AI in education requires more collaborative approaches that bring together a diverse range of actors. This has been a long-standing concern within the field of science and technology studies (STS), which has developed approaches such as technical democracy that attend to the social dimensions of science and technological innovation (Callon, Lascoumes and Barthe, 2011). This approach involves broadening the types of expertise and the range of experiences drawn upon to create policies on developing, using and governing AI in education, especially to include the groups and places most affected. Moreover, due to the complexity of AI, many policy makers lack the technical expertise needed to evaluate the decisions made by AI or to make informed decisions about its implementation. More collaborative approaches to decision making about AI in education must thus involve a spectrum of actors, from technical experts to policy makers and end users.

Drawing on empirical examples from projects conducted across a range of national contexts, this paper will: (a) describe collaborative approaches that aim to build the capability of stakeholders to understand and shape the development and use of AI in education; and (b) extract principles from these examples to support the development of a framework for collaborative policymaking in relation to AI in education. This framework will highlight how collaborative governing can: interrogate and respond to the opportunities that AI creates for education; identify and assess potential impacts and harms; allocate responsibility for conducting these assessments; and establish the authority to demand changes to the development and use of AI. The limits of this collaborative approach will also be discussed, including: (1) the challenge of responding to rapid changes collaboratively; and (2) the question of who is included in collaborative governing. The paper will make an original and significant contribution to current debates about education technology, education governance and AI in education.

Authors