The rise of artificial intelligence (AI), characterized by highly sophisticated, versatile, and interoperable applications that replicate or supplant human actions, has elicited responses from national, supranational, and transregional governance structures and international organizations seeking to regulate, or set limits on, the ethical and acceptable use of AI across many areas of societal life, including education. Acknowledging the challenges and opportunities that AI poses to education, the United Nations Educational, Scientific and Cultural Organization (UNESCO) drafted a set of recommendations on the ethical use of AI that its member states could implement across a broad array of policy areas spanning the entire spectrum of social, cultural, political, and economic activity, with education and research representing a distinct focus for policy action. In turn, following its own agenda on AI while also aligning with policy recommendations issued by the OECD and UNESCO, the European Union (EU) has begun constructing a regulatory framework, including a proposed Artificial Intelligence Act “laying down harmonised rules on artificial intelligence” (European Commission, 2021, p. 1). Through its sweeping legislative authority, the EU is attempting to develop a coherent AI regime that prescribes rules on the ethical and human implications of AI use within its realm, aligned with the EU’s principles, values, and fundamental rights. The EU aligns itself with UNESCO’s principled stance on the ethical uses of AI in education by formulating broad directives on creating benchmarks for AI systems.
Nonetheless, this lofty and benevolent stance on the ethical use of AI is accompanied by a more pragmatic and, perhaps, hegemonic positioning, as the EU claims it is in its interest to preserve its technological leadership. Thus, in the emerging policy language, the EU’s drive to steer and coordinate AI policy development is not limited to its member states. In fact, the EU seeks to carve out a primary role on the world stage in this regard, considering that its proposed regulatory framework “significantly strengthens the Union’s role to help shape global norms and standards and promote trustworthy AI consistent with Union values and interests. It provides the Union with a powerful basis to engage further with its external partners, including third countries, and at international fora on issues relating to AI” (European Commission, 2021, p. 5). In declaring its ambition to set the AI agenda, the EU is engaging both in framing the evolving global scripts on AI implementation and in shaping them in its own image to reflect its core values and its pragmatic (self)interests. In its efforts to establish primacy in global AI policy development, and in emphasizing this prominent role in its interactions with third countries and partners, the EU may, whether deliberately or inadvertently, risk enforcing AI norms, standards, rules, and regulations on other countries or regions that may be partly dependent on its financial assistance, donor capacity, and/or philanthropic activities.