The recent global emergence of generative AI has already begun to revolutionize the way we consider the ethical implications of human-technology interactions in the educational space. Educational institutions have entered a race to regulate the use of AI by their various stakeholder groups, in a context where law appears to have been, perhaps temporarily, supplanted by morality. In this setting, a multifaceted global movement resisting the changes introduced by generative AI has joined the already existing manifestations of reluctance toward AI. What some may perceive at first glance as technophobia or a generational divide may in fact have deeper structural roots in the prevailing cultural institutions of an inequitably woven social fabric. The socio-technological interactive space is thus continuously shaped by resistance, understood as a dynamic process that signals the distinct collective representations of a specific technological achievement held by different social groups. Racialized groups, for example, may have a legitimate reason to resist the widespread use of AI algorithms, since those algorithms may encode discriminatory biases that deepen their social peripheralization. Inspired by the theme of this year’s conference, this conceptual paper provides a systemic typology of AI resistance and theorizes about the global socio-political significance of such claims in the context of a growing need for AI trust and trustworthiness. Specifically, by employing a cultural constructionist approach, the paper seeks to explain how institutionalized beliefs against AI use facilitate or hinder the legitimacy of AI, and whether this dynamic, hybrid character of socio-technological development supports or disrupts the profile of AI applications in educational institutions.