The rapid rise of Generative Artificial Intelligence (GenAI) has prompted universities around the world to react in different ways, reflecting both excitement over its potential and concern over its risks. Existing literature focuses heavily on theoretical discussions surrounding technology adoption frameworks (e.g., Rogers, E. M. (2003). Diffusion of innovations, FL: Free Press), but these traditional models appear insufficient in explaining the multifaceted responses of universities to GenAI. Unlike previous technological advancements, GenAI’s development challenges established norms in teaching, assessment, and student integrity. Recent studies (e.g., Chen, K., Tallant, A. C., & Selig, I. (2024). Exploring generative AI literacy…. Information and Learning Sciences) highlight the cautious yet optimistic approaches institutions are taking, but there is still a lack of empirical research on how universities are integrating GenAI into their educational practices and policies.
What framework guided the research questions? Our research is guided by the Dynamic Capabilities Framework (Teece, D. J. (2023). The evolution of the dynamic capabilities framework. Artificiality and sustainability in entrepreneurship, 113) and the Sociotechnical Transition Pathways approach (Geels, F. W., & Schot, J. (2007). Typology of sociotechnical transition pathways. Research Policy, 36(3), 399-417), which allow us to interpret the complex institutional and individual responses to the rise of Generative AI. These frameworks also allow us to address both the excitement over GenAI’s potential and the concerns related to its practical integration in educational environments.
How is the topic relevant to CIES 2025? This submission focuses on digital transformation in education. By providing a comparative analysis of universities across four countries and by analyzing the recent literature, it offers a global view of how institutions are managing the challenges posed by this specific type of digital transformation. The study also identifies key gaps in the research, making it a contribution to discussions about education in a rapidly evolving digital landscape.
Research Methods
How are sources of information used to inform choices about data collection and analysis? Our data collection and analysis were informed by both primary and secondary sources. Primary data were gathered through web scraping and text analysis of university websites, focusing on pages related to GenAI. Secondary data came from a comprehensive literature review.
We examined university websites in four countries - China, Germany, Russia, and the USA - sampling 100-200 institutions per country. The analysis proceeded in four stages:
1. Preparation: Formulated GenAI-related keywords in multiple languages, selected the sample of universities, and compiled their websites for analysis.
2. Link Collection: For each site, conducted keyword-based queries and collected relevant links.
3. Content Extraction: Web scraping algorithms extracted the text content of each link and checked for the presence of keywords. Non-relevant texts were excluded, and non-English texts were translated automatically. The result was a database of categorized texts for each university (a minimal scraping sketch follows this list).
4. Text Processing: An automated workflow built on a data science platform processed and analyzed the text data, applying sentiment analysis to categorize the emotional tone of each text (positive, negative, or neutral) and extracting significant keywords to reveal key themes across universities (an illustrative analysis sketch also follows this list).
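A minimal sketch of the link-collection and content-extraction steps (stages 2-3) is given below. It assumes Python with the requests and beautifulsoup4 packages; the keyword list, the example URL, and the crawling limits are illustrative placeholders rather than the project's actual configuration, and the automatic translation step is omitted.

import requests
from bs4 import BeautifulSoup

# Hypothetical multilingual keyword list (stage 1 output); not the study's actual list.
KEYWORDS = ["generative ai", "chatgpt", "genai", "künstliche intelligenz"]

def collect_links(start_url, max_pages=50):
    """Crawl a university site shallowly and return (url, parsed page) pairs."""
    seen, queue, pages = set(), [start_url], []
    while queue and len(seen) < max_pages:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        try:
            html = requests.get(url, timeout=10).text
        except requests.RequestException:
            continue
        soup = BeautifulSoup(html, "html.parser")
        pages.append((url, soup))
        for a in soup.find_all("a", href=True):
            href = a["href"]
            # Relative links are skipped for brevity; only same-site absolute links are followed.
            if href.startswith(start_url):
                queue.append(href)
    return pages

def extract_relevant_text(pages):
    """Keep only pages whose visible text contains at least one keyword."""
    records = []
    for url, soup in pages:
        text = soup.get_text(" ", strip=True).lower()
        if any(kw in text for kw in KEYWORDS):
            records.append({"url": url, "text": text})
    return records

pages = collect_links("https://www.example-university.edu")  # placeholder URL
genai_pages = extract_relevant_text(pages)
print(f"{len(genai_pages)} GenAI-related pages found")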
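The sentiment and keyword step (stage 4) could look roughly like the following. The study used an automated workflow on a data science platform, so the tools shown here (NLTK's VADER sentiment analyzer and scikit-learn's TF-IDF vectorizer) are stand-ins chosen for illustration, and the score thresholds are conventional VADER defaults, not values taken from the study.

import nltk
from nltk.sentiment import SentimentIntensityAnalyzer
from sklearn.feature_extraction.text import TfidfVectorizer

nltk.download("vader_lexicon", quiet=True)
sia = SentimentIntensityAnalyzer()

def classify_tone(text):
    """Map VADER's compound score onto positive / negative / neutral."""
    score = sia.polarity_scores(text)["compound"]
    if score >= 0.05:
        return "positive"
    if score <= -0.05:
        return "negative"
    return "neutral"

def top_terms(texts, n=20):
    """Return the n terms with the highest summed TF-IDF weight across the corpus."""
    vectorizer = TfidfVectorizer(stop_words="english", max_features=5000)
    matrix = vectorizer.fit_transform(texts)
    weights = matrix.sum(axis=0).A1  # total weight per term
    terms = vectorizer.get_feature_names_out()
    ranked = sorted(zip(terms, weights), key=lambda x: x[1], reverse=True)
    return [term for term, _ in ranked[:n]]

texts = [rec["text"] for rec in genai_pages]  # output of the scraping sketch above
tones = [classify_tone(t) for t in texts]
keywords = top_terms(texts)

Tone labels and keywords produced this way can then be aggregated per university or per country, which is the kind of comparison reported in the Contribution section below.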
For the literature review, our secondary data analysis drew on three types of sources: academic literature (162 sources), "grey literature" (41 sources), and policy papers (7 sources), allowing us to map the current research landscape and identify existing gaps. The review covered sources indexed in the Scopus database, supplemented by Google Scholar, conference presentations, analytical reports, and policy documents from leading universities and international organizations.
Contribution
Analysis of university websites revealed a highly uneven distribution of publications across universities, with larger, research-focused institutions being by far the most active. While most website publications are AI-positive, official recommendations often emphasize caution and limitations. National patterns also emerged: Chinese universities focus more on professional tools and skills, Russian universities highlight locally developed AI tools over ChatGPT, American universities emphasize business, hiring, and investment, while German institutions prioritize training in ethics and prompt engineering.
Analysis of the literature supports this finding of a complex balance between AI-positive and AI-negative dispositions. However, much of the discussion remains speculative, with many works lacking empirical evidence. The discourse is often dominated by ethical challenges. Moreover, there is a scarcity of publications systematically addressing GenAI's real capabilities.
Very few studies focus on the long-term effects of GenAI on student skills. Additionally, there is a lack of cross-university and cross-country comparisons that explore the broader implications of GenAI on educational practices globally.
We emphasize the importance of cross-institutional and international studies to develop evidence-based strategies for GenAI implementation. Further work is required to understand the long-term consequences of GenAI on educational practices, and to create robust frameworks for evaluating the impact of AI adoption.
How do the research methods and results support the conclusions drawn from the data? Web scraping allowed us to uncover emerging national trends, institutional priorities, and the differing approaches between countries. By combining this with a comprehensive literature review, we were able to validate our findings and support the conclusion that while there is widespread optimism about GenAI, many institutions remain cautious due to ethical and practical concerns. This multi-faceted approach ensures that our conclusions are grounded in both empirical data from institutions and theoretical insights from the literature.
What do we learn that we did not know, and why is it important? While much of the existing literature focuses on single-country studies or theoretical discussions, our work fills a gap by offering a global perspective on institutional policies, practical integration, and national priorities related to GenAI. We see a very complex picture of universities' activities related to GenAI. We also document the dominance of risk-oriented discourse and the lack of empirical studies, which likely reinforce the AI-negative discourse.