We will describe a multi-stage effort to develop research principles to inform the application of AI to education research. This work involves convening representatives from multiple institutions, with diverse roles and areas of expertise, to collaborate on a framework that will serve as the basis for these principles. We will also discuss challenges, such as the need for a mechanism to update the principles frequently as both the technology and the field evolve. Our primary purpose is to foster discussion among a diverse group of panelists and attendees to inform next steps in developing these principles.
This work is informed by a review of the literature on specific AI use cases in research, combined with expert input. We examine research from prior waves of advancement in educational technology to learn about best practices and potential challenges in integrating new technologies (e.g., Granić, 2022) and to understand the implications of particular designs, such as gamification, for student learning (Huang et al., 2020). Further, the principles are informed by research on the importance of education researchers understanding the technologies they leverage in their own methods and studies (Castañeda & Williamson, 2021).
The work identified several key considerations. First, researchers must recognize bias-related risks that can arise at multiple points during a research project and take proactive steps to identify and mitigate them. For instance, researchers might begin with an off-the-shelf generative AI model and supplement it with their own data to customize it. While the ability to customize is a strength of AI, researchers should be aware that this process can introduce bias (Qi et al., 2023) and shift the model's original features (Kumar et al., 2022).

Second, researchers should be both curious about, and critical of, AI-enabled tools and methods. Researchers use AI to code qualitative data, score essays, generate new questionnaire or test items, and synthesize literature. Increasingly, they can also develop tools that perform customized tasks for specialized use cases. Users of these tools must gather evidence of quality and validity and conduct extensive piloting.

Third, transparency and replicability are core standards in research and should guide decisions about the use of AI-enabled research tools. Large language models (LLMs) offer enormous computational power, but users generally cannot determine how a given result was generated. This raises concerns about meeting the goals of transparency and replicability and points to a clear need for guidance and norms, such as recommending or requiring that research reports document the specific steps taken and acknowledge what is and is not known about how the AI processed the data.

These considerations are a starting point for an ongoing effort to ensure the appropriate use of AI tools in research while keeping pace with the rapidly changing nature of these tools. Our work will contribute to conversations among researchers, and the groups they partner with and serve, about the benefits and drawbacks of using AI in research. The principles will help ensure that we optimize the impact of AI while avoiding its potential pitfalls.