This presentation examines our systematic investigation of generative AI capabilities and limitations as a foundation for developing responsible integration approaches in doctoral education. As faculty members tasked with preparing the next generation of scholars, we recognized the need to move beyond speculative discussions of AI to evidence-based understanding of these tools' actual functioning, biases, and potential applications in research contexts.
Our research involved testing multiple generative AI systems (including GPT-4, Claude, and Bard) across various scholarly tasks relevant to doctoral education: literature review synthesis, methodological planning, data analysis interpretation, and academic writing. We developed standardized prompts across disciplines represented by our doctoral students and systematically evaluated outputs for accuracy, depth, disciplinary alignment, and potential biases. This technical investigation revealed significant variations in AI performance across knowledge domains, with particular limitations in specialized disciplinary knowledge, methodological nuance, and engagement with very recent scholarship.
These findings directly informed our pedagogical approach with doctoral students. Rather than positioning AI as either a threat to academic integrity or an unproblematic productivity tool, we developed a nuanced framework that identifies specific research processes where AI might serve as a productive thinking partner versus areas requiring traditional scholarly engagement. This "augmentation not automation" approach emphasizes AI as a tool for amplifying human scholarly judgment rather than replacing core intellectual processes.
Our presentation will share specific examples of how our technical research translated into teaching practices. For instance, after discovering particular weaknesses in how AI systems handle methodological nuance, we developed a worksheet to help students critically evaluate AI-generated methodological suggestions. Similarly, after identifying patterns of Western knowledge prioritization in AI outputs, we created exercises that help students recognize and counteract these biases when using AI in literature review processes.
The presentation will address how our ongoing research has evolved in response to student experiences and rapid technological change. We have established a continuously adaptable approach to teaching with generative AI, one that allows us to respond both to improvements in AI systems and to students' evolving use of these tools, adjusting our teaching accordingly. This responsiveness has been essential given the rapid pace of AI development and the lack of established best practices in this area.
We will conclude by discussing how faculty members might develop similar research programs appropriate to their disciplinary contexts and student populations. We argue that faculty-led investigation of AI capabilities should precede integration into doctoral education, allowing for evidence-based rather than reactive approaches to these transformative technologies. This approach positions faculty not merely as consumers of AI but as critical investigators who can shape how these technologies are understood and applied in scholarly contexts.