Paper Summary
Constructing New Educational Futures: Integrating AI and Instructional Design in Faculty Development for Clinical Educators

Fri, April 10, 11:45am to 1:15pm PDT, JW Marriott Los Angeles L.A. LIVE, Floor: Ground Floor, Gold 4

Abstract

Faculty in health professions education are increasingly expected to integrate artificial intelligence (AI) into curriculum design and assessment, yet they have few structured opportunities to critically explore its use through an instructional design lens. This study examined how six clinical educators enrolled in a PhD-level teaching and learning course responded to AI-assisted teaching strategies and how they evaluated instructional alignment, ethical use, and role adaptation over a semester-long experience. The course was redesigned to incorporate AI tools (e.g., ChatGPT) into structured assignments grounded in Fink’s Taxonomy of Significant Learning and instructional design theory (Fink, 2013).

Participants completed AI-supported tasks including rubric generation, case-based activity design, and formative assessment planning. They were required to critically evaluate each AI-generated output against instructional goals, clinical accuracy, and ethical concerns. Each participant also engaged in a structured classroom debate, assuming stakeholder roles such as “AI Critic” or “Health Professions Faculty Member,” and submitted reflections on the impact of AI in education. Data sources included reflection papers, assignment artifacts, and transcribed group debate discussions. Grounded theory methods were used to code and categorize participants’ evolving perspectives over time (Charmaz, 2014; Creswell & Poth, 2018; Parker et al., 2024; Castellanos et al., 2025).

Faculty began the course with cautious curiosity and limited experience using generative AI tools. As assignments progressed, most participants reported increased confidence using AI for instructional ideation but consistently emphasized that outputs lacked specificity and often introduced hallucinated or clinically irrelevant data (Masters & Ellaway, 2023). The debate activity was particularly revealing: participants expressed concern over ethical boundaries, student misuse, and reliance on AI as a proxy for professional judgment. These themes support existing literature on the risks and tensions AI introduces into educational environments (Chan et al., 2023; Wong & Goh, 2023).

By the end of the course, participants showed improved ability to revise AI-generated content using instructional frameworks and to articulate the instructional implications of generative AI in both academic and clinical settings. This aligns with other faculty development models that emphasize the need for scaffolding, ethical review, and reflection as core components of AI integration (van Schaik, 2021).

Four major themes emerged: (1) AI use increases instructional creativity but must be guided by theory, (2) ethical discomfort is persistent and unresolved without structured discourse, (3) faculty value AI for rapid ideation but not as a replacement for subject-matter expertise, and (4) integrating AI into instructional planning helped redefine faculty roles as critical reviewers of educational content.

This study contributes a grounded theory of how clinical educators think about, feel about, and work through the instructional use of AI during and after a formal course. Findings suggest that AI integration should begin not with tools but with faculty values, reflective practice, and pedagogical training aligned with future-focused education research.
