Purpose. As Artificial Intelligence (AI) tools increase in popularity, many faculty, staff, and students have integrated them into teaching and learning practices. Large language models (LLMs) can save time, organize or summarize information, generate multimodal learning tools, and help develop assessment instruments. However, using AI tools without understanding best practices or underlying educational principles can carry significant costs. As noted by the conference framework, it is essential to stimulate dialogues that reconnect users to historical perspectives, methods, and proven practices within professional education. Thus, this project highlights the value of guiding AI usage and framing AI research findings with historical educational research principles to better understand tensions, limitations, and possibilities.
Framework. This project used coaching and developmental frameworks to test approaches for using, encouraging, and training others to apply AI effectively, specifically to enhance instructional design and assess learner readiness. Foundational instructional practices such as backward design, cognitive load theory, and feedback loops can be augmented with AI when underlying educational principles remain central. For instance, AI can effectively summarize feedback from various sources, yet careful consideration is necessary when creating prompts and evaluating outputs. Our previous research, consistent with broader scholarship, indicated AI's strength in pattern identification, categorization, and sentiment analysis. However, concerns persist regarding reproducibility, errors, bias, conflation, and minimization.
Methodology and Results. This project blends case studies and mixed-methods analysis to examine faculty members' perceptions, abilities, usage, and change. Our findings show that using feedback systematically and actionably, prioritizing learning objectives and curricular goals, requires specific skills. Consequently, educators must engage in insight-building techniques individually and through peer role-modeling. Traditional faculty development approaches combined with AI can enhance critical discernment, enabling educators to evaluate both their AI usage and AI-generated outputs effectively. Building educators' capacity to use and critically evaluate AI according to major educational frameworks highlighted cautions, limitations, and successes.
Results also illustrate how to use AI applications optimally in assessing learner readiness, such as by creating sample questions and identifying students' key takeaways or misunderstandings. Effective measurement and assessment creation depend on clearly defined educational goals. Therefore, training must emphasize fundamental testing principles, effective question formulation, and alignment with overall learning objectives. Engaging educators in individual, case-study-based, and community learning activities that blend established methods with AI use can improve output quality by refining prompt creation and the frameworks used to evaluate collaboration with AI. This project demonstrates how to write and evaluate assessment questions effectively, align assessments with their intended purposes, verify knowledge appropriately, identify curricular connections, and respond meaningfully to assessment outcomes.
Significance. We illustrate how coaching and capacity-building strategies can strengthen learning communities so that they maintain expertise in both content and instructional design while integrating AI into curriculum planning. Emphasizing foundational educational principles alongside AI use enhances learning design effectiveness, reduces potential issues, and fosters improved learning experiences. Findings show approaches that enable educators to deepen their understanding of educational principles and AI use, design better team-based learning environments, and assess readiness more intentionally.
Kadian McIntosh, University of Arizona
Jennifer Wishnie, University of Arizona
Jennifer Bouschor, University of Arizona
Laura Roberts, University of Arizona
Daniel Johnson, University of Arizona
Brisa Hsieh, University of Arizona
Stephanie Shaver, University of Arizona
Sallianne Schlacks, University of Arizona
Chris Hauser, University of Arizona
Zachary Boeder, University of Arizona
Holly Bender, University of Arizona
Patricia Beyers Pelzel, University of Arizona