Objective and Perspective. This study presents an analysis of over 100,000 teacher requests made to an educational chat assistant on the [redacted] platform. Applying a validated, scalable methodology that uses large language models (LLMs) to classify and interpret educational practices in naturalistic teacher-AI dialogues, we extend prior work with a broader quantitative analysis of usage patterns in message content (Redacted, under review). We reveal patterns in how educators engage with generative AI for instructional design, content adaptation, assessment, and professional responsibilities.
Data and Methods. The LLM-based qualitative coding methodology is explained in detail in [Redacted]. Briefly, the structured human-AI collaborative inductive coding process uses the LLM's pattern recognition to augment the researcher's interpretive reach, while the researcher's domain knowledge grounds and refines the model's generative power. Validation results align with prior findings that LLM-generated codes can serve as cognitive scaffolds, enhancing human consistency in interpretive tasks (Mirzakhmedova, 2024).
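To make the coding step concrete, the sketch below is a minimal, hypothetical illustration rather than the study's actual implementation: it assumes an OpenAI-style chat completion API, a placeholder model name, and an invented five-domain codebook, since the platform, model, and full codebook are not detailed here. Candidate labels produced this way would still be reviewed and refined by a human researcher, as described above.

```python
# Minimal sketch of one pass of LLM-assisted qualitative coding.
# Assumptions (not from the source): an OpenAI-style chat API, a placeholder
# model name, and an invented codebook of top-level practice domains.
import json
from openai import OpenAI

CODEBOOK = [
    "Instructional Practices",
    "Curriculum and Content Focus",
    "Assessment and Feedback",
    "Student Needs and Context",
    "Professional Responsibilities",
]

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def code_message(teacher_message: str) -> list[str]:
    """Ask the LLM for candidate domain codes for one teacher request."""
    prompt = (
        "You are assisting with qualitative coding of teacher requests to an "
        "educational chat assistant. Assign every applicable domain from this "
        f"codebook: {CODEBOOK}. Reply with a JSON list of domain names only.\n\n"
        f"Teacher request: {teacher_message}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; the study's model is not named here
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    try:
        labels = json.loads(response.choices[0].message.content)
    except json.JSONDecodeError:
        labels = []  # flag for human review rather than guessing
    return [label for label in labels if label in CODEBOOK]


# Example with an invented teacher request
print(code_message("Can you adapt this fractions lesson for my multilingual learners?"))
```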
Results. Empirical analysis of teacher usage of the chat assistant revealed that 79.7% of teacher-AI conversations involved Instructional Practices, 76.1% Curriculum and Content Focus, 46.9% Assessment and Feedback, 43.3% Student Needs and Context, and 34.2% other Professional Responsibilities. Within the Instructional Practices domain, the most frequent practice groups were Explicit Teaching (45.9%), Critical Thinking and Inquiry (42.4%), Differentiation and Accessibility (36.9%), and Engagement and Motivation (32.9%). Figure 1 illustrates the augmentation role of AI: AI responses frequently elaborated on teachers' pedagogical intent. Critical Thinking and Inquiry featured prominently in the AI responses, which often added cognitive scaffolding, such as reasoning prompts, deep question sequences, or analysis facilitation, even when such dimensions were not fully articulated in the original teacher prompt. The AI tool appeared to act as a teaching assistant capable of expanding the epistemic demand of the learning activity.
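The sketch below indicates, under stated assumptions, how prevalence figures like those above and the prompt-versus-response comparison behind the augmentation pattern could be computed; the records and code names are invented for illustration and do not reproduce the study's data pipeline.

```python
# Invented records: each conversation holds the codes applied to the teacher
# prompt and to the AI response, so per-code prevalence can be compared.
conversations = [
    {"prompt_codes": {"Explicit Teaching"},
     "response_codes": {"Explicit Teaching", "Critical Thinking and Inquiry"}},
    {"prompt_codes": {"Differentiation and Accessibility"},
     "response_codes": {"Differentiation and Accessibility"}},
    {"prompt_codes": {"Engagement and Motivation"},
     "response_codes": {"Engagement and Motivation", "Critical Thinking and Inquiry"}},
]


def prevalence(code: str, side: str) -> float:
    """Percent of conversations in which `code` appears on the given side."""
    hits = sum(code in conv[side] for conv in conversations)
    return 100 * hits / len(conversations)


code = "Critical Thinking and Inquiry"
print(f"{code}: {prevalence(code, 'prompt_codes'):.1f}% of prompts, "
      f"{prevalence(code, 'response_codes'):.1f}% of responses")
```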
Co-occurrence analysis revealed pedagogically meaningful correlations, such as between Student Profile (Student Needs and Context) and Differentiation and Accessibility (Instructional Practices) (r = 0.62), suggesting that teachers frequently used AI to adapt and customize instruction to learner needs (e.g., multilingual learners, IEPs, low-resource settings). This pattern highlights the integration of learner-centered framing into instructional design, with AI serving as a tool to enhance teaching strategies across diverse settings.
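A minimal sketch of this kind of co-occurrence analysis follows, assuming per-conversation binary code indicators and Pearson correlation (equivalent to the phi coefficient for binary variables); the data and the exact correlation measure used in the study are assumptions here.

```python
# Pairwise co-occurrence of qualitative codes across conversations,
# computed as Pearson correlations over binary indicator columns.
import pandas as pd

# 1 = the code was applied to the conversation, 0 = it was not (invented data)
coded = pd.DataFrame(
    {
        "Student Profile": [1, 0, 1, 1, 0, 1, 0, 1],
        "Differentiation and Accessibility": [1, 0, 1, 1, 0, 0, 0, 1],
        "Explicit Teaching": [0, 1, 1, 0, 1, 0, 1, 0],
    }
)

# Values near the reported r = 0.62 would indicate codes that tend to
# appear in the same conversations.
print(coded.corr(method="pearson").round(2))
```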
Significance. This work contributes to the TICL symposium by demonstrating how NLP-based machine perception can identify instructional patterns in unstructured teacher discourse rigorously and at scale. Building on our methodology for AI-assisted qualitative analysis, we offer empirical insights into how teachers are using AI chat assistants, insights that can inform teacher-facing AI design, teacher education, and emerging frameworks for AI competency in K-12 education.