Engaging students in scientific discourse is critical, yet teachers rarely have opportunities to receive feedback on their questioning practice. Recent advances in artificial intelligence (AI) have enabled automated feedback on mathematics teachers' instruction, yet little research has explored the effectiveness of AI-generated feedback for science teachers. This study evaluates the performance of a DistilBERT model fine-tuned on elementary mathematics teacher questions against three base models: DistilBERT (distilbert-base-uncased), BART-large-mnli, and Llama 3.1 8B. Using expert human coding of 73 teacher questions from a fourth-grade science lesson, results demonstrated that the fine-tuned DistilBERT outperformed the base models. Findings suggest reasonable domain transfer from mathematics to science contexts; however, domain-specific fine-tuning is essential for effective teacher question classification.
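The evaluation described above, scoring each model's predicted question codes against expert human coding, can be sketched as a simple agreement computation. This is a minimal illustration, not the study's actual pipeline; the category names (`probing`, `procedural`, `managing`) and label sequences are hypothetical placeholders.

```python
def accuracy_against_experts(model_labels, expert_labels):
    """Fraction of questions where the model's code matches the expert code."""
    if len(model_labels) != len(expert_labels):
        raise ValueError("label lists must be the same length")
    matches = sum(m == e for m, e in zip(model_labels, expert_labels))
    return matches / len(expert_labels)

# Hypothetical codes for four teacher questions (illustrative only)
expert    = ["probing", "procedural", "probing", "managing"]
finetuned = ["probing", "procedural", "probing", "probing"]
zero_shot = ["probing", "managing", "procedural", "probing"]

print(accuracy_against_experts(finetuned, expert))  # 0.75
print(accuracy_against_experts(zero_shot, expert))  # 0.25
```

In practice, classification studies of this kind often report chance-corrected agreement (e.g., Cohen's kappa) alongside raw accuracy, since some question categories occur far more often than others.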