Paper Summary
Exploring the Use of LLMs to Automatically Classify Teacher Questions in Science Classrooms (Stage 2, 10:35 AM)

Sun, April 12, 9:45 to 11:15am PDT, Los Angeles Convention Center, Floor: Level One, Exhibit Hall A - Stage 2

Abstract

Engaging students in scientific discourse is critical, yet teachers rarely have opportunities to receive feedback on their questioning practice. Recent advances in artificial intelligence (AI) have been used to provide mathematics teachers with feedback on their instruction, yet little research has explored the effectiveness of AI-generated feedback for science teachers. This study evaluates the performance of a DistilBERT model, fine-tuned on elementary mathematics teacher questions, against three base models: DistilBERT (distilbert-base-uncased), BART-large-mnli, and Llama 3.1 8B. Using expert human coding of 73 teacher questions from a fourth-grade science lesson, results demonstrated that the fine-tuned DistilBERT outperformed all three base models. Findings suggest reasonable domain transfer from mathematics to science contexts; however, domain-specific fine-tuning is essential for effective teacher question classification.
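The evaluation described above compares model-assigned labels against expert human codes. A minimal sketch of that comparison, assuming simple accuracy and Cohen's kappa as the agreement metrics (the abstract does not specify which metrics were used) and illustrative question categories rather than the study's actual codebook:

```python
from collections import Counter

def cohens_kappa(expert, model):
    """Chance-corrected agreement between two label sequences."""
    assert len(expert) == len(model)
    n = len(expert)
    # Observed agreement: fraction of items where the labels match.
    observed = sum(e == m for e, m in zip(expert, model)) / n
    # Expected agreement under chance, from the marginal label frequencies.
    e_counts, m_counts = Counter(expert), Counter(model)
    labels = set(expert) | set(model)
    expected = sum(e_counts[l] * m_counts[l] for l in labels) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical expert codes and model predictions for a handful of
# teacher questions (categories are illustrative, not the study's).
expert = ["open", "closed", "open", "procedural", "closed", "open"]
model  = ["open", "closed", "closed", "procedural", "closed", "open"]

accuracy = sum(e == m for e, m in zip(expert, model)) / len(expert)
kappa = cohens_kappa(expert, model)
```

Accuracy alone can overstate performance when one question type dominates a lesson, which is why chance-corrected agreement is a common complement in classroom coding studies.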

Authors