Large language models (LLMs) are increasingly used in education to analyze and assess students' learning through textual artifacts. However, the robustness of these models with respect to language complexity remains largely unexamined, leaving open questions such as whether they perform better on simpler or more complex language. Recent studies show that language complexity can indeed affect LLM performance, with models becoming less accurate on ungrammatical or uncommon language. Given students' varied language backgrounds and writing skills, it is critical to assess the robustness of these models to ensure consistent performance. This study examines LLM performance in detecting self-regulated learning in math problem-solving, comparing performance on texts of high and low language complexity as determined by three linguistic measures.
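A minimal sketch of the kind of robustness comparison the abstract describes might look like the following. The three complexity proxies here (mean sentence length, type-token ratio, mean word length) and the `detect_srl` function are illustrative assumptions standing in for the study's actual linguistic measures and LLM-based detector, which the abstract does not specify.

```python
from statistics import mean, median

def complexity_score(text: str) -> float:
    """Crude language-complexity proxy combining three illustrative measures:
    mean sentence length, type-token ratio, and mean word length.
    (Stand-ins; the study's actual three linguistic measures are not named here.)"""
    words = text.split()
    if not words:
        return 0.0
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    mean_sent_len = len(words) / max(len(sentences), 1)
    type_token_ratio = len({w.lower() for w in words}) / len(words)
    mean_word_len = mean(len(w) for w in words)
    return mean_sent_len + 10 * type_token_ratio + mean_word_len

def accuracy(preds, labels) -> float:
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

def compare_robustness(texts, labels, detect_srl) -> None:
    """Split texts at the median complexity score and compare the detector's
    accuracy on the high- vs. low-complexity halves."""
    scores = [complexity_score(t) for t in texts]
    cut = median(scores)
    for name, keep in [("high", lambda s: s >= cut), ("low", lambda s: s < cut)]:
        group = [(t, y) for t, y, s in zip(texts, labels, scores) if keep(s)]
        if not group:
            continue
        preds = [detect_srl(t) for t, _ in group]
        acc = accuracy(preds, [y for _, y in group])
        print(f"{name}-complexity accuracy: {acc:.3f}")
```

In use, `detect_srl` would wrap a call to the LLM being evaluated and return a predicted self-regulated-learning label for a student text; a gap in accuracy between the two halves would indicate the kind of robustness failure the study is probing for.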