Paper Summary

Examining the Robustness of Large Language Models Across Language Complexity

Sat, April 26, 1:30 to 3:00pm MDT, The Colorado Convention Center, Floor: Meeting Room Level, Room 103

Abstract

Large language models (LLMs) are increasingly used in education to analyze and assess students' learning through textual artifacts. However, the robustness of these models with respect to language complexity remains largely unexamined, leaving open questions such as whether the models perform better on simpler or on more complex language. Recent studies show that language complexity can indeed affect LLM performance, making models less accurate on ungrammatical or uncommon language. Given students' varied language backgrounds and writing skills, it is critical to assess the robustness of these models to ensure consistent performance across learners. This study examines LLM performance in detecting self-regulated learning (SRL) in math problem-solving, comparing performance on texts of high and low language complexity as defined by three linguistic measures.
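
As a rough illustration of the comparison the abstract describes (not the paper's actual pipeline), the Python sketch below computes three placeholder complexity measures, median-splits responses into low- and high-complexity groups on one measure, and reports how often LLM labels agree with human SRL codes in each group. The measure names and helper functions are assumptions for illustration only; the abstract does not name the study's three linguistic measures.

    import statistics

    def complexity_measures(text: str) -> dict:
        """Three illustrative complexity measures for a student response.

        Placeholder measures (mean sentence length, mean word length,
        type-token ratio); the study's actual measures are not named
        in this abstract.
        """
        sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".")
                     if s.strip()]
        words = text.split()
        n_words = max(len(words), 1)
        return {
            "mean_sentence_length": len(words) / max(len(sentences), 1),
            "mean_word_length": sum(len(w) for w in words) / n_words,
            "type_token_ratio": len({w.lower() for w in words}) / n_words,
        }

    def compare_by_complexity(texts, gold_labels, llm_labels, measure):
        """Median-split texts on one complexity measure, then report how
        often LLM labels match human SRL codes in each half."""
        scores = [complexity_measures(t)[measure] for t in texts]
        cutoff = statistics.median(scores)
        groups = {"low": [], "high": []}
        for score, gold, pred in zip(scores, gold_labels, llm_labels):
            groups["low" if score <= cutoff else "high"].append(gold == pred)
        return {g: sum(hits) / len(hits) for g, hits in groups.items() if hits}

A gap between the two returned agreement rates (e.g., compare_by_complexity(texts, human_codes, llm_codes, "type_token_ratio")) would indicate the kind of robustness difference the study investigates.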

Author