This study investigates how large language models (LLMs) interpret and justify scores for adolescent-authored writing when demographic indicators are included versus withheld. Analyzing 652 writing samples scored by ChatGPT-4o, Gemini 2.5 Pro Preview, and DeepSeek-V2, we examine shifts in both numeric scores and the accompanying rationales. Findings reveal that demographic cues influence not only scores but also the interpretive criteria LLMs apply, often reweighting formal structure over emotional depth or cultural expression. Drawing on critical algorithm studies and sociolinguistic theory, we show how AI feedback systems regulate discourse and risk reinforcing dominant norms. The results highlight the need for interpretive justice and educator mediation to ensure culturally responsive, developmentally appropriate writing assessment in AI-augmented classrooms.