This study investigates a learning analytics-based approach to uncover students' misconceptions and evaluate the effectiveness of large language model (LLM) feedback using both context-free and misconception-informed prompts in an undergraduate Algorithms course. We analyzed open-ended student responses using Sentence-BERT embeddings, UMAP for dimensionality reduction, and HDBSCAN clustering to identify and group semantically similar misunderstandings. We used ChatGPT to generate two types of feedback: one with the identified misconceptions incorporated into the prompt, and one without. Instructor evaluations showed that feedback informed by misconceptions was rated significantly higher in cognitive quality and constructive suggestions, while feedback without them was preferred for its affective tone. This work highlights the potential of misconception-informed LLM feedback to deliver personalized, pedagogically grounded support in computer science education.
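For readers who want to reproduce the general shape of the misconception-discovery pipeline described above, the following minimal Python sketch strings together the named components (Sentence-BERT embeddings, UMAP reduction, HDBSCAN clustering). The specific model name, component counts, and cluster-size parameters are illustrative assumptions, not the values used in the study.

```python
# Minimal sketch of the embed -> reduce -> cluster pipeline.
# Assumptions (not from the paper): the "all-MiniLM-L6-v2" model,
# 5 UMAP components, and min_cluster_size=5 are placeholder choices.
from collections import defaultdict

from sentence_transformers import SentenceTransformer
import umap
import hdbscan

def cluster_responses(responses):
    # 1. Embed each open-ended student response with Sentence-BERT.
    model = SentenceTransformer("all-MiniLM-L6-v2")
    embeddings = model.encode(responses)

    # 2. Reduce the embedding dimensionality with UMAP before clustering.
    reducer = umap.UMAP(n_components=5, metric="cosine", random_state=42)
    reduced = reducer.fit_transform(embeddings)

    # 3. Cluster with HDBSCAN; label -1 marks responses left as noise.
    clusterer = hdbscan.HDBSCAN(min_cluster_size=5)
    labels = clusterer.fit_predict(reduced)

    # Group semantically similar responses so an instructor (or an LLM
    # prompt) can inspect each candidate misconception cluster.
    clusters = defaultdict(list)
    for text, label in zip(responses, labels):
        clusters[int(label)].append(text)
    return clusters

if __name__ == "__main__":
    sample = [
        "Dijkstra works with negative edge weights if the graph is small.",
        "Negative weights are fine for Dijkstra as long as there is no cycle.",
        "Merge sort is in-place because it only uses recursion.",
    ]
    for label, texts in cluster_responses(sample).items():
        print(label, texts)
```

In a workflow like the one described, each resulting cluster could be summarized and inserted into the LLM prompt to produce the misconception-informed feedback condition, while the context-free condition would prompt the model with the student response alone.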