Paper Summary

How Novices Learn to Code With AI: Analyzing the Structure and Effects of Error-Type-Specific Feedback in LLM-Based Tutors

Wed, April 8, 9:45 to 11:15am PDT, Los Angeles Convention Center, Floor: Level Two, Poster Hall - Exhibit Hall A

Abstract

This study examined the effects of an AI-based error helper and automated feedback system in introductory university-level programming. Using 54,753 submission logs, we identified common novice errors and analyzed the structural characteristics and effectiveness of AI-generated feedback. The most frequent errors were SyntaxError, NameError, IndentationError, and TypeError, each requiring distinct feedback strategies. A follow-up analysis of 406,383 correct submissions applied a pre–post comparison to 2,018 students, contrasting their submissions before and after they used the error helper. Results showed a significant increase in average problem difficulty after tool use, with 54.9% of students attempting harder problems. Findings suggest that AI-based scaffolding promotes engagement with more challenging tasks and highlight the value of teacher-facing analytics for adaptive instruction.
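For readers unfamiliar with the four error types named above, the following is a minimal illustrative sketch (not drawn from the paper's dataset or tutor) showing a small Python snippet that triggers each one; the example submissions are hypothetical.

```python
# Hypothetical one-line "submissions", each raising one of the four most
# frequent novice error types reported in the study.
snippets = {
    "SyntaxError": "print('hello'",             # missing closing parenthesis
    "NameError": "print(mesage)",                # misspelled variable name
    "IndentationError": "if True:\nprint(1)",    # body not indented
    "TypeError": "'age: ' + 21",                 # str + int without conversion
}

for expected, code in snippets.items():
    try:
        exec(compile(code, "<submission>", "exec"))
    except Exception as err:
        # Each snippet raises the error type a tutor would need to explain.
        print(f"{expected:17s} -> {type(err).__name__}: {err}")
```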

Authors