Paper Summary

Humanizing AI Grading: Student-Centered Insights on Fairness, Trust, Consistency and Transparency

Sat, April 11, 9:45 to 11:15am PDT, Los Angeles Convention Center, Floor: Level Two, Poster Hall - Exhibit Hall A

Abstract

This study investigates students’ perceptions of Artificial Intelligence (AI) grading systems in an undergraduate computer science course (n = 27), focusing on a block-based programming final project. Guided by the ethical principles framework articulated by Jobin (2019), we examined fairness, trust, consistency and transparency in AI grading, presenting AI-generated feedback as a second version alongside the original human-graded feedback. Findings reveal concerns about AI’s lack of contextual understanding and personalization. We recommend that equitable and trustworthy AI systems reflect human judgment, flexibility, and empathy, serving as supplementary tools under human oversight. This work contributes to ethics-centered assessment practices by amplifying student voices and offering design principles for humanizing AI in designed learning environments.

Authors