Paper Summary

The Effects of SHAP Mechanism Disclosure on Trust in LLM Scoring Across Computational Thinking Levels

Wed, April 8, 7:45am to Sun, April 12, 3:00pm PDT, Virtual Posters Exhibit Hall, Virtual Poster Hall

Abstract

Large Language Model (LLM)-based automated essay scoring (AES) enables scalable writing assessment, but its opaque decision-making undermines trust in educational settings. Disclosing the SHapley Additive exPlanations (SHAP) mechanism could enhance transparency, though its effectiveness and the moderating role of learners’ Computational Thinking (CT) remain underexplored. In this study, mechanism disclosure significantly increased subjective trust. CT did not moderate this effect, but an inverted-U relationship emerged between CT and subjective trust. These findings highlight the need for clear explanations and learner-centered XAI tools tailored to students’ cognitive profiles and educational goals to foster trust in AI.
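To make the disclosed mechanism concrete: SHAP attributes a model's output to its input features via Shapley values, whose defining "additive" property is that the attributions plus a base value sum exactly to the prediction. The sketch below is purely illustrative and not the study's actual scoring model or the `shap` library; the feature names, base score, and effect sizes are hypothetical, and the exact Shapley formula is computed by brute force over feature subsets.

```python
from itertools import combinations
from math import factorial

# Hypothetical effects of essay features on a toy score (illustrative values only).
BASE = 50.0
MAIN = {"grammar": 10.0, "coherence": 15.0, "vocabulary": 5.0}

def value(subset):
    """Toy scoring function: base score plus main effects, plus a small
    bonus when grammar and coherence co-occur (so attributions are nontrivial)."""
    s = BASE + sum(MAIN[f] for f in subset)
    if "grammar" in subset and "coherence" in subset:
        s += 4.0
    return s

def shapley(feature, features):
    """Exact Shapley value of `feature`: weighted average of its marginal
    contribution over all subsets of the remaining features."""
    others = [f for f in features if f != feature]
    n = len(features)
    total = 0.0
    for k in range(len(others) + 1):
        for subset in combinations(others, k):
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            total += weight * (value(set(subset) | {feature}) - value(set(subset)))
    return total

features = list(MAIN)
phi = {f: shapley(f, features) for f in features}

# Additivity: base value plus all attributions reconstructs the full score.
assert abs(BASE + sum(phi.values()) - value(set(features))) < 1e-9
```

In a SHAP-style disclosure, `phi` is what a learner would see: each feature's signed contribution to the essay's score, with the interaction bonus split evenly between the two interacting features.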

Authors