Large Language Model (LLM)-based automated essay scoring (AES) enables scalable writing assessment, but its opaque decision-making undermines trust in educational settings. Disclosing the scoring mechanism via SHapley Additive exPlanations (SHAP) could enhance transparency, yet its effectiveness, and the role of learners’ Computational Thinking (CT), remains underexplored. This study found that mechanism disclosure significantly increased subjective trust. CT did not moderate this effect, but an inverted-U relationship emerged between CT and subjective trust. These findings highlight the need for clear explanations and for learner-centered explainable AI (XAI) tools tailored to students’ cognitive profiles and educational goals to foster AI trust.
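To make the SHAP mechanism mentioned above concrete, here is a minimal sketch of exact Shapley-value attribution for a toy essay scorer. All feature names, weights, and values are illustrative assumptions, not data from the study, and the linear model stands in for the LLM-based scorer only to show how each feature's contribution to a score can be surfaced to a learner.

```python
from itertools import combinations
from math import factorial

# Hypothetical essay features (illustrative only) for one essay,
# and a baseline representing a dataset-average essay.
features = {"word_count": 320.0, "grammar_errors": 2.0, "vocab_richness": 0.62}
baseline = {"word_count": 250.0, "grammar_errors": 5.0, "vocab_richness": 0.50}

# Toy linear scoring model standing in for the real AES system.
weights = {"word_count": 0.01, "grammar_errors": -0.5, "vocab_richness": 4.0}

def score(x):
    return sum(weights[k] * x[k] for k in weights)

def shapley_values(x, base):
    """Exact Shapley values by enumerating all feature coalitions.
    Features absent from a coalition take their baseline value."""
    names = list(x)
    n = len(names)
    phi = {k: 0.0 for k in names}
    for k in names:
        others = [f for f in names if f != k]
        for r in range(len(others) + 1):
            for s in combinations(others, r):
                # Standard Shapley coalition weight |S|!(n-|S|-1)!/n!
                w = factorial(len(s)) * factorial(n - len(s) - 1) / factorial(n)
                with_k = {f: (x[f] if f in s or f == k else base[f]) for f in names}
                without_k = {f: (x[f] if f in s else base[f]) for f in names}
                phi[k] += w * (score(with_k) - score(without_k))
    return phi

phi = shapley_values(features, baseline)
# Efficiency property: attributions sum to score(essay) - score(baseline).
assert abs(sum(phi.values()) - (score(features) - score(baseline))) < 1e-9
```

A disclosure interface of the kind studied would present these per-feature contributions (e.g., "fewer grammar errors raised your score") rather than raw numbers; the SHAP library computes comparable attributions for non-linear models via sampling approximations.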