This review examines how explainable AI (XAI) techniques are implemented in predictive learning dashboards and how they affect educators’ trust and instructional decision-making. Synthesizing 22 empirical studies (2015–2025), we apply a theory-informed framework grounded in trust in automation, cognitive load theory, and human–AI interaction. Each study is analyzed across four dimensions: algorithmic transparency, pedagogical interpretability, goal-directedness, and user controllability. Findings reveal a persistent gap between technical explainability and pedagogical usability. We identify design strategies that support meaningful educator engagement and propose a framework for evaluating educational dashboards beyond predictive accuracy, emphasizing clarity, agency, and instructional alignment. This work informs future XAI design and evaluation by identifying which explanation strategies effectively support educator trust, reduce cognitive barriers, and enable pedagogically aligned action.