Objectives
Research on personalized explainable artificial intelligence (XAI) is warranted because it helps us understand how to build AI systems that effectively explain their actions and decisions to the right users at the right time. XAI aims to make AI systems more transparent and trustworthy by revealing their inner workings. We explored XAI as a way to enhance students’ experience and to understand the value of explanations for AI-driven pedagogical decisions within an intelligent pedagogical agent (IPA).
Theoretical Framework
Previous studies have shown the need for AI systems to provide users with personalized explanations and to analyze how those explanations are perceived by users with different traits (e.g., Kouki et al., 2019; Conati et al., 2021). Some XAI research has begun to address this need, for example in the design of personalized explanations for a music recommender system (Martijn et al., 2022). However, these personalized explanations have been evaluated only on users with higher or lower levels of the targeted traits, rather than during real-time interactions.
Methods
Our personalized explanations are generated from students’ attitudes toward learning because prior research has shown that these attitudes play a crucial role in students’ motivation to learn (Assor et al., 2002), their engagement in class (Bryson and Hand, 2007), their confidence (Lavigne et al., 2007), and their retention of the material (Öztürk and Şahin, 2014). We define learning attitudes by building on our prior work, in which inverse reinforcement learning (IRL) was applied to student-IPA interaction logs to infer students’ learning intentions (Yang et al., 2020). We extended that tool by adding a deep learning classifier that predicts students’ learning attitudes in real time and by generating personalized explanations from those predictions; a minimal sketch of this pipeline is given below.
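The following is a minimal, hypothetical sketch of what such a real-time pipeline could look like: a small classifier maps interaction-log features to an attitude label, and the label selects an explanation template. The label set, feature names, network architecture, and templates are illustrative assumptions, not the implementation described in this paper.

```python
# Hypothetical sketch of a real-time attitude-to-explanation pipeline.
# Labels, features, architecture, and templates are assumptions for illustration.
import torch
import torch.nn as nn

ATTITUDE_LABELS = ["disengaged", "neutral", "motivated"]  # assumed label set

# Placeholder explanation templates keyed by predicted attitude (illustrative only).
EXPLANATION_TEMPLATES = {
    "disengaged": "I picked a worked example so you can see the key principle step by step.",
    "neutral": "I chose this problem because it practices the rule you just studied.",
    "motivated": "I selected a harder problem so you can apply the principle on your own.",
}


class AttitudeClassifier(nn.Module):
    """Small MLP that maps interaction-log features to an attitude class."""

    def __init__(self, n_features: int, n_classes: int = len(ATTITUDE_LABELS)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 32),
            nn.ReLU(),
            nn.Linear(32, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)  # raw logits; argmax gives the predicted class


def explain_decision(model: AttitudeClassifier, log_features: torch.Tensor) -> str:
    """Predict the student's current attitude and return a matching explanation."""
    with torch.no_grad():
        pred = model(log_features.unsqueeze(0)).argmax(dim=1).item()
    return EXPLANATION_TEMPLATES[ATTITUDE_LABELS[pred]]


if __name__ == "__main__":
    # Assumed features: hint use, time per step, error count, pacing (made-up values).
    model = AttitudeClassifier(n_features=4)
    features = torch.tensor([0.2, 1.5, 3.0, 0.7])
    print(explain_decision(model, features))
```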
Data Sources
Our AI tool was assigned to students as homework in an undergraduate computer science class in the fall of 2021. Students were asked to complete the study within one week and were told that they would be graded on their demonstrated effort rather than on their learning performance. All students went through the same four stages in a fixed order: (1) textbook, (2) pretest, (3) training on the IPA, and (4) posttest. In total, 180 students were randomly assigned to one of three conditions: Intervene-Only, Intervene-Explain, or StuChoice. All tests were graded double-blind by two experienced graders.
Results
Table 2 shows the mean and SD of students’ learning gains and of their time on task in hours. Personalized explanations improved students’ ability to learn the principles needed to solve the similar problems that appeared on the pretest and posttest: students in the Intervene-Explain condition were the only ones who improved from pretest to posttest. Notably, this increase in learning was not due to students spending more time on the practice problems with the IPA, as we found no significant difference in time on task across the conditions (last column of Table 2).
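For readers who want to reproduce this kind of comparison, the sketch below shows one plausible analysis: a paired pre/post test within each condition and a one-way ANOVA on time on task across conditions. The data are random placeholders and the specific tests are assumptions, not the statistics reported in the paper.

```python
# Illustrative analysis sketch only: placeholder data, assumed tests.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
conditions = ["Intervene-Only", "Intervene-Explain", "StuChoice"]

# Random placeholder scores (60 students per condition), NOT the study's data.
data = {
    c: {
        "pre": rng.uniform(40, 70, size=60),
        "post": rng.uniform(40, 80, size=60),
        "hours": rng.uniform(1.0, 3.0, size=60),
    }
    for c in conditions
}

# Did each condition improve from pretest to posttest? (paired t-test)
for c in conditions:
    t, p = stats.ttest_rel(data[c]["post"], data[c]["pre"])
    print(f"{c}: pre->post t={t:.2f}, p={p:.3f}")

# Is time on task comparable across conditions? (one-way ANOVA)
f, p = stats.f_oneway(*(data[c]["hours"] for c in conditions))
print(f"Time on task: F={f:.2f}, p={p:.3f}")
```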
Significance of the Study
This work contributes to the field by exploring how providing personalized explanations, tailored to students’ attitudes toward learning, affects their learning performance.