An AI-based expert model is introduced for adaptive learning content selection. It employs Cognitive Diagnostic Models and optimization to select instructional content matched to individual students' needs, minimizing cognitive fatigue and improving learning efficiency.
Background and Motivation
Many adaptive learning systems struggle with the precise selection and delivery of instructional materials such as educational videos. These systems often serve uniform content to every student who answers an assessment incorrectly, resulting in superficial feedback, cognitive fatigue, and unaddressed conceptual misunderstandings. This necessitates a shift from generic content recommendations to targeted, construct-based feedback that directly addresses the root causes of learning difficulties. Existing approaches, whether rule-based (Woolf, 2010), similarity-driven (Hwang, 2012), or learner-centered (VanLehn, 2006), offer personalization but often lack the diagnostic depth and adaptive precision needed for effective remediation and cognitive load management (Aleven et al., 2016). This project develops an AI expert model that emulates the nuanced decision-making of human educators to address these gaps.
Aims
● Demonstrate an AI expert model for precise content selection, leveraging Cognitive Diagnostic Models and optimization to target conceptual deficiencies and reduce cognitive load.
● Develop a framework for delivering expert-curated, inclusive instructional content, tailored to student mastery levels for improved efficiency in adaptive learning systems.
Methodology
This study developed an optimized framework for assigning educational videos that maximizes coverage of diagnosed skill deficiencies while minimizing cognitive load and content redundancy. Formulated as a multi-objective integer programming problem, it balances comprehensive remediation with pedagogical efficiency. Each video was tagged with metadata, including duration, a skill vector, and an expert-reviewed difficulty level. Two algorithms were employed: a Greedy Heuristic (GH) for rapid, locally optimal selections prioritizing immediate skill coverage (Resende, 2010), and Gradient Descent (GD) for iterative minimization of a composite loss function that navigates trade-offs among coverage, duration, and difficulty (Boyd & Vandenberghe, 2004); both are sketched below. Efficacy was assessed through simulations (1,000 students, 60 items, 5 skills) using 3PL Item Response Theory and DINA models within Computerized Adaptive Testing (Lord, 1980; de la Torre, 2011). Real-world validation involved 1,204 university physics students and 45 expert-curated videos of varying lengths, evaluated on metrics including Satisfactory Rate, Gain Decay, Utility, and Total Penalty.
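The abstract does not state the exact objective or update rules, so the following Python sketch shows one plausible instantiation: a composite loss over a relaxed selection vector, a coverage-per-minute greedy heuristic (GH), and projected gradient descent on the relaxation (GD). The penalty weights, rounding scheme, and all variable names are illustrative assumptions, not the study's actual formulation.

```python
import numpy as np

def composite_loss(x, skills, deficits, durations, difficulty,
                   lam_cov=5.0, lam_dur=0.05, lam_diff=0.05):
    """Hypothetical composite loss over a (relaxed) selection x in [0, 1]^n:
    penalizes unmet skill gaps, redundant coverage of needed skills,
    total duration, and difficulty. Weights are illustrative."""
    cov = skills.T @ x                                     # per-skill coverage mass
    under = np.clip(deficits - cov, 0.0, None)             # unmet deficits
    over = np.clip(cov - deficits, 0.0, None) * deficits   # redundancy on needed skills
    return (lam_cov * under.sum() + over.sum()
            + lam_dur * durations @ x + lam_diff * difficulty @ x)

def greedy_select(skills, deficits, durations):
    """Greedy heuristic (GH): repeatedly pick the video covering the most
    still-uncovered deficient skills per unit of duration."""
    remaining = deficits > 0
    chosen = []
    while remaining.any():
        gain = skills[:, remaining].sum(axis=1) / durations
        gain[chosen] = -np.inf                             # no repeats
        best = int(np.argmax(gain))
        if gain[best] <= 0:
            break                                          # nothing covers the rest
        chosen.append(best)
        remaining &= ~(skills[best] > 0)
    return chosen

def gd_select(skills, deficits, durations, difficulty,
              steps=400, lr=0.02, lam_cov=5.0, lam_dur=0.05, lam_diff=0.05):
    """Gradient descent (GD): minimize the composite loss over a continuous
    relaxation x in [0, 1]^n, projecting onto the box after each step."""
    x = np.full(skills.shape[0], 0.5)
    for _ in range(steps):
        cov = skills.T @ x
        grad = (-lam_cov * (skills @ ((deficits - cov) > 0).astype(float))
                + skills @ (((cov - deficits) > 0).astype(float) * deficits)
                + lam_dur * durations + lam_diff * difficulty)
        x = np.clip(x - lr * grad, 0.0, 1.0)
    # Rounding (an assumption): take videos by descending weight until all
    # diagnosed gaps close, so the final selection is always feasible.
    chosen, remaining = [], deficits > 0
    for i in np.argsort(-x):
        if not remaining.any():
            break
        if (skills[i] > 0)[remaining].any():
            chosen.append(int(i))
            remaining &= ~(skills[i] > 0)
    return chosen

# Toy instance at the study's sizes (45 videos, 5 skills); all data is random.
rng = np.random.default_rng(0)
skills = rng.integers(0, 2, size=(45, 5)).astype(float)
deficits = np.array([1, 0, 1, 1, 0], dtype=float)          # one student's gaps
durations = rng.uniform(3.0, 15.0, size=45)                # minutes (hypothetical)
difficulty = rng.uniform(0.0, 1.0, size=45)                # expert rating (hypothetical)
for name, picks in (("GH", greedy_select(skills, deficits, durations)),
                    ("GD", gd_select(skills, deficits, durations, difficulty))):
    x = np.zeros(len(skills))
    x[picks] = 1.0
    print(name, picks, "loss:", round(float(
        composite_loss(x, skills, deficits, durations, difficulty)), 3))
```

In this sketch both methods guarantee full gap coverage by construction; they differ in how they trade coverage pressure against duration, difficulty, and redundancy, mirroring the trade-offs the study describes.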
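For the simulation side, a minimal sketch of DINA response generation at the stated scale (1,000 students, 60 items, 5 skills) follows. The Q-matrix, mastery profiles, and slip/guess ranges are invented for illustration, and the 3PL IRT and adaptive item-selection components are omitted.

```python
import numpy as np

def simulate_dina(alpha, Q, slip, guess, rng):
    """DINA model: a student answers item j correctly with probability
    1 - slip_j if they master every skill the Q-matrix requires for j,
    and with probability guess_j otherwise."""
    eta = np.all(alpha[:, None, :] >= Q[None, :, :], axis=2)  # (students, items)
    p_correct = np.where(eta, 1.0 - slip, guess)
    return (rng.random(p_correct.shape) < p_correct).astype(int)

rng = np.random.default_rng(1)
alpha = rng.integers(0, 2, size=(1000, 5))   # latent skill mastery profiles
Q = rng.integers(0, 2, size=(60, 5))         # item-skill requirements
slip = rng.uniform(0.05, 0.20, size=60)      # per-item slip rates (assumed)
guess = rng.uniform(0.10, 0.30, size=60)     # per-item guess rates (assumed)
responses = simulate_dina(alpha, Q, slip, guess, rng)
deficits = 1 - alpha                         # skill gaps handed to the selector
```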
Results, Conclusion, and Discussion
Results demonstrate that both GD and GH effectively deliver personalized video content in simulations and in real-world settings. Both achieved a 100% Satisfactory Rate, ensuring that every identified skill gap was addressed. In simulations, GD showed superior efficiency, reducing Gain Decay (e.g., from 0.844 at 5 videos to 0.112 at 20), optimizing resource use, and adapting well to larger content pools; GH, while faster, exhibited more variability and less optimal allocation. In real-world tests with 589 students, both algorithms met the skill requirements, but GD achieved higher Utility, lower over-coverage (65.2% vs. GH's 76.9%), and reduced redundancy, mitigating cognitive fatigue. GD also drew on a more diverse set of videos (13 distinct videos vs. 11), though both relied heavily on key resources such as Video 13. These findings highlight GD's balanced optimization relative to GH's rapid but less refined selections, offering insights for designing effective, scalable adaptive learning systems.
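The abstract does not define the reported metrics precisely, so the sketch below gives one hypothetical formalization of two of them: a selection is "satisfactory" if every diagnosed gap is covered at least once, and over-coverage is the share of delivered skill tags beyond what the diagnosis required. Both definitions are assumptions for illustration only.

```python
import numpy as np

def coverage_metrics(selected, skills, deficits):
    """Assumed metric definitions (not from the paper): Satisfactory means
    all deficient skills are covered >= once; over-coverage is the fraction
    of delivered skill tags that exceed the diagnosed need."""
    cov = skills[selected].sum(axis=0)                 # per-skill tag counts
    satisfactory = bool(np.all(cov[deficits > 0] >= 1))
    delivered = cov.sum()
    needed = np.minimum(cov, deficits).sum()
    over_coverage = 1.0 - needed / delivered if delivered else 0.0
    return satisfactory, over_coverage
```

Under these assumed definitions, a Satisfactory Rate is the fraction of students whose selections return satisfactory = True, and lower over-coverage corresponds to less redundant content per student.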