Understanding effective math learning strategies can help Intelligent Tutoring Systems (ITSs) adapt to a student's problem-solving strategy, leading to improved learning gains, engagement, and motivation. However, while ITSs may be designed to teach several alternative math learning strategies, it is often hard to assess whether learners have internalized these strategies well enough to execute them effectively in varied contexts. Scaling such analyses to large groups of learners makes the task even more complex. The growth of Artificial Intelligence (AI) methods and tools offers unique opportunities to analyze math learning strategies at a large scale. In this work, we present an approach we call ASTRA (AI-based Strategy Analysis), in which we use state-of-the-art AI representation-learning methods to discover hidden structure in ITS data and analyze math strategies.
In particular, we show that methods that have revolutionized language understanding, such as BERT (Bidirectional Encoder Representations from Transformers), can be adapted to learn representations of math learning strategies. We train the models on large-scale data (involving several thousand learners) from 6th- and 7th-grade math collected on Carnegie Learning's MATHia platform. To do this, we identify specific topics (also called workspaces) within MATHia whose design allows students to execute multiple strategies to solve a problem.
While "math strategy" is a broad term, in this work we define it more precisely in terms of the sequences of actions learners perform. From these observed action sequences, we pre-train a BERT model in an unsupervised manner (without requiring external labels). The model learns a representation (also called an embedding) of the strategies students follow when solving problems in a given workspace.
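The sketch below illustrates what such an unsupervised pre-training step could look like. It is a minimal, assumed setup rather than the authors' released code: the action names, vocabulary construction, checkpoint paths, and hyperparameters are placeholders, not values from the MATHia study, and the masked-token objective is the standard BERT pre-training objective applied to action tokens.

```python
# Minimal sketch (assumed setup, not the authors' code): pre-train a small BERT-style
# encoder on learner action sequences with a masked-token objective.
from datasets import Dataset
from tokenizers import Tokenizer
from tokenizers.models import WordLevel
from tokenizers.pre_tokenizers import Whitespace
from tokenizers.trainers import WordLevelTrainer
from transformers import (BertConfig, BertForMaskedLM,
                          DataCollatorForLanguageModeling,
                          PreTrainedTokenizerFast, Trainer, TrainingArguments)

# Each "sentence" is one student's action sequence within a workspace.
# These action names are illustrative placeholders.
action_sequences = [
    "enter_given distribute_term combine_like_terms submit_answer",
    "enter_given request_hint combine_like_terms submit_answer",
]

# Build a word-level vocabulary in which every distinct action is one token.
raw_tok = Tokenizer(WordLevel(unk_token="[UNK]"))
raw_tok.pre_tokenizer = Whitespace()
raw_tok.train_from_iterator(
    action_sequences,
    WordLevelTrainer(special_tokens=["[PAD]", "[UNK]", "[MASK]"]),
)
tokenizer = PreTrainedTokenizerFast(tokenizer_object=raw_tok,
                                    pad_token="[PAD]", unk_token="[UNK]",
                                    mask_token="[MASK]")

dataset = Dataset.from_dict({"text": action_sequences}).map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=128),
    remove_columns=["text"],
)

# A deliberately small encoder; the study's actual architecture is not specified here.
model = BertForMaskedLM(BertConfig(vocab_size=tokenizer.vocab_size, hidden_size=128,
                                   num_hidden_layers=4, num_attention_heads=4))

# Masked "action" modeling: a fraction of actions is hidden and the model learns to
# reconstruct them from the surrounding actions, without any external labels.
Trainer(
    model=model,
    args=TrainingArguments(output_dir="astra_pretrain", num_train_epochs=3),
    train_dataset=dataset,
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer,
                                                  mlm_probability=0.15),
).train()
```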
The objective of pre-training is to uncover hidden structure within the action sequences (which correspond to strategies) by finding patterns over a large amount of data. Using the pre-trained embeddings, we explore several downstream tasks that can help improve the design of the ITS; specifically, the ITS can use predictions or insights from the AI model to clear misconceptions and nudge the student toward the right strategy. To do this, we develop learning methods that fine-tune the embeddings for tasks such as i) identifying correct strategies, ii) analyzing the effectiveness of strategies, and iii) understanding how strategies are learned over time. We present quantitative and qualitative results from our studies to evaluate the efficacy of our approach. Finally, we conclude with our learnings on the feasibility as well as the limitations of AI methods in understanding math strategies at scale.
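As one concrete illustration of the fine-tuning step, the sketch below frames the first downstream task (identifying correct strategies) as binary sequence classification on top of the pre-trained encoder. It assumes the pre-trained model and tokenizer from the previous sketch were saved with `save_pretrained` to a hypothetical `astra_pretrain` directory; the sequences and labels are made up for illustration.

```python
# Illustrative fine-tuning sketch (assumed setup, not the authors' code): reuse the
# pre-trained action encoder to classify whether a strategy sequence is correct.
from datasets import Dataset
from transformers import (BertForSequenceClassification, DataCollatorWithPadding,
                          PreTrainedTokenizerFast, Trainer, TrainingArguments)

# Hypothetical path where the pre-trained encoder and tokenizer were saved.
tokenizer = PreTrainedTokenizerFast.from_pretrained("astra_pretrain")
model = BertForSequenceClassification.from_pretrained(
    "astra_pretrain",
    num_labels=2,  # 0 = incorrect strategy, 1 = correct strategy (illustrative labels)
)

# A tiny labeled set of action sequences; real labels would come from the ITS logs.
labeled = Dataset.from_dict({
    "text": ["enter_given distribute_term combine_like_terms submit_answer",
             "enter_given submit_answer submit_answer request_hint"],
    "label": [1, 0],
}).map(lambda ex: tokenizer(ex["text"], truncation=True, max_length=128),
       remove_columns=["text"])

# Standard supervised fine-tuning: the classification head is trained on top of the
# embeddings learned during unsupervised pre-training.
Trainer(
    model=model,
    args=TrainingArguments(output_dir="astra_finetune", num_train_epochs=3),
    train_dataset=labeled,
    data_collator=DataCollatorWithPadding(tokenizer=tokenizer),
).train()
```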