The purpose of our study was to create a heuristic example of gold values (a training set) for classifying and parsing student math discourse using RoBERTa (Robustly Optimized BERT Pretraining Approach). The classification used Bloom’s Taxonomy to categorize the text, starting from an original data set of 2,246 data points, to which we added 135 samples for a total of 2,381. Our model reached 52% accuracy. The study contributes to natural language processing analytics by refining our understanding of NLP math models and improving model performance, and to the math pedagogy literature by improving the quality of teaching and learning. Its implications: parsed math discourse data could map math students’ progress, clarify students’ thinking about math, and serve as an invaluable resource for formative assessment.
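The setup described above, fine-tuning RoBERTa as a sequence classifier over Bloom's Taxonomy levels, can be sketched as follows. This is a minimal illustration, not the study's actual pipeline: the label names, dummy token ids, and the tiny model configuration are all assumptions for demonstration (a real run would load pretrained `roberta-base` weights and a tokenizer over the 2,381-sample data set).

```python
# Hypothetical sketch of RoBERTa fine-tuning for Bloom's Taxonomy
# classification of math discourse. The six labels below are the
# standard taxonomy levels, assumed here as the label set.
import torch
from transformers import RobertaConfig, RobertaForSequenceClassification

BLOOM_LEVELS = ["remember", "understand", "apply",
                "analyze", "evaluate", "create"]

# Tiny randomly initialized config so the sketch runs offline;
# the study would use pretrained roberta-base instead.
config = RobertaConfig(
    vocab_size=1000,
    hidden_size=64,
    num_hidden_layers=2,
    num_attention_heads=2,
    intermediate_size=128,
    num_labels=len(BLOOM_LEVELS),
)
model = RobertaForSequenceClassification(config)

# Dummy batch standing in for tokenized student utterances:
# 4 utterances of 16 tokens each, with gold taxonomy labels.
input_ids = torch.randint(0, config.vocab_size, (4, 16))
attention_mask = torch.ones_like(input_ids)
labels = torch.tensor([0, 2, 1, 5])  # indices into BLOOM_LEVELS

# One optimization step of the usual fine-tuning loop.
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
out = model(input_ids=input_ids,
            attention_mask=attention_mask,
            labels=labels)  # passing labels yields a cross-entropy loss
out.loss.backward()
optimizer.step()

print(f"training loss after one step: {out.loss.item():.4f}")
```

In practice the model's predicted level for an utterance is `BLOOM_LEVELS[out.logits.argmax(-1)[i]]`, and accuracy (the 52% figure above) is the fraction of held-out utterances where that prediction matches the gold label.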