Paper Summary

Parsing Math Discourse With Bloom’s Taxonomy and RoBERTa—a Natural Language Processing Neural Language Model

Sat, April 13, 7:45 to 9:15am, Pennsylvania Convention Center, Floor: Level 200, Exhibit Hall B

Abstract

The purpose of our study was to create a heuristic set of gold-standard values (a training set) for classifying and parsing student math discourse with RoBERTa (Robustly Optimized BERT Pretraining Approach). The classification used Bloom’s Taxonomy to categorize the text, drawing on an original data set of 2,246 data points to which we added 135 samples, for a total of 2,381 data points. Our model accuracy was 52%. Our study contributes to natural language processing analytics by refining our understanding of NLP models for math discourse and by improving model performance, and to the math pedagogy literature by supporting improvements in teaching and learning quality. Study implications: parsed math discourse data could map students’ progress in math, clarify students’ thinking about math, and serve as an invaluable resource for formative assessment.
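The abstract does not include implementation details. The sketch below shows one common way such a classifier could be set up: fine-tuning RoBERTa for sequence classification over Bloom’s Taxonomy categories using the Hugging Face Transformers library. The six label names, the placeholder training examples, and all hyperparameters are assumptions for illustration, not details from the paper.

```python
# Minimal sketch (not the authors' code): fine-tuning RoBERTa to classify
# math-discourse utterances into Bloom's Taxonomy levels.
# Labels, example data, and hyperparameters are illustrative assumptions.
import torch
from torch.utils.data import Dataset
from transformers import (RobertaTokenizerFast,
                          RobertaForSequenceClassification,
                          Trainer, TrainingArguments)

# Assumed six-level Bloom label set (the paper's exact categories may differ).
BLOOM_LEVELS = ["remember", "understand", "apply",
                "analyze", "evaluate", "create"]

class DiscourseDataset(Dataset):
    """Wraps (utterance, Bloom-level index) pairs as model-ready tensors."""
    def __init__(self, texts, labels, tokenizer, max_len=128):
        self.enc = tokenizer(texts, truncation=True,
                             padding="max_length", max_length=max_len)
        self.labels = labels

    def __len__(self):
        return len(self.labels)

    def __getitem__(self, i):
        item = {k: torch.tensor(v[i]) for k, v in self.enc.items()}
        item["labels"] = torch.tensor(self.labels[i])
        return item

tokenizer = RobertaTokenizerFast.from_pretrained("roberta-base")
model = RobertaForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=len(BLOOM_LEVELS))

# In the study, texts/labels would come from the hand-coded gold training set;
# these are placeholders.
train_texts = ["Why does dividing by a fraction flip the numerator and denominator?"]
train_labels = [BLOOM_LEVELS.index("understand")]

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="bloom-roberta",
                           num_train_epochs=3,
                           per_device_train_batch_size=16),
    train_dataset=DiscourseDataset(train_texts, train_labels, tokenizer),
)
trainer.train()
```

With a setup along these lines, the reported 52% accuracy would be measured by comparing the model’s predicted Bloom level against the human-coded gold label on a held-out portion of the 2,381 data points.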

Authors