Simulated classrooms are a way to develop pre-service teachers' (PSTs') classroom skills. One criterion of interest is how well the PST fosters direct interaction between students in the simulated classroom discussion. Manual evaluation of discussion transcripts is common but does not scale. In the present work, we used natural language processing and supervised machine learning techniques to automate scoring of the peer-interaction criterion of the Mystery Powder Task, a module for simulated classrooms. Rather than relying on state-of-the-art large language models, we pursued automated scoring in a more explainable way: tracking the turn-taking structure of the discussion. Our preliminary results underperformed relative to similar work at the utterance level, but transcript-level evaluation showed substantial promise, with a cross-validated r = +0.649.
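To illustrate the turn-taking idea, the sketch below computes one hypothetical explainable feature: the fraction of speaker changes that are direct student-to-student hand-offs. The speaker labels ("T" for the PST, "S1", "S2", ... for simulated students) and the function name are assumptions for illustration, not the authors' actual feature set or pipeline.

```python
# Hypothetical sketch of an explainable turn-taking feature, assuming
# transcripts reduce to an ordered list of speaker labels: "T" for the
# pre-service teacher, "S1", "S2", ... for simulated students.

def peer_interaction_rate(speakers):
    """Fraction of speaker changes that are student-to-student.

    A direct student-to-student hand-off (e.g. S1 followed by S2) is
    treated as evidence of peer interaction; consecutive turns by the
    same speaker are ignored, mirroring a turn-taking view of the
    discussion rather than an utterance-content view.
    """
    transitions = [
        (a, b) for a, b in zip(speakers, speakers[1:]) if a != b
    ]
    if not transitions:
        return 0.0
    peer = sum(
        1 for a, b in transitions
        if a.startswith("S") and b.startswith("S")
    )
    return peer / len(transitions)


# Example: two student-to-student hand-offs out of six speaker changes.
discussion = ["T", "S1", "S2", "T", "S1", "S2", "T"]
print(round(peer_interaction_rate(discussion), 3))
```

A transcript-level feature like this could then feed a standard supervised regressor against the human-assigned peer-interaction scores; the appeal over an opaque model is that each feature maps directly to an observable property of the discussion.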