Constructing explanations is a key practice in science education, but effective implementation requires frequent assessment and feedback. Automatic grading methods such as Latent Semantic Analysis (LSA) could support instructional use, but it is unclear whether automatic methods can assess the depth of students’ science explanations. This study compared automatic (LSA) grading with human grading of students’ written responses in an online science investigation. Results suggest similar patterns between automatic scoring and human grading across varied response categories; however, unlike the human graders, LSA did not readily distinguish between description and reasoning. Overall, generating LSA cosine similarities to both an expert explanation and the on-screen text may be a useful method for distinguishing deeper from shallower student explanations during online science learning.
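
To make the scoring approach concrete, the sketch below computes a student response's cosine similarity in an LSA space to both an expert explanation and the on-screen text, the comparison the abstract describes. This is a minimal illustration under stated assumptions, not the study's implementation: it uses scikit-learn's TF-IDF plus TruncatedSVD pipeline as a stand-in LSA space, and all texts and variable names (expert_explanation, on_screen_text, student_responses) are hypothetical. A real application would fit the semantic space on a much larger corpus than the reference texts themselves.

```python
# Minimal sketch: LSA cosine similarities of student responses to an expert
# explanation and to on-screen text. All texts are hypothetical examples.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.pipeline import make_pipeline

expert_explanation = (
    "Light energy is absorbed by chlorophyll and converted into chemical energy."
)
on_screen_text = (
    "Plants use sunlight, water, and carbon dioxide to make glucose."
)
student_responses = [
    "The plant absorbs light and turns it into chemical energy it can store.",
    "Plants need sunlight, water, and carbon dioxide.",  # echoes the on-screen text
]

corpus = [expert_explanation, on_screen_text] + student_responses

# Build the LSA space: TF-IDF weighting followed by truncated SVD.
# n_components is kept tiny only because this toy corpus is tiny; a real
# LSA space would use a large training corpus and many more dimensions.
lsa = make_pipeline(
    TfidfVectorizer(stop_words="english"),
    TruncatedSVD(n_components=2),
)
vectors = lsa.fit_transform(corpus)

expert_vec = vectors[0].reshape(1, -1)
screen_vec = vectors[1].reshape(1, -1)

for response, vec in zip(student_responses, vectors[2:]):
    vec = vec.reshape(1, -1)
    cos_expert = cosine_similarity(vec, expert_vec)[0, 0]
    cos_screen = cosine_similarity(vec, screen_vec)[0, 0]
    # A response closer to the expert explanation than to the on-screen
    # text is a candidate "deeper" explanation; one closer to the
    # on-screen text may reflect copying or shallow description.
    print(f"expert={cos_expert:.2f}  screen={cos_screen:.2f}  {response!r}")
```

Comparing the two cosines, rather than using either alone, is what makes the heuristic plausible: a high similarity to the expert explanation combined with a low similarity to the on-screen text suggests reasoning beyond the presented material, whereas the reverse pattern suggests description or restatement.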