Paper Summary

Using Automatic and Human Grading to Assess Student Explanations in an Online Science Investigation (Poster 14)

Wed, April 23, 4:20 to 5:50pm MDT, The Colorado Convention Center, Floor: Exhibit Hall Level, Exhibit Hall F - Poster Session

Abstract

Constructing explanations is a key practice in science education, but implementing it effectively requires frequent assessment and feedback. Automatic grading methods (e.g., Latent Semantic Analysis [LSA]) could support classroom use, but it is unclear whether such methods can assess the depth of students’ science explanations. This study compared automatic (LSA) grading with human grading of students’ written responses in an online science investigation. Results suggest similar patterns between automatic scoring and human grading across varied response categories; however, unlike human graders, LSA did not readily distinguish description from reasoning. Overall, generating LSA cosines against both an expert explanation and on-screen text may be a useful method for distinguishing deeper from shallower student explanations during online science learning.
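
To make the scoring idea concrete, the sketch below shows one way LSA cosines between a student response, an expert explanation, and on-screen text could be computed. It is a minimal illustration assuming a scikit-learn pipeline; the background corpus, expert explanation, on-screen text, and student response are invented placeholders, not the study's materials or the authors' implementation.

```python
# Minimal sketch (not the authors' implementation) of computing LSA
# cosines for a student response against an expert explanation and the
# investigation's on-screen text. All texts below are hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical background corpus used to build the LSA space.
corpus = [
    "Plants use sunlight to convert carbon dioxide and water into glucose.",
    "Photosynthesis occurs in the chloroplasts of plant cells.",
    "Glucose stores chemical energy that the plant uses to grow.",
    "Light energy is transformed into chemical energy during photosynthesis.",
]
expert_explanation = ("Plants make glucose through photosynthesis, "
                      "transforming light energy into stored chemical energy.")
onscreen_text = "The simulation shows a plant absorbing light and producing sugar."
student_response = "The plant grows because it turns light into sugar for energy."

docs = corpus + [expert_explanation, onscreen_text, student_response]

# Build the LSA space: TF-IDF vectors reduced with truncated SVD.
vectors = TruncatedSVD(n_components=3, random_state=0).fit_transform(
    TfidfVectorizer().fit_transform(docs)
)

expert_vec, screen_vec, student_vec = (v.reshape(1, -1) for v in vectors[-3:])

# Higher cosine to the expert explanation suggests deeper reasoning;
# a high cosine to on-screen text alone may indicate mere restatement.
print("cosine to expert:", cosine_similarity(student_vec, expert_vec)[0, 0])
print("cosine to screen:", cosine_similarity(student_vec, screen_vec)[0, 0])
```

In the abstract's framing, comparing the two cosines could help flag responses that merely restate displayed material rather than reason beyond it.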

Authors