Scientific writing is a core practice in science education, yet teachers often find it challenging to provide comprehensive, constructive feedback in real time. Large Language Models (LLMs) have demonstrated assessment capabilities in a variety of educational settings. In this research, we investigate the effectiveness of prompting instruction-tuned LLMs to assess middle school science essays against a rubric of main ideas. Comparing Llama-3-8B with three GPT models, we found that prompting GPT-4o with three examples outperformed customized AI assessment tools. Results remained consistent when we varied the examples included in the prompt. Our findings add to a growing body of research demonstrating the potential benefits of LLMs in assessment, alongside the importance of prompt design.
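The few-shot setup described above — a rubric of main ideas plus a small number of scored example essays in the prompt — can be sketched as follows. This is a minimal illustration, not the study's actual prompt: the rubric ideas, example essays, scores, and function names are all invented placeholders.

```python
def build_few_shot_prompt(rubric_ideas, examples, essay):
    """Assemble a rubric-based scoring prompt with worked examples.

    rubric_ideas: list of main-idea strings the essay should cover.
    examples: list of (essay_text, scores) pairs used as few-shot demonstrations.
    essay: the new essay to be scored by the model.
    """
    lines = ["Score the essay: for each rubric idea, give 1 if the essay "
             "addresses it and 0 if it does not.", "Rubric main ideas:"]
    for i, idea in enumerate(rubric_ideas, start=1):
        lines.append(f"{i}. {idea}")
    # Few-shot demonstrations: each example shows an essay and its scores.
    for ex_essay, ex_scores in examples:
        lines.append(f"Essay: {ex_essay}")
        lines.append("Scores: " + ", ".join(str(s) for s in ex_scores))
    # The target essay, with the score line left open for the model.
    lines.append(f"Essay: {essay}")
    lines.append("Scores:")
    return "\n".join(lines)


# Placeholder rubric and three placeholder examples (mirroring the
# three-example condition mentioned in the abstract).
rubric = ["States a claim", "Cites evidence", "Explains reasoning"]
examples = [
    ("Plants need light because ...", [1, 1, 0]),
    ("The data show warming, so ...", [1, 1, 1]),
    ("I think rocks are cool.", [0, 0, 0]),
]
prompt = build_few_shot_prompt(rubric, examples, "Ice melts faster in salt water because ...")
```

The resulting string would then be sent to an instruction-tuned model (e.g., via a chat-completion API call); only the prompt-assembly step is shown here.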