As large language models (LLMs) gain traction in education, understanding
their influence on student thinking is essential. This study examines cognitive
presence (CP) within the Community of Inquiry framework by comparing student
interactions assisted by LLMs with those assisted by human Teaching Assistants (TAs).
Leveraging GPT-4o and recent LLM-based classification frameworks, we analyzed over
7,000 messages from two datasets. Benchmark metrics showed that LLMs achieved 9.2%
higher Resolution Attainment, suggesting greater effectiveness in helping students
reach closure, whereas human TAs fostered deeper engagement, as reflected in higher
Stage Weighted Scores. Performance on the remaining CP metrics was comparable. While
limited in scope, this study marks an initial step toward understanding how LLMs can
be thoughtfully integrated alongside educators to enhance personalization, reflection,
and engagement.
Arjun Rawal, North Carolina School of Science and Mathematics
Harrish Ayyanar Jeyajothi Pommiraj, University College London - IOE
Yuyang Tong, Stanford University
Rohit Sandadi, University of California - Berkeley
Taransh Goyal, McMaster University
Charith Narreddy, Georgia Institute of Technology
Ishaan Gangwani, Indus International School