To train argumentation in socio-scientific debates, large language models (LLMs) can be prompted to act as debate opponents, but research is lacking on how students perceive LLMs compared to humans as discussion partners. In an experiment, 117 higher-education students debated with an LLM about whether meal plans at university cafeterias should become compulsorily vegan. We examined whether levels of persuasion and trust varied depending on the debate-partner label (peer/LLM) and the wording of a predetermined counterargument (neutral/emotional) delivered during the debate. Results revealed that, despite the experimentally controlled messages, students trusted peers more than LLMs. Moreover, trust was lower after the debate with the opposing debate partner. Educators should consider these influences on trust when planning LLM-assisted debate training.