Paper Summary

On the Trust Towards LLMs and Peers as Debate Partners: An Experiment

Sun, April 12, 7:45 to 9:15am PDT, JW Marriott Los Angeles L.A. LIVE, Floor: Ground Floor, Gold 4

Abstract

To train argumentation in socio-scientific debates, large language models (LLMs) can be prompted to act as debate opponents, but little research has examined how students perceive LLMs compared to human peers as discussion partners. In an experiment, 117 higher-education students debated with an LLM whether meal plans at university cafeterias should become compulsorily vegan. We examined whether levels of persuasion and trust varied depending on the debate partner's label (peer/LLM) and the wording of a predetermined counterargument (neutral/emotional) delivered during the debate. Results revealed that, despite the experimentally controlled messages, students trusted peers more than LLMs. Moreover, trust in the opposing debate partner was lower after the debate. Educators should consider these influences on trust when planning LLM-assisted debate training.

Authors