Paper Summary

In AI we trust? (Poster 3)

Sat, April 26, 8:00 to 9:30am MDT, The Colorado Convention Center, Floor: Terrace Level, Bluebird Ballroom Room 2A

Abstract

We approach AI’s capabilities from two angles: 1) examining how AI can be used to help individuals decide what to trust online, and 2) summarizing concerns and questions about the degree to which AI is itself trustworthy. As AI becomes more commonly integrated into tools students and teachers already use (such as search engines), educators must grapple with its place in our information ecosystem, both as a verifier of information and a generator of falsehoods. The literature on individuals’ abilities to evaluate credibility has painted a fairly pessimistic picture, especially among young people (e.g., Breakstone et al., 2021; McGrew et al., 2018), but it also shows that individuals improve with explicit instruction on determining credibility through lateral reading, the process of opening new tabs to explore the source of a website or post (e.g., Brodsky et al., 2021; McGrew, 2020; Wineburg et al., 2022). Yet even students who receive such instruction can still struggle to decide what is trustworthy online (e.g., McGrew, 2021). We describe our research asking the Copilot AI to help us make decisions about who and what to trust online (e.g., evaluating trustworthiness, identifying sources of misinformation, assessing the qualifications of online posters, and fact-checking claims). We then take up the issues of AI “hallucinations,” intellectual property, and the general methods of training AI that may bear on the credibility of AI outputs. Rather than producing a definitive set of conditions for when and how much to trust AI platforms, we hope to encourage educators and students to think critically about AI outputs, understand where they come from, and consider how they can (or cannot) be used to inform civic thinking and decision-making.

Authors