Paper Summary

AI, Automation Bias, and Social Accountability in Medical Education

Wed, April 8, 11:45am to 1:15pm PDT, JW Marriott Los Angeles L.A. LIVE, 2nd Floor, Platinum J

Abstract

AI is quickly becoming embedded in medical practice in multiple ways, yet the full implications of AI use for medical education have not received the consideration they deserve. As Gin et al. note, “whether an AI tool can be entrusted to perform a given task depends not only on its intrinsic trustworthiness but also on the context of its use…the accountability of the user…and the relationship between the AI, developers, and users” (2025:267). Put more directly, the authors observe, “Health professions educators face the responsibility of asking whether they trust AI, and whether they also trust themselves to judiciously incorporate AI into HPE practice” (2025:270).
This paper addresses the potential for automation bias (the tendency to assume that technologically mediated information is correct) in the adoption of AI in clinical practice and in health professions education (HPE). While Gin et al. examine the entrustability of AI across multiple dimensions, this paper focuses on the characteristic of beneficence through the lens of social accountability. For instance, Nguyen (2024) has pointed out that the Prescription Drug Monitoring Program (a machine learning system, or MLS, that provides risk scores for patients’ likelihood to misuse prescription drugs) can cause testimonial injustice, which occurs when a patient’s account of their health is unfairly dismissed by their provider. MLSs like the PDMP can over-represent data from multiple patients when constructing an individual’s risk profile, compromising the integrity of that patient’s account and undermining their trustworthiness in the eyes of the provider. From a treatment perspective, a patient with chronic pain may not receive the medication they need because the PDMP produced an incorrect risk score. From an educational perspective, a preceptor or attending who does not critically assess MLSs can transmit automation bias (along with other biases) to learners.
The author will draw on scholarship in AI and social accountability to recommend entrustment criteria for HPE educators using AI. As Nguyen notes, “The mentality that AI is always right is often associated with medical students and residents” (2024:2). Critical appraisal of AI tools is therefore an essential skill for medical schools to teach and for HPE educators to model. This approach is essential not only for clinical accuracy but also for adhering to the core values of social accountability (Barber et al. 2020). The model proposed here goes beyond individual entrustment to entrustment of HPE as a profession: “Faculty should serve as an example for students by ensuring that students have the right critical analysis skills and are comfortable with questioning results instead of accepting what is being given to them” (Nguyen 2024:3). Creating guardrails for HPE educators minimizes the risk of automation bias and maximizes the trustworthiness of healthcare as a whole.
