Paper Summary

Entrustment: a framework to monitor and mitigate bias in artificial intelligence applications

Thu, April 24, 3:35 to 5:05pm MDT, The Colorado Convention Center, Floor: Ballroom Level, Four Seasons Ballroom 2-3

Abstract

In this symposium discussion, we propose extending the concept of entrustment to moderate the use of Artificial Intelligence (AI) in Health Professions Education (HPE). Entrustment can help address the challenges and risks of integrating into HPE generative AI tools that offer limited transparency about their accuracy, source material, and biases. With AI’s growing role in education-related activities such as automated screening and summarization of written materials, there is a critical need for a trust-based approach to ensure these technologies are beneficial, safe, and free from bias. Drawing a parallel with HPE’s concept of entrustment, which assesses a trainee’s readiness to perform clinical tasks, or entrustable professional activities (EPAs), we propose assessing the trustworthiness of AI tools to perform HPE-related tasks across three dimensions: ability (competence to perform tasks accurately), integrity (transparency, fairness, and freedom from bias), and benevolence (alignment with ethical principles, including justice).
The issue of bias in AI intersects with these dimensions in multiple ways. Regarding integrity, AI models are prone to reproducing the human biases and stereotypes present in their training data; they may also surface previously unrecognized biases in that data. These biases are associated with gender identity, sexual orientation, race/ethnicity, religion, socioeconomic factors, and other demographic characteristics. Strategies to mitigate such biases include identifying and minimizing bias in the datasets used to train AI tools (i.e., developing datasets that fairly represent the demographics of the target populations) and developing quantitative measures of bias to guide how models are trained; one simple representation check is sketched below.
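
As one concrete illustration of the first strategy, the sketch below compares a training dataset’s demographic composition against a reference distribution for the target population. It is a minimal Python example; the data, the attribute name, and the simple share-difference metric are illustrative assumptions rather than part of the proposed framework.

    # Minimal sketch, assuming a tabular training set with a demographic
    # attribute; the data and the share-difference metric are illustrative.
    from collections import Counter

    def representation_gap(records, attribute, reference_shares):
        """Each group's share in the dataset minus its target-population share."""
        counts = Counter(r[attribute] for r in records)
        total = sum(counts.values())
        return {group: counts.get(group, 0) / total - share
                for group, share in reference_shares.items()}

    # Toy training set skewed toward one group.
    records = [{"gender": "man"}] * 70 + [{"gender": "woman"}] * 30
    print(representation_gap(records, "gender", {"man": 0.5, "woman": 0.5}))
    # ~ {'man': 0.2, 'woman': -0.2}: over- and under-representation
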
Assessing fairness requires context and necessitates framing AI’s potential benefits and risks for each stakeholder group. For instance, an AI tool that automates applicant screening in medical school admissions may benefit faculty but have an unclear effect on the diversity of the selected applicant pool. AI-based screening that emphasizes standardized test scores and the quantity of select experiences may not recognize merit in learners’ varied training paths or in their ability to overcome challenges; an audit of selection rates by group, sketched below, can make such effects visible. An equitable balance of benefits and risks across stakeholder groups reflects the ethical principle of justice and requires a transparent process of AI integration and monitoring to assess and achieve that balance.
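
One way to surface such effects is to audit the screening tool’s outcomes directly. The sketch below computes selection rates by applicant group and their ratio; the data are invented, and the four-fifths (0.8) threshold is a heuristic borrowed from employment-selection practice, offered only for illustration.

    # Minimal sketch of a selection-rate audit with invented data.
    def selection_rates(decisions):
        """decisions: iterable of (group, selected) pairs -> rate per group."""
        totals, chosen = {}, {}
        for group, selected in decisions:
            totals[group] = totals.get(group, 0) + 1
            chosen[group] = chosen.get(group, 0) + int(selected)
        return {g: chosen[g] / totals[g] for g in totals}

    decisions = ([("group A", True)] * 40 + [("group A", False)] * 60
                 + [("group B", True)] * 25 + [("group B", False)] * 75)
    rates = selection_rates(decisions)
    impact_ratio = min(rates.values()) / max(rates.values())
    print(rates, impact_ratio)  # A: 0.4, B: 0.25 -> ratio 0.625, below 0.8
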
Based on these dimensions of AI’s trustworthiness, we draw on existing frameworks of entrustment decision-making to envision a structured way to determine and monitor AI’s role and level of engagement in HPE-related tasks, including a proposed AI-specific entrustment scale (one possible shape is sketched below). Identifying tasks that AI could be entrusted with provides a focus around which considerations of trustworthiness and entrustment decision-making can be synthesized, making explicit the risks and biases associated with AI use and identifying strategies to mitigate them. Entrustment could thus provide a pragmatic framework to guide the responsible and ethical use of AI tools in HPE and to minimize bias in their development and implementation.
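
To make the proposal tangible, the sketch below gives one possible shape for an AI-specific entrustment scale and an accompanying decision record, loosely mirroring the supervision levels used with EPAs. The level names, fields, and example values are illustrative assumptions, not the scale proposed in the symposium.

    # Minimal sketch, loosely mirroring EPA supervision levels; all names
    # and values are illustrative assumptions.
    from dataclasses import dataclass
    from enum import IntEnum

    class AIEntrustmentLevel(IntEnum):
        NOT_ENTRUSTED = 1    # AI output not used for this task
        FULL_REVIEW = 2      # every AI output verified by a human
        TARGETED_REVIEW = 3  # human review of flagged or sampled outputs
        PERIODIC_AUDIT = 4   # routine AI use with scheduled bias/accuracy audits

    @dataclass
    class EntrustmentDecision:
        task: str         # the HPE-related task the AI would perform
        ability: str      # evidence of accurate task performance
        integrity: str    # transparency and measured bias
        benevolence: str  # alignment with ethical principles, including justice
        level: AIEntrustmentLevel

    decision = EntrustmentDecision(
        task="summarize applicant essays",
        ability="accuracy benchmarked against faculty summaries",
        integrity="bias measured across demographic groups",
        benevolence="effect on applicant-pool diversity monitored",
        level=AIEntrustmentLevel.TARGETED_REVIEW,
    )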

Authors