Despite the rapid adoption of artificial intelligence (AI) and machine learning (ML) in healthcare, the persistence of the “AI chasm” highlights the gap between model development and real-world implementation. This paper examines how AI systems require continuous maintenance and repair to ensure their safety, efficacy, and equity—yet this critical work remains largely invisible and undervalued. Drawing on interviews with 21 clinicians, informaticists, AI developers, and policy experts, we argue that the prevailing focus on technological innovation overlooks the routine labor needed to sustain AI tools in clinical practice. We find that AI model degradation—due to factors like dataset drift and changing clinical workflows—is widely acknowledged, yet responsibility for maintenance is diffuse and unclaimed. This “responsibility vacuum” is exacerbated by institutional incentives that prioritize rapid deployment over long-term oversight, often leading to strategic ignorance of AI failures. At the same time, we document creative grassroots efforts by healthcare practitioners who develop ad hoc solutions to monitor and repair AI tools in the absence of formalized infrastructure. Without structured accountability, resource investment, and institutional commitment to building maintenance infrastructure, we suggest that AI/ML technologies designed to improve patient health will introduce new forms of harm, ultimately eroding trust in AI and machine learning for healthcare.