AI models have recently proliferated across healthcare systems, with applications ranging from tumor diagnosis to sepsis prediction to ambient documentation systems for clinical note-taking. The infrastructure to regulate and ethically monitor AI, however, has lagged well behind innovation in the technology itself. Drawing on the history of U.S. health information technology governance and on expert interviews, this paper identifies two challenges to AI regulation that arise from the way the technology is classified. First, at the federal level, there has long been a lack of clarity about how to regulate medical software. A key distinction that funnels technology into the FDA regulatory apparatus is that between medical devices and non-devices, but the boundary between these categories remains fuzzy for most AI applications. Second, at the local level, projects are classified either as human subjects research or as ‘quality improvement’ (QI). AI applications designated as research are overseen by institutional review boards, whereas QI projects are not subject to the same degree of ethical scrutiny. Given these twin difficulties of sorting AI models into regulatory categories, this paper analyzes the work of experts involved in practical AI implementation, which it characterizes as “governance from below.” We find that, in the absence of established infrastructure for regulation and ethical oversight, experts develop pragmatic and creative solutions by drawing on neighboring fields, a process we refer to as “reflexive analogical reasoning.”