The adoption of artificial intelligence in science has often been framed as a linear sequence in which developers build foundation models and science actors adopt them to accelerate discovery and improve productivity. We argue that this “user versus developer” framing is inadequate. In practice, science actors navigate an interdependent AI ecosystem structured by platform dependencies, standardized tools, and shifting technological possibilities. What appears as “development” in scientific settings often consists of plugging into and recombining platformized components, such as application marketplaces and shared infrastructures, rather than building systems from scratch. These dependencies shape what science actors can do and how they justify AI as scientifically credible and institutionally acceptable.
We develop an alternative account centered on the hybrid space between use and development, where science actors undertake translation, alignment, and governance work under conditions of strategic ambiguity and uneven power. Within this space, hybrid science actors convert model capabilities into scientifically meaningful promises and institutionally defensible narratives. They also build evaluation and monitoring routines that define what “works,” often privileging commensurable metrics, benchmarks, and audit artifacts as authoritative evidence.
Navigating this ecosystem remakes expertise in scientific knowledge production. Authority shifts toward actors who can operationalize evaluation, manage platform interfaces, and produce defensible documentation, while domain experts are re-situated within new jurisdictional settlements over who can certify reliability and responsibility. We propose that expertise is reconfigured through two-sided legitimacy pressures (technical credibility versus scientific and institutional acceptability) and through power-laden ecosystem interdependencies that shape what counts as “responsible AI.” The paper advances propositions about when AI becomes durable scientific infrastructure and when it instead yields symbolic adoption, lock-in, and recurring legitimacy crises.