Automation has become a central rationalization agenda in the technology sector, yet most automated systems and AI products remain dependent on human expertise: people encode complex information into standardized training data through a wide range of “data work.” How do organizations sustain automation as a credible achievement while relying on human expertise as a core innovation input? This paper argues that the key issue is not simply that human expertise is obscured, but that human inputs are integrated into, and reclassified through organizational and labor arrangements as, properties of the automated system. The analysis draws on 20 weeks of fieldwork and interviews conducted during the summers of 2024 and 2025, including work shadowing at a data annotation company in China and an AI startup in the United States. I show how tech firms reclassify human expertise as machine intelligence through everyday data production practices, a process of commensuration without a settled ontology. By design, the data pipeline circulates the same item through multiple rounds of annotation, cross-checking, and calibration across teams and locations. Human judgment is elicited at each pass, but repetition produces convergence on a “standard” judgment that circulates as impersonal data rather than being credited as interpretive work or expertise. This reclassification is further reinforced by imaginaries of technical superiority shared by engineers and founder–investor networks, who treat commensurated data outputs as evidence of autonomous capability. The findings contribute to organizational studies by showing how standards for scientific innovation stabilize under legitimacy pressures. They also advance our understanding of how science and expertise are organizationally produced, classified, and attributed in transnational sites of knowledge production.