Social scientists have long recognized the problem of deductive reification. In order to generate theories about the world, we create ideal-typical conceptualizations of social phenomena. We then collect data to identify instances of those phenomena and observe the manner in which they operate. The problem of deductive reification is that we falsely attribute concreteness to our abstract concepts. We overlook the ontological gaps and biases in the data generating processes we use to validate our theories. In the emerging world of machine learning, this problem takes on immense magnitude. Those biased, incomplete data train algorithms that distribute news, grant access to credit, allocate all manner of resources, and soon might make life-or-death decisions for autonomous vehicles and in military combat.
In machine learning, inductive reification occurs when emergent data structures are made explicit in the process of deriving ontologies from large observational datasets. This paper exploits the multi-vocality of the term reification, in its deductive and inductive modalities, to begin bridging the cavernous gaps between social and computer scientists by invoking a shared term with different valences in each practice. In social science, reification is a problem that reproduces biases; in computer science, it solves problems by mapping context onto chaos. I propose that this shared label can identify opportunities for intervention, where important ethical problems may be identified and even resolved. Reifying moments can bring together social scientists who understand the biases in the processes that generate training data with the computer scientists who use those data to develop machine learning systems.