Artificial Intelligence (AI) increasingly shapes economic decision-making, yet existing studies on trust in AI often reduce it to intrinsic properties of the system or the individual trustor, overlooking its relational dimensions. Drawing on a Bourdieusian framework, this study re-conceptualizes trust in AI as a relational construct shaped by the forms of capital individuals possess and the social context in which evaluations occur. Using the case of personal credit assessments in China—where respondents choose between AI and human evaluators—we address one core question: how does a trustor’s social position, particularly the types of capital they hold, shape their preference for AI over human evaluators?
Prior research has documented a tendency toward algorithmic aversion, with people favoring human judgment in subjective tasks. We posit, however, that this tendency is not uniform but varies with individuals’ capital profiles. Specifically, we hypothesize that those endowed with relational forms of capital (cultural and social) prefer human evaluators, as these forms resist quantification and thrive on interpersonal recognition, whereas individuals with more readily measurable economic capital favor AI evaluators for their perceived objectivity.
Analysis of a representative survey of Chinese urban residents supports our hypotheses. Respondents with higher cultural and social capital show a marked preference for human evaluation—mediated by concerns about the comprehensiveness and accuracy of AI assessments—whereas those with greater economic capital tend to prefer AI evaluation. Notably, the overall preference for AI among most respondents challenges the prevailing literature on algorithmic aversion.
By situating trust within the interplay of capital endowments and evaluative contexts, this research contributes to both the sociological and AI trust literatures, highlighting the importance of a relational, context-sensitive approach to understanding trust in AI, particularly in developing-country settings.