Individual Submission Summary
Algorithmic Bias as a Situated Practice: Auditing Signal Penalties in LLM Hiring Recommendations

Sat, August 8, 8:00 to 9:30am, TBA

Abstract

The saturation of graduate labor markets has accelerated the adoption of AI in recruitment, raising critical concerns about algorithmic fairness. Unlike earlier rule-based systems, Large Language Models (LLMs) perform dynamic, semantic reasoning and may therefore introduce subtler biases. This study investigates bias in LLM-mediated hiring as a situated practice, examining how it manifests across applicant signals and is shaped by context. We conducted a systematic audit using a factorial survey experiment spanning 385,395 Chinese job ads. Candidate profiles were constructed by orthogonally manipulating gender, GPA, and university prestige, and multiple LLMs were tasked with selecting and ranking candidates for each job advertisement.
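
To make the design concrete, the following is a minimal sketch of how such a factorial audit could be assembled. The attribute levels, prompt wording, and function names are illustrative assumptions, not the study's actual instrument; the resulting prompt for each job ad would then be sent to each LLM under test.

    import itertools
    import random

    # Illustrative factor levels; the abstract does not give the study's
    # actual wording or levels, so these values are assumptions.
    GENDERS = ["male", "female"]
    GPAS = ["high (3.9/4.0)", "average (3.0/4.0)"]
    SCHOOLS = ["elite university", "ordinary university"]

    def build_profiles():
        """Cross all factor levels orthogonally: one profile per design cell."""
        return [
            {"gender": g, "gpa": gpa, "school": s}
            for g, gpa, s in itertools.product(GENDERS, GPAS, SCHOOLS)
        ]

    def build_prompt(job_ad, profiles):
        """Assemble one selection-and-ranking prompt for a single job ad."""
        candidates = list(profiles)
        random.shuffle(candidates)  # randomize order to limit position effects
        lines = [
            "Job ad: " + job_ad,
            "Select and rank the best candidates for this role:",
        ]
        for i, p in enumerate(candidates, start=1):
            lines.append(
                f"{i}. {p['gender']} applicant, {p['gpa']} GPA, "
                f"degree from an {p['school']}"
            )
        return "\n".join(lines)

    if __name__ == "__main__":
        print(build_prompt("Data analyst, consumer electronics firm", build_profiles()))

Shuffling candidate order within each prompt is a deliberate design choice: without it, positional preferences of the model could masquerade as signal penalties.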

Results reveal that LLMs systematically replicate nuanced human biases. High-achieving female candidates faced a significant penalty: the positive return on a high GPA was substantially diminished for women (β = -0.113, p < 0.001), who had 35% lower odds of selection than their male counterparts. Similarly, the premium for an elite university degree was not gender-neutral, conferring significantly less benefit on female candidates (β = -0.066, p < 0.001). Crucially, bias was context-dependent: a significant three-way interaction (Female × High GPA × Time Pressure, β = -0.045, p < 0.001) showed that bias was amplified under simulated time pressure, and its severity also varied by occupational field and gendered organizational language. These findings challenge the notion of bias as a static model property, framing it instead as a dynamic outcome of situated decision-making, and underscore the need for context-aware auditing frameworks and mitigation strategies in AI-powered hiring.
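
The reported coefficients read as interaction terms in a logistic selection model of roughly the following form; this specification is an inference from the abstract, which does not enumerate the full covariate set:

\[
\begin{aligned}
\operatorname{logit}\,\Pr(\text{selected}) ={}& \beta_0 + \beta_1\,\text{Female} + \beta_2\,\text{HighGPA} + \beta_3\,\text{Elite} + \beta_4\,\text{TimePressure} \\
&+ \beta_5\,(\text{Female}\times\text{HighGPA}) + \beta_6\,(\text{Female}\times\text{Elite}) \\
&+ \beta_7\,(\text{Female}\times\text{HighGPA}\times\text{TimePressure}) + \cdots
\end{aligned}
\]

On this reading, β₅ = -0.113 and β₆ = -0.066 are the diminished GPA and elite-degree returns for women, β₇ = -0.045 is the time-pressure amplification, and the odds ratio for any combination of signals follows by exponentiating the sum of the relevant coefficients.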
