This paper examines algorithmic inequality in AI-based labor systems through a qualitative, theory-driven literature review of empirical and conceptual research on AI in employment. I use the term “AI-based labor systems” to refer to predictive and classificatory tools deployed across the employment cycle. In hiring, these tools include automated resume screening and ranking, pre-employment assessments, and video-interview analytics. In worker profiling, they include employability and risk scoring as well as predictive systems that infer “fit,” reliability, or turnover risk. In workplace management and governance, they include productivity tracking, algorithmic scheduling, performance scoring, and automated or semi-automated disciplinary interventions.
Methodologically, the review combines a transparent search strategy and clear inclusion criteria with narrative and thematic synthesis grounded in sociological and intersectional theory. I searched Sociological Abstracts, Web of Science/SSCI, and Social Services Abstracts, using Google Scholar as a complementary resource, and focused on peer-reviewed journal articles published in English from 2015 through December 2025. Using Boolean keyword combinations across technology, labor/hiring, and inequality domains, I screened titles and abstracts and then full texts, yielding a core set of studies for qualitative thematic synthesis.
Findings are organized into three themes. Theme 1 shows how algorithmic hiring systems contribute to constructing an “ideal worker” through data and model assumptions, and how HR professionals and managers interact with algorithmic recommendations in ways that shape hiring outcomes. Theme 2 shows how profiling and algorithmic management extend classification beyond hiring, affecting access to employment services, work intensity, and job security through risk labels and continuous monitoring. Theme 3 shows how law, ethics, and audit frameworks govern these systems while often translating discrimination into measurable “risk” managed through audits, impact assessments, and thresholds.
The paper’s contribution is to specify a political-economy gap: AI-mediated assessment aligns with managerial imperatives to sort workers, manage “risk,” and standardize evaluation at scale, thereby reproducing marginalization and reinforcing power hierarchies even when these systems are framed as neutral or responsible.