AI is now widely used in hiring, HR analytics, and workplace governance, yet what “algorithmic bias” means remains contested in U.S. federal AI policy. This definitional struggle matters because it structures state capacity: what agencies can monitor, what counts as evidence, and which remedies become thinkable and administratively actionable, especially for gendered harms in employment-related domains.
This paper analyzes how competing bias frames in U.S. federal AI governance reorganize the meaning of algorithmic bias and, as a result, reshape the practical toolkit for gender justice in labor markets. I ask: what counts as “bias” in federal AI policy, and which harms become legible, measurable, and enforceable under different definitions? Drawing on Knowledge Governance and Civic Epistemologies, I treat bias definitions as civic-epistemic settlements that authorize particular forms of expertise, standards, and intervention while delegitimating others.
Empirically, I conduct a qualitative frame analysis of four anchor federal texts (2020–2025): EO 13960 (Trump, 2020), the Blueprint for an AI Bill of Rights (2022), EO 14110 (Biden, 2023), and EO 14319 (Trump, 2025). The analysis combines Entman’s framing functions (problem, cause, moral evaluation, remedy) with Snow & Benford’s diagnostic, prognostic, and motivational frames. I code policy clauses for BIAS_FRAME, FRAME_FUNCTION, the associated GOV_TOOL package (e.g., assessment, audit/monitoring, transparency/documentation, redress, standards/guidance, procurement conditions, viewpoint neutrality), and GENDER_REF.
Findings show a patterned shift. Civil-rights-oriented bias frames (in the Blueprint and EO 14110) activate toolkits aimed at addressing algorithmic discrimination, including audits, impact assessments, documentation, and redress. By contrast, ideological/viewpoint bias frames (EO 14319) prioritize procurement-based compliance and viewpoint neutrality, with minimal attention to gendered labor harms. Interpreted through Epistemologies of Ignorance and Data Feminism, these shifts illuminate how bias framings produce policy “silences” that narrow what counts as harm and who is positioned for protection, redistributing visibility and enforceability in federal AI policy.