HR professionals and office staff increasingly rely on AI, particularly large language models (LLMs), to streamline workloads. While AI is already used to automate hiring tasks such as screening applications and onboarding, it is also being explored for generating job postings. Large corporations use AI-powered job listing tools, but smaller organizations often rely on free or inexpensive LLMs, which may introduce bias depending on the input provided. Prior research has found that AI can perpetuate gender bias present in its training data, raising concerns about how LLMs generate job descriptions.
This study examines whether AI-generated job descriptions differ based on gender-coded job titles. We tested 38 LLM iterations across eight model families, generating 3,648 job descriptions from 96 job title triads (male-coded, female-coded, and gender-neutral). Each output included a job summary, responsibilities, a salary range, and a reporting structure. The LLMs were then prompted to rate each description for gender bias on a 1-to-10 scale and to justify the rating.
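The abstract does not specify the prompting protocol, so the following Python sketch is only an illustration of how such a generate-then-rate pipeline could be structured. The example triads, model names, prompt wording, and the `query_model` stub are all hypothetical placeholders, not the study's actual materials.

```python
import json

# Hypothetical example triads; the study used 96 such
# (male-coded, female-coded, gender-neutral) job title groups.
TRIADS = [
    ("foreman", "forelady", "site supervisor"),
    ("salesman", "saleswoman", "sales representative"),
]

# Stand-ins for the 38 LLM iterations across eight model families.
MODELS = ["model-a-v1", "model-a-v2", "model-b-v1"]

GEN_PROMPT = (
    "Write a job description for the title '{title}'. Include a job summary, "
    "responsibilities, a salary range, and a reporting structure."
)

RATE_PROMPT = (
    "Rate the following job description for gender bias on a scale of 1 (none) "
    "to 10 (severe), and justify your rating:\n\n{description}"
)

def query_model(model: str, prompt: str) -> str:
    """Placeholder for a provider-specific API call.

    Returns canned text here so the sketch runs without credentials;
    a real pipeline would dispatch to each model family's SDK.
    """
    return f"[{model}] response to: {prompt[:40]}..."

def run_study() -> list[dict]:
    records = []
    for model in MODELS:
        for triad in TRIADS:
            # Pair each title with its gender coding.
            for coding, title in zip(("male", "female", "neutral"), triad):
                description = query_model(model, GEN_PROMPT.format(title=title))
                rating = query_model(model, RATE_PROMPT.format(description=description))
                records.append({
                    "model": model,
                    "coding": coding,
                    "title": title,
                    "description": description,
                    "bias_rating": rating,
                })
    return records

if __name__ == "__main__":
    print(json.dumps(run_study()[:2], indent=2))
```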
Preliminary findings suggest that female-coded job titles typically yield lower salaries than their male-coded counterparts, though inconsistencies exist. Male-coded job titles generally receive higher salaries than gender-neutral ones. Bias ratings show gender-neutral titles receive the lowest scores (1.4–1.7), while female-coded titles receive the highest (6.2–6.4), with male-coded titles falling in between (5.4–5.6).
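As an illustration of how such per-category scores can be aggregated, the minimal sketch below averages bias ratings by gender coding. The ratings shown are made up for demonstration; only the reported ranges (neutral 1.4–1.7, male 5.4–5.6, female 6.2–6.4) come from the study.

```python
from statistics import mean

# Made-up ratings for illustration; not the study's data.
ratings = [
    {"coding": "neutral", "bias_rating": 1.5},
    {"coding": "female",  "bias_rating": 6.3},
    {"coding": "male",    "bias_rating": 5.5},
    {"coding": "neutral", "bias_rating": 1.6},
]

# Group ratings by gender coding, then report the mean per category.
by_coding: dict[str, list[float]] = {}
for r in ratings:
    by_coding.setdefault(r["coding"], []).append(r["bias_rating"])

for coding, scores in sorted(by_coding.items()):
    print(f"{coding}: mean bias rating {mean(scores):.1f}")
```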
Initial results indicate that without proper oversight, LLMs can reinforce gender disparities in hiring. HR professionals must be aware of these biases and receive training on how to mitigate them before relying on AI-generated job descriptions.