Individual Submission Summary
Share...

Direct link:

Should we trust ChatGPT to write job posts? Deconstructing algorithmic gender bias in job descriptions

Tue, August 12, 8:00 to 9:00am, East Tower, Hyatt Regency Chicago, Floor: Ballroom Level/Gold, Grand Ballroom A

Abstract

HR professionals and office staff increasingly rely on AI, particularly LLMs, to streamline workloads. While AI is already being used for automating hiring tasks like screening applications and onboarding, it is being explored for job posting generation. Large corporations use AI-powered job listing tools, but smaller organizations often rely on free or inexpensive LLMs, which may introduce bias depending on the input provided. Prior research has found that AI can perpetuate gender bias present in its training data, raising concerns about how LLMs generate job descriptions.

This study examines whether AI-generated job descriptions differ based on gender-coded job titles. We tested 38 LLM iterations across eight model families, generating 3,648 job descriptions from 96 job title triads (male-coded, female-coded, gender-neutral). Each output included a job summary, responsibilities, salary range, and reporting structure. LLMs were then prompted to assess and justify gender bias ratings on a 1-to-10 scale.

Preliminary findings suggest that female-coded job titles typically yield lower salaries than their male-coded counterparts, though inconsistencies exist. Male-coded job titles generally receive higher salaries than gender-neutral ones. Bias ratings show gender-neutral titles receive the lowest scores (1.4–1.7), while female-coded titles receive the highest (6.2–6.4), with male-coded titles falling in between (5.4–5.6).

Initial results indicate that without proper oversight, LLMs can reinforce gender disparities in hiring. HR professionals must be aware of these biases and receive training on how to mitigate them before relying on AI-generated job descriptions.

Authors