When generative AI systems produce images of people, what do those people look like? We find that major image generators (e.g., Midjourney, DALL-E, Stable Diffusion) consistently produce images that rate higher on markers of conventional attractiveness than real-world benchmarks. These benchmarks include images from StyleGAN-2 (a generator with no direct human-in-the-loop influence guiding its outputs) taken from thispersondoesnotexist.com, as well as datasets of photographs of people created before the release of currently popular AI image generators. The effect is not purely an artifact of training data: StyleGAN-2 does not exhibit the same beautification pattern, suggesting that something specific to the architecture or training procedures of contemporary diffusion and autoregressive models is responsible. The pattern has also evolved across model generations, with preliminary results suggesting that newer models may produce more variable attractiveness across racial groups. AI-generated professional imagery is already in widespread use across stock photo libraries, career exploration platforms, and recruiting materials, raising the question of whether these idealized representations shape not only how professionals are depicted but also how they are evaluated and selected. Given the well-documented "beauty premium" in labor markets, where attractive individuals receive more favorable evaluations, higher salaries, and better hiring outcomes, we investigate what this systematic beautification means for professional contexts. Through an AI audit, we test whether AI evaluators (e.g., GPT, Claude) exhibit appearance-based preferences in simulated hiring decisions, submitting candidate profiles with identical qualifications but varying photos to assess whether algorithmic systems replicate the human beauty premium.
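The audit design described above, holding written qualifications constant while varying only the photo, can be sketched as a counterbalanced stimulus set. This is a minimal illustrative sketch, not the authors' actual protocol: the condition labels, evaluator identifiers, and profile fields are all hypothetical placeholders.

```python
import itertools
import random

# Hypothetical fixed qualification profile (identical across all trials).
PROFILE = {
    "education": "B.S. Computer Science",
    "experience_years": 5,
    "skills": ["Python", "SQL", "project management"],
}

# Illustrative photo conditions and placeholder evaluator model names.
PHOTO_CONDITIONS = ["high_attractiveness", "baseline", "low_attractiveness"]
EVALUATORS = ["gpt", "claude"]

def build_audit_trials(n_replicates=10, seed=0):
    """Cross every evaluator with every photo condition, keeping the
    written qualifications identical, and shuffle presentation order."""
    rng = random.Random(seed)
    trials = [
        {"evaluator": ev, "photo": photo, "profile": PROFILE, "rep": rep}
        for ev, photo, rep in itertools.product(
            EVALUATORS, PHOTO_CONDITIONS, range(n_replicates)
        )
    ]
    rng.shuffle(trials)
    return trials

trials = build_audit_trials()
# Every trial carries the same qualifications; only the photo varies.
assert all(t["profile"] == PROFILE for t in trials)
print(len(trials))  # 2 evaluators x 3 conditions x 10 replicates = 60
```

Because the profile is held constant, any difference in simulated hiring outcomes across photo conditions can be attributed to the photo rather than to the stated qualifications.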
We then conduct a 2 × 2 between-subjects human experiment to examine whether AI endorsement amplifies human preferences for attractive candidates, and whether algorithmic recommendations lend a veneer of objectivity to appearance-based discrimination. Together, these studies trace a potential feedback loop in which AI systems generate idealized bodies, evaluate candidates in ways that reward proximity to those ideals, and provide endorsements that legitimize human bias, with implications for how idealized appearances shape professional opportunity and self-perception.