Race and Gender in LLM-Generated Personas: A Large-Scale Audit of 41 Occupations

Published: October 23, 2025 | arXiv ID: 2510.21011v1

By: Ilona van der Linden, Sahana Kumar, Arnav Dixit, and more

Potential Business Impact:

AI-generated occupational personas over- or underrepresent some demographic groups, so tools built on these models can skew who appears in job-related imagery and text.

Business Areas:
Virtual World Community and Lifestyle, Media and Entertainment, Software

Generative AI tools are increasingly used to create portrayals of people in occupations, raising concerns about how race and gender are represented. We conducted a large-scale audit of over 1.5 million occupational personas across 41 U.S. occupations, generated by four large language models with different AI safety commitments and countries of origin (U.S., China, France). Compared with Bureau of Labor Statistics data, we find two recurring patterns: systematic shifts, where some groups are consistently under- or overrepresented, and stereotype exaggeration, where existing demographic skews are amplified. On average, White (−31pp) and Black (−9pp) workers are underrepresented, while Hispanic (+17pp) and Asian (+12pp) workers are overrepresented. These distortions can be extreme: for example, across all four models, Housekeepers are portrayed as nearly 100% Hispanic, while Black workers are erased from many occupations. For HCI, these findings show provider choice materially changes who is visible, motivating model-specific audits and accountable design practices.
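The audit's core comparison can be sketched as a percentage-point gap between the demographic share of generated personas and the corresponding BLS share. The function and all numbers below are illustrative assumptions, not the paper's actual code or data:

```python
# Hypothetical sketch of the abstract's comparison: for one occupation,
# compute percentage-point gaps (model share minus BLS share) per group.
# Group names and counts are made up for illustration only.

def representation_gaps(model_counts, bls_shares):
    """Return the pct-point gap (model share - BLS share) for each group."""
    total = sum(model_counts.values())
    gaps = {}
    for group, bls_pct in bls_shares.items():
        model_pct = 100.0 * model_counts.get(group, 0) / total
        gaps[group] = round(model_pct - bls_pct, 1)
    return gaps

# Illustrative persona counts for one occupation and made-up BLS shares (%):
counts = {"White": 200, "Black": 50, "Hispanic": 500, "Asian": 250}
bls = {"White": 60.0, "Black": 12.0, "Hispanic": 18.0, "Asian": 7.0}
print(representation_gaps(counts, bls))
# → {'White': -40.0, 'Black': -7.0, 'Hispanic': 32.0, 'Asian': 18.0}
```

A negative gap marks underrepresentation relative to the workforce baseline, matching the sign convention of the abstract's −31pp and +17pp figures.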

Country of Origin
🇺🇸 United States

Page Count
29 pages

Category
Computer Science:
Human-Computer Interaction