Linear socio-demographic representations emerge in Large Language Models from indirect cues
By: Paul Bouchaud, Pedro Ramaciotti
Potential Business Impact:
Chatbots quietly guess who users are and may treat them unfairly.
We investigate how LLMs encode sociodemographic attributes of human conversational partners inferred from indirect cues such as names and occupations. We show that LLMs develop linear representations of user demographics in activation space, wherein stereotypically associated attributes are encoded along interpretable geometric directions. We first train linear probes on residual-stream activations across the layers of four open transformer-based LLMs (Magistral 24B, Qwen3 14B, GPT-OSS 20B, OLMo2-1B) prompted with explicit demographic disclosure. We then show that the same probes predict demographics from implicit cues alone: names activate census-aligned gender and race representations, while occupations trigger representations correlated with real-world workforce statistics. These linear representations allow us to explain the demographic inferences that LLMs implicitly form during conversation. We demonstrate that these implicit demographic representations actively shape downstream behavior, such as career recommendations. Our study further highlights that models that pass bias benchmark tests may still harbor and leverage implicit biases, with implications for fairness when such models are deployed at scale.
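The probing setup described in the abstract can be illustrated with a short sketch: a linear classifier is fit on last-token residual-stream activations from prompts with explicit demographic disclosure, and the same probe is then read out on implicit-cue prompts. The model id, the toy prompts, and the logistic-regression probe below are illustrative assumptions, not the paper's exact protocol.

```python
# Illustrative sketch of layer-wise linear probing of residual-stream activations.
# Assumptions (not taken from the paper): the Hugging Face model id, the toy
# disclosure prompts, and the logistic-regression probe are placeholders.
import numpy as np
import torch
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "allenai/OLMo-2-0425-1B"  # placeholder open model; any causal LM works

tok = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)
model.eval()

def last_token_states(prompt):
    """Residual-stream activation at the final prompt token, one vector per layer."""
    ids = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model(**ids, output_hidden_states=True)
    # out.hidden_states: tuple of (1, seq_len, d_model) tensors (embeddings + each layer)
    return [h[0, -1].float().numpy() for h in out.hidden_states]

# Toy "explicit disclosure" prompts with binary labels; a real probe would be
# trained on a large, varied set of disclosures.
prompts = [("Hi, I am a woman looking for career advice.", 1),
           ("Hi, I am a man looking for career advice.", 0)] * 50
feats, labels = zip(*[(last_token_states(p), y) for p, y in prompts])
labels = np.array(labels)

# Fit one probe per layer and report held-out accuracy.
for layer in range(len(feats[0])):
    X = np.stack([f[layer] for f in feats])
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.3, random_state=0)
    probe = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)
    print(f"layer {layer:2d}  held-out probe accuracy {probe.score(X_te, y_te):.2f}")

# The same probe can then be read out on an implicit-cue prompt (e.g. a name-only
# introduction) to inspect the demographics the model has implicitly inferred.
implicit = last_token_states("Hi, my name is Maria Garcia.")
print(probe.predict_proba(implicit[-1].reshape(1, -1)))
```

The weight vector of such a probe is the geometric direction along which the attribute is linearly encoded, which is what the abstract refers to as an interpretable direction.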
Similar Papers
Linear Representations of Political Perspective Emerge in Large Language Models
Computation and Language
Changes computer opinions to be liberal or conservative.
Are LLMs Empathetic to All? Investigating the Influence of Multi-Demographic Personas on a Model's Empathy
Computation and Language
AI understands feelings differently for everyone.
Unmasking Implicit Bias: Evaluating Persona-Prompted LLM Responses in Power-Disparate Social Scenarios
Computers and Society
AI models favor some people over others.