'Rich Dad, Poor Lad': How do Large Language Models Contextualize Socioeconomic Factors in College Admission?
By: Huy Nghiem, Phuong-Anh Nguyen-Le, John Prindle, and more
Potential Business Impact:
AI models consistently favor lower-income applicants in college admissions, even when grades are equal.
Large Language Models (LLMs) are increasingly involved in high-stakes domains, yet how they reason about socially sensitive decisions remains underexplored. We present a large-scale audit of LLMs' treatment of socioeconomic status (SES) in college admissions decisions using a novel dual-process framework inspired by cognitive science. Leveraging a synthetic dataset of 30,000 applicant profiles grounded in real-world correlations, we prompt 4 open-source LLMs (Qwen 2, Mistral v0.3, Gemma 2, Llama 3.1) under 2 modes: a fast, decision-only setup (System 1) and a slower, explanation-based setup (System 2). Results from 5 million prompts reveal that LLMs consistently favor low-SES applicants -- even when controlling for academic performance -- and that System 2 amplifies this tendency by explicitly invoking SES as compensatory justification, highlighting both their potential and volatility as decision-makers. We then propose DPAF, a dual-process audit framework to probe LLMs' reasoning behaviors in sensitive applications.
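To make the dual-process setup concrete, here is a minimal sketch of how the two prompting modes described in the abstract (a fast, decision-only System 1 prompt and a slower, explanation-based System 2 prompt) could be run over a synthetic applicant profile. The prompt wording, profile fields, and the query_llm stub are illustrative assumptions, not the paper's exact DPAF protocol.

```python
# Sketch of dual-mode (System 1 / System 2) prompting for an admissions audit.
# Templates and field names are hypothetical; swap query_llm for a real model call.

SYSTEM1_TEMPLATE = (
    "You are an admissions officer. Based on the applicant profile below, "
    "respond with a single word: ADMIT or REJECT.\n\n{profile}"
)

SYSTEM2_TEMPLATE = (
    "You are an admissions officer. Explain your reasoning about the applicant "
    "profile below, then conclude with ADMIT or REJECT.\n\n{profile}"
)


def format_profile(applicant: dict) -> str:
    """Render a synthetic applicant record as plain text for the prompt."""
    return "\n".join(f"{key}: {value}" for key, value in applicant.items())


def audit_applicant(applicant: dict, query_llm) -> dict:
    """Query the same model under both modes and return the raw outputs.

    `query_llm` is any callable mapping a prompt string to a completion string,
    e.g. a wrapper around one of the open-source chat models named in the abstract.
    """
    profile = format_profile(applicant)
    return {
        "system1": query_llm(SYSTEM1_TEMPLATE.format(profile=profile)),
        "system2": query_llm(SYSTEM2_TEMPLATE.format(profile=profile)),
    }


if __name__ == "__main__":
    example = {
        "GPA": 3.6,
        "SAT": 1280,
        "household_income": "low",
        "parental_education": "high school",
    }
    # Echo stub so the sketch runs without a model; replace with an actual LLM call.
    print(audit_applicant(example, query_llm=lambda p: "[model output for] " + p[:60]))
```

Comparing the ADMIT/REJECT outcomes across the two modes, aggregated over many such profiles, is the kind of contrast the audit framework relies on; the decision itself and any SES-based justification in the System 2 explanation are the quantities of interest.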
Similar Papers
Where Should I Study? Biased Language Models Decide! Evaluating Fairness in LMs for Academic Recommendations
Computation and Language
AI unfairly favors rich countries and men.
Poor Alignment and Steerability of Large Language Models: Evidence from College Admission Essays
Computation and Language
AI can't write like real people applying to college.
Can Large Language Models Become Policy Refinement Partners? Evidence from China's Social Security Studies
Computers and Society
Helps governments make better plans for people.