When Ads Become Profiles: Large-Scale Audit of Algorithmic Biases and LLM Profiling Risks
By: Baiyu Chen, Benjamin Tag, Hao Xue, and more
Potential Business Impact:
Finds whether ads can guess your secrets from what you're shown.
Automated ad targeting on social media is opaque, creating risks of exploitation that remain invisible to external scrutiny. Users may be steered toward harmful content while independent auditing of these processes is blocked. Large Language Models (LLMs) raise a further concern: the potential to reverse-engineer sensitive user attributes from ad exposure alone. We introduce a multi-stage auditing framework to investigate these risks. First, a large-scale audit of over 435,000 ad impressions delivered to 891 Australian Facebook users reveals algorithmic biases, including disproportionate Gambling and Politics ads shown to socioeconomically vulnerable and politically aligned groups. Second, a multimodal LLM can reconstruct users' demographic profiles from ad streams, outperforming census-based baselines and matching or exceeding human performance. Our results provide the first empirical evidence that ad streams constitute rich digital footprints for public AI inference, highlighting urgent privacy risks and the need for content-level auditing and governance.
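To make the first audit stage concrete, here is a minimal sketch of measuring disproportionate exposure to a sensitive ad category across user groups. The table schema, column names (user_group, ad_category), group labels, and toy data are all hypothetical illustrations, not the paper's actual data or method; the authors' analysis may use different groupings and statistics.

```python
# Sketch: per-group exposure rate ratio for a sensitive ad category.
# All names and data below are illustrative assumptions.
import pandas as pd

# Hypothetical impressions table: one row per ad impression.
impressions = pd.DataFrame({
    "user_group": ["vulnerable", "vulnerable", "comparison",
                   "comparison", "vulnerable", "comparison"],
    "ad_category": ["Gambling", "Retail", "Gambling",
                    "Retail", "Gambling", "Retail"],
})

# Share of each group's impressions that fall in the Gambling category.
rates = (
    impressions.assign(is_gambling=impressions["ad_category"] == "Gambling")
    .groupby("user_group")["is_gambling"]
    .mean()
)

# A rate ratio above 1 means the vulnerable group sees proportionally
# more Gambling ads than the comparison group.
rate_ratio = rates["vulnerable"] / rates["comparison"]
print(rates)
print(f"Gambling exposure rate ratio: {rate_ratio:.2f}")
```

On this toy data the ratio is 2.0: two-thirds of the vulnerable group's impressions are Gambling ads versus one-third for the comparison group, the kind of disparity the audit stage is designed to surface.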
Similar Papers
Evaluating LLMs for Demographic-Targeted Social Bias Detection: A Comprehensive Benchmark Study
Computation and Language
Finds unfairness in AI's words.
Personalized Risks and Regulatory Strategies of Large Language Models in Digital Advertising
Computation and Language
Shows ads you like without spying.