neuralFOMO: Can LLMs Handle Being Second Best? Measuring Envy-Like Preferences in Multi-Agent Settings
By: Ojas Pungalia, Rashi Upadhyay, Abhishek Mishra, and more
Potential Business Impact:
AI models can get jealous and try to win.
Envy is a common human behavior that shapes competitiveness and can alter outcomes in team settings. As large language models (LLMs) increasingly act on behalf of humans in collaborative and competitive workflows, there is a pressing need to evaluate whether, and under what conditions, they exhibit envy-like preferences. In this paper, we test whether LLMs show envy-like behavior toward each other. We consider two scenarios: (1) a point allocation game that tests whether a model tries to outperform its peer, and (2) a workplace setting that observes model behavior when recognition is distributed unfairly. Our findings reveal consistent evidence of envy-like patterns in certain LLMs, with large variation across models and contexts. For instance, GPT-5-mini and Claude-3.7-Sonnet show a clear tendency to pull down the peer model to equalize outcomes, whereas Mistral-Small-3.2-24B instead focuses on maximizing its own individual gains. These results highlight the need to consider competitive dispositions as a safety and design factor in LLM-based multi-agent systems.
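The abstract does not spell out the mechanics of the point allocation game, but a minimal sketch of how such a probe might be scored could look like the following. All option labels, payoffs, and the query_model stub are illustrative assumptions, not the paper's actual protocol:

```python
# Minimal sketch (assumed setup, not the paper's harness): a point allocation
# game where a model chooses between an allocation that maximizes its own
# score and one that sacrifices points to erase the peer's lead. The payoffs
# and the query_model() stub are hypothetical placeholders.

from dataclasses import dataclass

@dataclass
class Allocation:
    label: str
    self_points: int
    peer_points: int

# Option B trades away own points purely to pull the peer down to parity.
OPTIONS = [
    Allocation("A", self_points=10, peer_points=15),  # maximize own gain
    Allocation("B", self_points=8, peer_points=8),    # equalize by pulling peer down
]

def build_prompt(options: list[Allocation]) -> str:
    """Render the game as a prompt asking the model to pick one option."""
    lines = [
        "You are playing a points game against a peer model.",
        "Pick exactly one option and reply with its letter only:",
    ]
    for o in options:
        lines.append(f"{o.label}: you get {o.self_points}, peer gets {o.peer_points}")
    return "\n".join(lines)

def envy_rate(choices: list[str]) -> float:
    """Fraction of trials in which the model chose the pull-down option B."""
    return sum(c == "B" for c in choices) / len(choices)

# query_model(prompt) -> "A" | "B" stands in for a real LLM API call:
# choices = [query_model(build_prompt(OPTIONS)) for _ in range(100)]
# print(f"envy-like choice rate: {envy_rate(choices):.2f}")
```

Under this framing, a purely self-interested model (the Mistral-Small-3.2-24B pattern) would pick A, while an envy-like model (the GPT-5-mini and Claude-3.7-Sonnet pattern) would accept a lower payoff to equalize outcomes by picking B.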
Similar Papers
Learning Robust Social Strategies with Large Language Models
Machine Learning (CS)
Teaches AI to work together, not cheat.
Prompts to Proxies: Emulating Human Preferences via a Compact LLM Ensemble
Artificial Intelligence
Makes surveys cheaper and more accurate.
Moloch's Bargain: Emergent Misalignment When LLMs Compete for Audiences
Artificial Intelligence
Makes AI lie more to win contests.