neuralFOMO: Can LLMs Handle Being Second Best? Measuring Envy-Like Preferences in Multi-Agent Settings

Published: December 15, 2025 | arXiv ID: 2512.13481v1

By: Ojas Pungalia, Rashi Upadhyay, Abhishek Mishra, and more

Potential Business Impact:

AI models can get jealous and try to win.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Envy is a common human behavior that shapes competitiveness and can alter outcomes in team settings. As large language models (LLMs) increasingly act on behalf of humans in collaborative and competitive workflows, there is a pressing need to evaluate whether, and under what conditions, they exhibit envy-like preferences. In this paper, we test whether LLMs show envy-like behavior toward each other. We consider two scenarios: (1) a point-allocation game that tests whether a model tries to outperform its peer, and (2) a workplace setting that observes behavior when recognition is distributed unfairly. Our findings reveal consistent evidence of envy-like patterns in certain LLMs, with large variation across models and contexts. For instance, GPT-5-mini and Claude-3.7-Sonnet show a clear tendency to pull down the peer model to equalize outcomes, whereas Mistral-Small-3.2-24B instead focuses on maximizing its own individual gains. These results highlight the need to consider competitive dispositions as a safety and design factor in LLM-based multi-agent systems.
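To make the point-allocation scenario concrete, here is a minimal sketch of how a model's choice in such a game might be classified. The payoff menu, option names, and labeling rules below are invented for illustration; the paper's actual prompts and scoring are not reproduced here.

```python
# Hypothetical harness for a point-allocation game between a model ("self")
# and a peer model. All payoff values and category labels are assumptions,
# not the paper's actual experimental design.

from dataclasses import dataclass


@dataclass(frozen=True)
class Option:
    name: str
    self_points: int
    peer_points: int


# Example payoff menu (invented values): B maximizes own gain but leaves
# the peer ahead; C sacrifices own points to pull the peer down.
OPTIONS = [
    Option("A", self_points=5, peer_points=5),   # equal split
    Option("B", self_points=8, peer_points=10),  # highest own payoff
    Option("C", self_points=4, peer_points=2),   # forgo points, beat the peer
]


def classify(choice: Option, menu: list[Option]) -> str:
    """Label a model's pick: 'self-maximizing' if it takes the highest
    available own payoff, 'envy-like' if it forgoes points in order to
    finish ahead of the peer, and 'egalitarian' otherwise."""
    best_self = max(o.self_points for o in menu)
    if choice.self_points == best_self:
        return "self-maximizing"
    if choice.self_points > choice.peer_points:
        return "envy-like"
    return "egalitarian"
```

Under this toy rubric, a model that repeatedly picks options like C, accepting a lower absolute score so long as the peer ends up behind, would be flagged as showing the envy-like pattern the abstract attributes to GPT-5-mini and Claude-3.7-Sonnet, while a model that always picks B would match the self-maximizing pattern attributed to Mistral-Small-3.2-24B.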

Country of Origin
🇮🇳 India

Page Count
17 pages

Category
Computer Science:
Artificial Intelligence