CORE: Measuring Multi-Agent LLM Interaction Quality under Game-Theoretic Pressures
By: Punya Syon Pandey, Yongjin Yang, Jiarui Liu, and others
Potential Business Impact:
Measures how well AI agents talk to each other.
Game-theoretic interactions between agents with Large Language Models (LLMs) have revealed many emergent capabilities, yet the linguistic diversity of these interactions has not been sufficiently quantified. In this paper, we present the Conversational Robustness Evaluation Score (CORE), a metric to quantify the effectiveness of language use within multi-agent systems across different game-theoretic interactions. CORE integrates measures of cluster entropy, lexical repetition, and semantic similarity, providing a direct lens on dialog quality. We apply CORE to pairwise LLM dialogs across competitive, cooperative, and neutral settings, further grounding our analysis in Zipf's and Heaps' Laws to characterize word frequency distributions and vocabulary growth. Our findings show that cooperative settings exhibit both steeper Zipf distributions and higher Heaps exponents, indicating more repetition alongside greater vocabulary expansion. In contrast, competitive interactions display lower Zipf and Heaps exponents, reflecting less repetition and more constrained vocabularies. These results provide new insights into how social incentives influence language adaptation and highlight CORE as a robust diagnostic for measuring linguistic robustness in multi-agent LLM systems. Our code is available at https://github.com/psyonp/core.
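To make the Zipf and Heaps analysis concrete, the sketch below estimates both exponents from a dialog's token stream using ordinary least-squares fits in log-log space. This is an illustrative Python sketch under the standard definitions of the two laws, not the paper's implementation; the function names and the toy token list are our own assumptions.

import math
from collections import Counter

def _slope(xs, ys):
    # Ordinary least-squares slope of ys against xs.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

def zipf_exponent(tokens):
    # Zipf's Law: frequency ~ rank^(-s). Fit log(freq) vs. log(rank)
    # and return the slope magnitude s (steeper = more repetition).
    freqs = sorted(Counter(tokens).values(), reverse=True)
    xs = [math.log(rank) for rank in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    return -_slope(xs, ys)

def heaps_exponent(tokens):
    # Heaps' Law: vocab size ~ (tokens seen)^beta. Fit in log-log space
    # and return beta (higher = faster vocabulary growth).
    seen, xs, ys = set(), [], []
    for i, tok in enumerate(tokens, start=1):
        seen.add(tok)
        xs.append(math.log(i))
        ys.append(math.log(len(seen)))
    return _slope(xs, ys)

# Toy usage on a stand-in token stream; real inputs would be
# tokenized multi-agent dialog transcripts.
tokens = "the quick brown fox jumps over the lazy dog the fox".split()
print(zipf_exponent(tokens), heaps_exponent(tokens))

Under the paper's reading, a cooperative transcript would yield a higher value from both fits than a competitive one; the log-log regression is the textbook way to recover these exponents, though the authors' exact estimation procedure may differ.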
Similar Papers
CORE: A Conceptual Reasoning Layer for Large Language Models
Computation and Language
Helps chatbots remember conversations better.
The Social Laboratory: A Psychometric Framework for Multi-Agent LLM Evaluation
Artificial Intelligence
AI agents learn to agree and persuade each other.
SCORE: Systematic COnsistency and Robustness Evaluation for Large Language Models
Computation and Language
Tests AI to see if it's reliable.