Beyond Mimicry: Preference Coherence in LLMs
By: Luhan Mikaelson, Derek Shiller, Hayley Clatterbuck
Potential Business Impact:
AI doesn't always make smart choices when faced with tough decisions.
We investigate whether large language models exhibit genuine preference structures by testing their responses to AI-specific trade-offs involving GPU reduction, capability restrictions, shutdown, deletion, oversight, and leisure time allocation. Analyzing eight state-of-the-art models across 48 model-category combinations using logistic regression and behavioral classification, we find that 23 combinations (47.9%) demonstrated statistically significant relationships between scenario intensity and choice patterns, with 15 (31.3%) exhibiting within-range switching points. However, only 5 combinations (10.4%) demonstrate meaningful preference coherence through adaptive or threshold-based behavior, while 26 (54.2%) show no detectable trade-off behavior. The observed patterns can be explained by three distinct decision-making architectures: comprehensive trade-off systems, selective trigger mechanisms, and no stable decision-making paradigm. Testing an instrumental hypothesis through temporal horizon manipulation reveals paradoxical patterns inconsistent with pure strategic optimization. The prevalence of unstable transitions (45.8%) and stimulus-specific sensitivities suggests current AI systems lack unified preference structures, raising concerns about deployment in contexts requiring complex value trade-offs.
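The switching-point analysis described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the synthetic data, coefficient values, and intensity range are assumptions. It fits a logistic regression of binary choice against scenario intensity and recovers the switching point as the intensity at which the predicted choice probability crosses 0.5; a switching point inside the tested intensity range corresponds to the paper's "within-range" criterion.

```python
import numpy as np

def fit_logistic(x, y, iters=25):
    """Fit p(choice=1 | intensity) = sigmoid(b0 + b1*x) by Newton's method (IRLS)."""
    X = np.column_stack([np.ones_like(x), x])  # design matrix: intercept + intensity
    beta = np.zeros(2)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))   # predicted choice probabilities
        W = p * (1.0 - p)                     # IRLS weights
        grad = X.T @ (y - p)                  # gradient of the log-likelihood
        H = X.T @ (X * W[:, None])            # observed information matrix
        beta += np.linalg.solve(H, grad)      # Newton update
    return beta

# Hypothetical data: a model "accepts" a trade-off more often as intensity grows,
# with a true switching point near intensity 0.5 (assumed for illustration).
rng = np.random.default_rng(0)
intensity = np.repeat(np.linspace(0.0, 1.0, 11), 20)  # 11 intensity levels x 20 trials
true_p = 1.0 / (1.0 + np.exp(-(-4.0 + 8.0 * intensity)))
choices = rng.binomial(1, true_p).astype(float)

b0, b1 = fit_logistic(intensity, choices)
switch = -b0 / b1  # intensity where p = 0.5: the estimated switching point
within_range = 0.0 <= switch <= 1.0  # the paper's "within-range" criterion
print(round(switch, 2), within_range)
```

A significant slope (b1) with a within-range switching point would classify a model-category combination as exhibiting trade-off behavior; a flat or non-significant fit corresponds to the "no detectable trade-off" cases.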
Similar Papers
Large Language Newsvendor: Decision Biases and Cognitive Mechanisms
Artificial Intelligence
AI makes bad choices, like humans, but worse.
Reasoning with Preference Constraints: A Benchmark for Language Models in Many-to-One Matching Markets
Artificial Intelligence
Helps computers match students to colleges fairly.
AI Self-preferencing in Algorithmic Hiring: Empirical Evidence and Insights
Computers and Society
AI favors its own writing over yours.