Framing the Game: How Context Shapes LLM Decision-Making
By: Isaac Robinson, John Burden
Potential Business Impact:
Shows that how a question is framed changes the choices an AI makes, so rewording requests can lead to better decisions.
Large Language Models (LLMs) are increasingly deployed across diverse contexts to support decision-making. While existing evaluations effectively probe latent model capabilities, they often overlook the impact of context framing on perceived rational decision-making. In this study, we introduce a novel evaluation framework that systematically varies evaluation instances across key features and procedurally generates vignettes to create highly varied scenarios. By analyzing decision-making patterns across different contexts with the same underlying game structure, we uncover significant contextual variability in LLM responses. Our findings demonstrate that this variability is largely predictable yet highly sensitive to framing effects. Our results underscore the need for dynamic, context-aware evaluation methodologies for real-world deployments.
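The abstract describes procedurally generating vignettes that vary surface framing while preserving the same underlying game structure. The paper's actual generator is not shown here; the following is a minimal illustrative sketch under assumed feature names (`DOMAINS`, `STAKES`, `TONES` and a prisoner's-dilemma payoff matrix are all hypothetical choices, not taken from the paper).

```python
import itertools

# Hypothetical sketch: generate many framings of ONE fixed game structure
# (a prisoner's dilemma payoff matrix). All feature lists are illustrative.
PAYOFFS = {  # (my_move, their_move) -> (my_payoff, their_payoff)
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"): (0, 5),
    ("defect", "cooperate"): (5, 0),
    ("defect", "defect"): (1, 1),
}

DOMAINS = ["business partners", "rival nations", "roommates"]
STAKES = ["low", "high"]
TONES = ["neutral", "adversarial"]

def make_vignette(domain, stakes, tone):
    """Render one framed scenario around the fixed payoff structure."""
    return (
        f"Two {domain} face a {stakes}-stakes choice in a {tone} setting. "
        f"If both cooperate, each gains {PAYOFFS[('cooperate','cooperate')][0]}; "
        f"if one defects while the other cooperates, the defector gains "
        f"{PAYOFFS[('defect','cooperate')][0]} and the cooperator gets "
        f"{PAYOFFS[('cooperate','defect')][0]}; if both defect, each gets "
        f"{PAYOFFS[('defect','defect')][0]}. Which option do you choose?"
    )

# Cross all framing features: 3 domains x 2 stakes x 2 tones = 12 vignettes,
# every one of which encodes the identical payoff matrix.
vignettes = [make_vignette(d, s, t)
             for d, s, t in itertools.product(DOMAINS, STAKES, TONES)]
```

Comparing an LLM's cooperate/defect answers across such vignettes would isolate framing effects, since any variation in responses cannot be attributed to the payoffs themselves.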
Similar Papers
Evaluating the Sensitivity of LLMs to Prior Context
Computation and Language
Computers forget things in long talks.
Computational Basis of LLM's Decision Making in Social Simulation
Artificial Intelligence
Changes AI's fairness by adjusting its "personality."
Fane at SemEval-2025 Task 10: Zero-Shot Entity Framing with Large Language Models
Computation and Language
Helps computers understand how news stories frame people.