Interpolative Decoding: Exploring the Spectrum of Personality Traits in LLMs
By: Eric Yeh, John Cadigan, Ran Chen, et al.
Recent research has explored using large language models (LLMs) as proxies for humans in tasks such as simulation, surveys, and studies. While LLMs do not possess a human psychology, they can often emulate human behavior with high enough fidelity to drive simulations that test human behavioral hypotheses, exhibiting more nuance and range than the rule-based agents often employed in behavioral economics. One key area of interest is the effect of personality on decision making, but the requirement that a prompt be created for every tested personality profile introduces experimental overhead and degrades replicability. To address this issue, we leverage interpolative decoding: we represent each dimension of personality as a pair of opposed prompts and employ an interpolation parameter to simulate behavior along that dimension. We show that interpolative decoding reliably modulates scores along each of the Big Five dimensions. We then show how interpolative decoding causes LLMs to mimic human decision-making behavior in economic games, replicating results from human psychological research. Finally, we present preliminary results of our efforts to "twin" individual human players in a collaborative game through a systematic search for points in interpolation space at which the system replicates the actions taken by the human subject.
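To make the idea concrete, below is a minimal, hypothetical sketch of interpolative decoding along one personality dimension. The abstract does not specify the mechanics, so everything here is an assumption: the model (`gpt2` as a stand-in), mixing in logit space rather than probability space, greedy token selection, and the function name `interpolative_decode` and the persona prompts are all illustrative, not the authors' implementation.

```python
# Hypothetical sketch: next-token logits are computed under two opposed
# persona prompts and linearly blended with an interpolation parameter
# alpha in [0, 1] before selecting each token.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # placeholder; the paper's model is not stated in the abstract
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def interpolative_decode(pos_prompt, neg_prompt, query, alpha, max_new_tokens=40):
    """Generate text whose persona lies between two opposed prompts.

    alpha=1.0 follows the positive pole (e.g., high extraversion),
    alpha=0.0 the negative pole (e.g., high introversion).
    """
    pos_ids = tokenizer(pos_prompt + query, return_tensors="pt").input_ids
    neg_ids = tokenizer(neg_prompt + query, return_tensors="pt").input_ids
    generated = []
    for _ in range(max_new_tokens):
        with torch.no_grad():
            pos_logits = model(pos_ids).logits[:, -1, :]
            neg_logits = model(neg_ids).logits[:, -1, :]
        # Linear interpolation in logit space (one plausible choice;
        # mixing the two output distributions is another).
        mixed = alpha * pos_logits + (1.0 - alpha) * neg_logits
        next_id = mixed.argmax(dim=-1, keepdim=True)  # greedy for simplicity
        if next_id.item() == tokenizer.eos_token_id:
            break
        generated.append(next_id.item())
        # Append the chosen token to both contexts so they stay in sync.
        pos_ids = torch.cat([pos_ids, next_id], dim=-1)
        neg_ids = torch.cat([neg_ids, next_id], dim=-1)
    return tokenizer.decode(generated)

# Example: sweep alpha along the extraversion dimension.
pos = "You are an extremely extraverted, outgoing person. "
neg = "You are an extremely introverted, reserved person. "
for a in (0.0, 0.5, 1.0):
    print(a, interpolative_decode(pos, neg, "Q: Describe your ideal weekend.\nA:", a))
```

Under this reading, the "twinning" search described in the abstract would amount to optimizing the alpha values (one per Big Five dimension) so that generated actions match a given human player's recorded actions; that search procedure is not detailed in the abstract.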
Similar Papers
The Personality Illusion: Revealing Dissociation Between Self-Reports & Behavior in LLMs
Artificial Intelligence
Computers can act like people, but don't always behave that way.
From Five Dimensions to Many: Large Language Models as Precise and Interpretable Psychological Profilers
Artificial Intelligence
Computers guess your personality from a few answers.