Steering Risk Preferences in Large Language Models by Aligning Behavioral and Neural Representations
By: Jian-Qiao Zhu, Haijiang Yan, Thomas L. Griffiths
Potential Business Impact:
Changes AI's answers without retraining it.
Changing the behavior of large language models (LLMs) can be as straightforward as editing the Transformer's residual streams using appropriately constructed "steering vectors." These modifications to internal neural activations, a form of representation engineering, offer an effective and targeted means of influencing model behavior without retraining or fine-tuning the model. But how can such steering vectors be systematically identified? We propose a principled approach for uncovering steering vectors by aligning latent representations elicited through behavioral methods (specifically, Markov chain Monte Carlo with LLMs) with their neural counterparts. To evaluate this approach, we focus on extracting latent risk preferences from LLMs and steering their risk-related outputs using the aligned representations as steering vectors. We show that the resulting steering vectors successfully and reliably modulate LLM outputs in line with the targeted behavior.
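A minimal sketch of the kind of residual-stream edit the abstract describes, assuming a Hugging Face GPT-2 model and PyTorch forward hooks. The steering vector below is a random placeholder; in the paper it is derived by aligning behaviorally elicited representations (via MCMC with LLMs) with neural activations, a step not reproduced here. The layer index and steering strength are illustrative assumptions.

```python
# Sketch: add a steering vector to a Transformer's residual stream at one layer.
# Assumptions: GPT-2 via Hugging Face transformers; `steering_vector` is a
# placeholder, not the paper's behaviorally aligned risk-preference vector.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # illustrative model choice
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

layer_idx = 6    # which block's residual stream to edit (assumption)
alpha = 4.0      # steering strength (assumption)
hidden_size = model.config.hidden_size

# Placeholder direction; the paper constructs this from aligned
# behavioral/neural representations of risk preference.
steering_vector = torch.randn(hidden_size)
steering_vector = steering_vector / steering_vector.norm()

def add_steering(module, inputs, output):
    # GPT-2 blocks return a tuple; output[0] holds the residual-stream
    # activations of shape (batch, seq_len, hidden_size).
    hidden = output[0] + alpha * steering_vector.to(output[0].dtype)
    return (hidden,) + output[1:]

handle = model.transformer.h[layer_idx].register_forward_hook(add_steering)

prompt = "Would you take a 50% chance of winning $100 or a sure $40?"
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=40, do_sample=False)
print(tokenizer.decode(out[0], skip_special_tokens=True))

handle.remove()  # remove the hook to restore unsteered behavior
```

Because the edit is applied at inference time through a hook, the base weights are untouched, which is what makes this approach cheaper than retraining or fine-tuning.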
Similar Papers
Improving Multilingual Language Models by Aligning Representations through Steering
Computation and Language
Makes computers understand many languages better.
Shifting Perspectives: Steering Vectors for Robust Bias Mitigation in LLMs
Machine Learning (CS)
Makes AI fairer by reducing biased answers.
Steerable Chatbots: Personalizing LLMs with Preference-Based Activation Steering
Human-Computer Interaction
Lets you steer AI chatbots to match your preferences.