Compositional Steering of Large Language Models with Steering Tokens
By: Gorjan Radevski, Kiril Gashteovski, Giwon Hong, and more
Potential Business Impact:
Teaches computers to do many things at once.
Deploying LLMs in real-world applications requires controllable output that satisfies multiple desiderata at the same time. While existing work extensively addresses LLM steering for a single behavior, compositional steering, i.e., steering LLMs simultaneously towards multiple behaviors, remains an underexplored problem. In this work, we propose compositional steering tokens for multi-behavior steering. We first embed individual behaviors, expressed as natural language instructions, into dedicated tokens via self-distillation. Contrary to most prior work, which operates in the activation space, our steering tokens live in the space of input tokens, enabling more effective zero-shot composition. We then train a dedicated composition token on pairs of behaviors and show that it successfully captures the notion of composition: it generalizes well to unseen compositions, including those with unseen behaviors as well as those with an unseen number of behaviors. Our experiments across different LLM architectures show that steering tokens lead to superior multi-behavior control compared to competing approaches (instructions, activation steering, and LoRA merging). Moreover, we show that steering tokens complement natural language instructions, with their combination resulting in further gains.
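The abstract describes the mechanism concretely enough to sketch at inference time: each behavior gets a learned embedding in the model's input-token space, and a shared composition token is added when several behaviors are combined. Below is a minimal PyTorch sketch of that setup, assuming a Hugging Face causal LM; the class and method names (SteeringTokens, prefix) are hypothetical, not the authors' code, and the self-distillation training of the token embeddings is omitted entirely.

```python
# Hypothetical sketch of input-space steering tokens; names and shapes are
# assumptions based on the abstract, not the paper's implementation.
# Requires: torch, transformers.
import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "gpt2"  # stand-in; the paper evaluates several LLM architectures
tok = AutoTokenizer.from_pretrained(MODEL)
lm = AutoModelForCausalLM.from_pretrained(MODEL)
dim = lm.get_input_embeddings().embedding_dim

class SteeringTokens(nn.Module):
    """One learned embedding per behavior, plus one shared composition token.
    In the paper these would be trained via self-distillation from the
    corresponding natural-language instructions (omitted here)."""
    def __init__(self, behaviors, dim):
        super().__init__()
        self.behavior = nn.ParameterDict(
            {b: nn.Parameter(torch.randn(1, dim) * 0.02) for b in behaviors}
        )
        self.composition = nn.Parameter(torch.randn(1, dim) * 0.02)

    def prefix(self, active):
        """Assemble the steering prefix for a set of behaviors; unseen
        combinations reuse the same composition token zero-shot."""
        parts = [self.behavior[b] for b in active]
        if len(parts) > 1:
            parts.append(self.composition)
        return torch.cat(parts, dim=0)  # (num_steering_tokens, dim)

steer = SteeringTokens(["formal", "concise"], dim)

prompt = tok("Explain steering tokens.", return_tensors="pt")
inp = lm.get_input_embeddings()(prompt.input_ids)          # (1, T, dim)
pre = steer.prefix(["formal", "concise"]).unsqueeze(0)     # (1, P, dim)
inputs_embeds = torch.cat([pre, inp], dim=1)               # prepend prefix
attn = torch.ones(inputs_embeds.shape[:2], dtype=torch.long)

out = lm.generate(inputs_embeds=inputs_embeds, attention_mask=attn,
                  max_new_tokens=40, pad_token_id=tok.eos_token_id)
print(tok.decode(out[0], skip_special_tokens=True))
```

The design point the abstract stresses is that these parameters live in the input-token space rather than the activation space, so a prefix for an unseen combination, or an unseen number, of behaviors can be assembled by simple concatenation at inference time.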
Similar Papers
Beyond Linear Steering: Unified Multi-Attribute Control for Language Models
Machine Learning (CS)
Teaches AI to do many things at once.
Steering Language Models in Multi-Token Generation: A Case Study on Tense and Aspect
Computation and Language
Teaches computers to use verb tenses correctly.
Improving Multilingual Language Models by Aligning Representations through Steering
Computation and Language
Makes computers understand many languages better.