Instruction Following by Boosting Attention of Large Language Models
By: Vitoria Guardieiro, Adam Stein, Avishree Khare, and others
Potential Business Impact:
Guides AI to follow instructions better.
Controlling the generation of large language models (LLMs) remains a central challenge to ensure their safe and reliable deployment. While prompt engineering and fine-tuning are common approaches, recent work has explored latent steering, a lightweight technique that alters LLM internal activations to guide generation. However, subsequent studies revealed latent steering's effectiveness to be limited, often underperforming simple instruction prompting. To address this limitation, we first establish a benchmark across diverse behaviors for standardized evaluation of steering techniques. Building on insights from this benchmark, we introduce Instruction Attention Boosting (InstABoost), a latent steering method that boosts the strength of instruction prompting by altering the model's attention during generation. InstABoost combines the strengths of existing approaches and is theoretically supported by prior work suggesting that in-context rule following in transformer-based models can be controlled by manipulating attention on instructions. Empirically, InstABoost demonstrates superior control success compared to both traditional prompting and latent steering.
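The abstract describes the core idea (increasing the model's attention on instruction tokens during generation) but not the exact formula. A minimal NumPy sketch of one plausible reading, assuming "boosting" means adding a bias to the attention logits at instruction positions before the softmax (the function name, bias scheme, and `alpha` parameter are illustrative assumptions, not the paper's stated method):

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def boost_instruction_attention(attn_logits, instruction_mask, alpha=1.5):
    """Hypothetical sketch: add a bias of `alpha` to the attention logits
    at instruction-token positions, shifting attention mass toward the
    instruction; the softmax renormalizes the result."""
    return softmax(attn_logits + alpha * instruction_mask)

# Toy example: 6 key positions, the first 3 are the instruction tokens.
logits = np.array([0.2, 0.1, 0.0, 0.5, 0.3, 0.4])
mask = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0])

base = softmax(logits)
boosted = boost_instruction_attention(logits, mask)
print(base[:3].sum(), boosted[:3].sum())  # attention on the instruction grows
```

Boosting in logit space (rather than rescaling post-softmax weights) keeps the output a valid probability distribution without a separate renormalization step.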
Similar Papers
Spotlight Your Instructions: Instruction-following with Dynamic Attention Steering
Machine Learning (CS)
Helps computers follow your instructions better.
Boosting Instruction Following at Scale
Artificial Intelligence
Makes AI follow instructions better, even many at once.
Steering Large Language Models for Machine Translation Personalization
Computation and Language
Makes computer translations sound more like a person.