Steering Language Models Before They Speak: Logit-Level Interventions
By: Hyeseon An, Shinwoo Park, Hyundong Jin, and more
Potential Business Impact:
Guides AI writing to be more helpful and safer.
Steering LLMs is essential for specialized applications such as style-sensitive text rewriting, user-adaptive communication, and toxicity mitigation. Current steering methods, such as prompting-based and activation-based approaches, are widely used to guide model behavior. However, activation-based techniques require deep access to internal layers, while prompting-based steering often fails to provide consistent or fine-grained control. To address these limitations, we propose a training-free, inference-time logit intervention for controllable generation. Our approach uses a statistical token score table, derived from z-normalized log-odds over labeled corpora, to shift the decoding distribution. Empirical evaluations across three diverse datasets focusing on writing complexity, formality, and toxicity demonstrate that our method effectively steers output characteristics, confirming its broad applicability and task-agnostic nature. Our results show that statistically grounded logit steering can achieve large, consistent, and multi-task control gains: up to +47%p accuracy and a 50× F1 improvement.
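The abstract describes the method only at a high level. Below is a minimal sketch of what a z-normalized log-odds token score table and a logit-level intervention at decode time could look like. The function names (build_token_score_table, steered_generate), the additive smoothing, the scaling factor alpha, and the HuggingFace-style model/tokenizer interface are illustrative assumptions, not the authors' implementation.

```python
import math
from collections import Counter

import torch


def build_token_score_table(target_texts, other_texts, tokenizer, vocab_size, smoothing=1.0):
    """Per-token z-normalized log-odds of appearing in the target-style corpus.

    Hypothetical estimator for the paper's statistical score table; the exact
    smoothing and log-odds formulation used in the paper may differ.
    """
    def count_tokens(texts):
        counts = Counter()
        for text in texts:
            counts.update(tokenizer.encode(text))
        return counts

    tgt, oth = count_tokens(target_texts), count_tokens(other_texts)
    tgt_total, oth_total = sum(tgt.values()), sum(oth.values())

    scores = torch.zeros(vocab_size)
    for tok in range(vocab_size):
        p = (tgt.get(tok, 0) + smoothing) / (tgt_total + smoothing * vocab_size)
        q = (oth.get(tok, 0) + smoothing) / (oth_total + smoothing * vocab_size)
        # Log-odds of the token under the target corpus minus the contrast corpus.
        scores[tok] = math.log(p / (1 - p)) - math.log(q / (1 - q))

    # z-normalize so the table is on a comparable scale across tasks.
    return (scores - scores.mean()) / scores.std()


@torch.no_grad()
def steered_generate(model, tokenizer, prompt, score_table, alpha=2.0, max_new_tokens=50):
    """Greedy decoding with the score table added to the next-token logits (training-free)."""
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    for _ in range(max_new_tokens):
        logits = model(ids).logits[:, -1, :]        # next-token logits
        logits = logits + alpha * score_table       # shift the decoding distribution
        next_id = logits.argmax(dim=-1, keepdim=True)
        ids = torch.cat([ids, next_id], dim=-1)
        if next_id.item() == tokenizer.eos_token_id:
            break
    return tokenizer.decode(ids[0], skip_special_tokens=True)
```

Because the intervention only adds a precomputed vector to the logits at each step, it requires no gradient updates and no access to intermediate activations; the scaling factor alpha (an assumed knob here) controls how strongly generation is pushed toward the target style.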
Similar Papers
Steering LLMs for Formal Theorem Proving
Machine Learning (CS)
Helps computers write math proofs better.
ExpertSteer: Intervening in LLMs through Expert Knowledge
Computation and Language
Guides AI to act as you want.