Mitigating Content Effects on Reasoning in Language Models through Fine-Grained Activation Steering
By: Marco Valentino, Geonhee Kim, Dhairya Dalal, and more
Potential Business Impact:
Makes AI think more logically and rely less on guesses.
Large language models (LLMs) frequently demonstrate reasoning limitations, often conflating content plausibility (i.e., material inference) with logical validity (i.e., formal inference). This can result in biased inferences, where plausible arguments are incorrectly deemed logically valid or vice versa. Mitigating this limitation is critical, as it undermines the trustworthiness and generalizability of LLMs in applications that demand rigorous logical consistency. This paper investigates the problem of mitigating content biases on formal reasoning through activation steering. Specifically, we curate a controlled syllogistic reasoning dataset to disentangle formal validity from content plausibility. After localizing the layers responsible for formal and material inference, we investigate contrastive activation steering methods for test-time interventions. An extensive empirical analysis on different LLMs reveals that contrastive steering consistently supports linear control over content biases. However, we observe that a static approach is insufficient for improving all the tested models. We therefore exploit the possibility of controlling content effects by dynamically determining the steering parameters via fine-grained conditional methods. We find that conditional steering is effective on unresponsive models, achieving up to 15% absolute improvement in formal reasoning accuracy with a newly introduced kNN-based method (K-CAST). Finally, additional experiments reveal that steering for content effects is robust to prompt variations, incurs minimal side effects on language modeling capabilities, and can partially generalize to out-of-distribution reasoning tasks. Practically, this paper demonstrates that activation-level interventions can offer a scalable strategy for enhancing the robustness of LLMs, contributing towards more systematic and unbiased formal reasoning.
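To give a concrete sense of the kind of test-time intervention described above, here is a minimal sketch of contrastive activation steering, assuming a decoder-only Hugging Face model. The model name, layer index, steering strength, and toy syllogisms are illustrative placeholders, not the paper's configuration or its curated dataset.

```python
# Minimal sketch of contrastive activation steering, assuming a decoder-only
# Hugging Face model. The model name, layer index, steering strength, and the
# toy syllogisms below are illustrative placeholders, not the paper's setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"   # placeholder; the paper evaluates several larger LLMs
LAYER = 6             # hypothetical layer localized for formal inference
ALPHA = 4.0           # hypothetical (static) steering strength

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def last_token_activation(prompt: str, layer: int) -> torch.Tensor:
    """Hidden state of the final token after the given transformer block."""
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs, output_hidden_states=True)
    # hidden_states[0] is the embedding output, so block `layer` is at layer + 1.
    return out.hidden_states[layer + 1][0, -1, :]

# Contrastive pair: a formally valid syllogism vs. an invalid one with the
# same surface content (toy examples standing in for the curated dataset).
valid = ["All A are B. All B are C. Therefore, all A are C."]
invalid = ["All A are B. All C are B. Therefore, all A are C."]

# The mean activation difference defines the steering direction.
steer = torch.stack([
    last_token_activation(v, LAYER) - last_token_activation(i, LAYER)
    for v, i in zip(valid, invalid)
]).mean(dim=0)

def steering_hook(module, inputs, output):
    # GPT-2 blocks return a tuple whose first element is the hidden states;
    # shift them along the contrastive direction at inference time.
    return (output[0] + ALPHA * steer,) + output[1:]

handle = model.transformer.h[LAYER].register_forward_hook(steering_hook)

prompt = ("All flowers need water. Roses need water. Therefore, roses are "
          "flowers. Is this argument logically valid? Answer:")
inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    generated = model.generate(**inputs, max_new_tokens=10)
print(tokenizer.decode(generated[0], skip_special_tokens=True))

handle.remove()  # restore the unsteered model
```

A fixed ALPHA corresponds to the static contrastive steering the abstract finds insufficient for some models; the conditional variants, including the kNN-based K-CAST, would instead set the steering parameters per input (presumably by comparing the input's activations against stored, labeled examples), steering only, and only as strongly as, the detected content bias warrants.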
Similar Papers
Mitigating Memorization in LLMs using Activation Steering
Computation and Language
Stops AI from remembering and sharing private info.
Patterns and Mechanisms of Contrastive Activation Engineering
Artificial Intelligence
Changes AI answers without retraining it.
Activation Steering for Bias Mitigation: An Interpretable Approach to Safer LLMs
Artificial Intelligence
Fixes AI to stop saying unfair or wrong things.