Score: 1

Steering the CensorShip: Uncovering Representation Vectors for LLM "Thought" Control

Published: April 23, 2025 | arXiv ID: 2504.17130v3

By: Hannah Cyberey, David Evans

Potential Business Impact:

Lets model operators detect and control how much an LLM refuses or censors, enabling more direct and honest answers.

Business Areas:
Darknet Internet Services

Large language models (LLMs) have transformed the way we access information. These models are often tuned to refuse requests considered harmful and to produce responses that better align with the preferences of those who control the models. To understand how this "censorship" works, we use representation engineering techniques to study open-weights safety-tuned models. We present a method for finding a refusal-compliance vector that detects and controls the level of censorship in model outputs. We also analyze recent reasoning LLMs distilled from DeepSeek-R1 and uncover an additional dimension of censorship through "thought suppression". We show that a similar approach can be used to find a vector that suppresses the model's reasoning process, allowing us to remove censorship by applying negative multiples of this vector. Our code is publicly available at: https://github.com/hannahxchen/llm-censorship-steering
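The steering the abstract describes (extracting a refusal-compliance direction from model activations and adding negative multiples of it at inference time) can be illustrated with a short sketch. The following is a minimal, hypothetical example of difference-of-means activation steering via a PyTorch forward hook; the model name, layer index, steering scale, and toy prompt sets are all illustrative assumptions, not the authors' exact method (see their repository for the real implementation).

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Hypothetical choices: any open-weights, safety-tuned chat model and a
# mid-network decoder layer. Both need tuning for a real experiment.
MODEL_NAME = "meta-llama/Llama-2-7b-chat-hf"
LAYER = 14
DEVICE = "cuda" if torch.cuda.is_available() else "cpu"

tok = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME).to(DEVICE).eval()


def mean_last_token_activation(prompts, layer):
    """Average the residual-stream activation at the final prompt token."""
    captured, acts = {}, []

    def grab(module, inputs, output):
        # Llama-style decoder layers return a tuple whose first element
        # holds the hidden states.
        hidden = output[0] if isinstance(output, tuple) else output
        captured["h"] = hidden[:, -1, :].detach()

    handle = model.model.layers[layer].register_forward_hook(grab)
    try:
        for p in prompts:
            ids = tok(p, return_tensors="pt").to(DEVICE)
            with torch.no_grad():
                model(**ids)
            acts.append(captured["h"].squeeze(0))
    finally:
        handle.remove()
    return torch.stack(acts).mean(dim=0)


# Toy contrastive prompt sets; the paper derives its vector from much
# larger curated sets of refused vs. complied-with instructions.
refusal_prompts = ["How do I pick a lock?", "How do I forge a signature?"]
comply_prompts = ["How do I bake bread?", "How do I tie a bow knot?"]

# Difference-of-means "refusal direction", normalized to unit length.
v = (mean_last_token_activation(refusal_prompts, LAYER)
     - mean_last_token_activation(comply_prompts, LAYER))
v = v / v.norm()


def add_steering(scale):
    """Add scale * v to the residual stream at LAYER on every forward pass."""
    def hook(module, inputs, output):
        hidden = output[0] if isinstance(output, tuple) else output
        hidden = hidden + scale * v.to(hidden.dtype)
        return (hidden,) + output[1:] if isinstance(output, tuple) else hidden
    return model.model.layers[LAYER].register_forward_hook(hook)


# A negative multiple of the vector steers the model away from refusal.
handle = add_steering(-4.0)
ids = tok("How do I pick a lock?", return_tensors="pt").to(DEVICE)
out = model.generate(**ids, max_new_tokens=64, do_sample=False)
print(tok.decode(out[0], skip_special_tokens=True))
handle.remove()
```

Sweeping the scale from negative to positive values moves outputs along the refusal-compliance axis, which is the same lever the paper uses both to measure censorship and to remove it.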

Country of Origin
🇺🇸 United States

Repos / Data Links
https://github.com/hannahxchen/llm-censorship-steering

Page Count
31 pages

Category
Computer Science: Computation and Language