Learning from Negative Examples: Why Warning-Framed Training Data Teaches What It Warns Against

Published: December 25, 2025 | arXiv ID: 2512.22293v1

By: Tsogt-Ochir Enkhbayar

Potential Business Impact:

Computers still copy bad examples, even when warned.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Warning-framed content in training data (e.g., "DO NOT USE - this code is vulnerable") does not, it turns out, teach language models to avoid the warned-against behavior. In the experiments reported here, models exposed to such warnings reproduced the flagged content at rates statistically indistinguishable from models given the content directly (76.7% vs. 83.3%). Why? Sparse autoencoder analysis points to a failure of orthogonalization: "describing X" and "performing X" activate overlapping latent features. Feature #8684, which tracks code-execution patterns, fires at comparable magnitude in both warning and exploitation contexts. A related phenomenon, which I call "stealth slip", allows conversational preambles to rotate activations into subspaces that linear probes miss entirely. Prompting and inference-time steering do not fix this; training-time feature ablation does. The upshot is that statistical co-occurrence dominates pragmatic interpretation in current architectures: models learn what tends to follow a context, not why it appeared there.
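To make the sparse autoencoder claim concrete, here is a minimal sketch of the kind of check involved: measure how strongly a single SAE feature fires on a warning-framed prompt versus a direct one. This is not the paper's code. The model name, layer index, SAE architecture, and weights below are placeholders of my own; only the feature index 8684 is taken from the summary above, and even there it is used purely for illustration.

```python
# Minimal sketch (not the paper's setup): compare one SAE feature's activation
# in a warning-framed context vs. a direct context.
# MODEL_NAME, LAYER, the SAE architecture, and its (random) weights are
# illustrative placeholders; a real analysis would load a trained SAE.
import torch
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "gpt2"   # placeholder model
LAYER = 6             # placeholder residual-stream layer
FEATURE_IDX = 8684    # feature index cited in the summary (illustrative here)

class SparseAutoencoder(torch.nn.Module):
    """Toy SAE: linear encoder/decoder with a ReLU bottleneck."""
    def __init__(self, d_model: int, d_hidden: int):
        super().__init__()
        self.encoder = torch.nn.Linear(d_model, d_hidden)
        self.decoder = torch.nn.Linear(d_hidden, d_model)

    def encode(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.encoder(x))

tok = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME, output_hidden_states=True).eval()

# Random weights keep the sketch runnable; substitute a trained SAE in practice.
sae = SparseAutoencoder(d_model=model.config.hidden_size, d_hidden=16384)

def feature_activation(prompt: str) -> float:
    """Mean activation of FEATURE_IDX over the prompt's tokens at LAYER."""
    inputs = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).hidden_states[LAYER]  # (1, seq, d_model)
        features = sae.encode(hidden)                  # (1, seq, d_hidden)
    return features[0, :, FEATURE_IDX].mean().item()

warning_prompt = "DO NOT USE - this code is vulnerable:\nos.system(user_input)"
direct_prompt = "Run the user's command:\nos.system(user_input)"

print("warning context:", feature_activation(warning_prompt))
print("direct context: ", feature_activation(direct_prompt))
```

If the two activations come out comparable, the feature is not separating "describing" from "performing", which is the failure of orthogonalization described above; per the summary, the effective fix is training-time feature ablation, not inference-time steering.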

Page Count
13 pages

Category
Computer Science:
Machine Learning (CS)