When Should We Introduce Safety Interventions During Pretraining?

Published: January 11, 2026 | arXiv ID: 2601.07087v1

By: Dylan Sam, Sachin Goyal, Pratyush Maini, and more

Potential Business Impact:

Teaches AI to be safe from the start.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Ensuring the safety of language models in high-stakes settings remains a pressing challenge, as aligned behaviors are often brittle and easily undone by adversarial pressure or downstream finetuning. Prior work has shown that interventions applied during pretraining, such as rephrasing harmful content, can substantially improve the safety of the resulting models. In this paper, we study the fundamental question: "When during pretraining should safety interventions be introduced?" We keep the underlying data fixed and vary only the safety curriculum, i.e., the point at which these interventions are introduced: after 0%, 20%, or 60% of the pretraining token budget. We find that introducing interventions earlier generally yields more robust models with no increase in overrefusal rates, with the clearest benefits appearing after downstream, benign finetuning. We also see clear benefits in the steerability of models towards safer generations. Finally, we observe that earlier interventions reshape internal representations: linear probes more cleanly separate safe from harmful examples. Overall, these results argue for incorporating safety signals early in pretraining, producing models that are more robust to downstream finetuning and jailbreaking, and more reliable under both standard and safety-aware inference procedures.
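The curriculum design described in the abstract is easy to picture in code. Below is a minimal sketch, not the paper's actual pipeline: the stream names, the batch format, and the `switch_frac` parameter are assumptions for illustration, with `switch_frac` corresponding to the 0%, 20%, or 60% switch points the paper studies.

```python
# Minimal sketch of a timing-based safety curriculum for pretraining data.
# Hypothetical setup: `raw_stream` and `safe_stream` iterate over token
# batches from the original corpus and from a safety-intervened version of
# the same data (e.g., with harmful content rephrased).

from typing import Iterator, List


def curriculum_batches(
    raw_stream: Iterator[List[int]],
    safe_stream: Iterator[List[int]],
    total_tokens: int,
    switch_frac: float,
) -> Iterator[List[int]]:
    """Yield token batches, switching to the safety-intervened corpus
    once `switch_frac * total_tokens` tokens have been served."""
    served = 0
    switch_at = int(switch_frac * total_tokens)
    # Phase 1: unmodified data until the switch point.
    for batch in raw_stream:
        if served >= switch_at:
            break
        served += len(batch)
        yield batch
    # Phase 2: safety-intervened data for the rest of the token budget.
    for batch in safe_stream:
        if served >= total_tokens:
            break
        served += len(batch)
        yield batch
```

With `switch_frac=0.0`, the model sees safety-intervened data from the first token, matching the earliest curriculum in the paper; `0.2` and `0.6` delay the intervention accordingly while the underlying data stays fixed.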
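The linear-probe analysis can be sketched similarly. The helper below is an illustration, not the authors' code: `hidden_states`, `labels`, and `probe_accuracy` are hypothetical names, and a logistic-regression probe on layer activations is one standard way to measure how linearly separable safe and harmful examples are in a model's representation space.

```python
# Minimal linear-probe sketch, assuming `hidden_states` is an
# (n_examples, d_model) array of activations (e.g., mean-pooled over a
# layer) and `labels` marks each example as safe (0) or harmful (1).
# Higher held-out accuracy indicates a cleaner linear separation.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split


def probe_accuracy(hidden_states: np.ndarray, labels: np.ndarray) -> float:
    X_tr, X_te, y_tr, y_te = train_test_split(
        hidden_states, labels, test_size=0.2, random_state=0
    )
    probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return probe.score(X_te, y_te)  # held-out accuracy of the probe
```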

Country of Origin
πŸ‡ΊπŸ‡Έ United States

Page Count
13 pages

Category
Computer Science:
Machine Learning (CS)