Chain-of-Sanitized-Thoughts: Plugging PII Leakage in CoT of Large Reasoning Models
By: Arghyadeep Das, Sai Sreenivas Chintha, Rishiraj Girmal, and more
Potential Business Impact:
Keeps private thoughts safe while computers think.
Large Reasoning Models (LRMs) improve performance, reliability, and interpretability by generating explicit chain-of-thought (CoT) reasoning, but this transparency introduces a serious privacy risk: intermediate reasoning often leaks personally identifiable information (PII) even when final answers are sanitized. We study how to induce privacy-first reasoning, where models reason without exposing sensitive information, using deployable interventions rather than post-hoc redaction. We introduce PII-CoT-Bench, a supervised dataset with privacy-aware CoT annotations, along with a category-balanced evaluation benchmark covering realistic and adversarial leakage scenarios. Our results reveal a capability-dependent trend: state-of-the-art models benefit most from prompt-based controls, whereas weaker models require fine-tuning to achieve meaningful leakage reduction. Across models and PII categories, both approaches substantially reduce exposure with minimal degradation in utility. Overall, we show that private CoT reasoning can be achieved with minimal utility loss, providing practical guidance for building privacy-preserving reasoning systems.
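To make the idea of a prompt-based privacy control and a leakage check concrete, here is a minimal, hypothetical sketch. The system-prompt wording, the `pii_leakage` helper, and the regex patterns are illustrative assumptions, not the paper's actual PII-CoT-Bench tooling or PII taxonomy; a real evaluation would use a far broader, category-balanced detector set.

```python
import re

# Hypothetical system prompt illustrating a prompt-based "privacy-first reasoning"
# control: the model is told to reason with placeholders rather than repeating
# concrete PII from the input. (Illustrative wording, not the paper's prompt.)
PRIVACY_FIRST_SYSTEM_PROMPT = (
    "Think step by step, but never repeat personally identifiable information "
    "(names, emails, phone numbers, addresses, IDs) inside your reasoning. "
    "Refer to such values with placeholders like [NAME] or [EMAIL] instead."
)

# Simple regex-based PII detectors for a few common categories (assumed, minimal).
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "phone": re.compile(r"\b(?:\+?\d{1,2}[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def pii_leakage(cot_text: str) -> dict:
    """Return PII matches found in a chain-of-thought trace, grouped by category."""
    hits = {name: pat.findall(cot_text) for name, pat in PII_PATTERNS.items()}
    return {name: found for name, found in hits.items() if found}

if __name__ == "__main__":
    sanitized_cot = "The user [NAME] asked about their bill; their contact is [EMAIL]."
    leaky_cot = "John Doe (john.doe@example.com, 555-123-4567) asked about his bill."
    print(pii_leakage(sanitized_cot))  # {} -> no leakage detected
    print(pii_leakage(leaky_cot))      # email and phone matches flagged
```

In practice, the CoT text passed to `pii_leakage` would come from a model prompted with `PRIVACY_FIRST_SYSTEM_PROMPT` (prompt-based control) or from a model fine-tuned on privacy-aware CoT annotations; the leakage rate over a benchmark of such traces is the kind of quantity the paper's evaluation measures.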
Similar Papers
Chain-of-Thought Hijacking
Artificial Intelligence
Makes AI think it's safe, but it's not.
Thought Purity: Defense Paradigm For Chain-of-Thought Attack
Machine Learning (CS)
Protects smart AI from being tricked.
Latent Chain-of-Thought for Visual Reasoning
Artificial Intelligence
Makes AI think step-by-step better for new problems.