You Only Anonymize What Is Not Intent-Relevant: Suppressing Non-Intent Privacy Evidence
By: Weihao Shen, Yaxin Xu, Shuang Li, and more
Potential Business Impact:
Keeps private details hidden while letting intent-relevant information show.
Anonymizing sensitive information in user text is essential for privacy, yet existing methods often apply uniform treatment across attributes, which can conflict with communicative intent and obscure necessary information. This is particularly problematic when personal attributes are integral to expressive or pragmatic goals. The central challenge lies in determining which attributes to protect, and to what extent, while preserving semantic and pragmatic functions. We propose IntentAnony, a utility-preserving anonymization approach that performs intent-conditioned exposure control. IntentAnony models pragmatic intent and constructs privacy inference evidence chains to capture how distributed cues support attribute inference. Conditioned on intent, it assigns each attribute an exposure budget and selectively suppresses non-intent inference pathways while preserving intent-relevant content, semantic structure, affective nuance, and interactional function. We evaluate IntentAnony using privacy inference success rates, text utility metrics, and human evaluation. The results show an approximately 30% improvement in the overall privacy–utility trade-off, with notably stronger usability of anonymized text compared to prior state-of-the-art methods. Our code is available at https://github.com/Nevaeh7/IntentAnony.
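To make the idea of intent-conditioned exposure control concrete, here is a minimal sketch, not the authors' implementation (their code is in the repository linked above). It assumes attributes have already been detected with textual evidence cues and assigned intent-relevance scores; the function names (`exposure_budget`, `anonymize`), the threshold, and the scoring scheme are all hypothetical illustrations:

```python
# Hypothetical sketch of intent-conditioned exposure control.
# Each detected attribute receives an exposure budget from its
# intent-relevance score; evidence cues for attributes whose
# budget falls below a threshold are redacted, while cues for
# intent-relevant attributes are left intact.

def exposure_budget(relevance: float, floor: float = 0.1) -> float:
    """Map an intent-relevance score in [0, 1] to an exposure budget."""
    return max(floor, relevance)

def anonymize(text: str,
              evidence: dict[str, list[str]],
              relevance: dict[str, float],
              threshold: float = 0.5) -> str:
    """Redact evidence cues of attributes whose budget is under threshold.

    evidence:  attribute -> list of cue substrings found in text
    relevance: attribute -> intent-relevance score in [0, 1]
    """
    out = text
    for attr, cues in evidence.items():
        if exposure_budget(relevance.get(attr, 0.0)) < threshold:
            for cue in cues:
                out = out.replace(cue, "[REDACTED]")
    return out

# Example: the occupation is integral to the request (shift-work
# advice), so it stays; the location is not, so it is suppressed.
text = "I'm a nurse in Boston looking for shift-work advice."
result = anonymize(
    text,
    evidence={"occupation": ["nurse"], "location": ["Boston"]},
    relevance={"occupation": 0.9, "location": 0.1},
)
```

Here `result` keeps "nurse" but replaces "Boston" with "[REDACTED]", illustrating the paper's core contrast with uniform anonymization, which would treat both attributes identically.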
Similar Papers
A False Sense of Privacy: Evaluating Textual Data Sanitization Beyond Surface-level Privacy Leakage
Cryptography and Security
Keeps private details hidden even after cleaning text.
Towards Better Attribute Inference Vulnerability Measures
Cryptography and Security
Protects private info while keeping data useful.
Anonymization and Information Loss
General Finance
Hides company secrets, but makes it harder to understand money news.