Beyond Frequency: The Role of Redundancy in Large Language Model Memorization
By: Jie Zhang, Qinghua Zhao, Chi-ho Lin, and more
Potential Business Impact:
Makes AI forget private stuff, not important facts.
Memorization in large language models poses critical risks for privacy and fairness as these systems scale to billions of parameters. While previous studies established correlations between memorization and factors such as token frequency and repetition patterns, we reveal distinct response patterns: increases in frequency have minimal impact on memorized samples (e.g., 0.09) while substantially affecting non-memorized samples (e.g., 0.25), a pattern consistent across model scales. Through counterfactual analysis, perturbing sample prefixes and quantifying perturbation strength through token positional changes, we demonstrate that redundancy correlates with memorization patterns. Our findings establish that about 79% of memorized samples are low-redundancy, that these low-redundancy samples exhibit 2-fold higher vulnerability than high-redundancy ones, and that consequently memorized samples drop by 0.6 under perturbation while non-memorized samples drop by only 0.01, indicating that less redundant content is both more memorable and more fragile. These findings suggest redundancy-guided approaches to data preprocessing that could reduce privacy risks and mitigate bias, helping ensure fairness in model deployments.
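To make the prefix-perturbation probe concrete, below is a minimal sketch of the idea, assuming a HuggingFace-style causal LM. The adjacent-token-swap perturbation and all function names (suffix_match_rate, perturb_prefix, memorization_drop) are illustrative assumptions, not the paper's exact implementation.

```python
# Sketch only: measures how well a model reproduces a suffix from a prefix,
# then re-measures after a small counterfactual perturbation of the prefix.
import random
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer


def suffix_match_rate(model, prefix_ids, suffix_ids):
    """Fraction of suffix tokens reproduced by greedy decoding from the prefix."""
    with torch.no_grad():
        out = model.generate(
            prefix_ids.unsqueeze(0),
            max_new_tokens=suffix_ids.shape[0],
            do_sample=False,
        )
    generated = out[0, prefix_ids.shape[0]:]
    n = min(generated.shape[0], suffix_ids.shape[0])
    if n == 0:
        return 0.0
    return (generated[:n] == suffix_ids[:n]).float().mean().item()


def perturb_prefix(prefix_ids, n_swaps=2, seed=0):
    """Counterfactual prefix: swap a few adjacent token positions.
    Perturbation strength can be read off from how many positions changed."""
    rng = random.Random(seed)
    ids = prefix_ids.clone()
    for _ in range(n_swaps):
        i = rng.randrange(len(ids) - 1)
        ids[i], ids[i + 1] = ids[i + 1].item(), ids[i].item()
    return ids


def memorization_drop(model, tokenizer, text, prefix_len=32, suffix_len=32):
    """Return (baseline reproduction rate, drop under prefix perturbation)."""
    ids = tokenizer(text, return_tensors="pt").input_ids[0]
    prefix = ids[:prefix_len]
    suffix = ids[prefix_len:prefix_len + suffix_len]
    base = suffix_match_rate(model, prefix, suffix)
    perturbed = suffix_match_rate(model, perturb_prefix(prefix), suffix)
    return base, base - perturbed


# Usage (model name is only an example):
# tok = AutoTokenizer.from_pretrained("EleutherAI/pythia-410m")
# lm = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-410m")
# base, drop = memorization_drop(lm, tok, training_sample_text)
```

In this sketch the difference between the baseline and perturbed reproduction rates stands in for the fragility measure discussed above: under the paper's findings, memorized low-redundancy samples would be expected to show the largest drops.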
Similar Papers
Memories Retrieved from Many Paths: A Multi-Prefix Framework for Robust Detection of Training Data Leakage in Large Language Models
Computation and Language
Finds when AI copies private information.
Assessing and Mitigating Data Memorization Risks in Fine-Tuned Large Language Models
Computation and Language
Keeps private info safe when computers learn.
Trade-offs in Data Memorization via Strong Data Processing Inequalities
Machine Learning (CS)
Protects private info when computers learn.