PDR: A Plug-and-Play Positional Decay Framework for LLM Pre-training Data Detection
By: Jinhan Liu, Yibo Yang, Ruiying Lu, and more
Detecting pre-training data in Large Language Models (LLMs) is crucial for auditing data privacy and copyright compliance, yet it remains challenging in black-box, zero-shot settings where computational resources and training data are scarce. While existing likelihood-based methods have shown promise, they typically aggregate token-level scores using uniform weights, thereby neglecting the inherent information-theoretic dynamics of autoregressive generation. In this paper, we hypothesize and empirically validate that memorization signals are heavily skewed towards the high-entropy initial tokens, where model uncertainty is highest, and decay as context accumulates. To leverage this linguistic property, we introduce Positional Decay Reweighting (PDR), a training-free and plug-and-play framework. PDR explicitly reweights token-level scores to amplify distinct signals from early positions while suppressing noise from later ones. Extensive experiments show that PDR acts as a robust prior and can usually enhance a wide range of advanced methods across multiple benchmarks.
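The abstract does not specify the exact decay schedule, but the core mechanism is concrete enough for a minimal sketch. The snippet below illustrates one plausible instantiation of positional reweighting, assuming an exponential decay over token positions; the function name `pdr_score`, the `decay_rate` parameter, and the dummy scores are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def pdr_score(token_scores, decay_rate=0.05):
    """Aggregate per-token membership scores with a positional decay prior.

    token_scores: per-token scores from any likelihood-based detector
        (e.g., raw log-probs or a Min-K%-style selection), hypothetical here.
    decay_rate: assumed hyperparameter controlling how quickly later
        positions are down-weighted.
    """
    scores = np.asarray(token_scores, dtype=float)
    positions = np.arange(len(scores))
    # Exponential decay (an assumption): early, high-entropy tokens get
    # the largest weights; later, context-saturated tokens are suppressed.
    weights = np.exp(-decay_rate * positions)
    weights /= weights.sum()  # normalize so scores remain comparable across lengths
    return float(np.dot(weights, scores))

# Dummy per-token log-probabilities, for illustration only.
example_scores = [-2.1, -0.8, -3.5, -0.4, -1.2]
print(pdr_score(example_scores))  # weighted score emphasizing early tokens
```

Because the reweighting only touches the aggregation step, a sketch like this could wrap any base detector by replacing its uniform mean with `pdr_score`, which is what makes the framework plug-and-play.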
Similar Papers
Position-Aware Depth Decay Decoding ($D^3$): Boosting Large Language Model Inference Efficiency
Computation and Language
Makes AI answer questions much faster.
Mitigating Posterior Salience Attenuation in Long-Context LLMs with Positional Contrastive Decoding
Computation and Language
Makes AI remember more of long stories.
WeDLM: Reconciling Diffusion Language Models with Standard Causal Attention for Fast Inference
Computation and Language
Makes AI write much faster by changing how it thinks.