Refinement Provenance Inference: Detecting LLM-Refined Training Prompts from Model Behavior
By: Bo Yin, Qi Li, Runpeng Yu, and more
Potential Business Impact:
Finds if AI learned from original or changed instructions.
Instruction tuning increasingly relies on LLM-based prompt refinement, where prompts in the training corpus are selectively rewritten by an external refiner to improve clarity and instruction alignment. This motivates an instance-level audit problem: for a fine-tuned model and a training prompt-response pair, can we infer whether the model was trained on the original prompt or its LLM-refined version within a mixed corpus? This matters for dataset governance and dispute resolution when training data are contested. However, it is non-trivial in practice: refined and raw instances are interleaved in the training corpus with unknown, source-dependent mixture ratios, making it harder to develop provenance methods that generalize across models and training setups. In this paper, we formalize this audit task as Refinement Provenance Inference (RPI) and show that prompt refinement yields stable, detectable shifts in teacher-forced token distributions, even when semantic differences are not obvious. Building on this phenomenon, we propose RePro, a logit-based provenance framework that fuses teacher-forced likelihood features with logit-ranking signals. During training, RePro learns a transferable representation via shadow fine-tuning, and uses a lightweight linear head to infer provenance on unseen victims without training-data access. Empirically, RePro consistently attains strong performance and transfers well across refiners, suggesting that it exploits refiner-agnostic distribution shifts rather than rewrite-style artifacts.
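The abstract describes fusing teacher-forced likelihood features with logit-ranking signals, then scoring provenance with a lightweight linear head. As a rough illustration of that idea (not the paper's actual RePro implementation), the sketch below assumes we already have, for each training pair, the model's logits at every gold-token position under teacher forcing; it summarizes them into a small feature vector (mean/min log-likelihood and mean gold-token rank) and passes it through a hypothetical logistic linear head. All function names, feature choices, and weights here are illustrative assumptions.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def teacher_forced_features(logits_seq, gold_ids):
    """Illustrative feature extractor (not the paper's exact features).

    logits_seq: per-position vocabulary logits under teacher forcing.
    gold_ids:   the gold token id at each position.
    Returns [mean log-likelihood, min log-likelihood, mean gold-token rank].
    """
    logps, ranks = [], []
    for logits, gold in zip(logits_seq, gold_ids):
        probs = softmax(logits)
        logps.append(math.log(probs[gold]))
        # rank of the gold token among all logits (0 = model's top-1 choice)
        ranks.append(sum(1 for x in logits if x > logits[gold]))
    n = len(gold_ids)
    return [sum(logps) / n, min(logps), sum(ranks) / n]

def linear_head(features, weights, bias):
    """Hypothetical linear provenance head: sigmoid of a weighted sum,
    read as the probability that the pair used the refined prompt."""
    z = sum(w * f for w, f in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Toy usage with a 3-token vocabulary and one position,
# where the gold token (id 0) is also the model's top-1 prediction.
feats = teacher_forced_features([[2.0, 1.0, 0.0]], [0])
score = linear_head(feats, weights=[1.0, 0.5, -0.2], bias=0.0)
```

In the paper's setting these features would be computed on a shadow-fine-tuned model to learn a transferable representation; the toy weights above stand in for that learned head purely for shape and interface illustration.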
Similar Papers
ProRefine: Inference-Time Prompt Refinement with Textual Feedback
Computation and Language
Makes AI agents work together better for tasks.
GenProve: Learning to Generate Text with Fine-Grained Provenance
Computation and Language
Helps AI show proof for its answers.
Retrieval-augmented Prompt Learning for Pre-trained Foundation Models
Computation and Language
Helps computers learn better from less data.