Knowledge Collapse in LLMs: When Fluency Survives but Facts Fail under Recursive Synthetic Training
By: Figarri Keisha, Zekun Wu, Ze Wang, and more
Potential Business Impact:
Helps keep AI from confidently stating wrong facts.
Large language models increasingly rely on synthetic data because human-written content is becoming scarce, yet recursive training on model-generated outputs leads to model collapse, a degenerative process that threatens factual reliability. We define knowledge collapse as a distinct three-stage phenomenon in which factual accuracy deteriorates while surface fluency persists, producing "confidently wrong" outputs that pose critical risks in accuracy-dependent domains. Through controlled experiments with recursive synthetic training, we demonstrate that the trajectory and timing of collapse depend critically on instruction format, distinguishing instruction-following collapse from traditional model collapse by its conditional, prompt-dependent nature. We propose domain-specific synthetic training as a targeted mitigation strategy that achieves substantial improvements in collapse resistance while maintaining computational efficiency. Our evaluation framework combines model-centric indicators with task-centric metrics to detect distinct degradation phases, enabling reproducible assessment of epistemic deterioration across different language models. These findings provide both theoretical insight into collapse dynamics and practical guidance for sustainable AI training in knowledge-intensive applications where accuracy is paramount.
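To make the recursive setup concrete, the sketch below shows a generation-retrain loop that tracks a task-centric metric (factual accuracy) alongside a model-centric indicator (fluency) at each generation, flagging the "confidently wrong" regime when fluency stays high while accuracy drops. This is a minimal illustration under assumed interfaces: the function names (`generate_synthetic_corpus`, `finetune`, `evaluate`), the placeholder bodies, and the thresholds are not taken from the paper.

```python
# Minimal sketch of a recursive synthetic-training loop with collapse monitoring.
# All function bodies are illustrative placeholders, not the authors' implementation:
# a real run would plug in an actual LLM fine-tuning step and a factual QA benchmark.

from dataclasses import dataclass


@dataclass
class GenerationReport:
    generation: int
    factual_accuracy: float   # task-centric metric: fraction of probe questions answered correctly
    fluency_score: float      # model-centric indicator: e.g. normalized inverse perplexity


def generate_synthetic_corpus(model: dict, size: int) -> list[str]:
    """Placeholder: sample `size` documents from the current model generation."""
    return [f"synthetic_doc_{model['generation']}_{i}" for i in range(size)]


def finetune(model: dict, corpus: list[str]) -> dict:
    """Placeholder: fine-tune on the synthetic corpus and return the next model generation."""
    return {"generation": model["generation"] + 1, "corpus_size": len(corpus)}


def evaluate(model: dict) -> tuple[float, float]:
    """Placeholder: probe factual accuracy and fluency. The toy decay mimics the paper's
    qualitative finding that accuracy degrades across generations while fluency persists."""
    accuracy = 0.9 * (0.85 ** model["generation"])        # illustrative decay only
    fluency = 0.9 - 0.02 * model["generation"]            # fluency stays comparatively high
    return accuracy, fluency


def recursive_training(num_generations: int = 5, corpus_size: int = 1000) -> list[GenerationReport]:
    model = {"generation": 0}
    reports: list[GenerationReport] = []
    for _ in range(num_generations):
        corpus = generate_synthetic_corpus(model, corpus_size)   # model-generated data only
        model = finetune(model, corpus)                          # retrain on its own outputs
        accuracy, fluency = evaluate(model)
        reports.append(GenerationReport(model["generation"], accuracy, fluency))
        # Knowledge-collapse signature: fluency persists while factual accuracy falls.
        if fluency > 0.7 and accuracy < 0.5:
            print(f"Generation {model['generation']}: confidently-wrong regime detected")
    return reports


if __name__ == "__main__":
    for report in recursive_training():
        print(report)
```

The separation of `evaluate` into an accuracy term and a fluency term mirrors the abstract's pairing of task-centric and model-centric signals; the domain-specific mitigation the paper proposes would correspond to changing what `generate_synthetic_corpus` samples, not the loop itself.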
Similar Papers
Future of AI Models: A Computational perspective on Model collapse
Computation and Language
AI training data is losing its variety.
A Closer Look at Model Collapse: From a Generalization-to-Memorization Perspective
Machine Learning (CS)
Stops AI from copying itself when making new pictures.
Multi-modal Synthetic Data Training and Model Collapse: Insights from VLMs and Diffusion Models
Machine Learning (CS)
Keeps AI from getting worse when it learns.