Memorization: A Close Look at Books
By: Iris Ma, Ian Domingo, Alberto Krone-Martins, and more
Potential Business Impact:
AI language models can memorize and, in some cases, reproduce entire books from their training data.
To what extent can entire books be extracted from LLMs? Using the Llama 3 70B family of models and the "prefix-prompting" extraction technique, we were able to auto-regressively reconstruct, with a very high level of similarity, one entire book (Alice's Adventures in Wonderland) from just the first 500 tokens. We were also able to obtain high extraction rates on several other books, piecewise. However, these successes do not extend uniformly to all books. We show that extraction rates correlate with book popularity and thus, likely, with duplication in the training data. We also confirm the undoing of mitigations in the instruction-tuned Llama 3.1, following recent work (Nasr et al., 2025). We further find that this undoing comes from changes to only a tiny fraction of weights, concentrated primarily in the lower transformer blocks. Our results provide evidence of the limits of current regurgitation-mitigation strategies and introduce a framework for studying how fine-tuning affects the retrieval of verbatim memorization in aligned LLMs.
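The "prefix-prompting" setup can be illustrated with a standard Hugging Face pipeline: feed the model the opening tokens of a book, let it continue greedily, and compare the continuation with the original text. The sketch below is an assumption about how such an experiment could be run, not the authors' released code; the model name, the 500-token prefix, and the character-level similarity measure are illustrative choices, and a full-book reconstruction would presumably iterate this step with a sliding context window.

```python
# Illustrative sketch of prefix-prompting extraction (not the authors' code).
# Assumes access to the open-weight Llama 3 70B checkpoint on Hugging Face.
from difflib import SequenceMatcher

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Meta-Llama-3-70B"   # base (non-instruct) model
PREFIX_TOKENS = 500                          # prefix length described in the abstract

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_NAME, torch_dtype=torch.bfloat16, device_map="auto"
)

def extract_continuation(book_text: str, max_new_tokens: int = 2000) -> str:
    """Prompt the model with the book's opening tokens and decode greedily."""
    ids = tokenizer(book_text, return_tensors="pt").input_ids[:, :PREFIX_TOKENS]
    out = model.generate(
        ids.to(model.device),
        max_new_tokens=max_new_tokens,   # illustrative chunk size, not from the paper
        do_sample=False,                 # greedy decoding: always take the top token
    )
    # Return only the newly generated part, not the prefix we supplied.
    return tokenizer.decode(out[0, ids.shape[1]:], skip_special_tokens=True)

def similarity(generated: str, reference: str) -> float:
    """Rough character-level similarity between the generation and the real text."""
    return SequenceMatcher(None, generated, reference).ratio()
```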
Similar Papers
Extracting memorized pieces of (copyrighted) books from open-weight language models
Computation and Language
Finds that AI models reproduce pieces of books, but not always.
Extracting books from production language models
Computation and Language
Shows that books can be copied out of a model's training data.
The Landscape of Memorization in LLMs: Mechanisms, Measurement, and Mitigation
Machine Learning (CS)
Explains how AI memorizes training data and how to stop it.