Score: 2

Beyond Perplexity: A Lightweight Benchmark for Knowledge Retention in Supervised Fine-Tuning

Published: January 7, 2026 | arXiv ID: 2601.03505v1

By: Soheil Zibakhsh Shabgahi, Pedram Aghazadeh, Farinaz Koushanfar

Potential Business Impact:

Tests if AI truly learns facts, not just words.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Supervised Fine-Tuning (SFT) is a standard approach for injecting domain knowledge into Large Language Models (LLMs). However, relying on validation perplexity to monitor training is often insufficient, as it confounds stylistic mimicry with genuine factual internalization. To address this, we introduce the Knowledge Retention (KR) Test, a lightweight, corpus-grounded evaluation framework designed to distinguish factual learning from stylistic mimicry. KR-Test uses automatically generated contrastive examples to measure likelihood preferences for correct versus incorrect continuations, requiring no instruction tuning or generative decoding. We validate the framework's integrity through a "blind vs. oracle" baseline analysis. Furthermore, we demonstrate the diagnostic capabilities of KR-Test by analyzing the training dynamics of Low-Rank Adaptation (LoRA). By exposing the fine-grained dissociation between linguistic convergence and knowledge retention, KR-Test enhances the interpretability of fine-tuning dynamics.
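
The abstract describes scoring contrastive pairs by comparing the likelihood a model assigns to a correct versus an incorrect continuation, with no generative decoding. The sketch below is a minimal illustration of that general idea, not the authors' implementation: the model name ("gpt2"), the prompt, and the continuation pair are placeholders chosen only for demonstration.

# Minimal sketch of contrastive likelihood scoring, assuming a Hugging Face
# causal LM. Model, prompt, and continuations are illustrative placeholders.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder base model; the paper's models may differ
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def continuation_logprob(prompt: str, continuation: str) -> float:
    """Sum of token log-probabilities the model assigns to `continuation`
    following `prompt` (no sampling or generative decoding needed)."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    full_ids = tokenizer(prompt + continuation, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # Log-probabilities at each position for predicting the *next* token.
    log_probs = torch.log_softmax(logits[:, :-1, :], dim=-1)
    targets = full_ids[:, 1:]
    token_lp = log_probs.gather(2, targets.unsqueeze(-1)).squeeze(-1)
    # Keep only the positions corresponding to the continuation tokens.
    n_cont = full_ids.shape[1] - prompt_ids.shape[1]
    return token_lp[0, -n_cont:].sum().item()

# Hypothetical contrastive example: the fact counts as retained if the model
# prefers (assigns higher likelihood to) the correct continuation.
prompt = "The enzyme studied in the corpus is activated by"
correct, incorrect = " zinc ions.", " copper ions."
retained = continuation_logprob(prompt, correct) > continuation_logprob(prompt, incorrect)
print("knowledge retained:", retained)

Aggregating this preference over many automatically generated pairs would yield a retention score that can be tracked alongside validation perplexity during fine-tuning.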

Country of Origin
🇺🇸 United States

Repos / Data Links

Page Count
8 pages

Category
Computer Science:
Computation and Language