Score: 3

Simplify-This: A Comparative Analysis of Prompt-Based and Fine-Tuned LLMs

Published: January 9, 2026 | arXiv ID: 2601.05794v1

By: Eilam Cohen, Itamar Bul, Danielle Inbar, and more

Potential Business Impact:

Makes complex writing easier to understand.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large language models (LLMs) enable strong text generation, yet practitioners face a practical tradeoff between fine-tuning and prompt engineering. We introduce Simplify-This, a comparative study that evaluates both paradigms for text simplification with encoder-decoder LLMs across multiple benchmarks and a range of evaluation metrics. Fine-tuned models consistently deliver stronger structural simplification, whereas prompting often attains higher semantic-similarity scores but tends to copy the input. A human evaluation favors fine-tuned outputs overall. We release code, a cleaned derivative dataset used in our study, checkpoints of the fine-tuned models, and prompt templates to facilitate reproducibility and future work.
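
To make the comparison concrete, here is a minimal sketch (not the paper's released code) of how the two paradigms could be contrasted on a single sentence with Hugging Face transformers and scored with SARI via the evaluate library; the fine-tuned checkpoint name is a placeholder assumption, and the prompt wording is illustrative only.

```python
# Minimal sketch: prompted vs. fine-tuned encoder-decoder simplification,
# scored with SARI. Model names are placeholders, not the authors' checkpoints.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
import evaluate


def simplify(model_name: str, text: str, prompt: str = "") -> str:
    """Generate a simplification with an encoder-decoder model."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
    inputs = tokenizer(prompt + text, return_tensors="pt", truncation=True)
    output_ids = model.generate(**inputs, max_new_tokens=64, num_beams=4)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)


source = "The committee reached a consensus subsequent to protracted deliberations."
references = [["The committee agreed after long talks."]]

# Prompt-engineering paradigm: instruction-following model, no weight updates.
prompted = simplify(
    "google/flan-t5-base", source, prompt="Simplify the following sentence: "
)

# Fine-tuning paradigm: a checkpoint trained on simplification pairs
# (hypothetical name; substitute an actual released checkpoint).
finetuned = simplify("your-org/t5-base-finetuned-simplification", source)

# SARI rewards keeping, adding, and deleting the right tokens relative to
# both the source sentence and the references.
sari = evaluate.load("sari")
for name, pred in [("prompted", prompted), ("fine-tuned", finetuned)]:
    score = sari.compute(sources=[source], predictions=[pred], references=references)
    print(name, score["sari"])
```

A setup like this mirrors the paper's framing: prompting leaves the weights untouched and may echo the input, while the fine-tuned model has been trained to restructure it, which a structural metric such as SARI can surface.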


Page Count
22 pages

Category
Computer Science:
Computation and Language