UM_FHS at the CLEF 2025 SimpleText Track: Comparing No-Context and Fine-Tune Approaches for GPT-4.1 Models in Sentence and Document-Level Text Simplification

Published: December 18, 2025 | arXiv ID: 2512.16541v1

By: Primož Kocbek, Gregor Štiglic

Potential Business Impact:

Makes science papers easy to understand.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

This work describes our submission to Task 1 of the CLEF 2025 SimpleText track, addressing both sentence- and document-level simplification of scientific texts. The methodology centered on OpenAI's gpt-4.1, gpt-4.1-mini, and gpt-4.1-nano models. Two distinct approaches were compared across models: a no-context method relying on prompt engineering and a fine-tuned (FT) method. The gpt-4.1-mini model with no context demonstrated robust performance at both levels of simplification, while the fine-tuned models showed mixed results, highlighting the complexities of simplifying text at different granularities; notably, gpt-4.1-nano-ft stood out at document-level simplification in one case.
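The no-context approach described above amounts to zero-shot prompting of the GPT-4.1 model family. A minimal sketch of what such a setup could look like is shown below; the prompt wording, helper name, and overall structure are illustrative assumptions, not the authors' actual prompts or code.

```python
# Hedged sketch of a no-context (zero-shot) simplification prompt in the
# spirit of the paper's no-context approach. The prompt text below is an
# assumption for illustration, not the prompt used in the submission.

def build_simplification_messages(text: str, level: str = "sentence") -> list[dict]:
    """Build a chat-style prompt asking a model to simplify scientific text.

    `level` selects sentence- or document-level simplification, mirroring
    the two granularities of SimpleText Task 1.
    """
    system = (
        "You are an expert at simplifying scientific text for a general "
        f"audience. Rewrite the given {level}-level scientific text in plain "
        "language while preserving its meaning."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": text},
    ]


if __name__ == "__main__":
    import os

    messages = build_simplification_messages(
        "Transcriptomic analyses revealed differential expression "
        "of apoptotic markers."
    )
    # Only call the API if a key is configured (requires `pip install openai`).
    if os.environ.get("OPENAI_API_KEY"):
        from openai import OpenAI

        client = OpenAI()
        resp = client.chat.completions.create(
            model="gpt-4.1-mini", messages=messages
        )
        print(resp.choices[0].message.content)
    else:
        print(messages[0]["content"])
```

The fine-tuned (FT) variants would instead use a model identifier returned by OpenAI's fine-tuning workflow in place of `gpt-4.1-mini`, typically with the same or a simpler prompt.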

Country of Origin
🇸🇮 Slovenia

Page Count
10 pages

Category
Computer Science:
Computation and Language