UM_FHS at the CLEF 2025 SimpleText Track: Comparing No-Context and Fine-Tune Approaches for GPT-4.1 Models in Sentence and Document-Level Text Simplification
By: Primoz Kocbek, Gregor Stiglic
Potential Business Impact:
Makes science papers easy to understand.
This work describes our submission to the CLEF 2025 SimpleText track Task 1, addressing both sentence- and document-level simplification of scientific texts. The methodology centered on using the gpt-4.1, gpt-4.1-mini, and gpt-4.1-nano models from OpenAI. Two distinct approaches were compared across models: a no-context method relying on prompt engineering and a fine-tuned (FT) method. The gpt-4.1-mini model with no context demonstrated robust performance at both levels of simplification, while the fine-tuned models showed mixed results, highlighting the complexities of simplifying text at different granularities; notably, gpt-4.1-nano-ft stood out at document-level simplification in one case.
Similar Papers
LLM-Guided Planning and Summary-Based Scientific Text Simplification: DS@GT at CLEF 2025 SimpleText
Computation and Language
Makes science papers easy to understand.
Plain language adaptations of biomedical text using LLMs: Comparison of evaluation metrics
Computation and Language
Makes doctor's notes easy for anyone to read.
XtraGPT: Context-Aware and Controllable Academic Paper Revision via Human-AI Collaboration
Computation and Language
Helps scientists write better research papers.