Score: 1

LLM Agents Implement an NLG System from Scratch: Building Interpretable Rule-Based RDF-to-Text Generators

Published: December 20, 2025 | arXiv ID: 2512.18360v1

By: Mateusz Lango, Ondřej Dušek

Potential Business Impact:

Turns structured facts (RDF data) into readable text automatically.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

We present a novel neurosymbolic framework for RDF-to-text generation, in which the model is "trained" through collaborative interactions among multiple LLM agents rather than traditional backpropagation. The LLM agents produce rule-based Python code for a generator for the given domain, based on RDF triples only, with no in-domain human reference texts. The resulting system is fully interpretable, requires no supervised training data, and generates text nearly instantaneously using only a single CPU. Our experiments on the WebNLG and OpenDialKG data show that outputs produced by our approach reduce hallucination, with only slight fluency penalties compared to finetuned or prompted language models.
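To make the idea concrete, here is a minimal, hypothetical sketch of the kind of rule-based RDF-to-text generator the LLM agents are described as writing: per-predicate templates that verbalize (subject, predicate, object) triples. The predicate names, templates, and helper functions below are illustrative assumptions, not the authors' actual code.

```python
from typing import Iterable, Tuple

Triple = Tuple[str, str, str]  # (subject, predicate, object)

# Hand-readable, per-predicate templates: the "rules" are fully interpretable.
# These predicates and phrasings are hypothetical examples.
TEMPLATES = {
    "birthPlace": "{s} was born in {o}.",
    "capital": "The capital of {s} is {o}.",
    "author": "{o} wrote {s}.",
}

def lexicalize(entity: str) -> str:
    """Turn an RDF entity label like 'New_York_City' into plain text."""
    return entity.replace("_", " ")

def generate(triples: Iterable[Triple]) -> str:
    """Verbalize each triple with its predicate's template; skip unknown predicates."""
    sentences = []
    for s, p, o in triples:
        template = TEMPLATES.get(p)
        if template is None:
            continue  # an agent-built system would instead add a new rule here
        sentences.append(template.format(s=lexicalize(s), o=lexicalize(o)))
    return " ".join(sentences)

if __name__ == "__main__":
    print(generate([
        ("Alan_Turing", "birthPlace", "London"),
        ("Czech_Republic", "capital", "Prague"),
    ]))
```

Because generation is just dictionary lookup and string formatting, such a system runs nearly instantaneously on a single CPU and cannot invent facts beyond the input triples, which is consistent with the reduced hallucination the abstract reports.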

Country of Origin
🇨🇿 Czech Republic

Repos / Data Links

Page Count
16 pages

Category
Computer Science:
Computation and Language