Leveraging LLMs to support co-evolution between definitions and instances of textual DSLs

Published: December 7, 2025 | arXiv ID: 2512.06836v1

By: Weixing Zhang, Regina Hebig, Daniel Strüber

Potential Business Impact:

Keeps existing DSL code valid as the language's grammar evolves.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Software languages evolve over time for various reasons, such as the addition of new features. When a language's grammar definition evolves, textual instances that originally conformed to the grammar become outdated. For DSLs in a model-driven engineering context, a plethora of techniques exists to co-evolve models with an evolving metamodel. However, these techniques are not geared to support DSLs with a textual syntax: applying them to textual language definitions and instances may lose information from the original instances, such as comments and layout, which are valuable for software comprehension and maintenance. This study explores the potential of Large Language Model (LLM)-based solutions for grammar and instance co-evolution, with attention to their ability to preserve such auxiliary information when processing textual instances directly. Applying two advanced language models, Claude-3.5 and GPT-4o, in experiments across seven case languages, we evaluated the feasibility and limitations of this approach. Our results indicate that the considered LLMs migrate textual instances well in small-scale cases with limited instance size, which are representative of a subset of cases encountered in practice. However, we observe significant challenges in scaling LLM-based solutions to larger instances, leading to insights useful for informing future research.
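To make the co-evolution problem concrete, here is a minimal sketch using a hypothetical toy DSL (not one of the paper's case languages): suppose the grammar evolves by renaming the definition keyword `entity` to `record`. Any migration of existing instances must then rewrite the keyword while leaving comments and layout untouched, which is exactly the auxiliary information the study asks LLMs to preserve.

```python
import re

def migrate(instance: str) -> str:
    """Migrate a textual instance of the toy DSL to the evolved grammar.

    Only the leading 'entity' keyword of a definition is rewritten to
    'record'; comment lines (starting with //) and all whitespace are
    left verbatim.
    """
    out_lines = []
    for line in instance.splitlines():
        if line.lstrip().startswith("//"):
            out_lines.append(line)  # keep comments exactly as written
        else:
            # rewrite the keyword only where it starts a definition,
            # preserving the original indentation
            out_lines.append(re.sub(r"^(\s*)entity\b", r"\1record", line))
    return "\n".join(out_lines)

old = """\
// customer master data   <- comment that must survive migration
entity Customer {
    name: String
}"""

print(migrate(old))
```

A hand-written rule like this works for one known grammar change; the paper's question is whether an LLM can perform the analogous rewrite for arbitrary grammar evolutions without such bespoke rules, while still preserving comments and layout.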

Country of Origin
🇩🇪 🇸🇪 Germany, Sweden


Page Count
11 pages

Category
Computer Science:
Software Engineering