Data-Model Co-Evolution: Growing Test Sets to Refine LLM Behavior

Published: October 14, 2025 | arXiv ID: 2510.12728v1

By: Minjae Lee, Minsuk Kahng

Potential Business Impact:

Lets developers encode domain-specific policies into LLM behavior by editing prompt instructions instead of retraining models.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

A long-standing challenge in machine learning has been the rigid separation between data work and model refinement, enforced by slow fine-tuning cycles. The rise of Large Language Models (LLMs) overcomes this historical barrier, allowing application developers to instantly govern model behavior by editing prompt instructions. This shift enables a new paradigm: data-model co-evolution, where a living test set and a model's instructions evolve in tandem. We operationalize this paradigm in an interactive system designed to address the critical challenge of encoding subtle, domain-specific policies into prompt instructions. The system's structured workflow guides people to discover edge cases, articulate rationales for desired behavior, and iteratively evaluate instruction revisions against a growing test set. A user study shows our workflow helps participants refine instructions systematically and specify ambiguous policies more concretely. This work points toward more robust and responsible LLM applications through human-in-the-loop development aligned with local preferences and policies.
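The co-evolution loop the abstract describes — discover an edge case, grow the test set, revise the instruction, re-evaluate — can be sketched as follows. This is a minimal illustration under loose assumptions, not the authors' system: `apply_instruction` is a hypothetical stand-in for an LLM call, and the instruction format is invented for the example.

```python
def apply_instruction(instruction: str, text: str) -> str:
    """Hypothetical stand-in for an LLM moderation call: flags text
    containing any banned word listed in an instruction of the form
    'block: word1, word2, ...'."""
    banned = [w.strip() for w in instruction.split(":", 1)[1].split(",")]
    return "block" if any(w in text.lower() for w in banned) else "allow"

def evaluate(instruction: str, test_set: list) -> float:
    """Fraction of (text, expected_label) cases the instruction passes."""
    passed = [apply_instruction(instruction, t) == label for t, label in test_set]
    return sum(passed) / len(passed)

# 1. Start with a small seed test set and an initial instruction.
test_set = [("buy cheap meds", "block"), ("meeting at noon", "allow")]
instruction = "block: meds"
print(evaluate(instruction, test_set))  # passes the seed set

# 2. A discovered edge case is added to the living test set...
test_set.append(("cheap m3ds here", "block"))
print(evaluate(instruction, test_set))  # score drops: instruction misses it

# 3. ...and the instruction is revised; the grown test set now
#    guards every future revision against regressions.
instruction = "block: meds, m3ds"
print(evaluate(instruction, test_set))  # passes the full set again
```

The key property the paper's workflow builds on is visible even in this toy: each failure enlarges the test set permanently, so later instruction edits are always checked against the accumulated edge cases, not just the case that prompted the edit.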

Country of Origin
🇰🇷 Korea, Republic of

Page Count
14 pages

Category
Computer Science:
Human-Computer Interaction