Data-Model Co-Evolution: Growing Test Sets to Refine LLM Behavior
By: Minjae Lee, Minsuk Kahng
Potential Business Impact:
Teaches computers to follow rules by changing instructions.
A long-standing challenge in machine learning has been the rigid separation between data work and model refinement, enforced by slow fine-tuning cycles. The rise of Large Language Models (LLMs) overcomes this historical barrier, allowing application developers to govern model behavior instantly by editing prompt instructions. This shift enables a new paradigm: data-model co-evolution, where a living test set and a model's instructions evolve in tandem. We operationalize this paradigm in an interactive system designed to address the critical challenge of encoding subtle, domain-specific policies into prompt instructions. The system's structured workflow guides users to discover edge cases, articulate rationales for desired behavior, and iteratively evaluate instruction revisions against a growing test set. A user study shows that our workflow helps participants refine instructions systematically and specify ambiguous policies more concretely. This work points toward more robust and responsible LLM applications through human-in-the-loop development aligned with local preferences and policies.
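To make the workflow concrete, here is a minimal sketch of the co-evolution loop the abstract describes: a growing test set of edge cases paired with expected behaviors, evaluated repeatedly against revised prompt instructions. This is an illustrative reading, not the paper's actual system; `call_llm`, `TestCase`, and the pass/fail check are hypothetical stand-ins for whatever client and grading logic a real application would use.

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    input_text: str       # an example (often an edge case) that probes the policy
    expected: str         # the behavior the policy requires
    rationale: str = ""   # why this behavior is desired

@dataclass
class LivingTestSet:
    cases: list[TestCase] = field(default_factory=list)

    def add_edge_case(self, case: TestCase) -> None:
        """Grow the test set as new edge cases are discovered."""
        self.cases.append(case)

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for an LLM API call."""
    raise NotImplementedError

def evaluate(instructions: str, test_set: LivingTestSet) -> list[TestCase]:
    """Run every test case under the current instructions; return the failures."""
    failures = []
    for case in test_set.cases:
        output = call_llm(f"{instructions}\n\nInput: {case.input_text}")
        # Crude substring check; a real system would use a richer grader.
        if case.expected not in output:
            failures.append(case)
    return failures

def co_evolve(instructions: str, test_set: LivingTestSet, revise) -> str:
    """Co-evolution loop: a human revises the instructions (via `revise`)
    whenever failures remain, and the updated prompt is re-evaluated
    against the test set, which can keep growing between rounds."""
    failures = evaluate(instructions, test_set)
    while failures:
        instructions = revise(instructions, failures)
        failures = evaluate(instructions, test_set)
    return instructions
```

In this reading, the "living" quality comes from `add_edge_case`: each newly discovered failure or ambiguity becomes a permanent test case, so instruction revisions are always checked against the accumulated history of edge cases rather than against a fixed benchmark.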
Similar Papers
Envisioning Future Interactive Web Development: Editing Webpage with Natural Language
Software Engineering
Lets computers change website designs by talking.
CoLa: Learning to Interactively Collaborate with Large Language Models
Computation and Language
AI learns to guide other AI to solve problems.
Leveraging LLMs to support co-evolution between definitions and instances of textual DSLs
Software Engineering
Keeps old computer code working with new rules.