Instruction-Tuning Open-Weight Language Models for BPMN Model Generation
By: Gökberk Çelikmasat, Atay Özgövde, Fatma Başak Aydemir
Domain models are central to software engineering, as they enable a shared understanding, guide implementation, and support automated analyses and model-driven development. Yet despite these benefits, practitioners often skip modeling because it is time-consuming and demands scarce expertise. We address this barrier by investigating whether open-weight large language models, adapted via instruction tuning, can generate high-quality BPMN process models directly from natural language descriptions in a cost-effective and privacy-preserving way. We introduce InstruBPM, a reproducible approach that prepares paired text–diagram data and instruction-tunes an open-weight large language model with parameter-efficient fine-tuning and quantization for on-premises deployment. We evaluate the tuned model from complementary perspectives: (i) text/code similarity using BLEU, ROUGE-L, and METEOR; (ii) structural fidelity using Relative Graph Edit Distance; (iii) guideline conformance using external tool checks; and (iv) a small expert review. Using a curated subset of a multi-domain BPMN dataset, we compare the tuned model with untuned open-weight baselines and strong proprietary models under consistent prompting regimes. Our compact tuned model outperforms all baselines on sequence and structural metrics while requiring substantially fewer resources; guideline analysis and expert feedback further indicate that the generated diagrams largely follow BPMN best practices and serve as useful starting points that reduce modeling effort. Overall, instruction tuning improves structural accuracy and robustness over untuned baselines and reduces reliance on heavy prompt scaffolding. We publicly share the trained models and scripts to support reproducibility and further research.
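To make the adaptation recipe concrete, below is a minimal QLoRA-style sketch of instruction tuning with parameter-efficient fine-tuning (LoRA adapters) and 4-bit quantization, using the Hugging Face transformers and peft libraries. The base model name, LoRA hyperparameters, and target modules are illustrative assumptions, not necessarily the configuration used for InstruBPM.

```python
# Minimal QLoRA-style sketch; model choice and hyperparameters are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base = "mistralai/Mistral-7B-Instruct-v0.2"  # assumed open-weight base model

# Load the base model in 4-bit NF4 so it fits on a single commodity GPU.
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(
    base, quantization_config=bnb, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# Parameter-efficient fine-tuning: train small low-rank adapters only,
# leaving the quantized base weights frozen.
lora = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora)
model.print_trainable_parameters()  # typically well under 1% of all weights

# Supervised instruction tuning would then run over (process description ->
# serialized BPMN diagram) pairs with a standard causal-LM trainer.
```

Because only the adapter weights are updated, a 7B-class model can be tuned on a single GPU, which is what makes the cost-effective, on-premises setup described above plausible.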
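The structural-fidelity metric can be sketched in the same spirit. Here "Relative Graph Edit Distance" is assumed to mean graph edit distance normalized by the size of the reference flow graph; the paper's exact normalization, node-matching rules, and edit costs may differ.

```python
# Hedged sketch: graph edit distance between BPMN flow graphs, normalized
# by the reference graph's size (an assumed definition of "relative" GED).
import networkx as nx

def relative_ged(reference: nx.DiGraph, generated: nx.DiGraph) -> float:
    ged = nx.graph_edit_distance(
        reference,
        generated,
        node_match=lambda a, b: a.get("label") == b.get("label"),
    )  # exact GED; exponential in the worst case, workable for small diagrams
    size = reference.number_of_nodes() + reference.number_of_edges()
    return ged / size if size else 0.0

# Toy flow graphs: nodes are BPMN elements, edges are sequence flows.
ref = nx.DiGraph()
ref.add_nodes_from([(0, {"label": "start"}),
                    (1, {"label": "Review order"}),
                    (2, {"label": "end"})])
ref.add_edges_from([(0, 1), (1, 2)])

gen = nx.DiGraph()
gen.add_nodes_from([(0, {"label": "start"}),
                    (1, {"label": "Check order"}),
                    (2, {"label": "end"})])
gen.add_edges_from([(0, 1), (1, 2)])

print(f"relative GED: {relative_ged(ref, gen):.2f}")  # 1 substitution / 5 -> 0.20
```

A score of 0 means the generated diagram is structurally identical to the reference; larger values penalize missing, extra, or relabeled elements in proportion to the reference's size.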