Flow2Code: Evaluating Large Language Models for Flowchart-based Code Generation Capability
By: Mengliang He, Jiayi Zeng, Yankai Jiang, and more
Potential Business Impact:
Teaches computers to make code from flowcharts.
While large language models (LLMs) show promise in code generation, existing benchmarks neglect flowchart-based code generation. To promote further research on this task, this work presents Flow2Code, a novel benchmark for evaluating flowchart-based code generation. The evaluation dataset spans 15 programming languages and includes 5,622 code segments paired with 16,866 flowcharts of three types: code, UML, and pseudocode. Extensive experiments with 13 multimodal LLMs reveal that current LLMs cannot yet generate code from flowcharts reliably. The results also show that supervised fine-tuning contributes substantially to model performance. We publicly release our code and datasets at https://github.com/hml-github/Flow2Code.
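To make the task concrete, below is a minimal, hypothetical sketch of what a flowchart-to-code evaluation loop could look like. The dataset layout, the samples.json schema, and the query_model stub are illustrative assumptions, not Flow2Code's actual API; a real run would use the released repository, and benchmarks like this typically score generations with execution-based checks rather than the exact string match used here for brevity.

```python
import json
from pathlib import Path

# Assumed layout: each sample pairs a flowchart image with reference code.
DATASET_DIR = Path("flow2code_samples")  # hypothetical directory name


def query_model(image_path: str, prompt: str) -> str:
    """Placeholder for a multimodal LLM call; plug in your model client here."""
    raise NotImplementedError


def evaluate(samples: list[dict]) -> float:
    """Return the fraction of samples whose generated code exactly matches
    the reference. Real evaluations usually execute the code instead."""
    correct = 0
    for sample in samples:
        generated = query_model(
            sample["flowchart_image"],
            f"Write {sample['language']} code implementing this flowchart.",
        )
        correct += generated.strip() == sample["reference_code"].strip()
    return correct / len(samples)


if __name__ == "__main__":
    with open(DATASET_DIR / "samples.json") as f:
        samples = json.load(f)
    print(f"Exact-match accuracy: {evaluate(samples):.2%}")
```

Swapping the exact-match comparison for a unit-test harness per language would bring the sketch closer to how multilingual code benchmarks are usually scored.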
Similar Papers
From Charts to Code: A Hierarchical Benchmark for Multimodal Models
Software Engineering
Helps computers turn charts into code.
Dynamic Benchmark Construction for Evaluating Large Language Models on Real-World Codes
Software Engineering
Tests AI code writing to find its mistakes.
Flowco: Rethinking Data Analysis in the Age of LLMs
Human-Computer Interaction
Helps anyone analyze data without coding.