Enhancing LLM Code Generation Capabilities through Test-Driven Development and Code Interpreter
By: Sajed Jalil, Shuvo Saha, Hossain Mohammad Seym
Potential Business Impact:
Helps computers write code from Bengali instructions.
Over the past few years, improving LLM code generation capabilities has been a key focus in NLP research. Despite Bengali having 242 million native speakers worldwide, it receives little attention when it comes to training LLMs. More recently, various fine-tuning and augmented generation techniques have been employed to significantly enhance code generation performance. However, they require considerable expertise and resources for end users to utilize effectively. The goal of our work is to democratize access to powerful code generation tools in resource-constrained emerging markets, enabling users to leverage them in their native language. We introduce a novel approach that combines Test-Driven Development (TDD) and Code Interpreter (CI), utilizing open-weight models, which improves the baseline accuracy for code generation with Bengali prompts and achieves an overall accuracy of 85%. Our approach requires no fine-tuning and shows that even the smallest models in the same family can attain up to 98% of the accuracy of the largest models. All of our results are publicly shared on GitHub for validation and reproducibility.
Similar Papers
Retriv at BLP-2025 Task 2: Test-Driven Feedback-Guided Framework for Bangla-to-Python Code Generation
Computation and Language
Helps computers write code from Bengali instructions.
TigerCoder: A Novel Suite of LLMs for Code Generation in Bangla
Computation and Language
Helps computers write computer code in Bangla.
BanglaForge: LLM Collaboration with Self-Refinement for Bangla Code Generation
Software Engineering
Helps computers write code from Bengali words.