TigerCoder: A Novel Suite of LLMs for Code Generation in Bangla
By: Nishat Raihan, Antonios Anastasopoulos, Marcos Zampieri
Potential Business Impact:
Helps computers write computer code in Bangla.
Despite being the fifth most spoken language, Bangla remains underrepresented in Large Language Models (LLMs), particularly for code generation. This primarily stems from the scarcity of high-quality data for pre-training and/or fine-tuning such models. Hence, we introduce the first dedicated family of Code LLMs for Bangla (1B & 9B). We offer three major contributions: (1) a comprehensive Bangla code instruction dataset for programming domain adaptation; (2) MBPP-Bangla, an evaluation benchmark for Bangla code generation; and (3) the TigerCoder family of Code LLMs, achieving significant Pass@1 performance gains of ~11-18% over existing multilingual and general-purpose Bangla LLMs. Our findings show that curated, high-quality datasets can overcome the limitations of smaller models for low-resource languages. We open-source all resources to support further research on Bangla LLMs.
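For context, Pass@1 is the standard functional-correctness metric for code generation: a task counts as solved if a generated completion passes all of its unit tests. Below is a minimal sketch of the unbiased pass@k estimator (Chen et al., 2021) that such MBPP-style evaluations typically use; the function names and example numbers are illustrative, not taken from the paper.

```python
# Hedged sketch of Pass@1 / pass@k scoring for an MBPP-style benchmark.
# The evaluation harness and sample counts here are assumptions for illustration.
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021).
    n = completions sampled per task, c = completions passing all tests, k = budget."""
    if n - c < k:
        return 1.0  # not enough failures to fill a size-k sample, so at least one passes
    return 1.0 - comb(n - c, k) / comb(n, k)

def mean_pass_at_k(results: list[tuple[int, int]], k: int = 1) -> float:
    """results: one (num_samples, num_correct) pair per benchmark task."""
    return sum(pass_at_k(n, c, k) for n, c in results) / len(results)

# Example: three tasks, 10 samples each, with 4, 0, and 10 passing completions.
print(mean_pass_at_k([(10, 4), (10, 0), (10, 10)], k=1))  # average Pass@1 over tasks
```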
Similar Papers
TigerLLM - A Family of Bangla Large Language Models
Computation and Language
Makes computers understand and speak Bengali better.
Enhancing LLM Code Generation Capabilities through Test-Driven Development and Code Interpreter
Software Engineering
Helps computers write better code by testing it as they go.
BanglaForge: LLM Collaboration with Self-Refinement for Bangla Code Generation
Software Engineering
Helps computers write code from Bengali words.