Across Programming Language Silos: A Study on Cross-Lingual Retrieval-augmented Code Generation
By: Qiming Zhu, Jialun Cao, Xuanang Chen, and more
Potential Business Impact:
Helps computers translate code between languages better.
Current research on large language models (LLMs) with retrieval-augmented code generation (RACG) mainly focuses on single-language settings, leaving cross-lingual effectiveness and security unexplored. Multi-lingual RACG systems are valuable for migrating codebases across programming languages (PLs), yet they face risks from error propagation (e.g., adversarial data corruption) in cross-lingual transfer. We construct a dataset spanning 13 PLs with nearly 14k instances to explore the utility and robustness of multi-lingual RACG systems. Our investigation reveals four key insights: (1) Effectiveness: multi-lingual RACG significantly enhances code generation by multi-lingual code LLMs; (2) Inequality: Java demonstrates superior cross-lingual utility over Python in RACG; (3) Robustness: adversarial attacks degrade performance significantly in mono-lingual RACG but have a mitigated impact in cross-lingual scenarios; counterintuitively, perturbed code may even improve RACG in cross-lingual scenarios; (4) Specialization: domain-specific code retrievers significantly outperform general text retrievers. These findings establish a foundation for developing effective and secure multi-lingual code assistants.
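To make the cross-lingual RACG setting concrete: the system retrieves related code snippets, possibly written in a different PL than the target, and conditions generation on them. Below is a minimal sketch, assuming a generic embedding function and LLM client passed in as callables; the `Snippet`, `retrieve`, and `racg` names and the `embed`/`generate` interfaces are hypothetical placeholders for illustration, not the paper's implementation.

```python
# Minimal cross-lingual RACG sketch (hypothetical interfaces, not the paper's code).
# Retrieve the snippets most similar to the task from a corpus in a source PL
# (e.g. Java), then condition generation of the target-PL (e.g. Python) solution
# on those retrieved references.
from dataclasses import dataclass

@dataclass
class Snippet:
    language: str  # PL of the stored snippet, e.g. "java"
    code: str

def cosine(a: list[float], b: list[float]) -> float:
    # Standard cosine similarity between two embedding vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, corpus: list[Snippet], embed, k: int = 3) -> list[Snippet]:
    """Rank corpus snippets by embedding similarity to the task description."""
    q = embed(query)
    ranked = sorted(corpus, key=lambda s: cosine(q, embed(s.code)), reverse=True)
    return ranked[:k]

def racg(task: str, target_pl: str, corpus: list[Snippet], embed, generate) -> str:
    """Cross-lingual RACG: retrieved references may be in a different PL
    than the one being generated."""
    hits = retrieve(task, corpus, embed)
    context = "\n\n".join(f"# Reference ({s.language}):\n{s.code}" for s in hits)
    prompt = (
        f"Using the reference implementations below (in any language), "
        f"write a {target_pl} solution for: {task}\n\n{context}"
    )
    return generate(prompt)  # e.g. a call to any code LLM
```

In this framing, the paper's "Inequality" finding corresponds to varying the language of the retrieved `Snippet`s (e.g. Java vs. Python references) while holding the target PL fixed, and its "Robustness" finding corresponds to perturbing `Snippet.code` before retrieval.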
Similar Papers
Retrieval-Augmented Code Generation: A Survey with Focus on Repository-Level Approaches
Software Engineering
Helps computers write complex software code.
Multilingual Retrieval-Augmented Generation for Knowledge-Intensive Task
Computation and Language
Helps computers answer questions in any language.
Give LLMs a Security Course: Securing Retrieval-Augmented Code Generation via Knowledge Injection
Cryptography and Security
Keeps computer code safe from bad instructions.