SelfRACG: Enabling LLMs to Self-Express and Retrieve for Code Generation
By: Qian Dong, Jia Chen, Qingyao Ai, and more
Potential Business Impact:
Helps computers write better code by asking for help.
Existing retrieval-augmented code generation (RACG) methods typically rely on an external retrieval module to fetch semantically similar code snippets for generating subsequent fragments. However, even consecutive code fragments often diverge in content as the program logic progresses, creating a content gap. This gap undermines current RACG methods, because external retrieval modules based on content matching cannot infer the specific information an LLM needs to generate the next code fragment. We therefore propose SelfRACG, a novel paradigm that enables large language models (LLMs) to Self-express their information needs to enhance RACG. Specifically, SelfRACG comprises an information need expression module and a two-stage, information need-guided training strategy that encourages LLMs to express their information needs. Extensive experiments demonstrate that SelfRACG retrieves external knowledge better aligned with the LLM's own information needs, resulting in superior generation performance compared to vanilla RACG.
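As a rough illustration of the paradigm described in the abstract, the Python sketch below walks through one SelfRACG-style generation step: the LLM first states what it needs next, that statement (rather than the raw code context) is used as the retrieval query, and the retrieved snippets are fed back in before generating the next fragment. Every name here (express_information_need, retrieve, generate_next_fragment, toy_embed) and the bag-of-words similarity are hypothetical stand-ins for illustration only; they are not the paper's actual modules, prompts, or training procedure.

# Minimal sketch of a SelfRACG-style inference step (illustrative only).
# Function names, prompt wording, and the toy embedding are assumptions,
# not the paper's actual components.

import math
from collections import Counter
from typing import List


def toy_embed(text: str) -> Counter:
    """Stand-in embedding: bag-of-words counts (a real system would use a
    trained dense encoder rather than this toy representation)."""
    return Counter(text.lower().split())


def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


def express_information_need(llm, code_context: str) -> str:
    """Ask the LLM to describe what it needs to write the next fragment,
    instead of matching on the surface content of the context."""
    prompt = (
        "Given the code written so far, describe the API, snippet, or "
        "pattern you need in order to write the next fragment:\n"
        + code_context
    )
    return llm(prompt)


def retrieve(need: str, corpus: List[str], k: int = 2) -> List[str]:
    """Rank corpus snippets by similarity to the expressed need."""
    q = toy_embed(need)
    ranked = sorted(corpus, key=lambda s: cosine(q, toy_embed(s)), reverse=True)
    return ranked[:k]


def generate_next_fragment(llm, code_context: str, snippets: List[str]) -> str:
    """Condition generation on the retrieved snippets plus the context."""
    prompt = "\n".join(["# Retrieved references:"] + snippets
                       + ["# Continue the code:", code_context])
    return llm(prompt)


def selfracg_step(llm, code_context: str, corpus: List[str]) -> str:
    """One retrieval-augmented generation step driven by the LLM's own
    expressed information need rather than by content matching."""
    need = express_information_need(llm, code_context)
    snippets = retrieve(need, corpus)
    return generate_next_fragment(llm, code_context, snippets)

The design choice mirrored here is the one the abstract emphasizes: the retrieval query is the model's expressed need, not the surface content of the preceding code, which is exactly what content-matching retrievers in vanilla RACG fail to capture.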
Similar Papers
Retrieval-Augmented Code Generation: A Survey with Focus on Repository-Level Approaches
Software Engineering
Helps computers write complex software code.
Give LLMs a Security Course: Securing Retrieval-Augmented Code Generation via Knowledge Injection
Cryptography and Security
Keeps computer code safe from bad instructions.
Across Programming Language Silos: A Study on Cross-Lingual Retrieval-augmented Code Generation
Software Engineering
Helps computers translate code between languages better.