When Many-Shot Prompting Fails: An Empirical Study of LLM Code Translation
By: Amirkia Rafiei Oskooei, Kaan Baturalp Cosdan, Husamettin Isiktas, and more
Potential Business Impact:
A few well-chosen examples help computers translate code best.
Large Language Models (LLMs) with vast context windows offer new avenues for in-context learning (ICL), where providing many examples ("many-shot" prompting) is often assumed to enhance performance. We investigate this assumption for the complex task of code translation. Through a large-scale empirical study of over 90,000 translations, we systematically evaluate the impact of scaling in-context examples from zero-shot to many-shot configurations of up to 625 examples, with prompts spanning from approximately 100,000 to 800,000 tokens. Our findings reveal a "many-shot paradox": while static similarity metrics may modestly improve with more examples, functional correctness consistently peaks with few-shot prompting (5-25 examples). Providing substantially more examples often degrades this crucial functional performance. This study highlights that for code translation, the quality of a few well-chosen examples outweighs sheer quantity, challenging the universal efficacy of "more is better" for ICL and underscoring the task-dependent nature of optimal prompting strategies. Our results have significant implications for effectively leveraging LLMs in software engineering.
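To make the experimental sweep concrete, here is a minimal sketch of how one might construct k-shot code-translation prompts across the shot counts the abstract describes (zero-shot up to 625 examples). The prompt template, language pair, and helper names (build_prompt, example_pool, translate_fn) are illustrative assumptions, not the paper's actual code:

```python
# Illustrative sketch only: the paper's real prompt template, example pool,
# and model API are not given in the abstract; all names here are hypothetical.

SHOT_COUNTS = [0, 5, 25, 125, 625]  # zero-shot through the many-shot extreme

def build_prompt(examples, source_code, src_lang="Java", tgt_lang="Python"):
    """Assemble a k-shot code-translation prompt from (source, target) pairs."""
    parts = [f"Translate the following {src_lang} code to {tgt_lang}."]
    for src, tgt in examples:
        parts.append(f"{src_lang}:\n{src}\n{tgt_lang}:\n{tgt}")
    parts.append(f"{src_lang}:\n{source_code}\n{tgt_lang}:")
    return "\n\n".join(parts)

def run_sweep(example_pool, test_snippet, translate_fn):
    """Query the model once per shot count and collect its translations.

    example_pool: list of (source_snippet, reference_translation) pairs.
    translate_fn: wrapper around whichever LLM endpoint is being evaluated.
    """
    outputs = {}
    for k in SHOT_COUNTS:
        prompt = build_prompt(example_pool[:k], test_snippet)
        outputs[k] = translate_fn(prompt)
    return outputs
```

Truncating a single fixed pool (example_pool[:k]) rather than resampling per run keeps the shot counts directly comparable, since each larger configuration strictly extends the smaller one; the translations can then be scored with both static similarity metrics and functional (test-based) correctness, the two measures the abstract contrasts.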
Similar Papers
On Selecting Few-Shot Examples for LLM-based Code Vulnerability Detection
Software Engineering
Helps computers find mistakes in code better.
The Few-shot Dilemma: Over-prompting Large Language Models
Computation and Language
Helps AI understand better with fewer examples.
You Only Fine-tune Once: Many-Shot In-Context Fine-Tuning for Large Language Model
Computation and Language
Teaches computers to do many jobs well at once.