Cross-Lingual Prompt Steerability: Towards Accurate and Robust LLM Behavior across Languages
By: Lechen Zhang, Yusheng Zhou, Tolga Ergen, and more
Potential Business Impact:
Makes a single AI prompt work reliably in many languages.
System prompts provide a lightweight yet powerful mechanism for conditioning large language models (LLMs) at inference time. While prior work has focused on English-only settings, real-world deployments benefit from a single prompt that operates reliably across languages. This paper presents a comprehensive study of how different system prompts steer models toward accurate and robust cross-lingual behavior. We propose a unified four-dimensional evaluation framework for assessing system prompts in multilingual environments. Through large-scale experiments on five languages, three LLMs, and three benchmarks, we find that certain prompt components, such as chain-of-thought (CoT), emotion, and scenario, correlate with robust multilingual behavior. We develop a prompt optimization framework for multilingual settings and show that it can automatically discover prompts that improve all metrics by 5-10%. Finally, we analyze over 10 million reasoning units and find that more performant system prompts induce more structured and consistent reasoning patterns while reducing unnecessary language switching. Together, our findings highlight system prompt optimization as a scalable path to accurate and robust multilingual LLM behavior.
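The abstract names two pieces of machinery: a four-dimensional evaluator for system prompts and an optimizer that searches over prompt variants. Neither is spelled out in this summary, so the sketch below is only a minimal illustration under stated assumptions: `score_prompt`, the dimension names, the language set, and the component texts are all hypothetical placeholders, and the exhaustive subset search stands in for whatever optimizer the authors actually use.

```python
# A minimal sketch of cross-lingual prompt evaluation and search.
# Everything here (language codes, component texts, the four metric
# names, score_prompt) is an illustrative assumption, not the paper's API.
from itertools import combinations
from statistics import mean

LANGS = ["en", "zh", "es", "de", "ja"]  # five languages (assumed set)

# Prompt components the paper correlates with robust multilingual behavior.
COMPONENTS = {
    "cot": "Think step by step before answering.",
    "emotion": "This answer matters greatly to the user.",
    "scenario": "You are assisting an international support team.",
}

# Hypothetical names for the paper's four evaluation dimensions.
DIMENSIONS = ("accuracy", "robustness", "consistency", "faithfulness")


def score_prompt(system_prompt: str, lang: str) -> dict[str, float]:
    """Placeholder scorer so the sketch runs end to end.

    A real implementation would run the LLM with `system_prompt` on a
    benchmark in `lang` and measure each of the four dimensions.
    """
    toy = sum(map(ord, system_prompt + lang)) % 100 / 100.0
    return {dim: toy for dim in DIMENSIONS}


def evaluate(system_prompt: str) -> float:
    """Aggregate quality: mean over dimensions, then over languages."""
    return mean(
        mean(score_prompt(system_prompt, lang).values()) for lang in LANGS
    )


def search_prompts(base: str) -> str:
    """Exhaustive search over component subsets: append each combination
    to the base prompt and keep the variant with the best aggregate score."""
    best_prompt, best_score = base, evaluate(base)
    for r in range(1, len(COMPONENTS) + 1):
        for subset in combinations(COMPONENTS.values(), r):
            candidate = " ".join((base, *subset))
            score = evaluate(candidate)
            if score > best_score:
                best_prompt, best_score = candidate, score
    return best_prompt


print(search_prompts("You are a helpful multilingual assistant."))
```

With three components the exhaustive search covers only 2^3 = 8 candidates; the paper's framework presumably relies on a more scalable optimizer rather than enumeration, but the evaluate-then-select loop above captures the basic shape of scoring one prompt across several languages and keeping the most robust variant.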
Similar Papers
Evaluating Large Language Models for Code Translation: Effects of Prompt Language and Prompt Design
Software Engineering
Helps computers rewrite code between languages.
The Art of Asking: Multilingual Prompt Optimization for Synthetic Data
Computation and Language
Makes AI understand many languages better.
PromptBridge: Cross-Model Prompt Transfer for Large Language Models
Computation and Language
Makes AI prompts work on different AI brains.