XRAG: Cross-lingual Retrieval-Augmented Generation
By: Wei Liu, Sony Trenous, Leonardo F. R. Ribeiro, and more
Potential Business Impact:
Tests whether AI models can answer questions when the supporting documents are written in a different language than the question.
We propose XRAG, a novel benchmark designed to evaluate the generation abilities of LLMs in cross-lingual Retrieval-Augmented Generation (RAG) settings, where the user language does not match the language of the retrieval results. XRAG is constructed from recent news articles to ensure that its questions require external knowledge to be answered. It covers the real-world scenarios of monolingual and multilingual retrieval, and provides relevancy annotations for each retrieved document. Our novel dataset construction pipeline results in questions that require complex reasoning, as evidenced by the significant gap between human and LLM performance. Consequently, XRAG serves as a valuable benchmark for studying LLM reasoning abilities, even before considering the additional cross-lingual complexity. Experimental results on five LLMs uncover two previously unreported challenges in cross-lingual RAG: 1) in the monolingual retrieval setting, all evaluated models struggle with response language correctness; 2) in the multilingual retrieval setting, the main challenge lies in reasoning over retrieved information across languages rather than in generating non-English text.
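To make the setting concrete, below is a minimal sketch (not the authors' code) of how a cross-lingual RAG prompt and a response-language correctness check might look. It assumes a hypothetical `call_llm` helper for the model call and uses the `langdetect` library for language identification; neither is part of the XRAG benchmark itself.

```python
# Minimal sketch of the cross-lingual RAG setting described above: the user asks
# in one language, retrieved passages are in another, and the model's answer is
# checked for response-language correctness (the first failure mode reported).

from langdetect import detect  # pip install langdetect

def build_prompt(question: str, passages: list[str], answer_language: str) -> str:
    """Assemble a RAG prompt whose passages may be in a different language than the question."""
    context = "\n\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return (
        f"Answer the question in {answer_language}, using only the passages below.\n\n"
        f"Passages:\n{context}\n\nQuestion: {question}\nAnswer:"
    )

def response_language_correct(answer: str, expected_lang: str) -> bool:
    """Return True if the generated answer is in the expected language (ISO code, e.g. 'de')."""
    return detect(answer) == expected_lang

# Example: German question with English retrieval results (monolingual-retrieval setting).
question = "Welche Partei hat die Wahl 2024 gewonnen?"
passages = ["The 2024 election was won by ...", "Turnout reached a record high ..."]
prompt = build_prompt(question, passages, answer_language="German")
# answer = call_llm(prompt)                        # hypothetical model call
# print(response_language_correct(answer, "de"))   # response-language check
```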
Similar Papers
Multilingual Retrieval-Augmented Generation for Knowledge-Intensive Task
Computation and Language
Helps computers answer questions in any language.
A Survey of Multimodal Retrieval-Augmented Generation
Information Retrieval
Lets computers understand pictures and words together.
Improving Multilingual Retrieval-Augmented Language Models through Dialectic Reasoning Argumentations
Computation and Language
Helps computers understand different facts better.