Score: 1

Exploring Training and Inference Scaling Laws in Generative Retrieval

Published: March 24, 2025 | arXiv ID: 2503.18941v2

By: Hongru Cai, Yongqi Li, Ruifeng Yuan, and more

Potential Business Impact:

Lets computers retrieve information by generating it directly, rather than looking it up in an index.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Generative retrieval reformulates retrieval as an autoregressive generation task, where large language models (LLMs) generate target documents directly from a query. Because this paradigm is still new, the mechanisms that underpin its performance and scalability remain largely unexplored. We systematically investigate training and inference scaling laws in generative retrieval, exploring how model size, training data scale, and inference-time compute jointly influence performance. We propose a novel evaluation metric inspired by contrastive entropy and generation loss, providing a continuous performance signal that enables robust comparisons across diverse generative retrieval methods. Our experiments show that n-gram-based methods align strongly with training and inference scaling laws. We find that increasing model size, training data scale, and inference-time compute all contribute to improved performance, highlighting the complementary roles of these factors in enhancing generative retrieval. Across these settings, LLaMA models consistently outperform T5 models, suggesting a particular advantage for larger decoder-only models in generative retrieval. Our findings underscore that model size, data availability, and inference compute interact to unlock the full potential of generative retrieval, offering new insights for designing and optimizing future systems.
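
The abstract describes the proposed metric only at a high level, but the general idea of turning generation loss into a continuous, contrastive performance signal can be sketched. The snippet below is a minimal illustration under stated assumptions, not the authors' exact formulation: it assumes we already have an average per-token generation loss for each candidate document identifier, applies a softmax to the negated losses, and scores the query by the probability mass placed on the relevant document. The function name contrastive_score and the example loss values are hypothetical.

```python
# Hedged sketch of a contrastive-entropy-style score built from per-document
# generation losses (an assumption about the metric, not the paper's exact one).
import math
from typing import Sequence


def contrastive_score(gen_losses: Sequence[float], target_idx: int) -> float:
    """Return the softmax probability of the relevant document.

    gen_losses: average per-token generation loss (negative log-likelihood)
                the model assigns to each candidate document for the query.
    target_idx: index of the ground-truth (relevant) document.
    """
    # Lower generation loss means the model considers the document more likely,
    # so we negate losses before the softmax.
    logits = [-loss for loss in gen_losses]
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in logits]
    return exps[target_idx] / sum(exps)


# Example: the relevant document (index 0) has the lowest loss among four
# candidates, so it receives most of the probability mass.
print(contrastive_score([1.2, 3.5, 4.0, 2.8], target_idx=0))
```

A score like this varies smoothly as the model improves, which is what makes it usable as a continuous signal for comparing methods and fitting scaling curves, in contrast to rank-based metrics that change in discrete jumps.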

Country of Origin
🇭🇰 🇸🇬 Hong Kong, Singapore

Page Count
11 pages

Category
Computer Science:
Information Retrieval