Exploring Training and Inference Scaling Laws in Generative Retrieval
By: Hongru Cai, Yongqi Li, Ruifeng Yuan, and more
Potential Business Impact:
Makes computers find information by writing it.
Generative retrieval reformulates retrieval as an autoregressive generation task, where large language models (LLMs) generate target documents directly from a query. Because this paradigm is still new, the mechanisms that underpin its performance and scalability remain largely unexplored. We systematically investigate training and inference scaling laws in generative retrieval, exploring how model size, training data scale, and inference-time compute jointly influence performance. We propose a novel evaluation metric inspired by contrastive entropy and generation loss, providing a continuous performance signal that enables robust comparisons across diverse generative retrieval methods. Our experiments show that n-gram-based methods align strongly with both training and inference scaling laws. We find that increasing model size, training data scale, and inference-time compute all contribute to improved performance, highlighting the complementary roles of these factors in enhancing generative retrieval. Across these settings, LLaMA models consistently outperform T5 models, suggesting a particular advantage of larger decoder-only models for generative retrieval. Our findings underscore that model size, data availability, and inference-time compute interact to unlock the full potential of generative retrieval, offering new insights for designing and optimizing future systems.
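To make the scoring idea concrete, here is a minimal sketch, not the authors' implementation, of how generative retrieval can rank candidates with a generation-loss signal in the spirit of the contrastive-entropy-inspired metric described above: a causal LM assigns an autoregressive loss to each candidate document identifier given the query, and the negative losses are softmax-normalized over the candidate pool to yield a continuous, comparable relevance score. The model name (gpt2), the prompt format, and the candidate identifiers are illustrative assumptions.

```python
# Sketch: score candidate document identifiers by the generation loss a causal LM
# assigns to them given the query, then normalize over the candidate pool.
# Assumptions: gpt2 as the backbone, identifiers as plain strings.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # assumption: any causal LM works for this illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def generation_loss(query: str, doc_id: str) -> float:
    """Average negative log-likelihood of the document identifier given the query."""
    prompt_ids = tokenizer(query, return_tensors="pt").input_ids
    target_ids = tokenizer(" " + doc_id, return_tensors="pt").input_ids
    input_ids = torch.cat([prompt_ids, target_ids], dim=1)
    # Mask the query tokens so the loss is computed only on the generated identifier.
    labels = input_ids.clone()
    labels[:, : prompt_ids.shape[1]] = -100
    with torch.no_grad():
        loss = model(input_ids, labels=labels).loss
    return loss.item()

def contrastive_scores(query: str, candidates: list[str]) -> dict[str, float]:
    """Softmax over negative losses: a continuous relevance signal over the pool."""
    losses = torch.tensor([generation_loss(query, c) for c in candidates])
    probs = torch.softmax(-losses, dim=0)
    return dict(zip(candidates, probs.tolist()))

if __name__ == "__main__":
    query = "query: scaling laws for generative retrieval"
    candidates = [
        "doc-142: neural scaling laws",
        "doc-87: dense passage retrieval",
        "doc-3: image captioning",
    ]
    for doc, score in contrastive_scores(query, candidates).items():
        print(f"{score:.3f}  {doc}")
```

Because the score is a smooth function of the model's losses rather than a binary hit/miss, it can be tracked continuously as model size, training data, or inference-time compute grows, which is what makes scaling-law comparisons across methods feasible.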
Similar Papers
Inference-Time Scaling for Generalist Reward Modeling
Computation and Language
Teaches AI to judge answers better for any question.
Test-Time Scaling Strategies for Generative Retrieval in Multimodal Conversational Recommendations
Information Retrieval
Helps online shoppers find products faster in chats.
Scaling Laws Meet Model Architecture: Toward Inference-Efficient LLMs
Machine Learning (CS)
Makes AI smarter and faster to use.