Score: 2

Retrieval Capabilities of Large Language Models Scale with Pretraining FLOPs

Published: August 24, 2025 | arXiv ID: 2508.17400v1

By: Jacob Portes, Connor Jennings, Erica Ji Yuen, and more

BigTech Affiliations: Databricks

Potential Business Impact:

Larger AI models get better at finding information.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

How does retrieval performance scale with pretraining FLOPs? We benchmark retrieval performance across LLM model sizes from 125 million parameters to 7 billion parameters pretrained on datasets ranging from 1 billion tokens to more than 2 trillion tokens. We find that retrieval performance on zero-shot BEIR tasks predictably scales with LLM size, training duration, and estimated FLOPs. We also show that In-Context Learning scores are strongly correlated with retrieval scores across retrieval tasks. Finally, we highlight the implications this has for the development of LLM-based retrievers.
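The abstract's central claims (a predictable relationship between estimated pretraining FLOPs and zero-shot BEIR retrieval scores, and a strong correlation between in-context learning and retrieval scores) can be pictured with a small analysis sketch. The snippet below is not from the paper: the run data are hypothetical placeholders, and the 6*N*D compute estimate is a common rule of thumb that may not match the authors' FLOPs accounting.

```python
# Hedged sketch: fit a log-linear scaling trend for retrieval quality vs.
# estimated pretraining compute, and check the ICL/retrieval correlation.
# All numbers below are made up for illustration only.
import numpy as np

# Hypothetical rows: (model parameters, training tokens, BEIR nDCG@10, ICL accuracy)
runs = [
    (125e6, 1e9,   0.20, 0.30),
    (1.3e9, 100e9, 0.32, 0.45),
    (7e9,   2e12,  0.45, 0.62),
]

params = np.array([r[0] for r in runs])
tokens = np.array([r[1] for r in runs])
beir   = np.array([r[2] for r in runs])
icl    = np.array([r[3] for r in runs])

# Estimated pretraining compute: C ~ 6 * N * D (a standard approximation,
# assumed here rather than taken from the paper).
flops = 6.0 * params * tokens

# Fit retrieval score as a linear function of log10(FLOPs).
slope, intercept = np.polyfit(np.log10(flops), beir, deg=1)
print(f"BEIR score ~ {slope:.3f} * log10(FLOPs) + {intercept:.3f}")

# Correlation between in-context learning scores and retrieval scores.
r = np.corrcoef(icl, beir)[0, 1]
print(f"Pearson r (ICL vs. retrieval): {r:.3f}")
```

With real benchmark results in place of the placeholder rows, the fitted slope would summarize how quickly retrieval quality improves with compute, and the Pearson coefficient would quantify the reported ICL/retrieval relationship.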

Country of Origin
🇺🇸 United States

Repos / Data Links

Page Count
15 pages

Category
Computer Science:
Machine Learning (CS)