Score: 1

Efficient Hyperparameter Search for Non-Stationary Model Training

Published: December 1, 2025 | arXiv ID: 2512.01258v1

By: Berivan Isik, Matthew Fahrbach, Dima Kuzmin, and more

BigTech Affiliations: Google

Potential Business Impact:

Cuts the cost of hyperparameter search for large-scale model training, saving compute time and money.

Business Areas:
Predictive Analytics, Artificial Intelligence, Data and Analytics, Software

Online learning is the cornerstone of applications like recommendation and advertising systems, where models continuously adapt to shifting data distributions. Model training for such systems is remarkably expensive, a cost that multiplies during hyperparameter search. We introduce a two-stage paradigm to reduce this cost: (1) efficiently identifying the most promising configurations, and then (2) training only these selected candidates to their full potential. Our core insight is that focusing on accurate identification in the first stage, rather than achieving peak performance, allows for aggressive cost-saving measures. We develop novel data reduction and prediction strategies that specifically overcome the challenges of sequential, non-stationary data not addressed by conventional hyperparameter optimization. We validate our framework's effectiveness through a dual evaluation: first on the Criteo 1TB dataset, the largest suitable public benchmark, and second on an industrial advertising system operating at a scale two orders of magnitude larger. Our methods reduce the total hyperparameter search cost by up to 10× on the public benchmark and deliver significant, validated efficiency gains in the industrial setting.
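The two-stage paradigm from the abstract can be sketched in a few lines. The sketch below is an illustration only, not the paper's implementation: `cheap_eval`, `full_train`, and `top_k` are hypothetical placeholders, and the cheap proxy metric stands in for the paper's data reduction and prediction strategies for sequential, non-stationary data.

```python
import heapq
from typing import Any, Callable, Sequence

def two_stage_search(
    configs: Sequence[dict],
    cheap_eval: Callable[[dict], float],  # low-cost proxy, e.g. online loss on a reduced stream (assumed)
    full_train: Callable[[dict], Any],    # full-cost training of one configuration (assumed)
    top_k: int = 3,
):
    """Sketch of a two-stage hyperparameter search: cheaply score every
    configuration, then spend the full training budget only on the most
    promising candidates."""
    # Stage 1: rank all configurations by the cheap proxy. Accurate
    # *identification* of the best candidates is the goal here, not
    # peak performance, so aggressive cost cutting is acceptable.
    scored = [(cheap_eval(cfg), i, cfg) for i, cfg in enumerate(configs)]
    finalists = heapq.nsmallest(top_k, scored)  # lower proxy loss is better

    # Stage 2: train only the selected candidates to their full potential.
    return [(cfg, full_train(cfg)) for _, _, cfg in finalists]
```

In this framing, the savings come from Stage 1 touching far less data per configuration than a full run; only `top_k` of the candidates ever incur the full training cost.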

Country of Origin
🇺🇸 United States

Page Count
21 pages

Category
Computer Science:
Machine Learning (CS)