Exploring Test-time Scaling via Prediction Merging on Large-Scale Recommendation
By: Fuyuan Lyu, Zhentai Chen, Jingyan Jiang and more
Potential Business Impact:
Makes online recommendations more accurate without slowing them down for users.
Inspired by the success of language models (LMs), scaling up deep learning recommendation systems (DLRS) has become a recent trend in the community. Previous methods all scale up model parameters during training time. However, how to efficiently utilize and scale up computational resources at test time remains underexplored, even though test-time scaling can be a scaling-efficient approach and has brought orthogonal improvements in the LM domain. The key to applying test-time scaling to DLRS lies in effectively generating diverse yet meaningful outputs for the same instance. We propose two ways to do so: one exploits the heterogeneity of different model architectures; the other exploits the randomness of model initialization under a homogeneous architecture. We evaluate both across eight models, covering both classic and SOTA architectures, on three benchmarks, and find strong evidence for the effectiveness of both solutions. We further show that, under the same inference budget, test-time scaling can outperform parameter scaling. When deployed online, our test-time scaling can also be seamlessly accelerated by adding parallel servers, without affecting user-side inference time. Code is available.
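The core idea is to run several diverse models on the same instance and merge their predictions at test time. Below is a minimal sketch of this in PyTorch, assuming a simple average of per-model click probabilities as the merge rule and a toy MLP scorer as a stand-in for a DLRS model; the model class, merge rule, and hyperparameters here are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn


class TinyCTRModel(nn.Module):
    """Stand-in DLRS scorer: a small MLP over dense features (hypothetical)."""

    def __init__(self, num_features: int, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(num_features, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x).squeeze(-1)  # one raw logit per instance


def merge_predictions(models, x):
    """Run every model on the same instances and average their click probabilities."""
    with torch.no_grad():
        probs = [torch.sigmoid(m.eval()(x)) for m in models]
    return torch.stack(probs, dim=0).mean(dim=0)  # merged prediction per instance


# Homogeneous-architecture variant: same model class, different random seeds.
# (The heterogeneous variant would instead mix different architectures here.)
models = []
for seed in (0, 1, 2, 3):
    torch.manual_seed(seed)
    models.append(TinyCTRModel(num_features=16))

batch = torch.randn(8, 16)            # 8 instances, 16 dense features
merged = merge_predictions(models, batch)
print(merged.shape)                   # torch.Size([8])
```

Since each model scores the same instance independently, the per-model forward passes can be spread across parallel servers, which is why the merge step need not add latency on the user side.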
Similar Papers
Test-Time Scaling Strategies for Generative Retrieval in Multimodal Conversational Recommendations
Information Retrieval
Helps online shoppers find products faster in chats.
Scaling Test-time Compute for LLM Agents
Artificial Intelligence
Makes AI agents smarter by letting them think more.
Rethinking Test-Time Scaling for Medical AI: Model and Task-Aware Strategies for LLMs and VLMs
Computation and Language
Improves AI's medical image understanding.