Score: 1

MoEs Are Stronger than You Think: Hyper-Parallel Inference Scaling with RoE

Published: September 21, 2025 | arXiv ID: 2509.17238v1

By: Soheil Zibakhsh, Mohammad Samragh, Kumari Nishu and more

BigTech Affiliations: Apple

Potential Business Impact:

Makes AI smarter by trying many answers for each word.

Business Areas:
A/B Testing, Data and Analytics

The generation quality of large language models (LLMs) is often improved with inference-time sequence-level scaling methods (e.g., Chain-of-Thought). We introduce hyper-parallel scaling, a complementary framework that improves prediction quality at the token level. Hyper-parallel scaling computes and aggregates multiple output proposals for a single token from the model. We implement this concept in Mixture-of-Experts (MoE) models and refer to the resulting method as Roster of Experts (RoE). RoE is a training-free inference algorithm that turns a single MoE into a dynamic ensemble of MoEs. It injects controlled stochasticity into the expert routing mechanism, enabling it to sample multiple diverse experts for each token and aggregate their outputs for a more accurate final prediction. To overcome the added computational cost, we introduce an efficient batching strategy and a specialized KV-caching mechanism that minimize compute and memory overhead. For example, RoE enables a 7B MoE model to match the performance of a 10.5B MoE model while using 30% less compute for inference. These gains are achieved without fine-tuning any model parameters.
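The core idea can be illustrated with a short sketch: perturb the router logits with noise, sample several top-k expert subsets ("rosters") for the same token, and average the resulting outputs. This is a minimal illustration under stated assumptions, not the paper's exact recipe; the Gumbel noise model, the gate renormalization, the averaging step, and the names `router`, `experts`, `num_rosters`, and `temperature` are all hypothetical choices made for the example.

```python
import torch
import torch.nn.functional as F

def stochastic_topk_routing(router_logits, k, temperature=0.5):
    """Sample a top-k expert subset by perturbing router logits with Gumbel noise.

    Assumed noise model; the paper's controlled stochasticity may differ.
    """
    gumbel = -torch.log(-torch.log(torch.rand_like(router_logits) + 1e-9) + 1e-9)
    noisy_logits = router_logits + temperature * gumbel
    topk_vals, topk_idx = noisy_logits.topk(k, dim=-1)
    gates = F.softmax(topk_vals, dim=-1)  # renormalize over the sampled experts
    return topk_idx, gates

def roe_token_output(x, router, experts, k=2, num_rosters=4):
    """Aggregate several stochastic expert rosters into one token-level output.

    x: [batch, dim] token hidden states; router: linear map to expert logits;
    experts: list of per-expert feed-forward modules. All names are illustrative.
    """
    logits = router(x)  # [batch, num_experts]
    proposals = []
    for _ in range(num_rosters):
        idx, gates = stochastic_topk_routing(logits, k)
        out = torch.zeros_like(x)
        for slot in range(k):  # weighted sum of the experts chosen for this roster
            expert_ids = idx[:, slot]
            weights = gates[:, slot].unsqueeze(-1)
            expert_out = torch.stack(
                [experts[int(e)](xi) for e, xi in zip(expert_ids, x)]
            )
            out += weights * expert_out
        proposals.append(out)
    # Average the per-roster proposals into a single, hopefully more accurate, output.
    return torch.stack(proposals).mean(dim=0)
```

In a real MoE layer the per-roster forward passes would be batched together (and the KV cache shared across rosters) rather than looped as above; the loop is kept only to make the ensemble-of-routings idea explicit.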

Country of Origin
🇺🇸 United States

Page Count
18 pages

Category
Computer Science:
Artificial Intelligence