Mixture of Many Zero-Compute Experts: A High-Rate Quantization Theory Perspective

Published: October 3, 2025 | arXiv ID: 2510.03151v1

By: Yehuda Dar

Potential Business Impact:

Shows how splitting predictions among many simple experts that cost nothing to run at inference trades accuracy against the number of experts.

Business Areas:
Science and Engineering

This paper uses classical high-rate quantization theory to provide new insights into mixture-of-experts (MoE) models for regression tasks. Our MoE is defined by a segmentation of the input space into regions, each with a single-parameter expert that acts as a constant predictor requiring zero compute at inference. Motivated by high-rate quantization theory assumptions, we assume that the number of experts is sufficiently large to make their input-space regions very small. This lets us study the approximation error of our MoE model class: (i) for one-dimensional inputs, we formulate the test error and its minimizing segmentation and experts; (ii) for multidimensional inputs, we formulate an upper bound for the test error and study its minimization. Moreover, we consider the learning of the expert parameters from a training dataset, given an input-space segmentation, and formulate their statistical learning properties. This leads us to show, theoretically and empirically, how the tradeoff between approximation and estimation errors in MoE learning depends on the number of experts.
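A minimal sketch of the kind of model the abstract describes, not the paper's exact construction: a 1-D regression MoE where the input space [0, 1] is segmented into uniform regions, each region's expert is a single constant fitted as the mean of the training targets falling in it, and inference is just a bin lookup. The segmentation, target function, and fallback for empty regions below are assumptions of this illustration.

```python
import numpy as np

def fit_zero_compute_moe(x_train, y_train, num_experts):
    """Fit one constant expert per region of a uniform segmentation of [0, 1]."""
    edges = np.linspace(0.0, 1.0, num_experts + 1)
    regions = np.digitize(x_train, edges[1:-1])  # region index in 0..num_experts-1
    experts = np.zeros(num_experts)
    for k in range(num_experts):
        mask = regions == k
        # Empty regions fall back to the global mean (an assumption of this sketch).
        experts[k] = y_train[mask].mean() if mask.any() else y_train.mean()
    return edges, experts

def predict(x, edges, experts):
    """Route each input to its region and return that region's constant expert."""
    k = np.digitize(x, edges[1:-1])
    return experts[k]

# Toy illustration of the approximation/estimation tradeoff in the number of experts:
# too few experts underfit the target function, too many leave each expert with
# so few training samples that its estimated constant is noisy.
rng = np.random.default_rng(0)
x_tr = rng.uniform(size=500)
y_tr = np.sin(2 * np.pi * x_tr) + 0.3 * rng.standard_normal(500)
x_te = rng.uniform(size=2000)
y_te = np.sin(2 * np.pi * x_te)

for K in (4, 32, 256):
    edges, experts = fit_zero_compute_moe(x_tr, y_tr, K)
    mse = np.mean((predict(x_te, edges, experts) - y_te) ** 2)
    print(f"{K:4d} experts: test MSE = {mse:.4f}")
```

On this toy data, the test error typically dips at an intermediate number of experts and rises again as regions become too small for reliable estimation, which is the tradeoff the paper analyzes.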

Country of Origin
🇮🇱 Israel

Page Count
39 pages

Category
Computer Science:
Machine Learning (CS)