Sampling and Loss Weights in Multi-Domain Training

Published: November 10, 2025 | arXiv ID: 2511.06913v1

By: Mahdi Salmani, Pratik Worah, Meisam Razaviyayn, and more

BigTech Affiliations: Google

Potential Business Impact:

Improves how large models learn from heterogeneous multi-domain training data by weighting each domain's contribution.

Business Areas:
A/B Testing, Data and Analytics

Training large deep neural networks requires vast amounts of data, which is typically collected from multiple domains, such as Wikipedia and GitHub. These domains are heterogeneous in both data quality and the diversity of information they provide, which raises the question of how much to rely on each domain. Several methods have attempted to address this issue by assigning sampling weights to each data domain using heuristics or approximations. As a first step toward a deeper understanding of the role of data mixing, this work revisits the problem by studying two kinds of weights: sampling weights, which control how much each domain contributes to each batch, and loss weights, which scale the loss from each domain during training. Through a rigorous study of linear regression, the authors show that these two weights play complementary roles. First, they can reduce the variance of gradient estimates in iterative methods such as stochastic gradient descent (SGD). Second, they can improve generalization performance by reducing the generalization gap. The paper provides both theoretical and empirical support for these claims, and further studies the joint dynamics of sampling weights and loss weights, examining how they can be combined to capture both benefits.
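To make the two roles concrete, here is a minimal sketch (not from the paper) of SGD on a toy two-domain linear regression problem: sampling weights decide how often each domain appears in a batch, while loss weights rescale each domain's gradient. The synthetic data, hyperparameters, and variable names are all illustrative assumptions, not the authors' setup.

```python
# Toy illustration: sampling weights vs. loss weights in multi-domain SGD.
# All data and constants are synthetic; this is a sketch, not the paper's method.
import numpy as np

rng = np.random.default_rng(0)
d, n_per_domain = 5, 1000
true_w = rng.normal(size=d)

# Two heterogeneous domains: different input scales and noise levels.
domains = []
for scale, noise in [(1.0, 0.1), (3.0, 1.0)]:
    X = scale * rng.normal(size=(n_per_domain, d))
    y = X @ true_w + noise * rng.normal(size=n_per_domain)
    domains.append((X, y))

sampling_weights = np.array([0.7, 0.3])   # probability of drawing from each domain
loss_weights = np.array([1.0, 0.5])       # per-domain scaling of the loss term

w = np.zeros(d)
lr, batch_size, steps = 1e-3, 32, 2000
for _ in range(steps):
    # Sampling weights: choose which domain each batch example comes from.
    ks = rng.choice(len(domains), size=batch_size, p=sampling_weights)
    grad = np.zeros(d)
    for k in range(len(domains)):
        idx = np.flatnonzero(ks == k)
        if idx.size == 0:
            continue
        X, y = domains[k]
        rows = rng.integers(0, n_per_domain, size=idx.size)
        Xb, yb = X[rows], y[rows]
        resid = Xb @ w - yb
        # Loss weights: scale this domain's squared-error gradient.
        grad += loss_weights[k] * (Xb.T @ resid) / batch_size
    w -= lr * grad

print("estimation error:", np.linalg.norm(w - true_w))
```

In this sketch, shifting `sampling_weights` changes which domain dominates the stochastic gradient (and hence its variance), while `loss_weights` rescale each domain's influence on the objective itself; the paper studies how these two knobs interact.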

Country of Origin
🇺🇸 United States

Page Count
29 pages

Category
Computer Science:
Machine Learning (CS)