Score: 1

FoundIR-v2: Optimizing Pre-Training Data Mixtures for Image Restoration Foundation Model

Published: December 10, 2025 | arXiv ID: 2512.09282v1

By: Xiang Chen, Jinshan Pan, Jiangxin Dong, and more

Potential Business Impact:

Restores blurry and otherwise degraded pictures better by balancing the mix of training data across restoration tasks.

Business Areas:
A/B Testing, Data and Analytics

Recent years have witnessed significant advances in image restoration foundation models, driven by improvements in the scale and quality of pre-training data. In this work, we find that the mixture proportions of data from different restoration tasks are also a critical factor that directly determines the overall performance of all-in-one image restoration models. To this end, we propose FoundIR-v2, a high-capacity diffusion-based image restoration foundation model that adopts a data equilibrium scheduling paradigm to dynamically optimize the proportions of mixed training datasets from different tasks. By leveraging the data mixing law, our method ensures a balanced dataset composition, enabling the model to achieve consistent generalization and comprehensive performance across diverse tasks. Furthermore, we introduce an effective Mixture-of-Experts (MoE)-driven scheduler into generative pre-training to flexibly allocate task-adaptive diffusion priors to each restoration task, accounting for the distinct degradation forms and levels that different tasks exhibit. Extensive experiments demonstrate that our method addresses over 50 sub-tasks spanning a broad range of real-world scenarios and performs favorably against state-of-the-art approaches.
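To make the data-equilibrium idea concrete, here is a minimal sketch of proportion rebalancing. The task list, the function name update_mixture, and the softmax-over-losses reweighting are all assumptions for illustration; the paper's actual data mixing law and scheduling rule are not specified in this summary. The sketch simply shifts sampling weight toward tasks whose validation loss remains high, while damping updates so the mixture stays stable.

```python
import numpy as np

# Hypothetical restoration tasks; the paper covers 50+ sub-tasks.
TASKS = ["denoise", "deblur", "derain", "dehaze", "low_light"]

def update_mixture(losses, proportions, temperature=1.0, lr=0.5):
    """Shift sampling proportions toward tasks with high validation
    loss, keeping the mixture a valid probability distribution.
    A stand-in for the paper's data-mixing-law optimization."""
    losses = np.asarray(losses, dtype=float)
    proportions = np.asarray(proportions, dtype=float)
    # Target weights: softmax over normalized losses, so harder
    # (higher-loss) tasks receive a larger share of training data.
    z = (losses - losses.mean()) / (losses.std() + 1e-8)
    target = np.exp(z / temperature)
    target /= target.sum()
    # Move only partway toward the target each round so the schedule
    # converges smoothly (the "equilibrium" part of the paradigm).
    new = (1 - lr) * proportions + lr * target
    return new / new.sum()

# Example round: start from an equal mix; deblurring and low-light
# enhancement lag behind, so their proportions grow.
props = np.full(len(TASKS), 1 / len(TASKS))
val_losses = [0.12, 0.31, 0.15, 0.14, 0.27]
props = update_mixture(val_losses, props)
for task, p in zip(TASKS, props):
    print(f"{task:>10}: {p:.3f}")
```

The MoE-driven scheduler can be sketched in the same spirit: a gate scores each expert's diffusion prior for a given task embedding, and the top-scoring experts are blended. The gate here is a random matrix purely as a placeholder for a trained gating network, and route is a hypothetical name.

```python
import numpy as np

rng = np.random.default_rng(0)
N_EXPERTS, EMBED_DIM, TOP_K = 8, 16, 2

# Placeholder for a learned gating network over expert diffusion priors.
gate_weights = rng.normal(size=(EMBED_DIM, N_EXPERTS))

def route(task_embedding, top_k=TOP_K):
    """Return (expert indices, normalized mixing weights) for one task,
    so degradation-specific priors can be combined per task."""
    scores = task_embedding @ gate_weights
    top = np.argsort(scores)[-top_k:]          # top-k experts by score
    w = np.exp(scores[top] - scores[top].max())  # stable softmax
    return top, w / w.sum()

experts, weights = route(rng.normal(size=EMBED_DIM))
print("experts:", experts, "weights:", np.round(weights, 3))
```

Top-k routing of this kind is a common MoE design choice because it keeps per-task compute fixed while still letting tasks with distinct degradation forms draw on different priors.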

Page Count
10 pages

Category
Computer Science:
Computer Vision and Pattern Recognition