Alchemist: Unlocking Efficiency in Text-to-Image Model Training via Meta-Gradient Data Selection
By: Kaixin Ding, Yang Zhou, Xi Chen, and more
Potential Business Impact:
Makes AI art look better with less data.
Recent advances in Text-to-Image (T2I) generative models, such as Imagen, Stable Diffusion, and FLUX, have led to remarkable improvements in visual quality. However, their performance is fundamentally limited by the quality of training data. Web-crawled and synthetic image datasets often contain low-quality or redundant samples, which degrade visual fidelity, destabilize training, and waste computation. Effective data selection is therefore crucial for improving data efficiency. Existing approaches to Text-to-Image data filtering rely on costly manual curation or heuristic scoring based on single-dimensional features. Although meta-learning-based methods have been explored for LLMs, they have not been adapted to image modalities. To this end, we propose **Alchemist**, a meta-gradient-based framework that selects a suitable subset from large-scale text-image data pairs. Our approach automatically learns to assess each sample's influence by iteratively optimizing the model from a data-centric perspective. Alchemist consists of two key stages: data rating and data pruning. We train a lightweight rater to estimate each sample's influence from gradient information, enhanced with multi-granularity perception. We then apply the Shift-Gsampling strategy to select informative subsets for efficient model training. Alchemist is the first automatic, scalable, meta-gradient-based data selection framework for Text-to-Image model training. Experiments on both synthetic and web-crawled datasets demonstrate that Alchemist consistently improves visual quality and downstream performance. Training on an Alchemist-selected 50% of the data can outperform training on the full dataset.
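The two-stage pipeline described in the abstract (rate each sample's influence from gradient information, then prune to an informative subset) can be sketched in a minimal form. The paper's actual rater and its Shift-Gsampling strategy are not detailed here, so this sketch substitutes the generic meta-gradient idea: score a training sample by how well its gradient aligns with the loss gradient on a held-out set, then keep the top fraction. The linear model, squared-error loss, and all function names are illustrative assumptions.

```python
import numpy as np

# Hedged sketch of meta-gradient data rating + pruning.
# Assumption: a linear model with squared-error loss stands in for the
# T2I model; the paper's rater and Shift-Gsampling are not reproduced here.

def per_sample_grads(X, y, w):
    """Gradient of 0.5*(x@w - y)^2 w.r.t. w, one row per sample."""
    residual = X @ w - y            # shape (n,)
    return residual[:, None] * X    # shape (n, d)

def influence_scores(train_grads, meta_grad):
    """Alignment of each sample's gradient with the held-out meta-gradient.

    A descent step on sample i changes the held-out loss by roughly
    -lr * (g_i . g_meta), so a larger dot product means a more helpful sample.
    """
    return train_grads @ meta_grad

def select_top_fraction(scores, fraction=0.5):
    """Indices of the highest-scoring samples (the pruned subset)."""
    k = max(1, int(len(scores) * fraction))
    return np.argsort(scores)[::-1][:k]

# Toy data: the second training label is corrupted.
X_train = np.array([[1.0, 0.0], [1.0, 0.0]])
y_train = np.array([1.0, -1.0])
X_val   = np.array([[1.0, 0.0], [0.0, 1.0]])
y_val   = np.array([1.0, -1.0])
w = np.zeros(2)

g_train = per_sample_grads(X_train, y_train, w)
g_meta  = per_sample_grads(X_val, y_val, w).mean(axis=0)
scores  = influence_scores(g_train, g_meta)
subset  = select_top_fraction(scores, fraction=0.5)
```

On this toy example the clean sample scores higher than its label-flipped twin, so pruning to 50% keeps only the clean one; this mirrors, in miniature, training on an influence-selected half of the data.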
Similar Papers
Few-Step Distillation for Text-to-Image Generation: A Practical Guide
CV and Pattern Recognition
Makes AI draw pictures from words faster.
EDITS: Enhancing Dataset Distillation with Implicit Textual Semantics
CV and Pattern Recognition
Makes small data learn like big data.
Instant Preference Alignment for Text-to-Image Diffusion Models
CV and Pattern Recognition
Creates images that match your exact ideas.