Score: 1

RewardSDS: Aligning Score Distillation via Reward-Weighted Sampling

Published: March 12, 2025 | arXiv ID: 2503.09601v2

By: Itay Chachy, Guy Yariv, Sagie Benaim

Potential Business Impact:

Makes generated 3D content follow user prompts more precisely.

Business Areas:
A/B Testing Data and Analytics

Score Distillation Sampling (SDS) has emerged as an effective technique for leveraging 2D diffusion priors for tasks such as text-to-3D generation. While powerful, SDS struggles to achieve fine-grained alignment to user intent. To overcome this, we introduce RewardSDS, a novel approach that weights noise samples based on alignment scores from a reward model, producing a weighted SDS loss. This loss prioritizes gradients from noise samples that yield well-aligned, high-reward outputs. Our approach is broadly applicable and can extend existing SDS-based methods. In particular, we demonstrate its applicability to Variational Score Distillation (VSD) by introducing RewardVSD. We evaluate RewardSDS and RewardVSD on text-to-image, 2D editing, and text-to-3D generation tasks, showing significant improvements over SDS and VSD on a diverse set of metrics measuring generation quality and alignment to desired reward models, enabling state-of-the-art performance. Project page is available at https://itaychachy.github.io/reward-sds/.
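To make the idea concrete, below is a minimal sketch of a reward-weighted SDS loss under assumed interfaces; it is not the authors' implementation. The names `unet_eps`, `reward_fn`, and all hyperparameters (number of noise candidates `k`, softmax temperature `tau`) are hypothetical placeholders, and weighting candidates by a softmax over reward scores is one plausible instantiation of "weights noise samples based on alignment scores."

```python
# Minimal sketch of a reward-weighted SDS update (PyTorch), under assumed interfaces.
import torch
import torch.nn.functional as F

def reward_sds_loss(x, unet_eps, reward_fn, alphas_cumprod, k=4, tau=0.5):
    """Weighted SDS surrogate loss for a rendered batch `x` of shape (B, C, H, W).

    x              -- differentiable render in the diffusion model's input space
    unet_eps(xt,t) -- frozen diffusion model predicting added noise (assumed signature)
    reward_fn(x0)  -- reward model scoring denoised estimates, higher = better aligned
    alphas_cumprod -- 1-D tensor of cumulative alphas from the noise schedule
    k              -- number of noise candidates per example (assumed hyperparameter)
    tau            -- softmax temperature turning rewards into weights (assumed)
    """
    b = x.shape[0]
    t = torch.randint(0, len(alphas_cumprod), (b,), device=x.device)
    a_t = alphas_cumprod[t].view(b, 1, 1, 1)

    grads, rewards = [], []
    with torch.no_grad():
        for _ in range(k):
            eps = torch.randn_like(x)
            x_t = a_t.sqrt() * x + (1 - a_t).sqrt() * eps
            eps_hat = unet_eps(x_t, t)
            # One-step estimate of the clean sample, used only for reward scoring.
            x0_hat = (x_t - (1 - a_t).sqrt() * eps_hat) / a_t.sqrt()
            rewards.append(reward_fn(x0_hat))           # (B,)
            grads.append((1 - a_t) * (eps_hat - eps))   # per-candidate SDS gradient

    # Reward-weighted combination of the per-candidate SDS gradients.
    w = F.softmax(torch.stack(rewards, dim=0) / tau, dim=0)   # (K, B)
    g = torch.stack(grads, dim=0)                             # (K, B, C, H, W)
    grad = (w.view(k, b, 1, 1, 1) * g).sum(dim=0)

    # Surrogate loss whose gradient w.r.t. x equals `grad` (standard SDS trick).
    return (grad.detach() * x).sum() / b
```

The final line uses the usual SDS surrogate-loss trick: because the combined gradient is detached, backpropagating this scalar pushes exactly that reward-weighted direction into the renderer's parameters.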

Country of Origin
🇮🇱 Israel

Page Count
14 pages

Category
Computer Science:
Computer Vision and Pattern Recognition