ReasonX: MLLM-Guided Intrinsic Image Decomposition
By: Alara Dirik, Tuanfeng Wang, Duygu Ceylan, and others
Potential Business Impact:
Teaches computers to see image parts better.
Intrinsic image decomposition aims to separate images into physical components such as albedo, depth, normals, and illumination. While recent diffusion- and transformer-based models benefit from paired supervision from synthetic datasets, their generalization to diverse, real-world scenarios remains challenging. We propose ReasonX, a novel framework that leverages a multimodal large language model (MLLM) as a perceptual judge providing relative intrinsic comparisons, and uses these comparisons as GRPO rewards for fine-tuning intrinsic decomposition models on unlabeled, in-the-wild images. Unlike RL methods for generative models, our framework aligns conditional intrinsic predictors by rewarding agreement between the judge's relational assessments and analytically derived relations from the model's outputs. ReasonX is model-agnostic and can be applied to different intrinsic predictors. Across multiple base architectures and modalities, ReasonX yields significant improvements, including 9-25% WHDR reduction on IIW albedo and up to 46% depth accuracy gains on ETH3D, highlighting the promise of MLLM-guided comparative supervision to bridge low- and high-level vision reasoning.
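The reward described above, agreement between the MLLM judge's relative assessment and the relation derived analytically from the model's output, can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the relation labels, the region-mean comparison with a margin, and the function names are all assumptions for clarity.

```python
import numpy as np

def derived_relation(pred_map, region_a, region_b, margin=0.05):
    """Analytically derive the relation between two regions of a predicted
    intrinsic map (e.g., albedo): 'brighter', 'darker', or 'equal'.
    (Illustrative rule: compare region means with a small margin.)"""
    mean_a = pred_map[region_a].mean()
    mean_b = pred_map[region_b].mean()
    if mean_a - mean_b > margin:
        return "brighter"
    if mean_b - mean_a > margin:
        return "darker"
    return "equal"

def comparison_reward(judge_relation, pred_map, region_a, region_b):
    """Binary reward: 1.0 if the MLLM judge's relational assessment agrees
    with the relation computed from the model's own output, else 0.0."""
    return float(judge_relation == derived_relation(pred_map, region_a, region_b))

def grpo_advantages(rewards):
    """GRPO-style advantages: normalize rewards within a group of sampled
    predictions for the same input (mean-zero, unit-variance)."""
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + 1e-8)
```

In this sketch, each of a group of sampled intrinsic predictions is scored by `comparison_reward` against the judge's comparisons, and the group-normalized advantages drive the policy-gradient update; the actual reward design and region selection in ReasonX may differ.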
Similar Papers
REASONEDIT: Towards Reasoning-Enhanced Image Editing Models
CV and Pattern Recognition
Makes AI better at changing pictures with words.
Texture-aware Intrinsic Image Decomposition with Model- and Learning-based Priors
CV and Pattern Recognition
Separates object colors from lighting in photos.
LumiX: Structured and Coherent Text-to-Intrinsic Generation
CV and Pattern Recognition
Creates realistic 3D scenes from text descriptions.