The Power of Context: How Multimodality Improves Image Super-Resolution
By: Kangfu Mei, Hossein Talebi, Mojtaba Ardakani, and more
Potential Business Impact:
Makes blurry pictures sharp using extra clues.
Single-image super-resolution (SISR) remains challenging due to the inherent difficulty of recovering fine-grained details and preserving perceptual quality from low-resolution inputs. Existing methods often rely on limited image priors, leading to suboptimal results. We propose a novel approach that leverages the rich contextual information available in multiple modalities -- including depth, segmentation, edges, and text prompts -- to learn a powerful generative prior for SISR within a diffusion model framework. We introduce a flexible network architecture that effectively fuses multimodal information, accommodating an arbitrary number of input modalities without requiring significant modifications to the diffusion process. Crucially, we mitigate hallucinations, often introduced by text prompts, by using spatial information from other modalities to guide regional text-based conditioning. Each modality's guidance strength can also be controlled independently, allowing steering outputs toward different directions, such as increasing bokeh through depth or adjusting object prominence via segmentation. Extensive experiments demonstrate that our model surpasses state-of-the-art generative SISR methods, achieving superior visual quality and fidelity. See project page at https://mmsr.kfmei.com/.
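The abstract's idea of independently controllable guidance strength per modality can be sketched as a per-modality extension of classifier-free guidance: each modality contributes its own guidance direction, scaled by its own weight. This is a minimal illustrative sketch, not the paper's actual formulation; the function name, the linear combination, and the weights are assumptions.

```python
import numpy as np

def multimodal_guidance(eps_uncond, eps_cond_per_modality, weights):
    """Hypothetical sketch: combine per-modality diffusion guidance.

    eps_uncond:             unconditional noise prediction, shape (...,)
    eps_cond_per_modality:  list of noise predictions, one per modality
                            (e.g. depth, segmentation, edges, text)
    weights:                one guidance scale per modality, set
                            independently to steer the output (e.g. a
                            larger depth weight for more bokeh)
    """
    eps = eps_uncond.copy()
    for eps_c, w in zip(eps_cond_per_modality, weights):
        # Each modality adds its own guidance direction, scaled by w.
        eps = eps + w * (eps_c - eps_uncond)
    return eps

# Toy usage: two modalities with different guidance strengths.
uncond = np.zeros(4)
depth_cond = np.ones(4)
seg_cond = 2.0 * np.ones(4)
out = multimodal_guidance(uncond, [depth_cond, seg_cond], [1.5, 0.5])
```

Setting a modality's weight to zero removes its influence entirely, which matches the abstract's claim that each modality can steer the output in a different direction.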
Similar Papers
MegaSR: Mining Customized Semantics and Expressive Guidance for Image Super-Resolution
CV and Pattern Recognition
Makes blurry pictures sharp and clear.
SuperF: Neural Implicit Fields for Multi-Image Super-Resolution
CV and Pattern Recognition
Makes blurry pictures sharp using many views.
An Efficient Remote Sensing Super Resolution Method Exploring Diffusion Priors and Multi-Modal Constraints for Crop Type Mapping
CV and Pattern Recognition
Makes old satellite pictures clearer for farming.