Sharp Monocular View Synthesis in Less Than a Second
By: Lars Mescheder, Wei Dong, Shiwei Li, and more
Potential Business Impact:
Creates new views of a single photo in under a second.
We present SHARP, an approach to photorealistic view synthesis from a single image. Given a single photograph, SHARP regresses the parameters of a 3D Gaussian representation of the depicted scene in less than a second on a standard GPU, via a single feedforward pass through a neural network. The resulting 3D Gaussian representation can then be rendered in real time, yielding high-resolution photorealistic images for nearby views. The representation is metric, with absolute scale, and thus supports metric camera movements. Experimental results demonstrate that SHARP delivers robust zero-shot generalization across datasets. It sets a new state of the art on multiple benchmarks, reducing LPIPS by 25-34% and DISTS by 21-43% relative to the best prior model, while cutting synthesis time by three orders of magnitude. Code and weights are available at https://github.com/apple/ml-sharp.
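To make the pipeline concrete, below is a minimal, self-contained sketch of the workflow the abstract describes: a single feedforward network that regresses per-pixel 3D Gaussian parameters (position, scale, rotation, opacity, color) from one RGB image. Everything here, including the toy architecture, the 14-parameter layout, and all class and variable names, is an illustrative assumption rather than the authors' model; the real code and weights are at https://github.com/apple/ml-sharp.

# Illustrative sketch only (not the authors' code): one feedforward pass
# turns an RGB image into a set of 3D Gaussians, one per pixel.
import torch
import torch.nn as nn

class ToyGaussianRegressor(nn.Module):
    """Maps a (B, 3, H, W) image to one 3D Gaussian per pixel."""
    # Assumed layout: 3 (mean xyz) + 3 (log-scale) + 4 (quaternion)
    # + 1 (opacity) + 3 (RGB) = 14 parameters per Gaussian.
    PARAMS_PER_GAUSSIAN = 14

    def __init__(self, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, self.PARAMS_PER_GAUSSIAN, 1),
        )

    def forward(self, image: torch.Tensor) -> dict[str, torch.Tensor]:
        # (B, 14, H, W) -> (B, N, 14), with N = H * W Gaussians.
        raw = self.net(image).flatten(2).transpose(1, 2)
        means = raw[..., 0:3]                  # metric xyz positions
        scales = raw[..., 3:6].exp()           # strictly positive scales
        rotations = nn.functional.normalize(raw[..., 6:10], dim=-1)  # unit quaternions
        opacities = raw[..., 10:11].sigmoid()  # opacity in (0, 1)
        colors = raw[..., 11:14].sigmoid()     # RGB in (0, 1)
        return {"means": means, "scales": scales, "rotations": rotations,
                "opacities": opacities, "colors": colors}

if __name__ == "__main__":
    model = ToyGaussianRegressor().eval()
    image = torch.rand(1, 3, 256, 256)         # stand-in for a photograph
    with torch.no_grad():                      # the single feedforward pass
        gaussians = model(image)
    print({k: tuple(v.shape) for k, v in gaussians.items()})

In the actual system, the predicted Gaussians would then be splatted by a real-time rasterizer to synthesize nearby views; the toy network above only illustrates the shape of the output, not the trained model that achieves the reported quality.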
Similar Papers
Blur2Sharp: Human Novel Pose and View Synthesis with Generative Prior Refinement
CV and Pattern Recognition
Makes 3D people look real from any angle.
CloseUpShot: Close-up Novel View Synthesis from Sparse-views via Point-conditioned Diffusion Model
CV and Pattern Recognition
Creates detailed 3D views from few pictures.
Novel View Synthesis from A Few Glimpses via Test-Time Natural Video Completion
CV and Pattern Recognition
Creates realistic 3D scenes from few pictures.