EcoSplat: Efficiency-controllable Feed-forward 3D Gaussian Splatting from Multi-view Images
By: Jongmin Park, Minh-Quan Viet Bui, Juan Luis Gonzalez Bello, and more
Feed-forward 3D Gaussian Splatting (3DGS) enables efficient one-pass scene reconstruction, providing 3D representations for novel view synthesis without per-scene optimization. However, existing methods typically predict pixel-aligned primitives per view, producing an excessive number of primitives in dense-view settings and offering no explicit control over the number of predicted Gaussians. To address this, we propose EcoSplat, the first efficiency-controllable feed-forward 3DGS framework that adaptively predicts the 3D representation for any given target primitive count at inference time. EcoSplat adopts a two-stage optimization process. The first stage, Pixel-aligned Gaussian Training (PGT), teaches our model initial primitive prediction. In the second stage, Importance-aware Gaussian Finetuning (IGF), our model learns to rank primitives and adaptively adjusts their parameters based on the target primitive count. Extensive experiments across multiple dense-view settings show that EcoSplat is robust and outperforms state-of-the-art methods under strict primitive-count constraints, making it well-suited for flexible downstream rendering tasks.
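To make the budgeted selection idea concrete, here is a minimal sketch of keeping only the top-k primitives under a target count. The function name, the flat parameter layout, and the scalar importance scores are all illustrative assumptions; the abstract does not specify how EcoSplat actually scores or prunes its Gaussians.

```python
import numpy as np

def select_topk_gaussians(params, importance, k):
    """Keep the k highest-importance Gaussian primitives (hypothetical helper).

    params: (N, D) array of per-primitive parameters (e.g. position, scale, color).
    importance: (N,) array of predicted importance scores (assumed IGF-style output).
    k: target primitive count chosen at inference time.
    """
    k = min(k, importance.shape[0])
    # argpartition finds the k largest scores in O(N), avoiding a full sort
    idx = np.argpartition(-importance, k - 1)[:k]
    return params[idx], idx

# Toy usage: 6 primitives with 4 parameters each, keep the 3 most important.
rng = np.random.default_rng(0)
params = rng.standard_normal((6, 4))
importance = np.array([0.9, 0.1, 0.5, 0.8, 0.2, 0.7])
kept, idx = select_topk_gaussians(params, importance, 3)
print(sorted(idx.tolist()))  # indices of the 3 highest-importance primitives
```

Because the target count k is a runtime argument rather than a training-time constant, the same predicted set can be pruned to different budgets for different downstream rendering tasks.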