Score: 1

StableDPT: Temporal Stable Monocular Video Depth Estimation

Published: January 6, 2026 | arXiv ID: 2601.02793v1

By: Ivan Sobko, Hayko Riemenschneider, Markus Gross, and more

Potential Business Impact:

Produces smooth, flicker-free depth maps from video instead of unstable, frame-by-frame estimates.

Business Areas:
Image Recognition Data and Analytics, Software

Applying single-image Monocular Depth Estimation (MDE) models to video sequences introduces significant temporal instability and flickering artifacts. We propose a novel approach that adapts any state-of-the-art image-based depth estimation model for video processing by integrating a new temporal module, trainable on a single GPU in a few days. Our architecture, StableDPT, builds upon an off-the-shelf Vision Transformer (ViT) encoder and enhances the Dense Prediction Transformer (DPT) head. The core of our contribution lies in the temporal layers within the head, which use an efficient cross-attention mechanism to integrate information from keyframes sampled across the entire video sequence. This allows the model to capture global context and inter-frame relationships, leading to more accurate and temporally stable depth predictions. Furthermore, we propose a novel inference strategy for processing videos of arbitrary length that avoids the scale misalignment and redundant computation associated with the overlapping windows used in other methods. Evaluations on multiple benchmark datasets demonstrate improved temporal consistency, competitive state-of-the-art performance, and, in addition, 2x faster processing in real-world scenarios.
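
The abstract describes temporal layers in the DPT head that use cross-attention over keyframes sampled from across the whole clip. The snippet below is a minimal PyTorch sketch of that idea, assuming 256-dimensional head tokens, a residual fusion, and uniform keyframe sampling; the class name `TemporalCrossAttention`, the helper `sample_keyframes`, and all dimensions are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the paper's code): per-frame feature tokens attend to
# tokens from K keyframes sampled across the video, then are fused residually.
import torch
import torch.nn as nn


class TemporalCrossAttention(nn.Module):
    """Fuse current-frame tokens with keyframe tokens via cross-attention."""

    def __init__(self, dim: int = 256, num_heads: int = 8):
        super().__init__()
        self.norm_q = nn.LayerNorm(dim)
        self.norm_kv = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, frame_feats: torch.Tensor, key_feats: torch.Tensor) -> torch.Tensor:
        # frame_feats: (B, H*W, C) tokens of the frame being predicted
        # key_feats:   (B, K*H*W, C) tokens of K keyframes from the sequence
        q = self.norm_q(frame_feats)
        kv = self.norm_kv(key_feats)
        fused, _ = self.attn(q, kv, kv)   # attend to global keyframe context
        return frame_feats + fused        # residual keeps per-frame detail


def sample_keyframes(num_frames: int, num_keys: int = 4) -> list[int]:
    # Spread keyframe indices uniformly over the whole video; the paper's
    # actual sampling policy may differ.
    step = max(num_frames // num_keys, 1)
    return list(range(0, num_frames, step))[:num_keys]


if __name__ == "__main__":
    B, HW, C, K = 1, 24 * 24, 256, 4
    layer = TemporalCrossAttention(dim=C)
    frame = torch.randn(B, HW, C)         # features for one frame
    keys = torch.randn(B, K * HW, C)      # concatenated keyframe features
    print(layer(frame, keys).shape)       # torch.Size([1, 576, 256])
    print(sample_keyframes(120, 4))       # [0, 30, 60, 90]
```

Because the keyframes cover the entire sequence, each frame sees global context without the overlapping sliding windows that the paper identifies as a source of scale misalignment and redundant computation.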

Country of Origin
🇨🇭 Switzerland

Page Count
10 pages

Category
Computer Science:
Computer Vision and Pattern Recognition