Feature-aligned Motion Transformation for Efficient Dynamic Point Cloud Compression
By: Xuan Deng, Xiandong Meng, Longguang Wang, and more
Potential Business Impact:
Makes 3D point cloud videos smaller for faster streaming.
Dynamic point clouds are widely used in applications such as immersive reality, robotics, and autonomous driving. Efficient compression largely depends on accurate motion estimation and compensation, yet the irregular structure and significant local variations of point clouds make this task highly challenging. Current methods often rely on explicit motion estimation, whose encoded vectors struggle to capture intricate dynamics and fail to fully exploit temporal correlations. To overcome these limitations, we introduce a Feature-aligned Motion Transformation (FMT) framework for dynamic point cloud compression. FMT replaces explicit motion vectors with a spatiotemporal alignment strategy that implicitly models continuous temporal variations, using aligned features as temporal context within a latent-space conditional encoding framework. Furthermore, we design a random access (RA) reference strategy that enables bidirectional motion referencing and layered encoding, thereby supporting frame-level parallel compression. Extensive experiments demonstrate that our method surpasses D-DPCC and AdaDPCC in both encoding and decoding efficiency, while also achieving BD-Rate reductions of 20% and 9.4%, respectively. These results highlight the effectiveness of FMT in jointly improving compression efficiency and processing performance.
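The random access (RA) reference strategy described above can be sketched as a hierarchical group-of-pictures schedule: key frames anchor layer 0, and each deeper layer encodes the midpoint of an already-coded interval while referencing both of its ends, so all frames in a layer are independent and can be compressed in parallel. This is a minimal illustrative sketch of such a schedule; the function name and the exact layering are assumptions, not the paper's actual implementation.

```python
def ra_schedule(gop_size):
    """Return (frame, layer, refs) tuples in encoding order for one GOP.

    Hypothetical sketch of a bidirectional, layered RA reference scheme:
    frames 0 and gop_size form layer 0; each deeper layer codes the
    midpoint of an already-coded interval, referencing both endpoints,
    so frames within one layer can be encoded in parallel.
    """
    schedule = [(0, 0, []), (gop_size, 0, [0])]
    intervals = [(0, gop_size, 1)]  # (left, right, layer) to subdivide
    while intervals:
        left, right, layer = intervals.pop(0)
        if right - left < 2:
            continue  # no frame left between the two references
        mid = (left + right) // 2
        schedule.append((mid, layer, [left, right]))  # bidirectional refs
        intervals.append((left, mid, layer + 1))
        intervals.append((mid, right, layer + 1))
    return schedule

print(ra_schedule(4))
```

For a GOP of 4 this yields the encoding order 0, 4, 2, 1, 3: frame 2 references frames 0 and 4, while frames 1 and 3 sit in the same layer and could be processed in parallel, which is the frame-level parallelism the abstract refers to.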
Similar Papers
Content Adaptive based Motion Alignment Framework for Learned Video Compression
CV and Pattern Recognition
Makes videos smaller without losing quality.
Feature Coding for Scalable Machine Vision
CV and Pattern Recognition
Shrinks computer vision data for faster, private use.
Discrete Fourier Transform-based Point Cloud Compression for Efficient SLAM in Featureless Terrain
Robotics
Shrinks robot maps to save space.