3DRot: 3D Rotation Augmentation for RGB-Based 3D Tasks
By: Shitian Yang, Deyu Li, Xiaoke Jiang and more
Potential Business Impact:
Improves 3D vision accuracy with geometry-consistent image rotations and mirror flips.
RGB-based 3D tasks, e.g., 3D detection, depth estimation, and 3D keypoint estimation, still suffer from scarce, expensive annotations and a thin augmentation toolbox, since most image transforms, including resize and rotation, disrupt geometric consistency. In this paper, we introduce 3DRot, a plug-and-play augmentation that rotates and mirrors images about the camera's optical center while synchronously updating RGB images, camera intrinsics, object poses, and 3D annotations to preserve projective geometry, achieving geometry-consistent rotations and reflections without relying on any scene depth. We validate 3DRot with a classical 3D task, monocular 3D detection. On the SUN RGB-D dataset, 3DRot raises $IoU_{3D}$ from 43.21 to 44.51, cuts rotation error (ROT) from 22.91$^\circ$ to 20.93$^\circ$, and boosts $mAP_{0.5}$ from 35.70 to 38.11. By comparison, Cube R-CNN, which uses a similar mechanism and the same test set but trains on three additional datasets alongside SUN RGB-D, increases $IoU_{3D}$ from 36.2 to 37.8 and boosts $mAP_{0.5}$ from 34.7 to 35.4. Because it operates purely through camera-space transforms, 3DRot is readily transferable to other 3D tasks.
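The key property behind a depth-free, geometry-consistent rotation is that a pure rotation about the camera's optical center induces a homography on the image plane, so pixels can be warped and 3D annotations rotated without knowing scene depth. The sketch below illustrates this consistency check; it is a minimal illustration under assumed intrinsics and an assumed in-plane (roll) rotation, not the authors' implementation, and all names and values are hypothetical.

```python
import numpy as np

def rotation_z(theta):
    # In-plane (roll) rotation about the optical axis, one of the
    # camera-space rotations 3DRot-style augmentation can apply.
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def project(K, X):
    # Pinhole projection of a camera-frame 3D point to pixel coordinates.
    x = K @ X
    return x[:2] / x[2]

# Assumed intrinsics (focal length and principal point are illustrative).
K = np.array([[500.0, 0.0, 320.0],
              [0.0, 500.0, 240.0],
              [0.0, 0.0, 1.0]])

R = rotation_z(np.deg2rad(15))
# Rotation about the optical center induces the image homography H = K R K^-1;
# warping pixels with H needs no depth information.
H = K @ R @ np.linalg.inv(K)

X = np.array([0.4, -0.2, 3.0])   # example 3D point in the camera frame
u = project(K, X)                # original pixel location

# Path 1: warp the original pixel through the induced homography.
uh = H @ np.append(u, 1.0)
u_warp = uh[:2] / uh[2]

# Path 2: rotate the 3D annotation and reproject with unchanged intrinsics.
u_rot = project(K, R @ X)

# Both paths agree, so image and 3D labels stay geometrically consistent.
assert np.allclose(u_warp, u_rot)
```

The same identity holds for any rotation about the optical center (not just roll), which is why the augmentation can update images, intrinsics, and 3D boxes synchronously without scene depth.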
Similar Papers
Accelerated Rotation-Invariant Convolution for UAV Image Segmentation
CV and Pattern Recognition
Makes drones see objects from any angle.