3D Trajectory Reconstruction of Moving Points Based on Asynchronous Cameras
By: Huayu Huang, Banglei Guan, Yang Shang, and more
Potential Business Impact:
Tracks moving targets in 3D using multiple unsynchronized cameras.
Photomechanics is a crucial branch of solid mechanics. The localization of point targets is a fundamental problem in optical experimental mechanics, with extensive applications in UAV missions. Localizing moving targets is crucial for analyzing their motion characteristics and dynamic properties. Reconstructing the trajectories of points observed by asynchronous cameras is a significant challenge, as it encompasses two coupled sub-problems: trajectory reconstruction and camera synchronization. Existing methods typically address only one of these sub-problems. This paper proposes a 3D trajectory reconstruction method for point targets based on asynchronous cameras that solves both sub-problems simultaneously. Firstly, we extend the trajectory intersection method to asynchronous cameras, removing traditional triangulation's requirement that the cameras be synchronized. Secondly, we develop models for the cameras' temporal parameters and the target's motion, based on the imaging mechanism and the target's dynamic characteristics; these parameters are optimized jointly, enabling trajectory reconstruction without accurate timing information. Thirdly, we optimize the camera rotations alongside the temporal and motion parameters, exploiting the tighter and more continuous constraints provided by moving points. This significantly improves reconstruction accuracy, especially when the camera rotations are inaccurate. Finally, simulated and real-world experiments demonstrate the feasibility and accuracy of the proposed method. The real-world results show a localization error of 112.95 m at an observation range of 15–20 km.
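To make the joint estimation concrete, below is a minimal sketch (not the authors' implementation) of how trajectory intersection can be posed with asynchronous cameras: the target trajectory is modeled as a low-order polynomial of time, each camera's local timestamps are mapped to a common timeline through an unknown per-camera offset, and the motion coefficients and time offsets are refined together by minimizing reprojection error. The polynomial degree, the offset-only time model, and the pinhole camera setup are assumptions made for illustration only.

```python
# Minimal sketch of joint trajectory / time-offset estimation with asynchronous cameras.
# Assumptions (not from the paper): polynomial motion model of degree DEG, per-camera
# time offset only, known intrinsics K, rotation R, and camera center c for each camera.
import numpy as np
from scipy.optimize import least_squares

DEG = 2  # assumed degree of the polynomial target motion model


def trajectory(coeffs, t):
    """Evaluate the 3D polynomial trajectory at times t; coeffs has shape (DEG+1, 3)."""
    powers = np.stack([t**k for k in range(DEG + 1)], axis=1)  # (N, DEG+1)
    return powers @ coeffs                                      # (N, 3)


def project(K, R, c, X):
    """Pinhole projection of world points X (N, 3) into a camera with rotation R and center c."""
    Xc = (R @ (X - c).T).T          # world -> camera coordinates
    uv = (K @ Xc.T).T               # apply intrinsics
    return uv[:, :2] / uv[:, 2:3]   # perspective division


def residuals(params, cams, obs):
    """Reprojection residuals; params = [motion coefficients | per-camera time offsets]."""
    coeffs = params[:3 * (DEG + 1)].reshape(DEG + 1, 3)
    offsets = params[3 * (DEG + 1):]
    res = []
    for (K, R, c), (frame_times, uv), dt in zip(cams, obs, offsets):
        t = frame_times + dt          # map local timestamps to the common timeline
        X = trajectory(coeffs, t)     # target positions at the corrected times
        res.append((project(K, R, c, X) - uv).ravel())
    return np.concatenate(res)


# Usage: cams is a list of (K, R, c) tuples, obs a list of (frame_times, uv_measurements)
# per camera. x0 stacks an initial motion polynomial and zero time offsets, and
# least_squares refines both sets of parameters jointly:
# x0 = np.concatenate([coeffs0.ravel(), np.zeros(len(cams))])
# sol = least_squares(residuals, x0, args=(cams, obs))
```

Under the same scheme, the camera rotations could be appended to the parameter vector (e.g., as axis-angle increments) so that they are refined alongside the temporal and motion parameters, in the spirit of the paper's third step.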
Similar Papers
Event-based multi-view photogrammetry for high-dynamic, high-velocity target measurement
CV and Pattern Recognition
Tracks fast objects precisely without missing details.
Online 3D Multi-Camera Perception through Robust 2D Tracking and Depth-based Late Aggregation
CV and Pattern Recognition
Tracks people in 3D from many cameras.
Finding 3D Positions of Distant Objects from Noisy Camera Movement and Semantic Segmentation Sequences
CV and Pattern Recognition
Helps drones locate distant fires despite limited onboard computing.