Self-localization on a 3D map by fusing global and local features from a monocular camera
By: Satoshi Kikuchi, Masaya Kato, Tsuyoshi Tasaki
Potential Business Impact:
Helps self-driving cars know where they are, even when people are around.
Self-localization on a 3D map using an inexpensive monocular camera is required to realize autonomous driving. Camera-based self-localization often uses a convolutional neural network (CNN), which extracts local features computed from nearby pixels. However, when dynamic obstacles such as people are present, the CNN does not work well. This study proposes a new method that combines a CNN with a Vision Transformer, which excels at extracting global features that capture the relationships among patches across the whole image. Experimental results showed that, compared with the state-of-the-art (SOTA) method, the accuracy improvement rate on a CG dataset with dynamic obstacles is 1.5 times higher than that without dynamic obstacles. Moreover, the self-localization error of our method is 20.1% smaller than that of SOTA on public datasets. Additionally, a robot using our method can localize itself with an average error of 7.51 cm, which is more accurate than SOTA.
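The core idea is to fuse the CNN's local features (computed from nearby pixels) with the Vision Transformer's global features (relating patches across the whole image) before estimating the camera pose. The sketch below illustrates one way such a fusion could look; it is not the authors' implementation, and the backbone choices (ResNet-18, ViT-B/16), feature dimensions, and the 6-DoF regression head are assumptions made for illustration only.

```python
# Minimal sketch (not the paper's code) of fusing local CNN features with
# global Vision Transformer features for camera pose regression.
import torch
import torch.nn as nn
import torchvision.models as models

class FusedLocalizer(nn.Module):
    def __init__(self):
        super().__init__()
        # CNN branch: local features from nearby pixels (assumed ResNet-18).
        cnn = models.resnet18(weights=None)
        self.cnn = nn.Sequential(*list(cnn.children())[:-1])  # 512-d pooled feature
        # ViT branch: global features relating patches across the whole image
        # (assumed ViT-B/16); drop the classification head to keep the 768-d token.
        self.vit = models.vit_b_16(weights=None)
        self.vit.heads = nn.Identity()
        # Fusion head (assumed): concatenate both features and regress a
        # 6-DoF pose (x, y, z translation + 3 rotation parameters).
        self.head = nn.Sequential(
            nn.Linear(512 + 768, 256),
            nn.ReLU(),
            nn.Linear(256, 6),
        )

    def forward(self, img):
        local_feat = self.cnn(img).flatten(1)    # (B, 512) local features
        global_feat = self.vit(img)              # (B, 768) global features
        return self.head(torch.cat([local_feat, global_feat], dim=1))

# Usage: ViT-B/16 expects 224x224 inputs; output is a (1, 6) pose estimate.
pose = FusedLocalizer()(torch.randn(1, 3, 224, 224))
```

In this sketch, concatenating the two feature vectors lets the pose head rely on global scene structure when local features are corrupted by dynamic obstacles such as people.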
Similar Papers
Towards an Accurate and Effective Robot Vision (The Problem of Topological Localization for Mobile Robots)
Robotics
Helps robots know where they are using pictures.
Monocular Person Localization under Camera Ego-motion
CV and Pattern Recognition
Helps robots find people even when moving fast.
Graph-based Robot Localization Using a Graph Neural Network with a Floor Camera and a Feature Rich Industrial Floor
CV and Pattern Recognition
Helps robots find their way using floor patterns.