Learned IMU Bias Prediction for Invariant Visual Inertial Odometry
By: Abdullah Altawaitan, Jason Stanley, Sambaran Ghosal, et al.
Potential Business Impact:
Robots move better by learning sensor errors.
Autonomous mobile robots operating in novel environments depend critically on accurate state estimation, often using visual and inertial measurements. Recent work has shown that an invariant formulation of the extended Kalman filter improves the convergence and robustness of visual-inertial odometry by exploiting the Lie group structure of a robot's position, velocity, and orientation states. However, inertial sensors also require measurement bias estimation, and including the bias in the filter state breaks the Lie group symmetry. In this paper, we design a neural network to predict the bias of an inertial measurement unit (IMU) from a sequence of previous IMU measurements. This allows us to use an invariant filter for visual-inertial odometry, relying on the learned bias prediction rather than augmenting the filter state with the bias. We demonstrate that an invariant multi-state constraint Kalman filter (MSCKF) with learned bias predictions achieves robust visual-inertial odometry in real experiments, even when visual information is unavailable for extended periods and the system must rely solely on IMU measurements.
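The core idea admits a short sketch. Below is a minimal, hypothetical PyTorch illustration, not the authors' implementation: a recurrent network consumes a window of raw IMU samples and regresses a 6-D bias (gyroscope and accelerometer), and the predicted bias is subtracted from the raw measurements before the filter's propagation step, so the bias never enters the filter state. All names (`BiasGRU`, `WINDOW`), layer sizes, and the window length are assumptions for illustration.

```python
# Minimal sketch of learned IMU bias prediction (assumed architecture,
# not the paper's implementation). A GRU maps a window of past IMU
# samples to a 6-D bias estimate.

import torch
import torch.nn as nn

WINDOW = 200  # assumed: number of past IMU samples fed to the network


class BiasGRU(nn.Module):
    """Predicts gyroscope and accelerometer bias from a window of raw IMU data."""

    def __init__(self, hidden: int = 64):
        super().__init__()
        # 6 input channels per sample: [omega_x, omega_y, omega_z, a_x, a_y, a_z]
        self.gru = nn.GRU(input_size=6, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 6)  # output: [b_gyro (3), b_accel (3)]

    def forward(self, imu_window: torch.Tensor) -> torch.Tensor:
        # imu_window: (batch, WINDOW, 6)
        _, h = self.gru(imu_window)     # final hidden state: (1, batch, hidden)
        return self.head(h.squeeze(0))  # (batch, 6) predicted bias


# The bias-corrected measurements feed the invariant filter's propagation
# step; the filter state itself never carries the bias, preserving the
# Lie group symmetry the abstract describes.
model = BiasGRU()
imu = torch.randn(1, WINDOW, 6)             # placeholder IMU window
bias = model(imu)                           # (1, 6)
omega_corrected = imu[:, -1, :3] - bias[:, :3]  # debiased angular velocity
accel_corrected = imu[:, -1, 3:] - bias[:, 3:]  # debiased acceleration
```

In this sketch, the design choice mirrors the abstract: because the bias is supplied by the network rather than estimated online, the filter state can remain a pure Lie group element, which is what makes the invariant MSCKF formulation possible.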
Similar Papers
Legged Robot State Estimation Using Invariant Neural-Augmented Kalman Filter with a Neural Compensator
Robotics
Helps robots walk more accurately by learning from mistakes.
A Plug-and-Play Learning-based IMU Bias Factor for Robust Visual-Inertial Odometry
CV and Pattern Recognition
Helps robots and phones figure out where they are more accurately.
Debiasing 6-DOF IMU via Hierarchical Learning of Continuous Bias Dynamics
Robotics
Fixes wobbly phone movement data for better tracking.