It is well known that visual-inertial state estimation is possible up to a four degrees-of-freedom (DoF) transformation (rotation around gravity and translation), and the extra DoFs (“gauge freedom”) have to be handled properly. While different approaches for handling the gauge freedom have been used in practice, no previous study has been carried out to systematically analyze their differences. In this letter, we present the first comparative analysis of different methods for handling the gauge freedom in optimization-based visual-inertial state estimation. We experimentally compare three commonly used approaches: fixing the unobservable states to some given values, setting a prior on such states, or letting the states evolve freely during optimization. Specifically, we show that 1) the accuracy and computational time of the three methods are similar, with the free gauge approach being slightly faster; 2) the covariance estimation from the free gauge approach appears dramatically different, but is actually tightly related to the other approaches. Our findings are validated both in simulation and on real-world data sets and can be useful for designing optimization-based visual-inertial state estimation algorithms.
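The three gauge-handling strategies compared in the paper can be illustrated with a toy problem that is much simpler than the paper's visual-inertial formulation: a 1-D chain of positions observed only through relative measurements, so a global translation is unobservable (a one-parameter gauge freedom, in place of the 4-DoF freedom in VIO). The weight `w` and the measurement values below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Toy 1-D analogue of gauge freedom: estimate positions x0, x1, x2 from
# relative measurements only, so a global shift is unobservable.
# Hypothetical measurements: x1 - x0 = 1.0, x2 - x1 = 2.0
A = np.array([[-1.0, 1.0, 0.0],
              [0.0, -1.0, 1.0]])
b = np.array([1.0, 2.0])

# 1) Gauge fixation: clamp x0 = 0 by removing its column from the problem.
x_fix = np.zeros(3)
x_fix[1:] = np.linalg.solve(A[:, 1:].T @ A[:, 1:], A[:, 1:].T @ b)

# 2) Gauge prior: soft-constrain x0 toward 0 with a large weight w,
# which adds w to the corresponding diagonal entry of the normal matrix.
w = 1e6
H = A.T @ A
H[0, 0] += w
x_prior = np.linalg.solve(H, A.T @ b)

# 3) Free gauge: the normal matrix A.T @ A is singular; lstsq returns the
# minimum-norm solution among the one-parameter family of optima.
x_free = np.linalg.lstsq(A, b, rcond=None)[0]
```

All three estimates satisfy the measurements exactly and differ only by a global shift, mirroring the paper's finding that the approaches yield equivalent results up to the gauge transformation: `x_free - x_free[0]` coincides with `x_fix`.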
- Published in: IEEE Robotics and Automation Letters (Volume: 3 , Issue: 3 , July 2018)
- DOI: 10.1109/LRA.2018.2833152