Authors: Schmuck, Patrik; Chli, Margarita
The estimation of egomotion and scene structure is a key component in automating robot navigation, as well as in virtual reality applications for mobile phones or head-mounted displays. It is well known, however, that with long exploratory trajectories and multi-session mapping for long-term autonomy or collaborative applications, maintaining these ever-growing maps quickly becomes a bottleneck. As the explosion of data drives up the runtime of the optimization algorithms that ensure the accuracy of Simultaneous Localization And Mapping (SLAM) estimates, the sheer quantity of collected experiences imposes hard limits on the scalability of such techniques. Focusing on the keyframe-based paradigm of SLAM, this paper investigates the redundancy inherent in SLAM maps by quantifying the information contributed by different experiences of the scene, as encoded in keyframes. We propose and evaluate several information-theoretic and heuristic metrics for removing dispensable scene measurements with minimal impact on the accuracy of the SLAM estimates. Evaluating the proposed metrics in two state-of-the-art centralized collaborative SLAM systems, we provide key insights into how to identify redundancy in keyframe-based SLAM.
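To make the idea of keyframe redundancy concrete, the sketch below scores each keyframe by the fraction of its landmarks that are already observed by several other keyframes, then greedily culls the most redundant ones. This is an illustrative heuristic in the spirit of common keyframe-culling rules, not the specific information-theoretic metrics evaluated in the paper; the map representation (`observations` as a mapping from keyframe IDs to sets of landmark IDs), the function names, and the thresholds are assumptions made purely for this example.

```python
from collections import defaultdict

def redundancy_scores(observations, min_covisible=3):
    """Score each keyframe by the fraction of its landmarks that are also
    observed by at least `min_covisible` other keyframes.

    observations: dict mapping keyframe ID -> set of observed landmark IDs.
    """
    # Count how many keyframes observe each landmark.
    obs_count = defaultdict(int)
    for landmarks in observations.values():
        for lm in landmarks:
            obs_count[lm] += 1

    scores = {}
    for kf, landmarks in observations.items():
        if not landmarks:
            scores[kf] = 1.0  # a keyframe with no landmarks adds no information
            continue
        # A landmark is "covered" if enough *other* keyframes also observe it.
        covered = sum(1 for lm in landmarks if obs_count[lm] - 1 >= min_covisible)
        scores[kf] = covered / len(landmarks)
    return scores

def cull_redundant(observations, threshold=0.9, min_covisible=3):
    """Greedily remove the most redundant keyframe until every remaining
    keyframe falls below the redundancy threshold."""
    obs = {kf: set(lms) for kf, lms in observations.items()}
    removed = []
    while obs:
        scores = redundancy_scores(obs, min_covisible)
        kf, score = max(scores.items(), key=lambda kv: kv[1])
        if score < threshold:
            break
        removed.append(kf)
        del obs[kf]  # recompute scores on the reduced map before the next cull
    return removed
```

In a centralized collaborative setting such as the systems mentioned above, a score of this kind would typically be recomputed on the server as new keyframes arrive, so that culling decisions account for measurements contributed by all agents.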
Reference
- Published in: 2019 International Conference on 3D Vision (3DV)
- DOI: 10.1109/3DV.2019.00071
- Date: 2019