Collaboration, learning and tests
This project covered all aspects of creating a heterogeneous team: collaboration between different robots, and between robots and human operators. It also included work on learning strategies that help both flying and legged robots adapt quickly to their environment, as well as collaborative testing in different scenarios.
Labs involved: M. Hutter, L. Gambardella, A. Ijspeert, M. Chli, D. Floreano, D. Scaramuzza, R. Siegwart, T. Delbruck
Collective localisation and mapping
While a drone can swiftly gain an overview of the scene, it cannot carry heavy computational units on board, in contrast to a ground robot. Combining different robotic platforms, such as walking robots and drones, can therefore yield effective coordination strategies.
The goal of this part of the project was to give walking and flying robots the ability to share visual information during a search-and-rescue mission and to collectively build a map of the environment while simultaneously determining each robot's position within it.
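The collaborative SLAM systems listed below are far more sophisticated, but the core mechanism, anchoring each robot's locally built map in one shared world frame once an inter-agent loop closure is found, can be sketched in a few lines. Everything in this sketch is hypothetical: the MapServer class, the agent names, and the simplified 2D poses are illustrative stand-ins, not the project's implementation.

```python
# Minimal sketch of centralized collaborative mapping: each robot runs local
# odometry and sends keyframes to a central server, which anchors every
# agent's trajectory in a common world frame once an inter-agent loop
# closure provides the relative transform between their local frames.
import numpy as np

def se2(x, y, theta):
    """Homogeneous 2D transform (simplified stand-in for a full 6-DoF pose)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1]])

class MapServer:
    def __init__(self):
        self.keyframes = {}  # agent_id -> keyframe poses in that agent's frame
        self.anchors = {}    # agent_id -> transform from agent frame to world

    def add_keyframe(self, agent_id, pose):
        self.keyframes.setdefault(agent_id, []).append(pose)

    def register_loop_closure(self, agent_a, agent_b, T_a_b):
        # If agent_a is already anchored, anchor agent_b via the measured
        # relative transform T_a_b (pose of b's frame expressed in a's frame).
        if agent_a in self.anchors and agent_b not in self.anchors:
            self.anchors[agent_b] = self.anchors[agent_a] @ T_a_b

    def global_map(self):
        # Express all anchored keyframes in the shared world frame.
        out = []
        for agent_id, T_world_agent in self.anchors.items():
            out += [T_world_agent @ kf for kf in self.keyframes.get(agent_id, [])]
        return out

server = MapServer()
server.anchors["drone"] = se2(0, 0, 0)  # the drone defines the world frame
server.add_keyframe("drone", se2(1.0, 0.0, 0.0))
server.add_keyframe("anymal", se2(0.5, 0.0, 0.0))
# Inter-agent loop closure: ANYmal's frame observed 2 m ahead of the drone's origin.
server.register_loop_closure("drone", "anymal", se2(2.0, 0.0, 0.0))
print(len(server.global_map()))  # -> 2 keyframes, now in one common frame
```

In the real systems, the anchoring transform comes from place recognition between agents' keyframes, and the merged map is then refined by global optimization rather than applied as a single rigid transform.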
Related publications
- P. Schmuck and M. Chli, "On the Redundancy Detection in Keyframe-Based SLAM", IEEE International Conference on 3D Vision (3DV), Quebec City, Sept. 2019.
- V. Reijgwart, A. Millane, H. Oleynikova, R. Siegwart, C. Cadena, and J. Nieto, "Voxgraph: Globally Consistent, Volumetric Mapping Using Signed Distance Function Submaps", IEEE Robotics and Automation Letters, vol. 5, no. 1, pp. 227–234, Jan. 2020.
- H. Oleynikova, C. Lanegger, Z. Taylor, M. Pantic, A. Millane, R. Siegwart, and J. Nieto, "An Open-Source System for Vision-Based Micro-Aerial Vehicle Mapping, Planning, and Flight in Cluttered Environments", Journal of Field Robotics, vol. 37, no. 4, pp. 642–666, April 2020.
- L. Teixeira, M. R. Oswald, M. Pollefeys and M. Chli, "Aerial Single-View Depth Completion with Image-Guided Uncertainty Estimation", IEEE Robotics and Automation Letters, vol. 5, no. 2, pp. 1055–1062, April 2020.
Machine learning for rescue robots
The role of machine learning (ML), and in particular deep learning (DL), in NCCR Robotics has been steadily growing: learning-based approaches have been used extensively for quadrotor control, perception, ground-robot navigation, and human-robot interaction, and we now rely heavily on machine learning for both flying robots and legged locomotion.
One of the main open problems in the field remains how to transfer learning strategies developed in simulation to the real world, and how to adapt across real-world domains (for example, from indoor obstacle avoidance to outdoor forest navigation).
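One widely used way to narrow this gap is domain randomization: resampling the simulator's physics and appearance parameters every episode so that the learned policy cannot overfit to a single simulated world, and the real world looks like just one more variation. The sketch below is a generic illustration of that idea; RandomizedSim and its parameter ranges are hypothetical, not the project's actual training setup.

```python
# Illustrative sketch of domain randomization for sim-to-real transfer:
# simulation parameters are resampled at every episode reset, so a policy
# trained in this environment must be robust to the whole parameter range.
import random

class RandomizedSim:
    def reset(self):
        # Resample simulation parameters at the start of each episode.
        self.friction = random.uniform(0.4, 1.2)      # ground contact
        self.mass_scale = random.uniform(0.8, 1.2)    # payload uncertainty
        self.latency_ms = random.choice([0, 10, 20])  # sensing/actuation delay
        self.texture_id = random.randrange(100)       # visual appearance
        return self.observe()

    def observe(self):
        # Placeholder observation; a real simulator would render sensors here.
        return (self.friction, self.mass_scale, self.latency_ms)

    def step(self, action):
        # Placeholder dynamics; reward would come from task progress.
        reward, done = 0.0, False
        return self.observe(), reward, done

env = RandomizedSim()
for episode in range(3):
    obs = env.reset()  # a new random world each episode
    obs, reward, done = env.step(action=0)
```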
The groups of Luca Gambardella and Marco Hutter, for example, collaborated on the use of machine learning to allow the quadruped ANYmal to walk on unknown terrain.
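As a rough sketch of how such a learned model can feed into navigation, the example below plugs a traversability cost, predicted from a local elevation patch, into a standard Dijkstra planner. The predicted_traversal_cost function is a hypothetical stand-in for a trained network, and the whole example is illustrative rather than the groups' actual pipeline.

```python
# Illustrative sketch: a learned per-cell traversal cost steering a grid planner.
import heapq
import numpy as np

def predicted_traversal_cost(elevation_patch):
    # Stand-in for a trained network: rougher terrain -> higher cost.
    return 1.0 + 10.0 * float(np.std(elevation_patch))

def plan(elevation, start, goal):
    """Dijkstra over the grid, with cells weighted by the learned cost."""
    h, w = elevation.shape
    dist, prev = {start: 0.0}, {}
    queue = [(0.0, start)]
    while queue:
        d, (r, c) = heapq.heappop(queue)
        if (r, c) == goal:
            break
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if not (0 <= nr < h and 0 <= nc < w):
                continue
            # Cost of entering a cell depends on its local elevation patch.
            patch = elevation[max(nr - 1, 0):nr + 2, max(nc - 1, 0):nc + 2]
            nd = d + predicted_traversal_cost(patch)
            if nd < dist.get((nr, nc), float("inf")):
                dist[(nr, nc)] = nd
                prev[(nr, nc)] = (r, c)
                heapq.heappush(queue, (nd, (nr, nc)))
    # Reconstruct the path from goal back to start.
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    return [start] + path[::-1]

terrain = np.zeros((20, 20))
terrain[8:12, 5:15] = np.random.rand(4, 10)  # a band of rough terrain
print(plan(terrain, (0, 0), (19, 19))[:5])   # path detours around the band
```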
Related publications
- M. Nava, L. M. Gambardella, A. Giusti, "Object Permanence for Self-Supervised Learning", RSS Workshop on Self-Supervised Robot Learning, 2020.
- M. Nava, D. Mantegazza, L. M. Gambardella, A. Giusti, "Supervised and Unsupervised Domain Adaptation Techniques for Visual Perception Tasks" (submitted to IEEE Robotics and Automation Letters).
- Y. Hu, T. Delbruck, S.-C. Liu, "Learning to Exploit Multiple Vision Modalities by Using Grafted Networks", European Conference on Computer Vision (ECCV), 2020.
- J. Guzzi, R. O. Chavez-Garcia, M. Nava, L. M. Gambardella, A. Giusti, "Path Planning with Local Motion Estimations", IEEE Robotics and Automation Letters, vol. 5, no. 2, pp. 2586–2593, April 2020.
ARCHE
Each year, NCCR Robotics' rescue robots were tested together in a joint effort with Armasuisse and with support from the Swiss Rescue and Ordnance Disposal Units (LVb G/Rttg/ABC) during the ARCHE (Advanced Robotics Capabilities for Hazardous Environments) event.
This exercise included applications such as mapping, firefighting, obstacle removal, localization of hazardous materials, and recovery of casualties. Flying and walking robots were deployed together, forming a heterogeneous team in which, for example, drones first mapped the area, providing the ground robot with initial information about the layout of the environment and the location of the entrance to the building where a potential victim might be trapped.
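This aerial-to-ground handoff can be pictured as a simple message exchange. The sketch below is purely illustrative: AerialSurvey and its fields are hypothetical, and a real deployment would share full 3D maps over robotics middleware rather than a small Python dataclass.

```python
# Illustrative sketch (hypothetical message format) of the handoff described
# above: the drone shares its map and a detected entrance, which the walking
# robot adopts as its navigation goal.
from dataclasses import dataclass

@dataclass
class AerialSurvey:
    occupancy: list  # coarse 2D grid from the drone's mapping pass
    entrance: tuple  # (row, col) of the detected building entrance
    notes: str = ""

def ground_robot_mission(survey: AerialSurvey):
    # The ground robot starts from the drone's prior map instead of
    # exploring blindly, and heads straight for the reported entrance.
    print(f"Loading prior map ({len(survey.occupancy)} rows)")
    print(f"Navigating to entrance at {survey.entrance}")

survey = AerialSurvey(
    occupancy=[[0] * 10 for _ in range(10)],
    entrance=(4, 9),
    notes="potential victim inside",
)
ground_robot_mission(survey)
```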
Related publications
- J. Delmerico et al., "The current state and future outlook of rescue robotics", Journal of Field Robotics, vol. 36, no. 7, pp. 1171–1191, Oct. 2019.