Collaboration, learning and tests
This project covers all aspects of building a heterogeneous rescue team: collaboration between different robots, and between robots and human operators. It includes work on learning strategies that help both flying and legged robots adapt quickly to their environment, as well as collaborative testing in the different scenarios.
Collective localisation and mapping
While a drone can swiftly gain an overview of the scene, it cannot carry heavy computational units on board, in contrast to a ground robot. Combining different robotic platforms, such as walking robots and drones, can therefore yield effective coordination strategies.
The goal of this part of the project is to give walking and flying robots the ability to share visual information during a search-and-rescue mission and collectively build a map of the environment, while determining each robot’s position within it.
- Schmuck and M. Chli, “On the Redundancy Detection in Keyframe- based SLAM”, IEEE International Conference on 3D Vision (3DV), Quebec City, Sept. 2019.
- Reijgwart, A. Millane, H. Oleynikova, R. Siegwart, C. Cadena, and J. Nieto, “Voxgraph: Globally Consistent, Volumetric Mapping Using Signed Distance Function Submaps”, IEEE Robotics and Automation Letters, vol. 5, no. 1, pp. 227–234, Jan. 2020.
- Oleynikova, C. Lanegger, Z. Taylor, M. Pantic, A. Millane, R. Siegwart, and J. Nieto, “An Open-Source System for Vision-Based Micro-Aerial Vehicle Mapping, Planning, and Flight in Cluttered Environments”, Journal of Field Robotics, vol. 37, no. 4, pp. 642–666, April 2020.
- Pinto Teixeira, M. R. Oswald, M. Pollefeys and M. Chli, “Aerial Single-View Depth Completion with Image-Guided Uncertainty Estimation”, IEEE Robotics and Automation Letters, vol. 5, no. 2, pp. 1055–1062, Jan. 2020.
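The core of collaborative mapping is expressing every robot's keyframes in one shared frame once a cross-robot loop closure is found. The sketch below is a minimal 2D illustration of that map-merging step, not the actual SLAM pipeline used in the project; all function names and values are hypothetical:

```python
import numpy as np

def se2(x, y, theta):
    """Homogeneous 2D transform (a 2D stand-in for the full SE(3) case)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0,  0, 1]])

def merge_maps(poses_a, poses_b, T_a_b):
    """Express robot B's keyframe poses in robot A's map frame.

    poses_a, poses_b: lists of 3x3 homogeneous keyframe poses, each in its
    own robot's odometry frame.
    T_a_b: relative transform taking B's frame into A's frame, e.g. from a
    cross-robot loop closure (place recognition between two keyframes).
    Returns one combined list of poses, all expressed in A's frame.
    """
    return poses_a + [T_a_b @ p for p in poses_b]

# Hypothetical example: B's map frame sits 5 m ahead of A's along x.
poses_a = [se2(0, 0, 0), se2(1, 0, 0)]
poses_b = [se2(0, 0, 0), se2(1, 0, 0)]
T_a_b = se2(5.0, 0.0, 0.0)

merged = merge_maps(poses_a, poses_b, T_a_b)
print(merged[2][:2, 2])  # B's first keyframe lands at (5, 0) in A's frame
```

In a full system this rigid alignment would be followed by a joint pose-graph optimisation to distribute the loop-closure error, as in the keyframe-based and submap-based approaches cited above.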
Machine learning for rescue robots
The role of Machine Learning (ML), and in particular Deep Learning (DL) in NCCR Robotics has been steadily growing: learning-based approaches have been extensively used for quadrotor control, perception, ground robot navigation, and human-robot interaction, and we heavily rely on machine learning for flying robots and legged locomotion.
One of the main open problems in the field remains how to transfer learning strategies developed in simulation to the real world, and how to adapt across different real-world domains (for example, from indoor obstacle avoidance to outdoor forest navigation).
The groups of Luca Gambardella and Marco Hutter, for example, collaborate on the use of machine learning to allow ANYmal to walk on unknown terrains.
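One common way to narrow the sim-to-real gap mentioned above is domain randomization: training over many randomized simulator configurations so the learned policy does not overfit to any single one. The sketch below illustrates only that sampling loop; the parameter names and ranges are hypothetical, and the simulator calls are stubbed out:

```python
import random

# Hypothetical randomization ranges; real values would come from the
# target simulator and robot.
PARAM_RANGES = {
    "friction":    (0.4, 1.2),   # ground friction coefficient
    "payload_kg":  (0.0, 2.0),   # extra mass carried by the robot
    "motor_gain":  (0.8, 1.2),   # actuator strength multiplier
    "light_level": (0.3, 1.0),   # illumination, for visual observations
}

def sample_domain(rng):
    """Draw one randomized simulator configuration."""
    return {k: rng.uniform(lo, hi) for k, (lo, hi) in PARAM_RANGES.items()}

def training_configs(num_episodes, seed=0):
    """Yield a freshly randomized configuration for each training episode."""
    rng = random.Random(seed)
    for _ in range(num_episodes):
        cfg = sample_domain(rng)
        # env = make_sim_env(**cfg)          # hypothetical simulator factory
        # rollout_and_update_policy(env)     # hypothetical learning step
        yield cfg

configs = list(training_configs(3))
for cfg in configs:
    print(cfg)
```

Exposing the policy to this spread of dynamics and appearance during training is what makes the real world look like "just another sample" at deployment time.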
- Nava, L. M. Gambardella, A. Giusti, “Object Permanence for Self-Supervised Learning”, RSS Workshop on Self-Supervised Robot Learning, 2020.
- Nava, D. Mantegazza, L. M. Gambardella, A. Giusti, “Supervised and Unsupervised Domain Adaptation Techniques for Visual Perception Tasks” (submitted to IEEE Robotics and Automation Letters).
- Hu, T. Delbruck, S.-C. Liu, “Learning to Exploit Multiple Vision Modalities by Using Grafted Networks”, European Conference on Computer Vision (ECCV), 2020.
- Guzzi, R. O. Chavez-Garcia, M. Nava, L. M. Gambardella, A. Giusti, “Path Planning with Local Motion Estimations”, IEEE Robotics and Automation Letters, vol. 5, no. 2, pp. 2586–2593, April 2020.
Each year, NCCR Robotics’s rescue robots are collectively tested in a joint effort with Armasuisse and with support from the Swiss Rescue and Ordnance Disposal Units (LVb G/ Rttg/ABC) during the event ARCHE (Advanced Robotics Capabilities for Hazardous Environments).
This exercise includes applications such as mapping, firefighting, obstacle removal, localization of hazardous materials, and recovery of casualties. Flying and walking robots are deployed together and form a heterogeneous team where, for example, drones first map the area, providing the ground robots with initial information about what the environment looks like and where the entrance to the building, with a potential victim, is located.
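The drone-then-ground-robot workflow described above can be sketched with a toy example: the drone's overflight yields an occupancy grid with the entrance marked, and the ground robot plans a shortest path to it. This is a simplified illustration with a made-up grid, not the planner used at ARCHE:

```python
from collections import deque

# Toy occupancy grid a drone might produce after an overflight:
# '#' obstacle, '.' free space, 'S' ground robot start, 'E' building entrance.
GRID = [
    "S..#....",
    ".#.#.##.",
    ".#...#..",
    "...#.#.E",
]

def bfs_path(grid, start_ch="S", goal_ch="E"):
    """Shortest 4-connected path on the drone's grid for the ground robot."""
    rows, cols = len(grid), len(grid[0])
    start = next((r, c) for r in range(rows) for c in range(cols)
                 if grid[r][c] == start_ch)
    goal = next((r, c) for r in range(rows) for c in range(cols)
                if grid[r][c] == goal_ch)
    prev = {start: None}          # visited set + back-pointers
    queue = deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:        # reconstruct path by walking back-pointers
            path, node = [], goal
            while node is not None:
                path.append(node)
                node = prev[node]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != "#" and (nr, nc) not in prev):
                prev[(nr, nc)] = (r, c)
                queue.append((nr, nc))
    return None  # entrance unreachable on this grid

path = bfs_path(GRID)
print(len(path) - 1, "steps to the entrance")  # prints: 14 steps to the entrance
```

In the real exercise the "grid" is a 3D map built from the drone's cameras, and the legged robot plans over terrain cost rather than binary occupancy, but the division of labour is the same: the fast platform scouts, the capable platform executes.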