After a natural disaster such as an earthquake or flood, it is often too dangerous for teams of rescue workers to enter affected areas to locate victims. The idea behind robots for rescue activities is to create robust robots that can travel into areas too dangerous for humans and rescue dogs. Robots can assess the situation, locate people who may be trapped and relay their positions back to the rescue teams, so that efforts are concentrated where victims are known to be. Robots are also being developed to carry items such as medical supplies and food to known victims, thereby focusing resources where the need is greatest. The main research issues in mobile robotics for search-and-rescue missions centre on durability and usability: how to design robots that are easily transported, function efficiently in all weather conditions, have long-lasting power, can navigate themselves and carry sensors effective enough to pick out victims.
In Phase III (2018-2022), the work of our Rescue Robotics Grand Challenge is organised into three projects:
Search-and-rescue requires adaptability and agility far beyond those of current state-of-the-art drones. For instance, the tasks of flying inside partially collapsed buildings, manoeuvring at high speed in a forest or among buildings, safely transporting cargo of varying size and weight in complex environments, performing dexterous yet forceful manipulation, or transitioning to the ground for inspection require a completely new class of drones. The main objective of this project is to develop a new class of agile flying vehicles for gathering data, collecting items and delivering first-aid tools both over extended areas and in cluttered environments. This objective will be addressed by advancing the state of the art in mechanical design, electronic design and control algorithms.
On the mechanical design side, the design principles of adaptive morphology are combined with the use of soft and smart materials to develop morphologically adaptive drones that can reshape their body to meet the requirements imposed by various tasks and environments and can perform multiple functions.
On the electronic design side, we develop convolutional-neural-network processing architectures implemented in FPGA logic circuits placed immediately after the sensor. In addition, neuromorphic silicon-retina event cameras, such as the DAVIS sensors, are being further miniaturised so that they can be mounted on small drones.
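Unlike a standard camera, an event camera reports per-pixel brightness changes asynchronously. A minimal sketch of how such an event stream can be turned into a frame-like representation for downstream processing is shown below; the `(x, y, t, polarity)` tuple layout and the fixed-window accumulation strategy are illustrative assumptions, not the actual DAVIS driver interface.

```python
# Sketch: accumulate DVS-style events into a 2D event frame.
# An event is (x, y, t, polarity); polarity is +1 (brightness increase)
# or -1 (decrease). This tuple format is an assumption for illustration.

def events_to_frame(events, width, height, t_start, t_end):
    """Sum event polarities per pixel over the time window [t_start, t_end)."""
    frame = [[0] * width for _ in range(height)]
    for x, y, t, pol in events:
        if t_start <= t < t_end:
            frame[y][x] += pol
    return frame

# Example: two events cancel at pixel (0, 0); a late event falls outside
# the window and is ignored.
events = [(0, 0, 0.001, +1), (1, 0, 0.002, +1),
          (0, 0, 0.003, -1), (1, 1, 0.020, +1)]
frame = events_to_frame(events, width=2, height=2, t_start=0.0, t_end=0.010)
```

In hardware, the same accumulation can be done in FPGA logic directly at the sensor output, avoiding the latency of shipping raw events to a host processor.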
On the algorithmic side, we develop a new class of control and perception algorithms that, by combining standard sensors with neuromorphic ones, are robust to low-texture and high-dynamic-range scenes as well as to high-speed motion. Additionally, we develop algorithms that are able to tune and reconfigure their parameters on the fly.
In addition to flying robots, legged robots can play a key role in helping a rescue team because of their ability to carry increased payloads, perform long-duration missions, interact with the environment and locomote in complex terrains. Legged robots operating on such ground must be able to apply different gaits and manoeuvres depending on the terrain and obstacles.
NCCR Robotics has developed two field-ready platforms, ANYmal and Krock-2, that can face some of the most challenging situations, in which the robots have to work in mud or water. The objective of this project is to advance the state of the art in visual and haptic environment perception, navigation, motion planning and control for legged robots. The proposed algorithms must be efficient and sophisticated enough to quickly and robustly handle new situations: navigation in more complex terrains (e.g. tight spaces, steep slopes, large obstacles, water, mud), operation in different environmental conditions (e.g. wind, water, mud), transport of cargo of different dimensions and masses, and grasping and manipulation of samples, including tactile perception during grasping and opening doors. Additionally, we tackle hardware failure: the robots must be able to detect and recover from failures and cope with degraded modes.
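The failure-handling idea can be sketched as follows. This is a hedged illustration under assumed thresholds and gait names, not the controllers actually running on ANYmal or Krock-2: a leg is flagged as failed when its joint tracking error grows too large, and the gait is then downgraded to one that remains feasible with the working legs.

```python
# Sketch: detect non-responsive legs and fall back to a degraded gait.
# Thresholds, gait names and the leg interface are illustrative assumptions.

def detect_failed_legs(tracking_errors, threshold=0.5):
    """Return indices of legs whose joint tracking error (rad) exceeds threshold."""
    return [i for i, e in enumerate(tracking_errors) if e > threshold]

def choose_gait(num_legs, failed_legs):
    """Pick a gait compatible with the number of legs still working."""
    working = num_legs - len(failed_legs)
    if working == num_legs:
        return "trot"          # nominal dynamic gait, all legs healthy
    if working == num_legs - 1:
        return "static_crawl"  # slower, statically stable three-leg support
    return "halt_and_report"   # too many failures: stop safely, alert operator
```

The key design point is that the recovery policy is chosen from the detected degraded mode rather than assuming nominal hardware.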
Collaboration, learning and tests
From single robots to a heterogeneous robotic team
This project is responsible for all aspects of creating a heterogeneous team: collaboration among different robots and between robots and human operators. It includes work on learning strategies that are useful for both flying and legged robots and allow them to adapt quickly to the environment, as well as collaborative testing in the different scenarios.
It has four main objectives: 1) to extend the work on collaboration between robots in heterogeneous teams, in particular collaborative mapping and localization, collaborative traversability estimation and collaborative scene understanding, 2) to add deep learning abilities for airborne-aided terrain understanding and traversability map extraction, 3) to enable symbiotic interactions between human operators and robots, and 4) to coordinate the implementation of field tests in the two SAR scenarios.
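To illustrate objective 2, a traversability map classifies each cell of a terrain map as passable or not for a ground robot, for instance from an elevation map provided by a drone. The hand-crafted slope rule below is only a minimal sketch of the concept; the project's deep-learning approach would replace such rules with a learned classifier. Grid layout, cell size and slope threshold are assumptions.

```python
# Sketch: extract a binary traversability map from a gridded elevation map
# (heights in metres). A cell is traversable only if the slope to each of
# its 4-neighbours stays below max_slope (rise over run).

def traversability(elev, cell_size=0.1, max_slope=0.4):
    """Return a grid of 1 (traversable) / 0 (non-traversable) cells."""
    rows, cols = len(elev), len(elev[0])
    tmap = [[1] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols:
                    slope = abs(elev[nr][nc] - elev[r][c]) / cell_size
                    if slope > max_slope:
                        tmap[r][c] = 0
    return tmap

# Example: a 0.5 m step between the second and third columns exceeds the
# slope limit, so the cells on either side of it are marked non-traversable.
elev = [[0.0, 0.0, 0.5],
        [0.0, 0.0, 0.5],
        [0.0, 0.0, 0.5]]
tmap = traversability(elev)
```

In the collaborative setting, several robots would contribute elevation data to a shared map, and each platform would apply its own traversability criterion to the common terrain model.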