Looking for publications? All our latest publications are listed here, and you can also use our search functions to find what you are looking for. Power users may also want to search the EPFL Infoscience site, which provides advanced publication search capabilities.

Keep Rollin’ – Whole-Body Motion Control and Planning for Wheeled Quadrupedal Robots

Authors: Bjelonic, M.; Bellicoso, C. D.; de Viragh, Y.; Sako, D.; Tresoldi, F. D.; Jenelten, F.; Hutter, M.

 

  • Published in: IEEE Robotics and Automation Letters
We present dynamic locomotion strategies for wheeled quadrupedal robots, which combine the advantages of both walking and driving. The developed optimization framework tightly integrates the additional degrees of freedom introduced by the wheels. Our approach relies on a zero-moment-point-based motion optimization which continuously updates reference trajectories. The reference motions are tracked by a hierarchical whole-body controller, which computes optimal generalized accelerations and contact forces by solving a sequence of prioritized tasks, including the nonholonomic rolling constraints. Our approach has been tested on ANYmal, a quadrupedal robot that is fully torque-controlled, including the non-steerable wheels attached to its legs. We conducted experiments on flat and inclined terrains as well as over steps, showing that integrating the wheels into the motion control and planning framework results in intuitive motion trajectories, which enable more robust and dynamic locomotion compared to other wheeled-legged robots. Moreover, with a speed of 4 m/s and an 83% reduction in the cost of transport, we demonstrate the advantage of wheeled-legged robots over their legged counterparts.
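
As a rough illustration of the prioritized-task idea behind a hierarchical whole-body controller, the sketch below solves each task in the null space of all higher-priority tasks, so a hard constraint (such as a rolling constraint) can never be violated by a lower-priority tracking task. This is a minimal sketch under generic assumptions, not the controller used on ANYmal; the task matrices are placeholders.

```python
# Minimal sketch of a prioritized task hierarchy: each task A_i x = b_i is
# solved in the null space of all higher-priority tasks (toy task matrices).
import numpy as np

def solve_hierarchy(tasks, n):
    """tasks: list of (A, b) in decreasing priority; n: decision-variable size."""
    x = np.zeros(n)            # accumulated solution
    N = np.eye(n)              # null-space projector of all tasks solved so far
    for A, b in tasks:
        AN = A @ N
        # least-squares step restricted to the remaining null space
        dz = np.linalg.pinv(AN) @ (b - A @ x)
        x = x + N @ dz
        # shrink the null space by the rows this task has consumed
        N = N @ (np.eye(n) - np.linalg.pinv(AN) @ AN)
    return x

if __name__ == "__main__":
    n = 4
    rolling_constraint = (np.array([[1.0, 0.0, 0.0, 0.0]]), np.array([0.0]))  # highest priority
    motion_tracking = (np.random.randn(2, n), np.random.randn(2))             # lower priority
    print(solve_hierarchy([rolling_constraint, motion_tracking], n))
```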

Reference

Posted on: February 11, 2019

Challenges and implemented technologies used in autonomous drone racing

Authors: Moon, H.; Martinez-Carranza, J.; Cieslewski, T.; Faessler, M.; Falanga, D.; Simovic, A.; Scaramuzza, D.; Li, S.; Ozo, M.; De Wagter, C.; de Croon, G.; Hwang, S.; Jung, S.; Shim, H.; Kim, H.; Park, M.; Au, T. C.; Kim, S. J.

 

  • Published in: Intelligent Service Robotics (Springer)
Autonomous Drone Racing (ADR) is a challenge in which autonomous drones must navigate a cluttered indoor environment without relying on any external sensing: all sensing and computing must be done on board. Although no team has completed the whole racing track so far, most successful teams implemented waypoint-tracking methods and robust visual recognition of the distinctly colored gates, since the complete environmental information was given to participants before the events. In this paper, we introduce the purpose of ADR as a benchmark testing ground for autonomous drone technologies and analyze the challenges and technologies used in the two previous ADRs held at IROS 2016 and IROS 2017. Six teams that participated in these events present their implemented technologies, which cover a modified ORB-SLAM, a robust alignment method for waypoint deployment, sensor fusion for motion estimation, deep learning for gate detection and motion control, and stereo vision for gate detection.
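
To give a flavor of the color-based gate recognition mentioned above, here is a hedged, illustrative sketch (not any team's actual code): threshold the gate color in HSV, take the largest contour, and use its centroid as a waypoint. The HSV range and threshold values are made up, and the contour API assumes OpenCV 4.

```python
# Toy color-based gate detection: HSV threshold -> largest contour -> centroid.
import cv2
import numpy as np

def detect_gate_center(bgr, hsv_lo=(100, 120, 70), hsv_hi=(130, 255, 255)):
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_lo), np.array(hsv_hi))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    gate = max(contours, key=cv2.contourArea)      # assume the gate is the largest blob
    m = cv2.moments(gate)
    if m["m00"] == 0:
        return None
    return (m["m10"] / m["m00"], m["m01"] / m["m00"])  # pixel coordinates of the gate center

# toy usage: a blue rectangle on a black image stands in for a gate
img = np.zeros((240, 320, 3), np.uint8)
cv2.rectangle(img, (120, 80), (200, 160), (255, 0, 0), -1)
print(detect_gate_center(img))
```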

Reference

  • Paper
  • Date: 2019
Posted on: February 4, 2019

Semi-Dense 3D Reconstruction with a Stereo Event Camera

Authors: Zhou, Yi; Gallego, Guillermo; Rebecq, Henri; Kneip, Laurent; Li, Hongdong; Scaramuzza, Davide

 

  • Presented at: European Conference on Computer Vision (ECCV), Munich, 2018

Event cameras are bio-inspired sensors that offer several advantages, such as low latency, high speed, and high dynamic range, to tackle challenging scenarios in computer vision. This paper presents a solution to the problem of 3D reconstruction from data captured by a stereo event-camera rig moving in a static scene, such as in the context of stereo Simultaneous Localization and Mapping. The proposed method consists of the optimization of an energy function designed to exploit small-baseline spatio-temporal consistency of events triggered across both stereo image planes. To improve the density of the reconstruction and to reduce the uncertainty of the estimation, a probabilistic depth-fusion strategy is also developed. The resulting method has no special requirements on either the motion of the stereo event-camera rig or on prior knowledge about the scene. Experiments demonstrate that our method can handle both texture-rich and sparse scenes, outperforming state-of-the-art stereo methods based on event data image representations.
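
As a minimal sketch of the depth-fusion idea, under standard Gaussian assumptions (not the paper's exact formulation), independent inverse-depth estimates of the same pixel can be fused by multiplying their Gaussians, which always reduces the uncertainty. The numbers below are toy values.

```python
# Fuse two Gaussian inverse-depth estimates (mean, variance) of the same pixel.
def fuse(mu_a, var_a, mu_b, var_b):
    var = 1.0 / (1.0 / var_a + 1.0 / var_b)     # fused variance is always smaller
    mu = var * (mu_a / var_a + mu_b / var_b)    # precision-weighted mean
    return mu, var

# two noisy estimates of the same scene point (inverse depth in 1/m)
print(fuse(0.50, 0.04, 0.46, 0.02))
```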

Reference

  • Detailed record: arXiv
  • Date: 2018
Posted on: January 4, 2019

Continuous-Time Visual-Inertial Odometry for Event Cameras

Authors: Mueggler, Elias; Gallego, Guillermo; Rebecq, Henri; Scaramuzza, Davide

 

Event cameras are bio-inspired vision sensors that output pixel-level brightness changes instead of standard intensity frames. They offer significant advantages over standard cameras, namely a very high dynamic range, no motion blur, and a latency on the order of microseconds. However, due to the fundamentally different structure of the sensor’s output, new algorithms that exploit the high temporal resolution and the asynchronous nature of the sensor are required. Recent work has shown that a continuous-time representation of the event camera pose can deal with the high temporal resolution and asynchronous nature of this sensor in a principled way. In this paper, we leverage such a continuous-time representation to perform visual-inertial odometry with an event camera. This representation allows direct integration of the asynchronous events with microsecond accuracy and of the inertial measurements at high frequency. The event camera trajectory is approximated by a smooth curve in the space of rigid-body motions using cubic splines. This formulation significantly reduces the number of variables in trajectory estimation problems. We evaluate our method on real data from several scenes and compare the results against ground truth from a motion-capture system. We show that our method provides improved accuracy over a state-of-the-art visual odometry method for event cameras. We also show that both the map orientation and scale can be recovered accurately by fusing events and inertial data. To the best of our knowledge, this is the first work on visual-inertial fusion with event cameras using a continuous-time framework.
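
The sketch below shows the continuous-time idea in simplified form: a uniform cumulative cubic B-spline over control values can be queried at the exact (asynchronous) timestamp of every event or IMU sample. The paper works with rigid-body motions on SE(3); for brevity this toy version interpolates only 3D positions, which follows the same cumulative-basis formula, and the control points are invented.

```python
# Uniform cumulative cubic B-spline evaluated at an arbitrary normalized time u.
import numpy as np

# cumulative basis matrix for a uniform cubic B-spline (rows -> basis functions)
C = (1.0 / 6.0) * np.array([[6, 0, 0, 0],
                            [5, 3, -3, 1],
                            [1, 3, 3, -2],
                            [0, 0, 0, 1]], dtype=float)

def spline_position(ctrl, i, u):
    """Evaluate the segment between control times i and i+1 at u in [0,1).
    ctrl: (N,3) control positions; uses ctrl[i-1..i+2]."""
    b = C @ np.array([1.0, u, u * u, u ** 3])    # cumulative basis values
    p = ctrl[i - 1].copy()
    for j in range(1, 4):                        # accumulate scaled control-point differences
        p += b[j] * (ctrl[i - 1 + j] - ctrl[i - 2 + j])
    return p

ctrl = np.array([[0, 0, 0], [1, 0, 0], [2, 1, 0], [3, 1, 1], [4, 2, 1]], float)
print(spline_position(ctrl, 1, 0.5))   # position halfway between control times 1 and 2
```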

Reference

Posted on: January 4, 2019

Deep Drone Racing: Learning Agile Flight in Dynamic Environments

Authors: Kaufmann, Elia; Loquercio, Antonio; Ranftl, Rene; Dosovitskiy, Alexey; Koltun, Vladlen; Scaramuzza, Davide

 

  • Presented at: Conference on Robotic Learning (CoRL) 2018, Zurich

Autonomous agile flight brings up fundamental challenges in robotics, such as coping with unreliable state estimation, reacting optimally to dynamically changing environments, and coupling perception and action in real time under severe resource constraints. In this paper, we consider these challenges in the context of autonomous, vision-based drone racing in dynamic environments. Our approach combines a convolutional neural network (CNN) with a state-of-the-art path-planning and control system. The CNN directly maps raw images into a robust representation in the form of a waypoint and desired speed. This information is then used by the planner to generate a short, minimum-jerk trajectory segment and corresponding motor commands to reach the desired goal. We demonstrate our method in autonomous agile flight scenarios, in which a vision-based quadrotor traverses drone-racing tracks with possibly moving gates. Our method does not require any explicit map of the environment and runs fully onboard. We extensively test the precision and robustness of the approach in simulation and in the physical world. We also evaluate our method against state-of-the-art navigation approaches and professional human drone pilots.
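
As a hypothetical illustration of the planning step described above (not the authors' planner), suppose the CNN has already produced a waypoint and a desired speed; a short minimum-jerk segment toward that waypoint can then be generated. For simplicity this uses the classic rest-to-rest minimum-jerk time scaling per axis and picks the duration so the average speed matches the predicted speed.

```python
# Toy rest-to-rest minimum-jerk segment toward a CNN-predicted waypoint.
import numpy as np

def min_jerk_segment(p0, p_goal, desired_speed, n_samples=20):
    """Return sampled positions of a rest-to-rest minimum-jerk segment and its duration."""
    p0, p_goal = np.asarray(p0, float), np.asarray(p_goal, float)
    duration = np.linalg.norm(p_goal - p0) / max(desired_speed, 1e-6)  # simplistic timing
    tau = np.linspace(0.0, 1.0, n_samples)                  # normalized time
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5               # minimum-jerk time scaling
    return p0 + s[:, None] * (p_goal - p0), duration

traj, T = min_jerk_segment([0, 0, 1], [2.0, 1.0, 1.5], desired_speed=3.0)
print("segment duration %.2f s, first samples:\n" % T, traj[:3])
```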

Reference

  • Detailed record: arXiv
  • Date: 2018
Posted on: January 4, 2019

ESIM: an Open Event Camera Simulator

Authors: Rebecq, Henri; Gehrig, Daniel; Scaramuzza, Davide

 

  • Published in: Proceedings of the 2nd Conference on Robot Learning, PMLR 2018

Event cameras are revolutionary sensors that work radically differently from standard cameras. Instead of capturing intensity images at a fixed rate, event cameras measure changes of intensity asynchronously, in the form of a stream of events, which encode per-pixel brightness changes. In the last few years, their outstanding properties (asynchronous sensing, no motion blur, high dynamic range) have led to exciting vision applications with very low latency and high robustness. However, these sensors are still scarce and expensive to obtain, slowing down the progress of the research community. To address these issues, there is a huge demand for cheap, high-quality synthetic, labeled event data for algorithm prototyping, deep learning, and algorithm benchmarking. The development of such a simulator, however, is not trivial, since event cameras work fundamentally differently from frame-based cameras. We present the first event camera simulator that can generate a large amount of reliable event data. The key component of our simulator is a theoretically sound, adaptive rendering scheme that only samples frames when necessary, through a tight coupling between the rendering engine and the event simulator. We release an open-source implementation of our simulator.
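
For intuition, here is a toy sketch of the event-generation principle a simulator like this builds on: per pixel, an event is emitted each time the log intensity changes by more than a contrast threshold C since the last event at that pixel. This deliberately ignores ESIM's adaptive rendering, noise and refractory modelling, and all values are illustrative.

```python
# Toy per-pixel event generation between two rendered log-intensity images.
import numpy as np

def generate_events(logI_new, ref, t, C=0.2):
    """Return a list of (x, y, t, polarity) events and the updated per-pixel reference."""
    events = []
    diff = logI_new - ref
    ys, xs = np.nonzero(np.abs(diff) >= C)
    for y, x in zip(ys, xs):
        polarity = 1 if diff[y, x] > 0 else -1
        n = int(abs(diff[y, x]) // C)            # several events if the change is large
        events.extend([(x, y, t, polarity)] * n)
        ref[y, x] += polarity * n * C            # move the reference toward the new value
    return events, ref

I0 = np.log(np.full((4, 4), 0.5))
I1 = I0.copy()
I1[1, 2] += 0.65                                 # one pixel brightens
ev, ref = generate_events(I1, I0.copy(), t=0.001)
print(ev)                                        # three positive events at pixel (x=2, y=1)
```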

Reference

Posted on: January 3, 2019

The Foldable Drone: A Morphing Quadrotor That Can Squeeze and Fly

Authors: Falanga, Davide; Kleber, Kevin; Mintchev, Stefano; Floreano, Dario; Scaramuzza, Davide

 

The recent advances in state estimation, perception, and navigation algorithms have significantly contributed to the ubiquitous use of quadrotors for inspection, mapping, and aerial imaging. To further increase the versatility of quadrotors, recent works investigated the use of an adaptive morphology, which consists of modifying the shape of the vehicle during flight to suit a specific task or environment. However, these works either increase the complexity of the platform or decrease its controllability. In this letter, we propose a novel, simpler, yet effective morphing design for quadrotors consisting of a frame with four independently rotating arms that fold around the main frame. To guarantee stable flight at all times, we exploit an optimal control strategy that adapts on the fly to the drone morphology. We demonstrate the versatility of the proposed adaptive morphology in different tasks, such as negotiation of narrow gaps, close inspection of vertical surfaces, and object grasping and transportation. The experiments are performed on an actual, fully autonomous quadrotor relying solely on onboard visual-inertial sensors and computing; no external motion-tracking systems or computers are used. This is the first work showing stable flight without requiring any symmetry of the morphology.
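
A hedged sketch of why the control has to adapt to the morphology (not the authors' controller): when the arms fold, the rotor positions change, so the matrix mapping the four rotor thrusts to collective thrust and body torques must be recomputed on the fly. The arm length, drag coefficient, spin directions, and arm angles below are invented values.

```python
# Recompute the thrust-to-wrench allocation matrix as a function of arm angles.
import numpy as np

def allocation_matrix(arm_angles, arm_len=0.15, k_drag=0.016, spin=(1, -1, 1, -1)):
    """Map rotor thrusts f (4,) to [total thrust, roll torque, pitch torque, yaw torque]."""
    x = arm_len * np.cos(arm_angles)              # rotor positions in the body frame
    y = arm_len * np.sin(arm_angles)
    return np.vstack([np.ones(4),                 # collective thrust
                      y,                          # roll torque  =  y_i * f_i
                      -x,                         # pitch torque = -x_i * f_i
                      k_drag * np.array(spin)])   # yaw torque from rotor drag

nominal = np.deg2rad([45, 135, 225, 315])         # symmetric X configuration
folded = np.deg2rad([45, 135, 225, 350])          # one arm folded toward the body
f = np.array([2.0, 2.0, 2.0, 2.0])                # equal rotor thrusts (N)
print(allocation_matrix(nominal) @ f)             # balanced: zero torques
print(allocation_matrix(folded) @ f)              # same thrusts now yield a nonzero torque
```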

Reference

Posted on: January 3, 2019

A Real-Time Game Theoretic Planner for Autonomous Two-Player Drone Racing

Authors: Spica, Riccardo; Falanga, Davide; Cristofalo, Eric; Montijano, Eduardo; Scaramuzza, Davide; Schwager, Mac

 

  • Published in: Proceedings of Robotics: Science and Systems 2018
  • Pittsburgh, PA, USA, June 26-30, 2018

To be successful in multi-player drone racing, a player must not only follow the race track in an optimal way, but also compete with other drones through strategic blocking, faking, and opportunistic passing while avoiding collisions. Since unveiling one’s own strategy to the adversaries is not desirable, this requires each player to independently predict the other players’ future actions. Nash equilibria are a powerful tool to model this and similar multi-agent coordination problems in which the absence of communication impedes full coordination between the agents. In this paper, we propose a novel receding horizon planning algorithm that, exploiting sensitivity analysis within an iterated best response computational scheme, can approximate Nash equilibria in real time. We demonstrate that our solution effectively competes against alternative strategies in a large number of drone racing simulations.
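
To make the iterated-best-response idea concrete, here is a minimal toy sketch (not the paper's receding-horizon planner): each player repeatedly re-plans the best reply to the opponent's current plan until neither changes, and that fixed point approximates a Nash equilibrium. The "plan" here is a single lateral position on the track, and the cost terms are invented for illustration.

```python
# Toy iterated best response between two racers choosing lateral positions.
import numpy as np

def best_response(opponent_pos, target, candidates, safety_dist=0.5, w_collision=5.0):
    """Pick the candidate closest to the preferred racing line that stays clear of the opponent."""
    cost = (candidates - target) ** 2
    too_close = np.abs(candidates - opponent_pos) < safety_dist
    return candidates[np.argmin(cost + w_collision * too_close)]

candidates = np.linspace(-2.0, 2.0, 81)
p1, p2 = -1.0, 1.0                        # initial plans
for _ in range(20):                       # iterated best response
    new_p1 = best_response(p2, target=0.0, candidates=candidates)
    new_p2 = best_response(new_p1, target=0.2, candidates=candidates)
    if new_p1 == p1 and new_p2 == p2:     # fixed point ~ approximate Nash equilibrium
        break
    p1, p2 = new_p1, new_p2
print(p1, p2)
```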

Reference

Posted on: January 3, 2019

Fast, Autonomous Flight in GPS-denied and Cluttered Environments

Authors: Mohta, Kartik; Watterson, Michael; Mulgaonkar, Yash; Liu, Sikang; Qu, Chao; Makineni, Anurag; Saulnier, Kelsey; Sun, Ke; Zhu, Alex; Delmerico, Jeffrey; Karydis, Konstantinos; Atanasov, Nikolay; Loianno, Giuseppe; Scaramuzza, Davide; Daniilidis, Kostas; Taylor, Camillo Jose; Kumar, Vijay

 

  • Published in: Journal of Field Robotics

One of the most challenging tasks for a flying robot is to autonomously navigate between target locations quickly and reliably while avoiding obstacles in its path, with little to no a priori knowledge of the operating environment. This challenge is addressed in the present paper. We describe the system design and software architecture of our proposed solution and showcase how all the distinct components can be integrated to enable smooth robot operation. We provide critical insight into hardware and software component selection and development, and present results from extensive experimental testing in real-world warehouse environments. Experimental testing reveals that our proposed solution can deliver fast and robust aerial robot autonomous navigation in cluttered, GPS-denied environments.

Reference

Posted on: January 3, 2019