Authors: Loquercio, A.; Kaufmann, E.; Ranftl, R.; Dosovitskiy, A.; Koltun, V.; Scaramuzza, D.
Dynamically changing environments, unreliable state estimation, and operation under severe resource constraints are fundamental challenges for robotics, which still limit the deployment of small autonomous drones. We address these challenges in the context of autonomous, vision-based drone racing in dynamic environments. A racing drone must traverse a track with possibly moving gates at high speed. We enable this functionality by combining the performance of a state-of-the-art path-planning and control system with the perceptual awareness of a convolutional neural network (CNN). The CNN directly maps raw images to a desired waypoint and speed. Given the CNN output, the planner generates a short minimum-jerk trajectory segment that is tracked by a model-based controller to actuate the drone towards the waypoint. The resulting modular system has several desirable features: (i) it can run fully on-board, (ii) it does not require globally consistent state estimation, and (iii) it is both platform and domain independent. We extensively test the precision and robustness of our system, both in simulation and on a physical platform. In both domains, our method significantly outperforms the prior state of the art. In order to understand the limits of our approach, we additionally compare against professional human drone pilots with different skill levels.
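To make the planner step concrete, here is a minimal sketch of a minimum-jerk trajectory segment of the kind the abstract describes. This is an illustrative simplification, not the paper's implementation: it assumes a rest-to-rest segment (zero velocity and acceleration at both endpoints), whereas the actual system would also match nonzero boundary states and the CNN-predicted speed. The function name and parameters are hypothetical.

```python
import numpy as np

def min_jerk_segment(p0, pT, n=51):
    """Rest-to-rest minimum-jerk position profile toward a waypoint.

    p0, pT : start and goal positions (e.g. 3D arrays).
    n      : number of samples along the segment.

    With zero velocity/acceleration at both ends, the jerk-optimal
    quintic reduces to the classic blend 10*s^3 - 15*s^4 + 6*s^5
    in normalized time s = t/T.
    """
    p0, pT = np.asarray(p0, float), np.asarray(pT, float)
    s = np.linspace(0.0, 1.0, n)[:, None]      # normalized time t/T
    blend = 10 * s**3 - 15 * s**4 + 6 * s**5   # minimum-jerk blend curve
    return p0 + (pT - p0) * blend              # (n, dim) sampled positions
```

For example, `min_jerk_segment([0, 0, 1], [2, 1, 1.5])` yields a smooth segment from the current position toward a waypoint such as one output by the CNN; a tracking controller would then follow these samples.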
Reference
- Published in: IEEE Transactions on Robotics (accepted)
- Date: 2019