Authors: Rebecq, Henri; Ranftl, René; Koltun, Vladlen; Scaramuzza, Davide
Event cameras are novel sensors that report brightness changes in the form of asynchronous "events" instead of intensity frames. They have significant advantages over conventional cameras: high temporal resolution, high dynamic range, and no motion blur. Since the output of event cameras is fundamentally different from that of conventional cameras, it is commonly accepted that they require the development of specialized algorithms to accommodate the particular nature of events. In this work, we take a different view and propose to apply existing, mature computer vision techniques to videos reconstructed from event data. We propose a novel recurrent network to reconstruct videos from a stream of events, and train it on a large amount of simulated event data. Our experiments show that our approach surpasses state-of-the-art reconstruction methods by a large margin (>20%) in terms of image quality. We further apply off-the-shelf computer vision algorithms to videos reconstructed from event data on tasks such as object classification and visual-inertial odometry, and show that this strategy consistently outperforms algorithms that were specifically designed for event data. We believe that our approach opens the door to bringing the outstanding properties of event cameras to an entirely new range of tasks. A video of the experiments is available at https://www.youtube.com/watch?v=IdYrC4cUO0I
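The abstract describes feeding a stream of asynchronous events to a recurrent reconstruction network. The abstract does not specify the input representation; a common choice for such networks (assumed here for illustration, not taken from this abstract) is to bin events into a space-time voxel grid, with each event's polarity bilinearly split between the two nearest temporal bins. A minimal NumPy sketch of that preprocessing step:

```python
import numpy as np

def events_to_voxel_grid(events, num_bins, height, width):
    """Accumulate events into a (num_bins, height, width) voxel grid.

    events: (N, 4) array of rows (x, y, t, polarity), sorted by time,
    with polarity in {-1, +1} and pixel coordinates inside the sensor.
    Each event's polarity is bilinearly weighted between the two
    temporal bins nearest to its normalized timestamp.
    """
    voxel = np.zeros((num_bins, height, width), dtype=np.float32)
    x = events[:, 0].astype(int)
    y = events[:, 1].astype(int)
    t = events[:, 2]
    p = events[:, 3]
    # Normalize timestamps to the range [0, num_bins - 1].
    t = (num_bins - 1) * (t - t[0]) / max(t[-1] - t[0], 1e-9)
    left = np.floor(t).astype(int)
    frac = t - left
    right = np.clip(left + 1, 0, num_bins - 1)
    # Scatter-add polarities into the two neighboring temporal bins.
    np.add.at(voxel, (left, y, x), p * (1.0 - frac))
    np.add.at(voxel, (right, y, x), p * frac)
    return voxel
```

Such a grid gives each recurrent step a dense tensor input while preserving coarse timing information; the function names and parameters above are illustrative, not the paper's API.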
Reference
- Presented at: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, USA, 2019
- Date: 2019