Authors: Gehrig, D.; Rebecq, H.; Gallego, G.; Scaramuzza, D.
We present EKLT, a feature tracking method that leverages the complementarity of event cameras and standard cameras to track visual features with high temporal resolution. Event cameras are novel sensors that output pixel-level brightness changes, called *events*. They offer significant advantages over standard cameras, namely a very high dynamic range, no motion blur, and a latency in the order of microseconds. However, because the same scene pattern can produce different events depending on the motion direction, establishing event correspondences across time is challenging. By contrast, standard cameras provide intensity measurements (frames) that do not depend on motion direction. Our method extracts features on frames and subsequently tracks them asynchronously using events, thereby exploiting the best of both types of data: the frames provide a photometric representation that does not depend on motion direction, and the events provide updates with high temporal resolution. In contrast to previous works, which are based on heuristics, this is the first principled method that uses intensity measurements directly, based on a generative event model within a maximum-likelihood framework. As a result, our method produces feature tracks that are more accurate than the state of the art, across a wide variety of scenes.
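The idea behind the generative event model can be illustrated with a minimal sketch (hypothetical names and simplified patch handling; not the authors' implementation): the linearized event-generation model predicts the brightness increment of a feature patch from the frame gradient and a flow hypothesis, and this prediction is compared in a least-squares sense (maximum likelihood under Gaussian noise) to the increment accumulated from events.

```python
import numpy as np

def predicted_increment(grad_x, grad_y, v, dt):
    """Brightness increment predicted by the linearized event model,
    Delta L ~= -<grad L, v> * dt, evaluated per pixel of a frame patch.
    grad_x, grad_y: spatial gradients of the frame patch (H x W).
    v: hypothesized optical flow (vx, vy) of the feature, in px/s.
    dt: time window over which events are accumulated, in seconds."""
    return -(grad_x * v[0] + grad_y * v[1]) * dt

def event_increment(events, patch_shape, contrast=0.2):
    """Brightness increment measured from events: each event adds
    +/- the contrast threshold at its pixel (sign given by polarity)."""
    dL = np.zeros(patch_shape)
    for x, y, _t, pol in events:  # events as (x, y, t, polarity) tuples
        dL[int(y), int(x)] += contrast if pol > 0 else -contrast
    return dL

def photometric_error(v, grad_x, grad_y, events, dt, patch_shape):
    """Squared difference between predicted and event-based increments;
    minimizing it over the flow (and, in the full method, the patch
    warp) yields the maximum-likelihood feature motion."""
    pred = predicted_increment(grad_x, grad_y, v, dt)
    meas = event_increment(events, patch_shape)
    return np.sum((pred - meas) ** 2)
```

In the full method this residual would be minimized jointly over the feature's warp and flow with a nonlinear least-squares solver each time enough events fall inside the patch, which is what makes the tracker asynchronous and independent of the frame rate.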
Reference
- Published in: International Journal of Computer Vision (IJCV) (accepted)
- Date: 2019