Publication: Unsupervised Moving Object Detection via Contextual Information Separation

Authors: Yang, Yanchao; Loquercio, Antonio; Scaramuzza, Davide; Soatto, Stefano


We propose an adversarial contextual model for detecting moving objects in images. A deep neural network is trained to predict the optical flow in a region using information from everywhere else but that region (the context), while another network attempts to make such context as uninformative as possible. The result is a model where hypotheses naturally compete with no need for explicit regularization or hyper-parameter tuning. Although our method requires no supervision whatsoever, it outperforms several methods that are pre-trained on large annotated datasets. Our model can be thought of as a generalization of classical variational generative region-based segmentation, but in a way that avoids explicit regularization or solution of partial differential equations at run-time. We publicly release all our code and trained networks.
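The adversarial interplay described in the abstract can be sketched as two competing losses: an inpainter network is penalized for failing to predict the flow inside a region from its context, while the mask network is rewarded for picking exactly such unpredictable regions. The snippet below is a minimal illustrative sketch, not the authors' implementation; the function names (`inpainter_loss`, `generator_loss`) and the toy flow fields are assumptions made for illustration.

```python
import numpy as np

def inpainter_loss(flow, mask, inpainted):
    """Mean squared error of the context-based flow prediction,
    measured only inside the masked region (the hypothesized object)."""
    diff = (flow - inpainted) ** 2
    return float((diff * mask).sum() / (mask.sum() + 1e-8))

def generator_loss(flow, mask, inpainted):
    """The mask network plays the adversary: it seeks the region
    whose flow the context predicts as poorly as possible."""
    return -inpainter_loss(flow, mask, inpainted)

# Toy example: an 8x8 flow field where a 3x3 patch moves and the
# background is static. A context-only inpainter that has never seen
# the patch would predict zero (background) flow everywhere.
flow = np.zeros((8, 8, 2))
flow[2:5, 2:5, :] = 1.0                    # the moving patch
context_prediction = np.zeros_like(flow)   # what the context suggests

object_mask = np.zeros((8, 8, 1))
object_mask[2:5, 2:5, :] = 1.0             # mask over the moving patch

wrong_mask = np.zeros((8, 8, 1))
wrong_mask[6:8, 6:8, :] = 1.0              # mask over static background
```

With these toy inputs, `inpainter_loss` is large when the mask covers the moving patch (its flow cannot be inferred from the static context) and zero when the mask covers background, which is exactly the signal that lets the hypotheses compete without explicit regularization.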

Reference

  • Presented at: IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, USA, 2019
  • Read paper
  • Data set
  • Date: 2019
Posted on: May 31, 2019