IPAB Seminar - 30/11/2018

Title

Towards Tracking Everything


Abstract

Visual scene understanding has made amazing progress in the last few years, fuelled by the development of powerful deep learning methods for object detection, classification and segmentation. This progress has created exciting opportunities for applications in mobile robotics and intelligent vehicles.


However, major obstacles will have to be overcome before such mobile vision applications can be widely deployed in general driving scenarios. The currently prevalent strategy of relying on supervised training with large-scale human annotation is reaching its limits as we approach the long tail of the object distribution. Future mobile systems will need the capability to cope with rich human-made environments, in which training detectors with manual annotations for every possible object category would be infeasible. There is therefore a strong need for new approaches that can adopt weakly supervised and self-supervised forms of training.


To address these issues, we propose a category-agnostic multi-object tracking approach that starts from low-level region proposals to track generic objects of both known and unknown classes. Our approach can utilize semantic information, whenever it is available, to classify objects at the track level, while retaining the capability to track generic unknown objects in the absence of such information. We demonstrate experimentally that our approach achieves performance comparable to state-of-the-art tracking-by-detection methods for popular object categories such as cars and pedestrians. Additionally, we show that the proposed method can discover and robustly track a large variety of other objects. We apply our approach to several large video collections and demonstrate its potential for fully automatic object discovery and detector learning.
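To make the core idea concrete, the following is a minimal sketch (an assumption for illustration, not the speaker's actual method) of category-agnostic tracking: per-frame region proposals are linked into tracks by greedy intersection-over-union (IoU) matching, with no class labels required at any point. The function names `iou` and `track` and the threshold value are hypothetical.

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def track(frames, iou_thresh=0.5):
    """Greedily link class-agnostic region proposals across frames.

    frames: list of per-frame proposal lists, each proposal a box tuple.
    Returns a list of tracks, each a list of (frame_index, box) pairs.
    """
    tracks = []
    for t, proposals in enumerate(frames):
        unmatched = list(proposals)
        for tr in tracks:
            last_t, last_box = tr[-1]
            if last_t != t - 1:
                continue  # track was not extended in the previous frame
            best = max(unmatched, key=lambda b: iou(last_box, b), default=None)
            if best is not None and iou(last_box, best) >= iou_thresh:
                tr.append((t, best))       # extend existing track
                unmatched.remove(best)
        for box in unmatched:
            tracks.append([(t, box)])      # every leftover proposal may be a new object
    return tracks
```

Because association relies only on spatial overlap, unknown object categories are tracked exactly like known ones; in the full approach described above, semantic classifiers can then be applied at the track level when available.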



Speaker

Bastian Leibe

Location

IF, 4.31/33