IPAB workshop - 04/05/2017

Buyu Liu:

Title: Learning Dynamic Hierarchical Models for Anytime Scene Labeling.

Abstract:

With increasing demand for efficient image and video analysis, the test-time cost of scene parsing becomes critical for many large-scale or time-sensitive vision applications. We propose a dynamic hierarchical model for anytime scene labeling that allows us to achieve flexible trade-offs between efficiency and accuracy in pixel-level prediction. In particular, our approach incorporates the cost of feature computation and model inference, and optimizes model performance for any given test-time budget by learning a sequence of image-adaptive hierarchical models. We formulate this anytime representation learning as a Markov Decision Process with a discrete-continuous state-action space. A high-quality policy for feature and model selection is learned using an approximate policy iteration method with an action-proposal mechanism. We demonstrate the advantages of our dynamic, non-myopic anytime scene parsing on three semantic segmentation datasets, achieving 90% of state-of-the-art performance at 15% of the overall cost.
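The core idea of anytime prediction (spend compute where it buys the most accuracy, stop when the budget runs out) can be illustrated with a toy sketch. The paper learns a non-myopic policy via approximate policy iteration; the greedy gain-per-cost heuristic and all component names and numbers below are hypothetical stand-ins, purely for intuition.

```python
# Toy illustration of anytime feature/model selection under a test-time
# budget. NOT the authors' learned policy: a myopic greedy heuristic is
# used here in place of the learned non-myopic policy from the paper.

def anytime_select(components, budget):
    """Greedily pick components by accuracy gain per unit cost until the
    remaining budget cannot afford the best candidate; return the order."""
    remaining = budget
    chosen = []
    pool = list(components)
    while pool:
        # Rank remaining components by marginal gain per unit cost.
        pool.sort(key=lambda c: c["gain"] / c["cost"], reverse=True)
        best = pool[0]
        if best["cost"] > remaining:
            break
        chosen.append(best["name"])
        remaining -= best["cost"]
        pool.remove(best)
    return chosen

# Hypothetical components with made-up costs and accuracy gains.
components = [
    {"name": "cheap_texture", "cost": 1.0,  "gain": 0.05},
    {"name": "mid_context",   "cost": 3.0,  "gain": 0.10},
    {"name": "deep_cnn",      "cost": 10.0, "gain": 0.25},
]
print(anytime_select(components, budget=5.0))  # → ['cheap_texture', 'mid_context']
```

With a budget of 5.0 the expensive component is skipped even though it has the largest gain, which is exactly the trade-off the learned budget-aware policy is designed to make adaptively per image.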

 

******************************************  

Vladimir Ivan:

Title: Constraint Consistent Learning Exploiting Knowledge of the Task Structure

Abstract:

Many practical tasks in robotic systems, such as cleaning windows, writing or grasping, are inherently constrained, and learning policies subject to constraints is a challenging problem. I will present the locally weighted constrained projection learning method (LWCPL), which first estimates the constraint and then exploits this estimate across multiple observations of the constrained motion to learn an unconstrained policy. Generalization is achieved by projecting the unconstrained policy onto a new, previously unseen constraint. No prior knowledge about the task or the policy is required; therefore, generic regressors can be used to model both. However, any prior beliefs about the structure of the motion can be expressed by choosing task-specific regressors; in particular, we can use robot kinematics and motion priors to improve accuracy. This contribution to learning from demonstration is an analytical solution to problems defined in previous work, and as such it achieves higher accuracy and computational speed.

**********************************************
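The projection step at the heart of this approach can be sketched with standard linear algebra: given a constraint Jacobian A, motion consistent with the constraint lies in the null space of A, so an unconstrained policy output can be adapted to a new constraint via the projector N = I - A⁺A. The shapes and values below are hypothetical, and this sketch omits the locally weighted constraint estimation that LWCPL performs.

```python
import numpy as np

def nullspace_projection(A):
    """Null-space projector N = I - pinv(A) @ A for constraint matrix A.
    Any vector mapped through N satisfies A @ (N @ u) = 0."""
    return np.eye(A.shape[1]) - np.linalg.pinv(A) @ A

# Hypothetical unconstrained policy output (e.g. a desired joint velocity).
u = np.array([1.0, 2.0, 3.0])

# A previously unseen constraint: forbid motion along the first axis.
A_new = np.array([[1.0, 0.0, 0.0]])

N = nullspace_projection(A_new)
u_projected = N @ u  # → array([0., 2., 3.])
```

The projected command keeps as much of the unconstrained policy as the constraint allows, which is how a policy learned from constrained demonstrations can generalize to constraints never seen during training.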

 


4.31/33