Abstract: I will present our recent NeurIPS paper: Learning Object-Centric Representations of Multi-Object Scenes from Multiple Views. Learning object-centric representations of multi-object scenes is a promising approach towards machine intelligence, facilitating high-level reasoning and control from visual sensory data. However, current approaches for unsupervised object-centric scene representation are incapable of aggregating information from multiple observations of a scene. As a result, these ``single-view'' methods form their representations of a 3D scene based only on a single 2D observation (view). Naturally, this leads to several inaccuracies, with these methods falling victim to single-view spatial ambiguities. To address this, we propose the Multi-View and Multi-Object Network (MulMON)---a method for learning accurate, object-centric representations of multi-object scenes by leveraging multiple views. To sidestep the main technical difficulty of the multi-object-multi-view scenario---maintaining object correspondences across views---MulMON iteratively updates the latent object posteriors for a scene over multiple views. To ensure that these iterative updates do indeed aggregate spatial information to form a complete 3D scene understanding, MulMON is asked to predict the appearance of the scene from novel viewpoints during training. Through experiments, we show that MulMON better resolves spatial ambiguities than single-view methods---learning more accurate and disentangled object representations---and also achieves new functionality in predicting object segmentations for novel viewpoints.
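To give a flavour of the idea of iteratively refining object posteriors over views, the following is a minimal sketch. MulMON itself uses learned, amortised inference networks; here we substitute a simple precision-weighted Gaussian fusion for a single scalar object latent, which is only an illustration of how uncertainty shrinks as views accumulate, not the paper's actual update rule. All names and numbers are hypothetical.

```python
import numpy as np

def fuse_gaussian(mu_prior, var_prior, mu_obs, var_obs):
    """Precision-weighted fusion of two Gaussian estimates --- a toy
    stand-in for a learned iterative posterior update across views."""
    prec = 1.0 / var_prior + 1.0 / var_obs
    var_post = 1.0 / prec
    mu_post = var_post * (mu_prior / var_prior + mu_obs / var_obs)
    return mu_post, var_post

# Toy example: one scalar object latent observed from three views.
# True latent = 2.0; each view yields a noisy estimate of known variance.
rng = np.random.default_rng(0)
mu, var = 0.0, 10.0          # broad prior before any view is seen
for _ in range(3):
    obs = 2.0 + rng.normal(scale=0.5)
    mu, var = fuse_gaussian(mu, var, obs, 0.25)

print(round(var, 3))  # posterior variance shrinks as views accumulate
```

The key point the sketch shares with the paper is that the posterior after view $t$ becomes the prior for view $t+1$, so object correspondence only has to be maintained between consecutive updates rather than solved jointly across all views.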
Abstract: Applications of flow-based models to the distribution of human poses. I will discuss some very early results in applying flow-based generative models to human poses. We will show simple applications to pose generation, interpolations between unseen poses using the exact latent-variable inference provided by normalising flows, and experiments on detecting corrupted poses. We also experiment with methods for incorporating the structure of a pre-defined belief network over the human skeleton into our model, and use this to edit poses by intervening on certain joint positions.
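The encode-interpolate-decode recipe behind latent-space pose interpolation can be sketched as follows. A real pose model would stack learned coupling layers; here a single elementwise affine map serves as the invertible flow, and the "poses" are hypothetical joint coordinates, so this only illustrates the mechanics of exact inference and generation.

```python
import numpy as np

# A toy invertible "flow": elementwise affine map z = (x - shift) / scale.
# A trained model would compose many learned invertible layers instead.
shift = np.array([0.5, -1.0, 2.0])
scale = np.array([2.0, 0.5, 1.5])

def forward(x):
    """Data -> latent: exact inference, no encoder approximation."""
    return (x - shift) / scale

def inverse(z):
    """Latent -> data: generation by inverting the flow."""
    return z * scale + shift

pose_a = np.array([1.0, 0.0, 3.0])   # hypothetical joint coordinates
pose_b = np.array([2.0, 1.0, 1.0])

# Interpolate in latent space, then map back through the inverse flow.
z_a, z_b = forward(pose_a), forward(pose_b)
midpoint = inverse(0.5 * z_a + 0.5 * z_b)
print(midpoint)
```

For a purely affine flow the latent midpoint coincides with the data-space midpoint; with a nonlinear trained flow the interpolation instead follows the curved geometry the model has learned over plausible poses, which is what makes interpolating between unseen poses interesting.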