IPAB workshop - 01/08/2019

 

Speaker: Floyd Chitalu

Title: Explicit Displacement-Correlated XFEM for Simulating Brittle Fracture

Abstract: In this presentation I will be talking about my current work, in which we present an efficient, scalable and controllable technique for the physical simulation of fracture propagation in brittle objects. To achieve this, we devise two algorithms. First, we develop an approximate volumetric simulation, based on the extended Finite Element Method (XFEM), to initialize and propagate crack-fronts within brittle objects. Our second contribution is a new mesh cutting algorithm, which produces fragments of the input mesh using the fracture surface. We do this by operating directly on the half-edge data structures of two surface meshes, which enables us to cut general surface meshes, including those of concave polyhedra and those with abutting concave polygons.
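The cutting algorithm is described as operating directly on half-edge data structures. As an illustrative sketch only (not the authors' implementation), this is the kind of structure involved: each face is a loop of directed half-edges, and adjacent faces are stitched together through "twin" pointers on their shared edge.

```python
# Minimal half-edge mesh sketch: two triangles sharing the edge (1, 2).
# Names (HalfEdge, make_face) are illustrative, not from the paper.

class HalfEdge:
    def __init__(self, origin):
        self.origin = origin   # index of the vertex this half-edge leaves
        self.twin = None       # oppositely oriented half-edge on the adjacent face
        self.next = None       # next half-edge around the same face loop
        self.face = None       # identifier of the face this half-edge borders

def make_face(vertices, face_id):
    """Link a list of vertex indices into a closed loop of half-edges."""
    edges = [HalfEdge(v) for v in vertices]
    for i, e in enumerate(edges):
        e.next = edges[(i + 1) % len(edges)]
        e.face = face_id
    return edges

f0 = make_face([0, 1, 2], face_id=0)
f1 = make_face([2, 1, 3], face_id=1)
# Stitch the shared edge: 1->2 on face 0 is the twin of 2->1 on face 1.
f0[1].twin, f1[0].twin = f1[0], f0[1]
```

Walking `next` pointers traverses one face; crossing a `twin` moves to the neighbouring face, which is what lets a cutting pass follow an intersection curve across the surface.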

 

Speaker: Yordan Hristov

Title: Disentangled Relational Representations for Explaining and Learning from Demonstration

Abstract: Learning from demonstration is an effective method for human users to instruct desired robot behaviour. However, for most non-trivial tasks of practical interest, efficient learning from demonstration depends crucially on inductive bias in the chosen structure for rewards/costs and policies. We address the case where this inductive bias comes from an exchange with a human user. We propose a method in which a learning agent utilizes the information bottleneck layer of a high-parameter variational neural model, with auxiliary loss terms, in order to ground abstract concepts such as spatial relations. The concepts are referred to in natural language instructions and are manifested in the high-dimensional sensory input stream the agent receives from the world. We evaluate the properties of the latent space of the learned model in a photorealistic synthetic environment and particularly focus on examining its usability for downstream tasks. Additionally, through a series of controlled table-top manipulation experiments, we demonstrate that the learned manifold can be used to ground demonstrations as symbolic plans, which can then be executed on a PR2 robot.
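The abstract mentions an information bottleneck layer trained with auxiliary loss terms. As a hedged sketch (assuming a diagonal-Gaussian latent, which the abstract does not specify), the standard bottleneck penalty is a KL divergence from the latent code to a unit-Gaussian prior:

```python
import numpy as np

def kl_to_standard_normal(mu, log_var):
    """KL( N(mu, diag(exp(log_var))) || N(0, I) ) for a diagonal-Gaussian
    latent code -- the usual regulariser on an information-bottleneck layer."""
    return 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)

# A code matching the prior incurs zero penalty; any deviation is penalised,
# pressuring the encoder to keep only task-relevant factors in the code.
zero_penalty = kl_to_standard_normal(np.zeros(4), np.zeros(4))
some_penalty = kl_to_standard_normal(np.ones(2), np.zeros(2))
```

This pressure toward a compact code is what encourages individual latent dimensions to align with abstract factors such as spatial relations, which the talk then grounds in natural-language instructions.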

 

Speaker: Kunkun Pang

Title: Learning to Fine-tune

Abstract: Recent research on neural networks (NNs) has shown strong transfer-learning performance with the parameter-tuning approach. However, the training set of the target domain is often too small for fine-tuning an architecture with surplus capacity, which leads to overfitting. We propose to perform the architecture tuning process with deep reinforcement learning, in which neurons can be deleted. Specifically, we model the architecture tuning policy as an NN that takes human-designed neuron information as input and predicts which neuron to delete. Furthermore, to achieve a general solution, one of the key challenges is to make the policy transferable across various domains. We train the policy on multiple datasets, and it shows good generalisation performance on the Omniglot dataset.
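The policy described takes hand-designed per-neuron features and predicts which neuron to delete. A toy linear sketch of that interface (the feature set, weights, and function names are illustrative assumptions, not the authors' architecture):

```python
import numpy as np

def softmax(x):
    z = np.exp(x - x.max())  # shift for numerical stability
    return z / z.sum()

def delete_distribution(neuron_features, w):
    """Toy linear 'policy': score each neuron from hand-designed features
    (e.g. mean activation, weight norm) and return a probability
    distribution over which neuron to delete."""
    return softmax(neuron_features @ w)

rng = np.random.default_rng(0)
features = rng.normal(size=(8, 3))  # 8 neurons, 3 features per neuron
w = np.array([1.0, -0.5, 0.2])      # policy parameters (learned by RL in the talk's setting)
probs = delete_distribution(features, w)
victim = int(np.argmax(probs))      # neuron proposed for deletion
```

In the described method the parameters would be trained with reinforcement learning across multiple datasets, so the same policy transfers to pruning architectures in new domains.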


Venue: 4.31/33, IF