IPAB Workshop - 29/03/2018

Title

Disentanglement Learning and Interpretable Symbol Grounding

Abstract

This talk will give a progress update on my thesis work and will discuss the use of disentangled data representations for interpretable symbol grounding. A disentangled data representation is one where the underlying generative factors of variation, responsible for key attributes of the data, are explicitly factorised during the learning process. Current state-of-the-art approaches for learning such representations are unsupervised and, as a result, might disentangle factors of variation that are unimportant with respect to a given task. We hypothesise that by using coarse labels/symbols, and by borrowing ideas from prototype and case-based learning, we can perform a more focused disentanglement of only those factors that are relevant to the grounding of those symbols.

Yordan Hristov

IF 4.31/4.33