2nd November 2020 - 2pm - Jacob Andreas: Seminar

 

Title: Compositionality and generalization in neural network models of language

Abstract:

How can we build machine learning models that can learn new concepts in context from little data? Human language learners acquire new word meanings from a single exposure, and can immediately incorporate words and the concepts they represent productively and compositionally into a larger linguistic or conceptual system. Despite the remarkable success of neural network models on many learning problems, this kind of few-shot compositional generalization remains largely out of reach. Most research on compositionality in machine learning has focused on new model architectures with explicit composition mechanisms; these models tend to be highly task-specific and don't generalize well to less structured problems involving a mix of rule-like and exceptional behavior. In this talk, I'll describe recent work aimed at identifying simpler sources of inductive bias, with a focus on rule-based and learned data augmentation schemes. Our recent results suggest that some failures of systematicity in neural models can be explained by simpler structural constraints on data distributions and corrected with weaker inductive bias than previously described.

Bio:

http://web.mit.edu/jda/www/bio.html

 


This event is co-organised by ILCC and the UKRI Centre for Doctoral Training in Natural Language Processing, https://nlp-cdt.ac.uk
