Friday, 23rd February - 11am Nora Kassner : Seminar

Title: Consistency in LMs

Abstract:

Although pretrained language models capture significant amounts of world knowledge, they can still produce inconsistent answers to questions when probed. As a result, it can be hard to identify what a model actually "believes" about the world, leaving it susceptible to inconsistent behavior and simple errors. In this talk, I present work that studies different types of inconsistent behavior and outlines a neural-symbolic approach to addressing these challenges.

Bio:

Nora Kassner is a research scientist at Google DeepMind. Before that, she was a researcher at Meta AI and a PhD student at the Center for Information and Language Processing, Munich. During her PhD, she was a research intern at the Allen Institute for AI and Meta AI. Her research focuses on knowledge and reasoning in deep learning models. She is interested in the internal mechanisms by which these models capture knowledge, and by extension their limitations, with the goal of using these insights to improve performance.

This event is co-organised by ILCC and the UKRI Centre for Doctoral Training in Natural Language Processing, https://nlp-cdt.ac.uk.

IF G.03 and online