Friday, 24th June 2022 - 11am - Greg Durrett: Seminar

Title: Why natural language is the right vehicle for complex reasoning


Abstract:

Despite their widespread success, end-to-end transformer models consistently fall short in settings involving complex reasoning. Transformers trained on question answering (QA) tasks that seemingly require multiple steps of reasoning often achieve high performance by taking "reasoning shortcuts." We still do not have models that robustly combine many pieces of information in a logically consistent way. In this talk, I argue that a very attractive solution to this problem is within our grasp: doing multi-step reasoning directly in natural language. Text is flexible and expressive, capturing all of the semantics we need to represent intermediate states of a reasoning process. Working with text allows us to interface with knowledge in pre-trained models and in resources like Wikipedia. And finally, text is easily interpretable and auditable by users. I describe two pieces of work that manipulate language to do inference. First, transformation of question-answer pairs and evidence sentences allows us to seamlessly move between QA and natural language inference (NLI) settings, advancing both calibration of QA models and capabilities of NLI systems. Second, we show how synthetically-constructed data can allow us to build a deduction engine in natural language, which is a powerful building block for putting together natural language "proofs" of claims. Finally, we assess the ability of GPT-3 to do such reasoning ("chain-of-thought" prompting) and show that the explanations it produces even for simple text tasks are unreliable, suggesting that scaling language models will not be sufficient to solve complex reasoning problems.

Bio:

Greg Durrett is an assistant professor of Computer Science at UT Austin. His current research focuses on making natural language processing systems more interpretable, controllable, and generalizable, spanning application domains including question answering, textual reasoning, summarization, and information extraction. His work is funded by a 2022 NSF CAREER award and other grants from agencies including the NSF, DARPA, Salesforce, and Amazon. He completed his Ph.D. at UC Berkeley in 2016, where he was advised by Dan Klein, and from 2016-2017, he was a research scientist at Semantic Machines.


Organised by ILCC

G.03, Informatics Forum, 10 Crichton Street