24 May 2017 - Marie-Catherine de Marneffe: Seminar

Title:

Automatically drawing inferences

Abstract:

When faced with a piece of text, humans understand far more than just the literal meaning of the words. In our interactions, much of what we communicate is not said explicitly but rather inferred. However, extracting information that is expressed without actually being said remains a challenge for NLP. For instance, given (1) and (2), we want to derive that readers will generally infer that it is war from (1), but that relocating species threatened by climate change is not a panacea from (2), even though both events are embedded under "(s)he doesn't believe".

(1) The problem, I'm afraid, with my colleague here, he really doesn't believe that it's war.

(2) Transplanting an ecosystem can be risky, as history shows. Hellmann doesn't believe that relocating species threatened by climate change is a panacea.

Automatically extracting systematic inferences of this kind is fundamental to a range of NLP tasks, including information extraction, opinion detection, and textual entailment. Surprisingly, however, the vast majority of current information extraction systems work at the clause level and regard any event they find as true, without taking into account the context in which the event appears in the sentence.
 
In this talk, I will discuss two case studies of extracting such inferences, to illustrate the general approach I take in my research: using linguistically motivated features, conjoined with surface-level ones, to make progress toward robust text understanding. First, I will look at how to automatically assess the veridicality of events: whether events described in a text are viewed as actual (as in (1)), non-actual (as in (2)), or uncertain. I will describe a statistical model that balances lexical features, such as hedges and negations, with structural features and approximations of world knowledge, thereby providing a nuanced picture of the diverse factors that shape veridicality. Second, I will examine how to identify (dis)agreement in dialogue, where people rarely overtly (dis)agree with their interlocutor but their opinion can nonetheless be inferred (in (1), for instance, we infer that the speaker disagrees with his colleague).
 
Bio:  
Marie-Catherine de Marneffe is an assistant professor in Linguistics at The Ohio State University. She received her Ph.D. in Linguistics from Stanford University in December 2012 under the supervision of Christopher D. Manning. She is developing computational linguistic methods that capture what is conveyed by speakers beyond the literal meaning of the words they say. Primarily, she wants to ground meanings in corpus data and show how such meanings can drive pragmatic inference. She has worked on Recognizing Textual Entailment, contradiction detection, and coreference resolution. She was one of the principal developers of the Stanford Dependencies, and she contributed to defining the Universal Dependencies representations, which are practical representations of grammatical relations and predicate-argument structure. She is the recipient of a NAACL best paper award, a Google Faculty award, and an NSF CRII award. She serves as a member of the NAACL board and the Computational Linguistics editorial board.
 



ILCC seminar by Marie-Catherine de Marneffe in IF G07

Informatics Forum G07