Natural Language Processing and Computational Linguistics
A list of potential topics for PhD students in the area of Language Processing.
Concurrency in (computational) linguistics
Improving understanding of synchronic and diachronic aspects of phonology.
Supervisor: Julian Bradfield
In several aspects of linguistic analysis, it is natural to think of some form of concurrent processing, not least because the brain is a massively concurrent system. This is particularly true in phonology and phonetics, and descriptions such as feature analyses, and especially autosegmental phonology, go some way to recognizing this. Although there has been some work on rigorous formal models of such descriptions, there has been little if any application of the extensive body of research in theoretical computer science on concurrent processes. Such a project has the potential to give better linguistic understanding of synchronic and diachronic aspects of phonology and perhaps syntax, and even to improve speech generation and recognition, by adding formal underpinning and improvement to the existing agent-based approaches.
Spectral learning for natural language processing
Latent variable modeling is a common technique for improving the expressive power of natural language processing models. The values of these latent variables are missing from the data, yet we must still predict them and estimate the model parameters under the assumption that the variables exist in the model. This project seeks to improve the expressive power of NLP models at various levels (such as morphological, syntactic and semantic) using latent variable modeling, and to identify key techniques, based on spectral algorithms, for learning these models. The family of latent-variable spectral learning algorithms is an exciting recent development seeded in the machine learning community. It presents a principled, well-motivated approach to estimating the parameters of latent variable models using tools from linear algebra.
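To make the linear-algebra intuition concrete, here is a toy numpy sketch (with synthetic parameters, not any specific published algorithm): the bigram co-occurrence moments of an HMM factor through the hidden state, so their matrix has rank at most the number of hidden states, and an SVD exposes that structure.

```python
import numpy as np

# Toy sketch: for an HMM with k hidden states, the bigram moment matrix
# M[i, j] = P(word1 = i, word2 = j) factors through the hidden state,
# so rank(M) <= k. Spectral algorithms exploit this via SVD.
# All parameters below are synthetic assumptions.
rng = np.random.default_rng(0)
k, v = 3, 20                                # hidden states, vocabulary size
O = rng.dirichlet(np.ones(v), size=k).T     # v x k emissions, P(word | state)
T = rng.dirichlet(np.ones(k), size=k).T     # k x k transitions, P(next | state)
pi = rng.dirichlet(np.ones(k))              # initial state distribution

# M[i, j] = sum over h1, h2 of pi[h1] * O[i, h1] * T[h2, h1] * O[j, h2]
M = O @ np.diag(pi) @ T.T @ O.T

s = np.linalg.svd(M, compute_uv=False)
# only the first k singular values are non-negligible
```

In an actual spectral algorithm, M would be estimated from corpus bigram counts, and its top-k singular vectors would define the observable representation from which parameters are recovered.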
Natural language semantics and question answering
Supervisor: Shay Cohen
How can we make computers understand language? This question is at the core of semantics in natural language processing. Question answering, an NLP application in which a computer is expected to respond to natural language questions, provides a lens through which to look at this challenge. Most modern search engines offer some level of functionality for factoid question answering. These systems have high precision, but their recall could be significantly improved. From a technical perspective, question answering offers a sweet spot between challenging semantics problems that are not expected to be solved in the near future and problems that will be solved in the foreseeable future. As such, it is an excellent test-bed for semantic representation theories and for other attempts at describing the meaning of text. The most recent development in question answering is the retrieval of answers from open knowledge bases such as Freebase (a database of facts without a specific domain tying them together). The goal of this project is to explore various methods for improving semantic representations of language, with open question answering as a potentially important application for testing them. These representations can either be symbolic (enhanced with a probabilistic interpretation) or projections into a continuous geometric space. Both ideas have recently been explored in the literature.
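To make the "projections in a continuous geometric space" idea concrete, here is a minimal sketch in which a question and candidate knowledge-base answers are embedded as vectors and ranked by cosine similarity. The vectors and vocabulary are tiny hand-made assumptions, not trained embeddings.

```python
import numpy as np

# Toy embeddings (hand-made assumptions, standing in for learned vectors).
emb = {
    "capital": np.array([1.0, 0.2, 0.0]),
    "france":  np.array([0.1, 1.0, 0.3]),
    "paris":   np.array([0.6, 0.9, 0.2]),
    "berlin":  np.array([0.6, 0.1, 0.9]),
}

def embed(tokens):
    """Represent a question as the sum of its word vectors."""
    return sum(emb[t] for t in tokens)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

q = embed(["capital", "france"])
candidates = ["paris", "berlin"]
scores = {c: cosine(q, emb[c]) for c in candidates}
best = max(scores, key=scores.get)   # highest-similarity answer wins
```

Real systems learn these vectors jointly from question-answer pairs and knowledge-base structure, but the ranking step has the same shape.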
Topics in morphology (NLP or cognitive modelling)
Supervisor: Sharon Goldwater
Many NLP systems developed for English ignore the morphological structure of words and (mostly) get away with it. Yet morphology is far more important in many other languages. Handling morphology appropriately can reduce sparse data problems in NLP, and understanding human knowledge of morphology is a long-standing scientific question in cognitive science. New methods in both probabilistic modeling and neural networks have the potential to improve word representations for downstream NLP tasks and perhaps to shed light on human morphological acquisition and processing. Projects in this area could involve working to combine distributional syntactic/semantic information with morphological information to improve word representations for low-resource languages or sparse datasets, evaluating new or existing models of morphology against human behavioral benchmarks, or related topics.
Neural Network Models of Human Language and Visual Processing
Supervisor: Frank Keller
Recent neural models have used attention mechanisms as a way of focusing the processing of a neural network on certain parts of the input. This has proved successful for diverse applications such as image description, question answering, and machine translation. Attention is also a natural way of understanding human cognitive processing: during language processing, humans attend to words in a certain order; during visual processing, they view image regions in a certain sequence. Crucially, human attention can be captured precisely using an eye-tracker, a device that measures which parts of the input the eye fixates, and for how long. Projects within this area will leverage neural attention mechanisms to model aspects of human attention. One example is reading: when reading text, humans systematically skip words, spend more time on difficult words, and sometimes re-read passages. Another example is visual search: when looking for a target, humans make a sequence of fixations which depends on a diverse range of factors, such as visual salience, scene type, and object context. Neural attention models that capture such behaviors need to combine different types of knowledge, while also offering a cognitively plausible story of how such knowledge is acquired, often based on only small amounts of training data.
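The attention mechanisms mentioned above share one core computation. A minimal numpy sketch of scaled dot-product attention (dimensions and inputs are arbitrary assumptions) shows how a query induces a distribution over inputs, loosely analogous to fixation probabilities over words or image regions:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    weights = softmax(Q @ K.T / np.sqrt(d))
    return weights @ V, weights

rng = np.random.default_rng(0)
K = V = rng.normal(size=(5, 4))   # 5 input positions (words or image regions)
Q = rng.normal(size=(1, 4))       # one query: the current processing focus
out, w = attention(Q, K, V)
# w is a probability distribution over the 5 inputs; modelling human
# attention means making w match, e.g., eye-tracking fixation data
```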
Neural Network Models of Long-form, Multimodal Narratives
Supervisor: Frank Keller
Deep learning approaches are very successful in classical NLP tasks. However, they often assume a limited input context, and are designed to work on short texts. Standard architectures such as LSTMs or transformers therefore struggle to process long-form narratives such as books or screenplays. Prior work shows that such narratives have a particular structure, which can be analyzed in terms of events and characters. This structure can be used for applications such as question answering or summarization of long-form texts. Projects within this area will leverage recent advances in language modeling, such as retrieval-based or memory-based models, to analyze narrative structure. The analysis can take the form of sequences or graphs linking events or characters. Based on such structures, higher level concepts (e.g., schemas or tropes) can be identified, and user reaction such as suspense, surprise, or sentiment can be predicted. Multimodal narratives (illustrated stories, comics, or movies) pose a particular challenge, as narrative elements need to be grounded in both the linguistic and the visual modality to infer structure.
Multi-sentence questions
Supervisor: Bonnie Webber
A multi-sentence question (MSQ) is a short text specifying a question or set of related questions. Evidence suggests that the sentences in an MSQ relate to each other in different ways and that recognizing these relations can enable a system to produce a better response. We have been gathering a corpus of MSQs and have begun to characterize the relations within them. Research will involve collecting human responses to MSQs and using them to design and implement a system that produces similar responses.
Concurrent Discourse Relations
Evidence from crowd-sourced conjunction-completion experiments shows that people systematically infer implicit discourse relations that hold in addition to discourse relations signalled explicitly. Research on shallow discourse parsing has not yet reflected these findings. It is possible that enabling a shallow discourse parser to recognize implicit relations that hold concurrently with explicitly signalled relations may also help in the recognition of implicit relations without additional signals. Work in this area could also involve crowd-sourcing additional human judgments on discourse relations.
Low-resource language and speech processing
Supervisor: Sharon Goldwater
The most effective language and speech processing systems are based on statistical models learned from many annotated examples, a classic application of machine learning to input/output pairs. But for many languages and domains we have little annotated data, and even where data exists, it is often restricted to government or news text. For the vast majority of languages and domains, there is hardly anything. However, in many cases there is side information that we can exploit: dictionaries or other knowledge sources, or text paired with weak signals, such as images, speech, or timestamps. How can we exploit such heterogeneous information in statistical language processing? The goal of projects in this area is to develop statistical models and inference techniques that exploit such data, and to apply them to real problems.
Communicative efficiency approaches to language processing/typology
Supervisor: Frank Mollica
In recent years, communicative efficiency has been formalized in terms of information theory and case studies (e.g., word order patterns, color-naming systems) have been used to demonstrate that linguistic forms and meanings support efficient communication. Despite having communication as a universal objective, the languages of the world are still highly diverse in both their communicative objectives and the strategies they use to achieve efficiency. Projects in this area would involve using information theoretic models and conducting experiments to: identify and characterize communicative functions and grammar strategies; predict and explain the prevalence of communicative functions and grammar strategies across cultures and groups; and investigate the developmental and evolutionary dynamics of grammar.
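As a toy illustration of the information-theoretic formalization (the naming systems and meaning frequencies below are invented assumptions): the communicative cost of a naming system can be measured as the listener's expected surprisal when recovering the speaker's intended meaning from a word's extension.

```python
import numpy as np

def cost(partition, p):
    """Expected surprisal -log2 q(meaning | word), where the listener's
    belief q is p renormalized over the word's extension.
    partition: list of lists of meaning indices, one list per word."""
    c = 0.0
    for extension in partition:
        mass = sum(p[m] for m in extension)
        for m in extension:
            c += p[m] * -np.log2(p[m] / mass)
    return c

p = np.array([0.4, 0.3, 0.2, 0.1])   # assumed meaning frequencies
fine = [[0], [1], [2], [3]]          # one word per meaning
coarse = [[0, 1], [2, 3]]            # two words covering two meanings each

c_fine = cost(fine, p)       # perfect recovery: zero cost, but more words
c_coarse = cost(coarse, p)   # smaller lexicon, higher listener uncertainty
```

Efficient-communication analyses ask which attested systems sit near the optimal frontier of this cost/complexity trade-off.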
Incremental interpretation for robust NLP using CCG and dependency parsing
Supervisor: Mark Steedman
Combinatory Categorial Grammar (CCG) is a computational grammar formalism that has recently been used widely in NLP applications including wide-coverage parsing, generation, and semantic parser induction. The present project seeks to apply insights from these and other sources including dependency parsing to the problem of incremental word-by-word parsing and interpretation using statistical models. Possible evaluation tasks include language modeling for automatic speech recognition, as well as standard parsing benchmarks.
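For readers unfamiliar with CCG, here is a toy sketch of its two basic application rules over category strings (a real incremental parser would use the full combinator set plus statistical models; the plain-string category encoding is an assumption for brevity):

```python
# Toy CCG application rules. Categories are plain strings such as
# "NP" or "(S\NP)/NP" (a transitive verb: takes an NP to the right,
# then an NP to the left, yielding S).
def forward_apply(left, right):
    """X/Y applied to Y yields X (forward application)."""
    if "/" in left and left.rsplit("/", 1)[1] == right:
        return left.rsplit("/", 1)[0]
    return None

def backward_apply(left, right):
    """Y combined with X\\Y yields X (backward application)."""
    if "\\" in right and right.rsplit("\\", 1)[1] == left:
        return right.rsplit("\\", 1)[0]
    return None

# Derive "Mary likes music" with categories NP, (S\NP)/NP, NP:
vp = forward_apply("(S\\NP)/NP", "NP")      # likes + music -> "(S\NP)"
s = backward_apply("NP", vp.strip("()"))    # Mary + likes music -> "S"
```

Truly incremental interpretation requires combining words strictly left to right, which is where CCG's additional combinators (composition, type-raising) come in.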
Data-driven learning of temporal semantics for NLP
Supervisor: Mark Steedman
Mike Lewis's Edinburgh thesis (2015) shows how to derive a natural language semantics for wide-coverage parsers that directly captures relations of paraphrase and entailment, using machine learning and parser-based machine reading of large amounts of text. The present project seeks to extend the semantics to temporal and causal relations between events, such as the fact that being somewhere is the consequent state of arriving there, using large amounts of timestamped text.
Constructing large knowledge graphs from text using machine reading
Supervisor: Mark Steedman
Knowledge graphs like Freebase are constructed by hand using relation labels that are not easy to map onto natural language semantics, especially for languages other than English. An obvious alternative is to build the knowledge graph in terms of language-independent natural language semantic relations, which, as Lewis and Steedman (2013b) show, can be mined by machine reading from multilingual text. The project will investigate the extension of the language-independent semantics and its application to the construction of large knowledge resources using parser-based machine reading.
Semantic Parsing for Sequential Question Answering
Supervisor: Mirella Lapata
Semantic parsing maps natural language queries into machine interpretable meaning representations (e.g., logical forms or computer programs). These representations can be executed in a task-specific environment to help users navigate a database, compare products, or reach a decision. Semantic parsers to date can handle queries of varying complexity and a wealth of representations including lambda calculus, dependency-based compositional semantics, variable-free logic, and SQL.
However, the bulk of existing work has focused on isolated queries, ignoring the fact that most natural language interfaces receive inputs in streams. Users typically ask questions or perform tasks in multiple steps, and they often decompose a complex query into a sequence of inter-related sub-queries. The aim of this project is to develop novel neural architectures for training semantic parsers in a context-dependent setting. The task involves simultaneously parsing individual queries correctly and resolving co-reference links between them. An additional challenge involves eliciting datasets which simulate the task of answering sequences of simple but inter-related questions.
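A deliberately simplified, rule-based sketch of the context-dependent setting (the patterns, the `films` table, and its columns are hypothetical): a follow-up question containing a pronoun is resolved against the entity introduced by the previous query before being mapped to SQL.

```python
import re

# Toy context-dependent parser: each query may read from and write to a
# shared context. A neural parser would learn this behaviour; here two
# hand-written patterns (assumptions) make the mechanism visible.
def parse(question, context):
    m = re.match(r"who directed (.+)\?", question)
    if m:
        context["entity"] = m.group(1)   # remember the entity for later
        return f"SELECT director FROM films WHERE title = '{m.group(1)}'"
    m = re.match(r"when was it released\?", question)
    if m and "entity" in context:
        # "it" is a co-reference link resolved from the context
        return f"SELECT year FROM films WHERE title = '{context['entity']}'"
    return None

ctx = {}
q1 = parse("who directed Vertigo?", ctx)
q2 = parse("when was it released?", ctx)   # "it" -> Vertigo, via ctx
```

The research challenge is to learn such behaviour from data rather than hand-written rules, while parsing each query correctly and resolving the links jointly.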
Knowledge Graph Completion with Rich Semantics
Supervisor: Jeff Pan
Knowledge graphs have been shown to be useful for improving the performance and explainability of machine learning methods (such as transfer learning and zero-shot learning) and their downstream tasks, such as NLP tasks. The present project seeks to investigate how to integrate rich semantics into knowledge graph completion models and methods. Projects in this area could involve integrating knowledge graph schemas, temporal and spatial constraints, knowledge graph updates, or information from language models.
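One family of completion methods such a project could build on scores candidate triples with embeddings. Here is a minimal sketch in the style of TransE (head + relation ≈ tail), using tiny hand-made vectors rather than trained embeddings:

```python
import numpy as np

# Hand-made 2-d embeddings (assumptions, standing in for trained vectors).
E = {"Edinburgh": np.array([0.0, 1.0]),
     "Scotland":  np.array([1.0, 2.0]),
     "France":    np.array([3.0, 0.0])}
R = {"locatedIn": np.array([1.0, 1.0])}

def score(h, r, t):
    """TransE-style distance ||h + r - t||: lower = more plausible."""
    return np.linalg.norm(E[h] + R[r] - E[t])

s_true = score("Edinburgh", "locatedIn", "Scotland")   # near zero
s_false = score("Edinburgh", "locatedIn", "France")    # larger distance
```

Integrating rich semantics then means constraining or augmenting such scorers, e.g. rejecting candidate triples that violate schema-level type or temporal constraints.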
Complex Query Answering over Knowledge Graphs
Supervisor: Jeff Pan
Answering complex queries over large-scale incomplete knowledge graphs is a fundamental yet challenging task. Current research occupies two extremes: one relies entirely on logical reasoning over both schema and data sub-graphs, but suffers from knowledge incompleteness; the other relies mainly on embeddings, but pays less attention to schema information. The present project will investigate alternative approaches, including those that can make use of schemas and embeddings effectively.