Computational Cognitive Science
A list of potential topics for PhD students in the area of Computational Cognitive Science.
Leverage attention-based deep learning approaches to build cognitive models of human language processing or human visual processing
Supervisor: Frank Keller
Recent advances in deep learning have used attention mechanisms as a way of focusing a neural network's processing on certain parts of the input. This has proved successful for diverse applications such as image description, question answering, and machine translation. Attention is also a natural way of understanding human cognitive processing: during language processing, humans attend to words in a certain order; during visual processing, they view image regions in a certain sequence. Crucially, human attention can be captured precisely using an eye-tracker, a device that measures which parts of the input the eye fixates, and for how long.
The aim of this project is to leverage neural attention mechanisms to model aspects of human attention. One example is reading: when reading text, humans systematically skip words, spend more time on difficult words, and sometimes re-read passages. Another example is visual search: when looking for a target, humans make a sequence of fixations which depends on a diverse range of factors, such as visual salience, scene type, and object context. Neural attention models that capture such behaviors need to combine different types of knowledge, while also offering a cognitively plausible story of how such knowledge is acquired, often based on only small amounts of training data.
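To make the starting point concrete, here is a minimal sketch of the scaled dot-product attention mechanism that such models typically build on. The vectors are hypothetical toy values, not data from any of the tasks above; real models would learn the queries, keys, and values.

```python
import math

def attention(query, keys, values):
    """Scaled dot-product attention over a sequence.

    query: one vector (list of floats); keys/values: lists of vectors.
    Returns the attention weights and the weighted sum of the values.
    """
    d = len(query)
    # Similarity of the query to each key, scaled by sqrt(dimension).
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d)
              for key in keys]
    # Softmax (subtracting the max for numerical stability).
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    # Context vector: attention-weighted sum of the values.
    context = [sum(w * v[i] for w, v in zip(weights, values))
               for i in range(len(values[0]))]
    return weights, context

# Toy example: the query matches the second key most closely,
# so most of the attention mass lands there.
weights, context = attention(
    query=[1.0, 0.0],
    keys=[[0.0, 1.0], [1.0, 0.0], [0.5, 0.5]],
    values=[[1.0], [2.0], [3.0]],
)
```

In a cognitive model, such weights could be compared against eye-tracking measures such as fixation probability or duration.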
Topics in morphology (NLP or cognitive modelling)
Supervisor: Sharon Goldwater
Many NLP systems developed for English ignore the morphological structure of words and (mostly) get away with it. Yet morphology is far more important in many other languages. Handling morphology appropriately can reduce sparse data problems in NLP, and understanding human knowledge of morphology is a long-standing scientific question in cognitive science. New methods in both probabilistic modeling and neural networks have the potential to improve word representations for downstream NLP tasks and perhaps to shed light on human morphological acquisition and processing. Projects in this area could involve combining distributional syntactic/semantic information with morphological information to improve word representations for low-resource languages or sparse datasets, evaluating new or existing models of morphology against human behavioral benchmarks, or related topics.
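As a flavor of the problem, here is a toy sketch of unsupervised suffix discovery: counting frequent word-final substrings in a corpus. This is a deliberately crude stand-in; actual models of morphology (probabilistic segmenters, neural analyzers) score whole segmentations rather than raw suffix frequencies, and the mini-corpus below is hypothetical.

```python
from collections import Counter

def suffix_candidates(words, max_len=3, min_count=3):
    """Count word-final substrings up to max_len characters.

    Substrings that recur across many word types are candidate
    suffixes -- a toy version of unsupervised morphology learning.
    """
    counts = Counter()
    for w in words:
        # Never take the whole word as its own "suffix".
        for k in range(1, min(max_len, len(w) - 1) + 1):
            counts[w[-k:]] += 1
    return [s for s, c in counts.most_common() if c >= min_count]

# Hypothetical mini-corpus of English verb forms.
words = ["walked", "talked", "jumped", "walking", "talking", "jumping"]
suffixes = suffix_candidates(words)
# Frequent endings such as "ed" and "ing" surface as candidates.
```

Even this crude counter illustrates why sparse data is central: with few word types per stem, the signal separating true suffixes from accidental string overlaps becomes weak.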
Topics in unsupervised speech processing and/or modelling infant speech perception
Supervisor: Sharon Goldwater
Work in unsupervised (or 'zero-resource') speech processing (see Aren Jansen et al., ICASSP 2013) has begun to investigate methods for extracting repeated units (phones, words) from raw acoustic data as a possible way to index and summarize speech files without the need for transcription. This could be especially useful for languages where there is little data to develop supervised speech recognition systems. In addition, it raises the question of whether similar methods could be used to model the way that human infants begin to identify words in the speech stream of their native language. Unsupervised speech processing is a growing area of research with many interesting open questions, so a number of projects are possible. Projects could focus mainly on ASR technology or mainly on modeling language acquisition; specific research questions will depend on this choice. Here are just two possibilities: (1) unsupervised learners are more sensitive to input representation than supervised learners are, and preliminary work suggests that MFCCs are not necessarily the best option. Investigate how to learn better input representations (e.g., using neural networks) that are robust to speaker differences but encode linguistically meaningful distinctions. (2) existing work in both speech processing and cognitive modeling suggests that trying to learn either words or phones alone may be too difficult, and that in fact we need to develop *joint learners* that simultaneously learn at both levels. Investigate models that can be used to do this and evaluate how joint learning can improve performance.
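The core idea of extracting repeated units can be illustrated with a toy sketch over a symbolic phone sequence: repeated stretches of phones are flagged as candidate words. Real zero-resource systems work on continuous acoustic features with dynamic-time-warping or learned representations, so this character-level version (with a made-up utterance) is only a conceptual illustration.

```python
from collections import Counter

def repeated_ngrams(phones, n_min=2, n_max=4, min_count=2):
    """Count every phone n-gram and keep the ones that recur.

    A crude stand-in for unsupervised term discovery: stretches of
    phones that repeat across the input are candidate word-like units.
    """
    counts = Counter()
    for n in range(n_min, n_max + 1):
        for i in range(len(phones) - n + 1):
            counts[tuple(phones[i:i + n])] += 1
    return {gram: c for gram, c in counts.items() if c >= min_count}

# Hypothetical utterance, written as a flat phone string with no
# word boundaries: "the doggy sees the doggy".
phones = list("thedoggyseesthedoggy")
candidates = repeated_ngrams(phones)
# Recurring stretches like ('d','o','g','g') emerge as candidate units.
```

The gap between this sketch and the real problem is exactly where the research questions lie: acoustic tokens of the "same" word are never identical, which is why input representations (point 1) and joint learning of phones and words (point 2) matter.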