Postgraduate Research Opportunities

A list of all the possible topics of research within ICSA.

Postgraduate Study

For information on postgraduate study within Informatics please refer to the Postgraduate section on the Informatics website.


Possible PhD Topics in ICSA

This is the list of possible PhD topics suggested by members of staff in ICSA. These topics are meant to give PhD applicants an idea of the scope of the work in the Institute. Of course applicants can also suggest their own topic. In both cases, they should contact the potential supervisor before submitting an application. 

Dynamic Code Analysis and Optimisation

Prospective Supervisor: Björn Franke

While static analysis attempts to derive code properties from source code or some intermediate representation, much more information about a program becomes available during its execution. For example, many concrete values of variables are not known at compilation time and only become available once the program is running. Dynamic code analysis attempts to extract useful program information at runtime, either in an offline profiling stage or in a runtime system much like a just-in-time compiler. In the latter case, there is an additional challenge in that code instrumentation and analysis must not impact performance too much. Dynamic information can be used to drive code optimisations, possibly speculatively, including parallelisation.
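As an illustration, the sketch below profiles the runtime values of a single variable with hand-inserted instrumentation; the names and structure are invented for this example rather than taken from any existing tool. If the stride turns out to be almost always 1, a compiler could speculatively specialise the loop for that value.

    // value_profile.cpp -- minimal sketch of dynamic value profiling.
    // All names here are illustrative, not part of any existing tool.
    #include <cstdio>
    #include <unordered_map>

    // Records how often each value of an instrumented variable is seen at runtime.
    struct ValueProfile {
        std::unordered_map<long, long> counts;
        void record(long v) { ++counts[v]; }
        // A value observed in (almost) every execution is a candidate for
        // speculative specialisation, e.g. constant propagation guarded by a check.
        void report() const {
            for (const auto& kv : counts)
                std::printf("value %ld seen %ld times\n", kv.first, kv.second);
        }
    };

    ValueProfile stride_profile;

    // The "application" kernel: the stride is unknown statically, but at runtime
    // it may turn out to be almost always 1, enabling an optimised variant.
    long sum(const long* a, long n, long stride) {
        stride_profile.record(stride);   // instrumentation inserted by the profiler
        long s = 0;
        for (long i = 0; i < n; i += stride) s += a[i];
        return s;
    }

    int main() {
        long data[100];
        for (long i = 0; i < 100; ++i) data[i] = i;
        for (int run = 0; run < 1000; ++run) sum(data, 100, 1);  // stride 1 dominates
        sum(data, 100, 7);                                       // rare exception
        stride_profile.report();
        return 0;
    }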

Advanced JIT compilation

Prospective Supervisor: Björn Franke

Just-in-time (JIT) compilation is a frequently used dynamic compilation strategy. It aims to provide application portability whilst minimising the compilation overhead on the target device. In this project we explore approaches to JIT compilation for mobile, server and data centre applications alike, and investigate decentralised and hardware-supported JIT compilation technologies.
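The trigger mechanism at the core of JIT compilation can be sketched as a toy tiered dispatcher: a function starts out interpreted and, once it has been called often enough, is replaced by a compiled version. The "compiler" below is stubbed out and all names and thresholds are illustrative assumptions.

    // jit_tiering.cpp -- toy illustration of the hot-path trigger behind JIT
    // compilation; the "compiler" is stubbed out and names are illustrative.
    #include <cstdio>

    using Fn = long (*)(long);

    long interpreted_square(long x) {           // slow, portable baseline
        long r = 0;
        for (long i = 0; i < x; ++i) r += x;    // deliberately naive
        return r;
    }

    long compiled_square(long x) {              // stands in for JIT-generated code
        return x * x;
    }

    struct TieredFunction {
        Fn impl = interpreted_square;
        long calls = 0;
        static constexpr long kHotThreshold = 100;
        long operator()(long x) {
            if (++calls == kHotThreshold)       // function became hot:
                impl = compiled_square;         // "compile" and patch the dispatch
            return impl(x);
        }
    };

    int main() {
        TieredFunction square;
        long total = 0;
        for (long i = 0; i < 1000; ++i) total += square(i);
        std::printf("total=%ld, calls=%ld\n", total, square.calls);
        return 0;
    }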

Auto-Parallelisation

Prospective Supervisors: Michael O'Boyle, Björn Franke

The aim of this project is to develop advanced compiler technology that can take emerging applications and automatically map them on to the next generation multi-core processors. This PhD will involve new research into discovering parallelism within multimedia and streaming applications going beyond standard data parallel analysis. The project will also investigate cost-effective mapping of parallelism to processors which may include dynamic or adaptive compilation.
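The sketch below shows the kind of data-parallel loop such a compiler targets, with the mapping to threads written by hand using OpenMP (chosen here purely for illustration; compile with -fopenmp). The research aim is to discover and apply such mappings automatically, and to go beyond this standard data-parallel case.

    // parallel_map.cpp -- the kind of loop an auto-paralleliser targets, shown
    // with the transformation applied by hand using OpenMP (compile with -fopenmp).
    #include <cstdio>
    #include <vector>

    int main() {
        const long n = 1 << 20;
        std::vector<double> a(n, 1.0), b(n, 2.0), c(n);
        double sum = 0.0;

        // Sequential source: each iteration is independent apart from the
        // reduction into 'sum', so the loop is data parallel.
        // for (long i = 0; i < n; ++i) { c[i] = a[i] * b[i]; sum += c[i]; }

        // Parallel mapping the compiler would like to derive automatically:
        #pragma omp parallel for reduction(+ : sum)
        for (long i = 0; i < n; ++i) {
            c[i] = a[i] * b[i];
            sum += c[i];
        }
        std::printf("sum = %f\n", sum);
        return 0;
    }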

Compilers that Learn to Optimise

Prospective Supervisor: Michael O'Boyle

Develop a compiler framework that can automatically learn how to optimise programs.

Rather than hard-coding a compiler strategy for each platform, we aim to develop a novel, portable compiler approach that can automatically tune itself to any fixed hardware and improve its performance over time. This is achieved by employing machine learning approaches to optimisation, where the machine learning algorithm first learns the optimisation space and then automatically derives a compilation strategy that attempts to generate the "best" optimised version of any user program. Such an approach, if successful, will have a wide range of applications: it will deliver portability and performance of compilers across platforms, eliminating the human compiler-development bottleneck.
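As a very rough sketch of the idea, the toy model below predicts a flag setting for an unseen program by copying the nearest previously profiled program in a feature space. The features, flags and nearest-neighbour model are all illustrative assumptions, not the project's actual method.

    // learned_heuristic.cpp -- minimal sketch of a learned optimisation heuristic:
    // a 1-nearest-neighbour model predicts a good flag setting for a new program
    // from programs profiled earlier.  Features and flags here are invented.
    #include <cmath>
    #include <cstdio>
    #include <string>
    #include <vector>

    struct TrainingPoint {
        std::vector<double> features;   // e.g. loop count, arithmetic intensity
        std::string best_flags;         // flags found best by offline search
    };

    double distance(const std::vector<double>& a, const std::vector<double>& b) {
        double d = 0;
        for (size_t i = 0; i < a.size(); ++i) d += (a[i] - b[i]) * (a[i] - b[i]);
        return std::sqrt(d);
    }

    // Predict flags for an unseen program by copying the nearest training program.
    std::string predict(const std::vector<TrainingPoint>& train,
                        const std::vector<double>& features) {
        size_t best = 0;
        for (size_t i = 1; i < train.size(); ++i)
            if (distance(train[i].features, features) <
                distance(train[best].features, features))
                best = i;
        return train[best].best_flags;
    }

    int main() {
        std::vector<TrainingPoint> train = {
            {{120, 0.9}, "-O3 -funroll-loops"},
            {{5, 0.1},   "-Os"},
        };
        std::printf("%s\n", predict(train, {100, 0.8}).c_str());
        return 0;
    }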

Memory Consistency Models and Cache Coherency for Parallel Architectures

Prospective Supervisor: Vijay Nagarajan

Parallel architectures (e.g. multicores, manycores and GPUs) are here. Since performance on parallel architectures is contingent on programmers writing parallel software, it is crucial that these architectures are programmable. The memory consistency model, which specifies what value a memory read can return, is at the heart of concurrency semantics. The cache coherence subsystem, which provides the view of a single coherent shared memory, is at the heart of shared-memory programming. The goal of this project is to design and implement memory consistency models and cache coherence subsystems for future parallel architectures.
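The classic store-buffering litmus test illustrates what is at stake: under a weak memory model both loads may return 0, an outcome that sequential consistency forbids. The sketch below expresses the test with C++ relaxed atomics purely for illustration.

    // sb_litmus.cpp -- the classic store-buffering litmus test.  Under a weak
    // memory model (relaxed atomics here), both loads may return 0; under
    // sequential consistency that outcome is forbidden.
    #include <atomic>
    #include <cstdio>
    #include <thread>

    std::atomic<int> x{0}, y{0};
    int r0 = -1, r1 = -1;

    void t0() {
        x.store(1, std::memory_order_relaxed);
        r0 = y.load(std::memory_order_relaxed);
    }

    void t1() {
        y.store(1, std::memory_order_relaxed);
        r1 = x.load(std::memory_order_relaxed);
    }

    int main() {
        for (int i = 0; i < 100000; ++i) {
            x = 0; y = 0;
            std::thread a(t0), b(t1);
            a.join(); b.join();
            if (r0 == 0 && r1 == 0) {   // only possible because the model is weak
                std::printf("weak outcome observed on iteration %d\n", i);
                break;
            }
        }
        return 0;
    }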

Parallelism Discovery

Prospective Supervisor: Björn Franke

Most legacy applications are written in a sequential programming language and expose very little scope for immediate parallelisation. The broad availability of multicore computers, however, necessitates the parallelisation of such applications if the users want to further improve application performance. In this project we investigate dynamic methods for the discovery of parallelism within sequential legacy applications.
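One possible ingredient, sketched below with hand-inserted instrumentation and invented names, is dynamic dependence profiling: record which loop iteration last wrote each memory address and flag reads that consume a value produced by an earlier iteration, i.e. loop-carried dependences that block parallelisation.

    // dep_profile.cpp -- minimal sketch of dynamic dependence profiling: record,
    // per memory address, the last loop iteration that wrote it, and flag reads
    // that consume a value produced by an earlier iteration (loop-carried deps).
    // Instrumentation is inserted by hand here; a real tool would insert it
    // automatically.
    #include <cstdio>
    #include <unordered_map>

    std::unordered_map<const void*, long> last_writer;   // address -> iteration
    bool loop_carried = false;

    void on_write(const void* addr, long iter) { last_writer[addr] = iter; }
    void on_read(const void* addr, long iter) {
        auto it = last_writer.find(addr);
        if (it != last_writer.end() && it->second < iter) loop_carried = true;
    }

    int main() {
        long a[16] = {1};
        for (long i = 1; i < 16; ++i) {
            on_read(&a[i - 1], i);        // reads value written in iteration i-1
            a[i] = a[i - 1] + 1;
            on_write(&a[i], i);
        }
        std::printf("loop-carried dependence: %s\n", loop_carried ? "yes" : "no");
        return 0;
    }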

Patterns and Skeletons in Parallel Programming

Prospective Supervisor: Murray Cole

The skeletal approach to parallel programming advocates the use of program forming constructs which abstract commonly occurring patterns of parallel computation and interaction.

Many parallel programs can be expressed as instances of more generic patterns of parallelism, such as pipelines, stencils, wavefronts and divide-and-conquer. In our work we call these patterns "skeletons". Providing a skeleton API simplifies programming: the programmer only has to write code which customises selected skeletons to the application. This also makes the resulting programs more performance portable: the compiler and/or run-time can exploit structural information provided by the skeleton to choose the best implementation strategy for a range of underlying architectures, from GPUs, through manycores, and on to large heterogeneous clusters.
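A minimal sketch of such an API is given below: a "map" skeleton that owns the parallel implementation strategy (here, a fixed split across threads, purely for illustration) while the programmer supplies only the per-element function. All names are invented for this example.

    // map_skeleton.cpp -- a tiny "map" skeleton: the programmer supplies only the
    // per-element function; the skeleton owns the parallel implementation strategy
    // (here, a fixed split across std::thread workers).  Illustrative only.
    #include <algorithm>
    #include <cstdio>
    #include <thread>
    #include <vector>

    template <typename T, typename F>
    void map_skeleton(std::vector<T>& data, F f, unsigned workers = 4) {
        std::vector<std::thread> pool;
        size_t chunk = (data.size() + workers - 1) / workers;
        for (unsigned w = 0; w < workers; ++w) {
            size_t lo = w * chunk, hi = std::min(data.size(), lo + chunk);
            pool.emplace_back([&, lo, hi] {
                for (size_t i = lo; i < hi; ++i) data[i] = f(data[i]);
            });
        }
        for (auto& t : pool) t.join();
    }

    int main() {
        std::vector<double> v(1 << 20, 3.0);
        map_skeleton(v, [](double x) { return x * x; });   // the only user code
        std::printf("v[0] = %f\n", v[0]);
        return 0;
    }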

Opportunities for research in this area include the full integration of skeletons into the language and compilation process, dynamic optimisation of skeletons for diverse heterogeneous systems, the extension of skeleton approaches to applications which are "not quite" skeleton instances, the automatic discovery of new (and old) skeletons in existing applications, and the design and implementation of skeleton languages in domain-specific contexts.

Searching the Embedded Program Optimisation Space

Prospective Supervisor: Michael O'Boyle

Investigate the use of automatically generated performance predictors based on machine learning to act as proxies for the machine.

Efficient implementation is critical for embedded systems. Current optimising compiler approaches based on static analysis fail to deliver performance because they rely on a hard-wired, idealised model of the processor. This project is concerned with feedback-directed search techniques, which can dramatically outperform standard approaches. It will investigate the use of automatically generated performance predictors, based on machine learning, to act as proxies for the machine. This will allow extremely rapid determination of good optimisations and greater coverage of the optimisation space.
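The sketch below illustrates the idea under invented assumptions: a stand-in linear model plays the role of the learned performance predictor, and a random search ranks thousands of flag combinations far faster than compiling and running each one on the real machine.

    // proxy_search.cpp -- sketch of searching an optimisation space with a learned
    // performance predictor standing in for the real machine.  The predictor here
    // is a made-up linear model; in the project it would be trained from profiles.
    #include <bitset>
    #include <cstdio>
    #include <random>

    constexpr int kNumFlags = 16;
    using FlagVector = std::bitset<kNumFlags>;

    // Stand-in for a learned model: predicted cycles for a flag combination.
    double predict_cycles(const FlagVector& flags) {
        static const double weight[kNumFlags] = {-5, 3, -2, 1, -4, 2, -1, 0.5,
                                                 -3, 1, -2, 4, -1, 2, -0.5, 1};
        double cycles = 1000.0;
        for (int i = 0; i < kNumFlags; ++i)
            if (flags[i]) cycles += weight[i];
        return cycles;
    }

    int main() {
        std::mt19937 rng(42);
        std::uniform_int_distribution<unsigned long> dist(0, (1u << kNumFlags) - 1);
        FlagVector best;
        double best_cycles = predict_cycles(best);
        // The predictor is cheap, so thousands of candidates can be ranked in the
        // time one real compile-and-run measurement would take.
        for (int i = 0; i < 10000; ++i) {
            FlagVector candidate(dist(rng));
            double c = predict_cycles(candidate);
            if (c < best_cycles) { best_cycles = c; best = candidate; }
        }
        std::printf("best flags %s, predicted cycles %.1f\n",
                    best.to_string().c_str(), best_cycles);
        return 0;
    }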