Potential PhD Project Topics

List of PhD projects and topic areas proposed by CISA supervisors.

This page lists a number of topic areas and concrete topics that some of our supervisors are interested in, and is updated frequently. Please feel free to contact them if you are interested in any of these.

Fairness in algorithmic decision making

In many data-driven domains, from social media to sharing-economy applications, algorithms increasingly make decisions that affect the information or resources made available to users. As many recent controversies have shown, these algorithmic decisions often embed biases that may introduce behaviour into the system that humans would consider "unfair". Such issues are closely related to making AI "safe" and human-friendly, as they are ultimately about bridging the gap between machine-interpretable optimisation objectives and heuristics on the one hand, and human norms and moral and social expectations on the other.

The aim of this project is to develop algorithms that are provably fair, by endowing the machine agents that execute them with methods for judging their fairness, based on notions of fairness elicited from human users. In particular, we want to explore the challenges of deciding what rules to apply in decision making when multiple, potentially conflicting definitions of fairness have been provided by different stakeholders in the system. The project will involve identifying appropriate application scenarios within which this problem can be investigated, formally modelling the underlying decision-making situation, designing and executing experiments with human users to elicit notions of fairness, and evaluating the adequacy of different candidate algorithmic solutions with regard to these fairness metrics. In the process, a computational framework for reasoning about fairness will be developed (which may involve either symbolic or numerical reasoning and learning methods), and its usefulness will be assessed in terms of how much it aids the design of fair algorithms.

Required skills include a solid background in AI methods, programming experience, and experience with human experimentation.
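
To make the idea of conflicting fairness definitions concrete, the sketch below computes two standard metrics, demographic parity and equal opportunity, on a small invented dataset. The data, group labels, and choice of metrics are illustrative assumptions, not part of the project specification.

```python
# Two common fairness metrics that can conflict: a decision rule can
# satisfy one while violating the other. All data here is hypothetical.

def demographic_parity_gap(decisions, groups):
    """Absolute difference in positive-decision rates between groups A and B."""
    def rate(g):
        return sum(d for d, grp in zip(decisions, groups) if grp == g) / groups.count(g)
    return abs(rate("A") - rate("B"))

def equal_opportunity_gap(decisions, groups, labels):
    """Absolute difference in true-positive rates between groups A and B."""
    def tpr(g):
        pos = [d for d, grp, l in zip(decisions, groups, labels) if grp == g and l == 1]
        return sum(pos) / len(pos)
    return abs(tpr("A") - tpr("B"))

# Invented decisions for eight individuals, four in each group:
decisions = [1, 1, 0, 0, 1, 0, 1, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
labels    = [1, 1, 0, 0, 1, 1, 0, 0]   # hypothetical ground truth

print(demographic_parity_gap(decisions, groups))         # 0.0 -- parity holds
print(equal_opportunity_gap(decisions, groups, labels))  # 0.5 -- opportunity gap
```

Here the same decisions look perfectly fair to a stakeholder who demands equal acceptance rates, yet markedly unfair to one who demands equal true-positive rates; formalising and arbitrating exactly this kind of conflict is the core of the project.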

contact: Michael Rovatsos

Human factors and ergonomics of configuration languages

There is a good deal of evidence to suggest that configuration errors are responsible for a large proportion of system failures. These are often due to misunderstandings about features of the configuration language or related procedures. Very little work has been done on the usability of such languages, and it would be interesting to study the way in which language constructs are (mis)understood by real system administrators, and how common errors occur. This would be very useful in informing the design of new languages and configuration tools.

contact: Paul Anderson

Configuration language design

In contrast to modern programming languages, most current production configuration languages are defined by a single implementation, often with no clear specification of the semantics, or even the syntax. This makes it difficult to create alternative implementations, or other tools which process the language in different ways. The lack of clarity is also a potential source of configuration errors, which risk creating a weakness in a large system whose application-layer software has been verified with care. This motivates the development of new languages which support configuration-specific operations with a clear and simple semantics.

contact: Paul Anderson

Intelligent analysis of system configurations

Modern-day computing infrastructures are complex, collaborative systems involving a plethora of interacting components, in a continual state of flux, which are managed in a federated way by different people and organisations. In the absence of centralised control and global co-ordination, such infrastructures evolve organically in unpredictable ways. The result is usually a system which is less efficient, less reliable, and more difficult to manage. However, it is possible to imagine a set of intelligent agents gathering information about the configuration of a system and its running state, hypothesising about problems, and making suggestions for improvements.

contact: Paul Anderson

Mixed initiative system configuration

Modern computing infrastructures are usually managed by specifying their configuration in a special-purpose configuration language, which is then deployed by a configuration tool. This specification is usually a large collaborative endeavour which ultimately specifies every detail of the resulting system. This is a source of complexity (and hence error), as well as unnecessarily constraining the available options. Our previous work has shown that a constraint-based approach to the specification allows many of the details to be filled in automatically. But such arbitrary, automatic choices may not always be appropriate, and administrators will usually want to understand, and possibly influence, a particular choice. This motivates a "mixed-initiative" approach which supports a dialogue between the administrator and an automatic system to select an acceptable configuration.
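
As a rough illustration of the constraint-based idea, the sketch below enumerates configurations that satisfy declarative constraints and lets an administrator pin some of the choices. The parameter names, domains, and constraints are all invented for illustration, not taken from any real tool.

```python
# Rough sketch of constraint-based configuration with administrator "pins".
# Parameter names, domains, and constraints are invented for illustration.
from itertools import product

domains = {
    "webservers": [1, 2, 3, 4],
    "dbservers":  [1, 2],
    "region":     ["eu", "us"],
}

constraints = [
    lambda c: c["webservers"] >= 2 * c["dbservers"],   # capacity rule
    lambda c: c["webservers"] + c["dbservers"] <= 6,   # machine budget
]

def solve(pins=None):
    """Yield every configuration satisfying all constraints and admin pins."""
    pins = pins or {}
    keys = list(domains)
    for values in product(*(domains[k] for k in keys)):
        cfg = dict(zip(keys, values))
        if all(cfg[k] == v for k, v in pins.items()) and all(c(cfg) for c in constraints):
            yield cfg

# Fully automatic: the tool fills in every detail itself.
print(next(solve()))
# Mixed initiative: the administrator pins two choices; the tool completes the rest.
print(next(solve(pins={"region": "us", "dbservers": 2})))
```

The automatic choice is arbitrary among all valid configurations; the mixed-initiative dialogue would let the administrator inspect why a value was chosen and steer it, rather than accept the solver's first answer.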

contact: Paul Anderson

Learning Explainable Representations via Statistical Relational Learning

Numerous applications in robotics and data science require us to parse unstructured data in an automated fashion. However, many of the resulting models are not human-interpretable. Given the increasing focus on the need for explainable machine learning and AI algorithms, this project looks to interface general-purpose human-readable representations (databases, programs, logics) with cutting-edge machine learning.

contact: Vaishak Belle

Probabilistic programming

Probabilistic reasoning and machine learning over multi-modal data is a deeply challenging problem: programs can provide the modularity and descriptive clarity needed to build complex machine learning applications. This area looks at new methods for inferring and learning probabilistic programs.
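
A minimal illustration of the probabilistic-programming idea: the model below is an ordinary program with random choices, and inference is performed by generic rejection sampling against observed evidence. The model, its probabilities, and the query are invented for illustration.

```python
# Sketch: a tiny probabilistic program queried by rejection sampling.
# All model details (events, probabilities) are illustrative assumptions.
import random

def model():
    """A probabilistic program: latent causes, then observable evidence."""
    rain = random.random() < 0.3                        # prior: P(rain) = 0.3
    sprinkler = random.random() < (0.1 if rain else 0.5)
    grass_wet = rain or sprinkler                       # the observable
    return rain, grass_wet

def infer_p_rain_given_wet(n=100_000, seed=0):
    """Estimate P(rain | grass_wet) by discarding samples that miss the evidence."""
    random.seed(seed)
    accepted = [rain for rain, wet in (model() for _ in range(n)) if wet]
    return sum(accepted) / len(accepted)

print(infer_p_rain_given_wet())   # roughly 0.46: the evidence raises belief in rain
```

The appeal is that the inference routine knows nothing about the model's structure; learning the program itself from data, rather than writing it by hand, is the harder problem this area targets.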

contact: Vaishak Belle

Solving Science Problems

Since the early days of AI, people have been fascinated by machines solving puzzles, exercises, and exams that are used to test the intelligence of humans. The ability to solve such problems is an important cognitive and intellectual skill, as it is evaluated in academic admission tests such as the SAT, GMAT, and GRE. Recently, there has been a surge of interest in mathematical and scientific problem solving. This area looks to expand the frontiers of this literature by combining ideas from Bayesian machine learning, NLP, and probabilistic programming.

contact: Vaishak Belle

Epistemic planning and multi-agent systems

Classical automated planning has traditionally focused on finding a sequence of actions that achieves a goal state. In numerous AI applications involving human-machine interaction, the underlying model may need to reason and learn with respect to the beliefs and intentions of other agents; that is, it needs a mental model of the other systems in the environment. This area looks at automated planning in this richer context, building on ideas from modal logic and reasoning about knowledge.
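
One textbook way to formalise such a mental model is a possible-worlds (Kripke-style) structure. The toy sketch below, with invented worlds and agents, checks whether an agent knows a proposition by inspecting every world that agent considers possible.

```python
# Toy possible-worlds model of knowledge; worlds, agents, and the
# accessibility relations are invented for illustration.

worlds = {"w1": {"door_open": True}, "w2": {"door_open": False}}

# Indistinguishability: which worlds each agent considers possible
# from each actual world.
access = {
    "robot": {"w1": {"w1"}, "w2": {"w2"}},               # robot observes the door
    "human": {"w1": {"w1", "w2"}, "w2": {"w1", "w2"}},   # human cannot see it
}

def knows(agent, prop, actual):
    """Agent knows prop iff prop holds in every world it considers possible."""
    return all(worlds[w][prop] for w in access[agent][actual])

print(knows("robot", "door_open", "w1"))   # True
print(knows("human", "door_open", "w1"))   # False
```

Epistemic planning then searches for action sequences that change not just the physical state but these knowledge states, e.g. an announcement that collapses the human's uncertainty.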

contact: Vaishak Belle

Autonomous agents modelling other agents in complex environments

The design of autonomous agents which can complete tasks in complex dynamic environments is one of the core areas of research in modern artificial intelligence. A crucial requirement for such agents is the ability to interact competently with other agents whose behaviours, beliefs, plans, and goals may be unknown. Interacting with such agents requires the ability to reason about their unknown elements based on observed actions and other available data. While much research has been devoted to the development of such reasoning methods, there are still many open questions. A recent survey by Albrecht and Stone (https://arxiv.org/abs/1709.08071) provides a comprehensive overview of the existing methodologies and concludes with a section on open problems. I am interested in supervising projects in this general area, especially projects addressing open problems from the survey.
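
One family of methods in this area maintains a posterior over hypothesised opponent "types" and updates it from observed actions. The sketch below shows the Bayesian update on invented types and action probabilities; none of these numbers come from the survey.

```python
# Sketch of type-based reasoning about another agent. The types and
# their action probabilities are made up for illustration.

# Each hypothesised type assigns probabilities to the opponent's actions.
types = {
    "aggressive": {"attack": 0.8, "defend": 0.2},
    "cautious":   {"attack": 0.2, "defend": 0.8},
}

def update_beliefs(beliefs, observed_action):
    """Bayesian update of the posterior over opponent types."""
    posterior = {t: beliefs[t] * types[t][observed_action] for t in beliefs}
    z = sum(posterior.values())
    return {t: p / z for t, p in posterior.items()}

beliefs = {"aggressive": 0.5, "cautious": 0.5}   # uniform prior
for action in ["attack", "attack", "defend"]:
    beliefs = update_beliefs(beliefs, action)

print(beliefs)   # mass has shifted towards the "aggressive" type
```

The open problems concern what happens when the true behaviour matches none of the hypothesised types, when types themselves must be learned, or when the other agent is reasoning about us in turn.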

contact: Stefano Albrecht

Multi-agent learning

Multi-agent learning is concerned with the design and analysis of algorithms that enable autonomous agents to learn how to interact with one another. This includes teams of agents which must learn to collaborate in order to complete a given task, as well as agents which are in competition and must learn to solve tasks in the presence of adversaries. Reinforcement learning has emerged as one of the principal methodologies used in multi-agent learning, and a recent tutorial by Albrecht and Stone provides an overview of existing algorithms (http://www.cs.utexas.edu/~larg/ijcai17_tutorial). I am interested in supervising projects addressing important open problems in multi-agent learning, including algorithms for efficient multi-agent reinforcement learning and communication, as well as theoretical and empirical analyses of multi-agent learning processes and their convergence properties.
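
To give a flavour of the setting, the sketch below runs two independent stateless Q-learners in a repeated coordination game. The payoff matrix and hyperparameters are illustrative choices; whether and when such independent learners converge is exactly the kind of question this area studies.

```python
# Sketch: two independent Q-learners in a repeated 2x2 coordination game.
# Payoffs and hyperparameters are illustrative, not from any cited work.
import random

ACTIONS = ["a", "b"]
PAYOFF = {("a", "a"): 1.0, ("b", "b"): 1.0, ("a", "b"): 0.0, ("b", "a"): 0.0}

def train(episodes=5000, alpha=0.1, epsilon=0.1, seed=0):
    """Each agent learns its own action values, treating the other as part
    of the environment (the 'independent learners' setting)."""
    random.seed(seed)
    q1 = {a: 0.0 for a in ACTIONS}
    q2 = {a: 0.0 for a in ACTIONS}
    for _ in range(episodes):
        # epsilon-greedy action selection for both agents
        a1 = random.choice(ACTIONS) if random.random() < epsilon else max(q1, key=q1.get)
        a2 = random.choice(ACTIONS) if random.random() < epsilon else max(q2, key=q2.get)
        r = PAYOFF[(a1, a2)]
        q1[a1] += alpha * (r - q1[a1])     # stateless Q-learning update
        q2[a2] += alpha * (r - q2[a2])
    return max(q1, key=q1.get), max(q2, key=q2.get)

a1, a2 = train()
print(a1, a2)   # the learners typically settle on the same action
```

Even this tiny example exposes the core difficulty: each agent's environment is non-stationary because the other agent is learning too, which breaks the convergence guarantees of single-agent reinforcement learning.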

contact: Stefano Albrecht