List of PhD projects and topic areas proposed by CISA supervisors.
This page lists topic areas and concrete topics that some of our supervisors are interested in, and is updated frequently. Please feel free to contact them if you are interested in any of these.
The overall goal of these projects is to develop new methods and formal languages that can effectively bridge the areas of knowledge representation, probabilistic reasoning and machine learning. Formal languages and symbolic techniques have a long and distinguished history in AI, and have had a wide impact on scientific and commercial endeavors in areas as diverse as verification, robotics, planning, logistics and human-level commonsense reasoning. However, applications in these areas often need to handle inherent uncertainty, a need compounded by the increased prominence of data-oriented algorithms and statistical techniques. From a foundational perspective, the question of how knowledge representation languages need to be augmented to handle these complex notions of uncertainty is an open and challenging one. From a practical perspective, enriching existing machine learning algorithms with human-readable representations and background knowledge can be very useful. See also http://bit.ly/2f4tYS5.
contact: Vaishak Belle
In many data-driven domains, from social media to sharing-economy applications, algorithms are increasingly making decisions that affect the information or resources made available to users. As many recent controversies have shown, these algorithmic decisions often embed biases that may introduce behaviour into the system that humans would consider "unfair". Such issues are closely related to making AI "safe" and human-friendly, as they are ultimately about bridging the gap between machine-interpretable optimisation objectives and heuristics on the one hand, and human norms and moral and social expectations on the other. The aim of this project is to develop algorithms that are provably fair, by endowing the machine agents that execute these algorithms with methods for judging their fairness based on notions of fairness elicited from human users. In particular, we want to explore the challenge of deciding what rules to apply in decision making when multiple, potentially conflicting definitions of fairness have been provided by different stakeholders in the system. The project will involve identifying appropriate application scenarios within which this problem can be investigated, formally modelling the underlying decision-making situation, designing and executing experiments with human users to elicit notions of fairness, and evaluating the adequacy of different candidate algorithmic solutions with regard to these fairness metrics. In the process, a computational framework for reasoning about fairness will be developed (which may involve either symbolic or numerical reasoning and learning methods), and its usefulness will have to be assessed in terms of how much it aids the design of fair algorithms. Required skills include a solid background in AI methods, programming experience, and experience with experiments involving human participants.
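As a toy illustration of why conflicting fairness definitions pose a problem, the sketch below (with invented groups, labels and decisions) checks two common metrics on the same set of decisions: the decisions satisfy demographic parity (equal positive-decision rates across groups) while violating equal opportunity (equal true-positive rates). Any real project would use elicited definitions rather than these two textbook ones.

```python
# Hypothetical decision records: (group, true_label, predicted_decision).
decisions = [
    ("A", 1, 1), ("A", 1, 1), ("A", 0, 0), ("A", 0, 0),
    ("B", 1, 1), ("B", 1, 1), ("B", 1, 0), ("B", 0, 0),
]

def positive_rate(group):
    """Fraction of a group receiving a positive decision."""
    rows = [d for d in decisions if d[0] == group]
    return sum(1 for _, _, pred in rows if pred == 1) / len(rows)

def true_positive_rate(group):
    """Fraction of a group's truly-positive members receiving a positive decision."""
    rows = [d for d in decisions if d[0] == group and d[1] == 1]
    return sum(1 for _, _, pred in rows if pred == 1) / len(rows)

# Demographic parity compares positive-decision rates across groups...
parity_gap = abs(positive_rate("A") - positive_rate("B"))        # 0.0: satisfied
# ...while equal opportunity compares true-positive rates.
opportunity_gap = abs(true_positive_rate("A") - true_positive_rate("B"))  # 1/3: violated
```

The same decisions are "fair" under one definition and "unfair" under the other, which is exactly the kind of stakeholder conflict the project would need to resolve.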
contact: Michael Rovatsos
There is a good deal of evidence to suggest that configuration errors are responsible for a large proportion of system failures. These are often due to misunderstandings about features of the configuration language or related procedures. There has been very little work done on the usability of such languages, and it would be interesting to study the ways in which language constructs are (mis)understood by real system administrators, and how common errors occur. This would be very useful in informing the design of new languages and configuration tools ...
In contrast to modern programming languages, most current production configuration languages are defined by a single implementation, often with no clear specification of the semantics, or even the syntax. This makes it difficult to create alternative implementations, or other tools which process the language in different ways. The lack of clarity is also a potential source of configuration errors, which risk creating weaknesses in large systems whose application-layer software has been carefully verified. This motivates the development of new languages which support configuration-specific operations with a clear and simple semantics.
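To give a flavour of what a clear and simple semantics for one configuration-specific operation might look like, here is a minimal sketch (in Python, with invented record contents) of a right-biased recursive merge whose behaviour is pinned down by a two-line rule, rather than by the quirks of a single implementation:

```python
def merge(a, b):
    """Right-biased merge of two configuration records.
    Semantics: nested records merge recursively; for any other
    key, the value from `b` replaces the value from `a`."""
    out = dict(a)
    for key, value in b.items():
        if key in out and isinstance(out[key], dict) and isinstance(value, dict):
            out[key] = merge(out[key], value)  # recurse into nested records
        else:
            out[key] = value                   # right side wins
    return out

# Hypothetical site-wide defaults and a per-site override.
base = {"ssh": {"port": 22, "root_login": False}, "dns": "10.0.0.1"}
site = {"ssh": {"port": 2222}}
merged = merge(base, site)
# -> {'ssh': {'port': 2222, 'root_login': False}, 'dns': '10.0.0.1'}
```

Because the rule is stated independently of any one tool, an alternative implementation (or an analysis tool) can be checked against it directly.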
contact: Paul Anderson
Modern-day computing infrastructures are complex, collaborative systems involving a plethora of interacting components, in a continual state of flux, which are managed in a federated way by different people and organisations. In the absence of centralised control and global co-ordination, such infrastructures evolve organically in unpredictable ways. The result is usually a system which is less efficient, less reliable and more difficult to manage. However, it is possible to imagine a set of intelligent agents gathering information about the configuration of a system and its running state, hypothesising about problems, and making suggestions for improvements.
contact: Paul Anderson
Modern computing infrastructures are usually managed by specifying their configuration in a special-purpose configuration language which is then deployed by a configuration tool. This specification is usually a large collaborative endeavour which ultimately specifies every detail of the resulting system. This is a source of complexity (and hence error), as well as unnecessarily constraining the available options. Our previous work has shown that a constraint-based approach to the specification allows many of the details to be filled in automatically. But such arbitrary, automatic choices of these details may not always be appropriate, and administrators will usually want to understand and possibly influence a particular choice. This motivates a "mixed-initiative" approach which supports a dialogue between the administrator and an automatic system to select an acceptable configuration.
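A minimal sketch of the constraint-based idea, with an invented choice space: a naive solver enumerates configurations satisfying the constraints and fills in the unpinned details automatically, while an administrator can pin a particular choice and re-solve. This is only a crude stand-in for a real mixed-initiative dialogue, but it shows the division of labour.

```python
from itertools import product

# Hypothetical configuration options for one machine.
domains = {
    "web_server": ["apache", "nginx"],
    "php": ["mod_php", "fpm"],
    "memory_gb": [1, 2, 4],
}

# Constraints that any acceptable configuration must satisfy (invented rules).
constraints = [
    lambda c: not (c["web_server"] == "nginx" and c["php"] == "mod_php"),
    lambda c: c["memory_gb"] >= 2 or c["php"] == "fpm",
]

def solutions(pinned=None):
    """Yield every configuration satisfying the constraints; `pinned` holds
    administrator-imposed choices (the mixed-initiative part)."""
    pinned = pinned or {}
    keys = list(domains)
    for values in product(*(domains[k] for k in keys)):
        cand = dict(zip(keys, values))
        if all(cand[k] == v for k, v in pinned.items()) and \
           all(check(cand) for check in constraints):
            yield cand

# The tool picks any valid configuration, filling in details automatically...
auto = next(solutions())
# ...but the administrator can override one choice and let the rest be re-derived.
pinned = next(solutions({"web_server": "nginx"}))
```

In the pinned case the solver is forced to choose `fpm`, since `mod_php` would violate the first constraint; the administrator influences one detail and the system repairs the rest.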
contact: Paul Anderson