Potential PhD Project Topics

List of PhD projects and topic areas proposed by AIAI supervisors.

This page lists topic areas and concrete topics that some of our supervisors are interested in; it is updated frequently. Please feel free to contact the relevant supervisor if you are interested in any of these.

 

Using AI and Machine Learning to study Fairness in Education  

Supervisor: Kobi Gal

In many data-driven domains, from social media to sharing-economy applications, algorithms are increasingly making decisions that affect the information or resources made available to users. As many recent controversies have shown, these algorithmic decisions often embed biases that may introduce behaviour into the system that humans would consider "unfair". Such issues are closely related to making AI "safe" and human-friendly, as they are ultimately about bridging the gap between machine-interpretable optimisation objectives and heuristics on the one hand, and human norms and moral and social expectations on the other. The aim of this project is to develop algorithms that are provably fair, by endowing the machine agents that execute these algorithms with methods for judging their fairness based on notions of fairness elicited from human users. In particular, we want to explore the challenges of deciding what rules to apply in decision making when multiple, potentially conflicting definitions of fairness have been provided by different stakeholders in the system.

The project will involve identifying appropriate application scenarios within which this problem can be investigated, formally modelling the underlying decision-making situation, designing and executing experiments with human users to elicit notions of fairness, and evaluating the adequacy of different candidate algorithmic solutions with respect to these fairness metrics. In the process, a computational framework for reasoning about fairness will be developed (which may involve either symbolic or numerical reasoning and learning methods), and its usefulness will be assessed in terms of how much it aids the design of fair algorithms. Required skills include a solid background in AI methods, programming experience, and experience with human experimentation.
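
As a concrete illustration of how candidate decision rules could be evaluated against multiple, potentially conflicting fairness definitions, here is a minimal Python sketch. The data, the decisions, and the two metrics (demographic parity and equal opportunity) are purely hypothetical and are not choices made by the project; the point is only that the same decisions can look acceptable under one definition and unfair under another.

```python
# Toy illustration: two common group-fairness metrics that can disagree.
# All data below is made up; the project would elicit fairness notions
# from human stakeholders rather than fix them in advance.
import numpy as np

def demographic_parity_diff(decisions, group):
    """Difference in positive-decision rates between the two groups."""
    return abs(decisions[group == 0].mean() - decisions[group == 1].mean())

def equal_opportunity_diff(decisions, labels, group):
    """Difference in true-positive rates between the two groups."""
    tpr = []
    for g in (0, 1):
        mask = (group == g) & (labels == 1)
        tpr.append(decisions[mask].mean())
    return abs(tpr[0] - tpr[1])

# Hypothetical decisions made by some algorithm for 8 individuals.
group     = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # protected attribute
labels    = np.array([1, 1, 0, 0, 1, 0, 0, 0])   # "true" outcomes
decisions = np.array([1, 0, 1, 0, 1, 0, 0, 0])   # algorithm's decisions

print(demographic_parity_diff(decisions, group))          # 0.25
print(equal_opportunity_diff(decisions, labels, group))   # 0.5
```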

Contact: Kobi Gal

 

Developing the FRANK Query Answering System

Supervisors: Alan Bundy and Kwabena Nuamah

The FRANK System (Functional Reasoning Acquires New Knowledge) infers new knowledge by combining existing knowledge stored in diverse knowledge sources on the Internet. FRANK curates existing knowledge into a common formal representation and combines deductive and statistical reasoning, including making predictions. It returns not just an answer to the query, but also an estimate of that answer’s uncertainty, e.g., an error bar. Various PhD projects are possible, for instance: improving the natural language capabilities of FRANK; extending FRANK’s deductive and/or statistical capabilities; improving FRANK’s abilities in qualitative reasoning and uncertainty estimation; or applying FRANK to a new kind of query. Please contact A.Bundy@ed.ac.uk to discuss possible options.
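
To give a flavour of what an answer with an error bar might look like, here is a minimal, purely illustrative sketch; it is not FRANK's code, representation, or uncertainty method. Values for the same quantity are gathered from several hypothetical sources and combined into an estimate with a simple spread-based error bar.

```python
# Illustrative only: combine values reported by several knowledge sources
# and attach a simple error bar to the aggregated answer.
# FRANK's actual curation and inference are far richer than this sketch.
import statistics

def combine(source_values):
    """Return (estimate, error_bar) for a list of values from different sources."""
    estimate = statistics.mean(source_values)
    # Standard error of the mean as a crude uncertainty estimate.
    error_bar = statistics.stdev(source_values) / len(source_values) ** 0.5
    return estimate, error_bar

# Hypothetical populations (millions) reported by three different sources.
answer, err = combine([67.2, 66.8, 67.5])
print(f"{answer:.2f} ± {err:.2f} million")
```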

Contact: Alan Bundy

 

Repairing Faulty Logical Theories with the ABC System

Supervisor: Alan Bundy

The ABC system repairs faulty logical theories via a combination of abduction, belief revision and conceptual change. A logical theory is viewed as a mechanism to make predictions about the environment, including predicting the effects of an action by, say, a robot. The theory can be seen to be faulty if it predicts things that are observed to be false or fails to predict things that are observed to be true. Various PhD projects are possible, for instance: adapting ABC to different logics; improving heuristics for selecting the best repair; suggesting meanings for newly invented predicates and constants; or incorporating ABC into autonomous agents. Please contact A.Bundy@ed.ac.uk to discuss possible options.
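
The following toy sketch (hypothetical throughout, and not the ABC system itself) illustrates what "faulty" means in this setting: a small rule-based theory predicts facts by forward chaining, and the predictions are compared against observations to detect facts that are wrongly predicted, or that should have been predicted but were not.

```python
# Toy illustration of spotting a faulty theory (this is not the ABC system):
# a theory predicts facts; it is faulty if it predicts something observed to
# be false, or fails to predict something observed to be true.

def forward_chain(facts, rules):
    """Derive every fact reachable from `facts` using Horn-style rules (body -> head)."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if set(body) <= derived and head not in derived:
                derived.add(head)
                changed = True
    return derived

# Hypothetical theory about a robot's blocks world.
facts = {"holding(block)"}
rules = [(["holding(block)"], "can_stack(block)"),
         (["can_stack(block)"], "tower_possible")]

predicted = forward_chain(facts, rules)
observed_true = {"holding(block)"}
observed_false = {"tower_possible"}              # the robot tried, and the tower fell

wrongly_predicted = predicted & observed_false   # predicted, but observed to be false
missed = observed_true - predicted               # observed to be true, but not predicted
print("theory is faulty:", bool(wrongly_predicted or missed))   # True
```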

Contact: Alan Bundy

 

Learning Explainable Representations and/or Explainable Machine Learning  

Supervisor: Vaishak Belle

Numerous applications in robotics and data science require us to parse unstructured data in an automated fashion. However, many of the resulting models are not human-interpretable. Given the increasing demand for explainable machine learning and AI algorithms, this project looks to interface general-purpose human-readable representations (databases, programs, logics) with cutting-edge machine learning, including statistical relational learning and probabilistic programming. Please see our recent work at https://vaishakbelle.com/lab; the general thrust of the research direction is discussed at https://doi.org/10.1042/BIO04105016

Contact: Vaishak Belle

 

Ethical and Responsible AI

Supervisor: Vaishak Belle

Artificial intelligence (AI) provides many opportunities to improve private and public life. Discovering patterns and structures in large troves of data in an automated manner is a core component of data science, and currently drives applications in computational biology, finance, law and robotics. However, such a highly positive impact is coupled with significant challenges: how do we understand the decisions suggested by these systems so that we can trust them? How can these systems be held accountable for those decisions? We would be looking at topics touching on fairness and morality through the lens of causality. Please see our recent work at https://vaishakbelle.com/lab; the general thrust of the research direction is discussed at https://doi.org/10.1042/BIO04105016. Our particular interest is extending our work on implementing fairness (https://arxiv.org/abs/1905.07026) and morality (https://arxiv.org/abs/1810.03736).

Contact: Vaishak Belle

 

Epistemic Planning, modal logics for AI and multi-agent systems 

Supervisor: Vaishak Belle

Classical automated planning has traditionally focused on finding a sequence of actions that achieves a goal state. In numerous AI applications involving human-machine interaction, the underlying model may need to reason and learn with respect to the beliefs and intentions of other agents; that is, it needs a mental model of the other systems in the environment. This area looks at automated planning in this richer context, building on ideas from modal logic and reasoning about knowledge. Please see our recent work here: https://vaishakbelle.com/lab
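
As a tiny, hypothetical illustration of the kind of state such planners reason over (a simplified possible-worlds model, not any particular planner or logic from the cited work; the propositions are invented): an agent's belief state is the set of worlds it considers possible, a sensing or announcement action eliminates worlds, and an epistemic goal such as "the agent knows p" holds when p is true in every remaining world.

```python
# Toy possible-worlds sketch (not a full epistemic planner): an agent's belief
# state is the set of worlds it considers possible; a truthful observation or
# announcement removes the worlds where it is false; "knows p" means p holds
# in all remaining worlds.
from itertools import product

# Worlds assign truth values to two made-up propositions.
worlds = [dict(zip(("door_open", "light_on"), vals))
          for vals in product([True, False], repeat=2)]

def announce(belief_state, prop, value):
    """Belief revision by a truthful announcement that prop == value."""
    return [w for w in belief_state if w[prop] == value]

def knows(belief_state, prop, value):
    return all(w[prop] == value for w in belief_state)

belief = list(worlds)                         # initially, anything is possible
print(knows(belief, "door_open", True))       # False
belief = announce(belief, "door_open", True)  # the agent senses the door
print(knows(belief, "door_open", True))       # True
```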

Contact: Vaishak Belle

 

Probabilistic inference and symbolic machine learning 

Supervisor: Vaishak Belle

In this topic, we are interested in the unification of symbolic systems and machine learning. This is motivated by the need to augment learning and perception with high-level, structured, commonsense knowledge, to enable AI systems to learn faster and more accurate models of the world while also handling constraints. Please see our recent work here: https://vaishakbelle.com/lab, including our work on PAC learning (https://arxiv.org/abs/1906.10106) and learning probabilistic deep circuits (https://arxiv.org/abs/1807.05464).
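
As a small, generic illustration of probabilistic inference over a symbolic constraint (a toy example, not taken from the cited papers; the propositions and probabilities are invented): weighted model counting sums the probabilities of the truth assignments that satisfy a logical formula, and is one standard bridge between logical and statistical reasoning.

```python
# Toy weighted model counting: the probability that a logical constraint holds
# when each variable is true independently with the given probability.
from itertools import product

probs = {"rain": 0.3, "sprinkler": 0.5}             # made-up marginals
constraint = lambda m: m["rain"] or m["sprinkler"]  # wet_grass :- rain ; sprinkler

total = 0.0
for values in product([True, False], repeat=len(probs)):
    model = dict(zip(probs, values))
    weight = 1.0
    for var, p in probs.items():
        weight *= p if model[var] else 1 - p
    if constraint(model):
        total += weight
print(total)   # 1 - (1 - 0.3) * (1 - 0.5) = 0.65
```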

Contact: Vaishak Belle

 

Logic-based workflow modelling and management

Supervisor: Petros Papapanagiotou

Digital workflows are models of coordination patterns for collaborative human and computer agents. They help streamline and optimize processes by clarifying standard practices, automating administrative tasks, monitoring progress in real-time, and providing predictive analysis and decision support. I am interested in supervising research around novel symbolic AI and data-driven techniques to innovate across the stages of the workflow lifecycle, for instance in terms of intuitive design, collaborative modelling, rigorous validation, automated deployment, distributed enactment, and adaptive monitoring. Application areas include health and social care, manufacturing, business process, workforce and supply chain management, and social machines. More info and related work: http://homepages.inf.ed.ac.uk/ppapapan/ 

Contact: Petros Papapanagiotou

 

Workflow management meets the Internet of Things

Supervisor: Petros Papapanagiotou

IoT devices and sensors can provide real-time, heterogeneous data that describe the current state of people, assets, machines, computers, resources, etc. at any given time, with little human intervention. Combining workflow management with such data can lead to a highly contextualised, complete view of the operations in any collaborative environment, as well as key insights for process optimisation and prediction. I am interested in supervising research on the integration of IoT data streams into different stages of the workflow lifecycle. This involves filtering, analysing, and contextualising low-level sensor data to generate high-level event logs, using both symbolic (rule-based) and statistical AI techniques. Application areas include health and social care, manufacturing, construction, business process, workforce and supply chain management. More info and related work: http://homepages.inf.ed.ac.uk/ppapapan/
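
A very small, hypothetical sketch of the rule-based half of this pipeline (the sensors, thresholds, and event names below are invented for illustration): low-level sensor readings are lifted to high-level workflow events by simple rules, which a real system would combine with statistical models and richer context.

```python
# Hypothetical sketch: lift low-level sensor readings to high-level workflow
# events with simple rules; a real system would combine such rules with
# statistical models and richer context.

READINGS = [  # (timestamp, sensor, value) -- made-up data
    (0, "bed_pressure", 1), (5, "bed_pressure", 0),
    (6, "door_motion", 1), (40, "kettle_power", 1),
]

RULES = [
    (lambda r: r[1] == "bed_pressure" and r[2] == 0, "patient_got_up"),
    (lambda r: r[1] == "door_motion" and r[2] == 1, "room_exited"),
    (lambda r: r[1] == "kettle_power" and r[2] == 1, "breakfast_started"),
]

def to_events(readings, rules):
    """Translate raw readings into (timestamp, event) pairs via the first matching rule."""
    events = []
    for reading in readings:
        for condition, event in rules:
            if condition(reading):
                events.append((reading[0], event))
                break
    return events

print(to_events(READINGS, RULES))
# [(5, 'patient_got_up'), (6, 'room_exited'), (40, 'breakfast_started')]
```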

Contact: Petros Papapanagiotou

 

Conceptualised scalable methods 

Supervisors: Malcolm Atkinson and Rosa Filgueira

Researchers need powerful scientific workflows that combine data with computational models and analyses. This research would develop further the conceptualisation, and the mappings from it to target technologies, so that domain experts can control both the innovation and production stages without depending on local or co-located computing. Raising the universe of discourse in this way keeps its semantics stable as new target technologies are exploited.

Contact: Malcolm Atkinson

 

Incremental dynamic deployment of data-streaming work

Supervisors: Rosa Filgueira and Malcolm Atkinson

As Miron Livny observed, in distributed and high-throughput computing, making decisions as late as possible increases the probability that they are well informed. In data-streaming computations, the graph of processing elements can be deployed just as the first data unit is about to be delivered to them. Strategies for acquiring and exploiting the additional upstream knowledge are needed. The PhD would open up this territory by developing such strategies and demonstrating their power.
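
A minimal sketch of the late-deployment idea, under the simplifying assumption of a linear pipeline running in a single process (purely illustrative; the research concerns distributed deployments, where the choice of where to deploy also matters): each processing element is instantiated only when its first data unit is about to reach it, so the decision can draw on knowledge gathered upstream.

```python
# Illustrative sketch of late deployment in a linear streaming pipeline:
# each processing element is instantiated only when its first data unit
# is about to reach it, so deployment decisions can use upstream knowledge.

class LazyStage:
    def __init__(self, name, factory):
        self.name, self.factory = name, factory
        self.instance = None            # not deployed yet

    def process(self, item):
        if self.instance is None:
            # Deployment happens here, as late as possible; a real system would
            # also decide *where* to deploy, informed by what upstream stages saw.
            print(f"deploying {self.name} just before its first item")
            self.instance = self.factory()
        return self.instance(item)

def run(pipeline, source):
    for item in source:
        for stage in pipeline:
            item = stage.process(item)
        print("result:", item)

pipeline = [LazyStage("clean", lambda: str.strip),
            LazyStage("parse", lambda: float),
            LazyStage("scale", lambda: (lambda x: x * 10))]
run(pipeline, [" 1.5 ", "2.0"])
```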

Contact: Malcolm Atkinson

 

Learning optimisation parameters

Supervisors: Rosa Filgueira and Malcolm Atkinson 

Current workflow and orchestration systems gather both provenance and monitoring data. Typical usage patterns include much repetition. Optimisation to minimise a cost, such as greenhouse-gas (GHG) emissions, depends on accurate parameterisation of cost functions. In principle, the collected data contain relevant latent information from which these parameters could be determined more accurately. The challenge is to recognise when runs are similar and then to learn better parameters to inform the deployment and optimisation of future runs. The PhD would explore this potential with our existing data archive and test inference and ML strategies with live applications.
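
As a sketch of the kind of inference involved (with made-up monitoring data and a deliberately simple linear cost model, not the project's actual cost functions): records from past runs judged to be similar are used to fit cost-function parameters, which can then inform deployment and optimisation decisions for a planned run.

```python
# Hypothetical sketch: fit the parameters of a simple linear cost model
# cost ≈ a * data_volume + b * cpu_hours from monitoring data of past,
# similar runs, then use it to predict the cost of a planned run.
import numpy as np

# Made-up provenance/monitoring records for runs judged to be similar:
# columns are (data_volume_GB, cpu_hours); target is the measured cost (e.g. gCO2e).
X = np.array([[10.0, 2.0], [20.0, 3.5], [40.0, 8.0], [80.0, 15.0]])
cost = np.array([52.0, 101.0, 208.0, 410.0])

# Least-squares fit of the cost-function parameters a and b.
params, *_ = np.linalg.lstsq(X, cost, rcond=None)
a, b = params
print(f"fitted parameters: a={a:.2f} per GB, b={b:.2f} per CPU-hour")

# Predict the cost of a planned run to inform deployment/optimisation decisions.
planned = np.array([30.0, 6.0])
print("predicted cost:", planned @ params)   # ≈ 156 with this made-up data
```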

Contact: Malcolm Atkinson