Action and Decision Making
A list of potential topics for PhD students in the area of Action and Decision Making.
Adapting behaviour to the discovery of unforeseen possibilities
Supervisor: Subramanian Ramamoorthy
To design, implement and evaluate a model of agents whose intrinsic preferences change as they learn about unforeseen states and options in the decision or game problems in which they are engaged.
Most models of rational action assume that all possible states and actions are pre-defined and that preferences change only when beliefs do. But many decision and game problems lack these features: an agent may start playing without knowing the full hypothesis space, and instead discovers unforeseen states and options as play proceeds. For example, an agent may start by preferring meat to fish, but on encountering saffron for the first time, find that it is delicious and goes better with fish than with meat, so that the agent comes to prefer fish to meat whenever saffron is available. In effect, the language the agent can use to describe the decision or game problem changes during play (in this example, state descriptions are refined by introducing a new variable, saffron). The aim of this project is to design, implement and evaluate a model of action and decision making that supports reasoning about newly discovered possibilities and options. This involves a symbolic component, which reasons about how a game changes (in both beliefs and preferences) as random variables, or their ranges of values, are added or removed, and a probabilistic component, which reasons about how these changes to the description of the game affect Bayesian calculations of optimal behaviour.
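As a rough illustration of the probabilistic component, the sketch below (in Python, with invented utilities and probabilities) shows how refining the state space with a newly discovered variable can change which option an expected-utility maximiser prefers. All names and numbers are illustrative assumptions, not part of the project specification.

```python
# A minimal sketch (with invented utilities) of how refining the state
# description with a newly discovered variable can change which option
# an expected-utility maximiser prefers.

def expected_utility(option, utilities, beliefs):
    """Expected utility of an option under a distribution over states."""
    return sum(p * utilities[(option, state)] for state, p in beliefs.items())

# Before the discovery: the state description is trivial, and the agent
# prefers meat to fish.
utilities_before = {("meat", "default"): 0.6, ("fish", "default"): 0.5}
beliefs_before = {"default": 1.0}

# After discovering saffron, states are refined by a new random variable
# ("is saffron available?"), and utilities are re-expressed over the finer states.
utilities_after = {
    ("meat", "saffron"): 0.6, ("meat", "no_saffron"): 0.6,
    ("fish", "saffron"): 0.9, ("fish", "no_saffron"): 0.5,
}
beliefs_after = {"saffron": 0.7, "no_saffron": 0.3}

for utilities, beliefs, label in [
    (utilities_before, beliefs_before, "before discovery"),
    (utilities_after, beliefs_after, "after discovery"),
]:
    best = max(("meat", "fish"), key=lambda o: expected_utility(o, utilities, beliefs))
    print(label, "->", best)   # meat before discovery, fish after
```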
Discovering hidden causes
Supervisor: Chris Lucas
In order to explain, predict, and influence events, human learners must infer the presence of causes that cannot be directly observed. For example, we understand the behaviour of other people by appealing to their mental states, we can infer that a new disease is spreading when we see several individuals with novel symptoms, and we can speculate about why a device or computer program is prone to crashing. The aim of this project is to better understand how humans search for and discover hidden causes, using Bayesian models and behavioural experiments.
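By way of illustration, the following sketch applies Bayes' rule to one of the examples above: deciding whether a hidden common cause, such as a new disease, is present given that several individuals show the same novel symptom. The prior, likelihoods, and function name are assumptions made purely for illustration.

```python
# A minimal sketch (with invented numbers) of inferring a hidden common cause:
# is a new disease spreading, given that k of n observed individuals show the
# same novel symptom?
from math import comb

def posterior_hidden_cause(k, n, prior=0.1, p_symptom_given_cause=0.6,
                           p_symptom_baseline=0.05):
    """P(hidden cause present | k of n individuals show the symptom)."""
    def likelihood(p):
        return comb(n, k) * p**k * (1 - p)**(n - k)
    joint_cause = prior * likelihood(p_symptom_given_cause)
    joint_no_cause = (1 - prior) * likelihood(p_symptom_baseline)
    return joint_cause / (joint_cause + joint_no_cause)

print(posterior_hidden_cause(k=1, n=10))  # one case: a hidden cause is still unlikely
print(posterior_hidden_cause(k=4, n=10))  # several cases: a hidden cause becomes probable
```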
Approximate inference in human causal learning
Supervisor: Chris Lucas
A fundamental problem in causal learning is understanding what relationships hold among a large set of variables, and in general this problem is intractable. Nonetheless, humans are able to learn efficiently about the causal structure of the world around them, often making the same inferences that one would expect of an ideal or rational learner. How we achieve this performance is not yet well understood: we appear to rely on approximate inferences that deviate in certain systematic ways from what an ideal observer would do, but those deviations are still being catalogued and there are few detailed hypotheses about the underlying processes. This project is concerned with exploring those processes and developing models that reproduce human performance, including error patterns, in complex causal learning problems, with the aim of understanding and correcting for human errors and building systems of practical use.
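To make the contrast concrete, the sketch below compares an exact Bayesian learner, which enumerates every causal structure over a handful of candidate causes, with a crude sample-based approximation. The noisy-OR parameterisation, the data, and the sample budget are all invented for illustration; the point is only that limited-sample approximations deviate from the exact posterior, the kind of gap the project would study at much larger scales.

```python
# A minimal sketch (assumed noisy-OR parameterisation, invented data) comparing
# an exact posterior over causal structures with a crude sample-based approximation.
import itertools
import random

CAUSES = ["A", "B", "C"]
# Each observation: (values of the candidate causes, value of the effect).
DATA = [({"A": 1, "B": 0, "C": 1}, 1),
        ({"A": 0, "B": 1, "C": 0}, 0),
        ({"A": 1, "B": 1, "C": 0}, 1),
        ({"A": 0, "B": 0, "C": 1}, 0)]
STRENGTH, BACKGROUND = 0.8, 0.1  # assumed noisy-OR parameters

def likelihood(structure):
    """P(data | structure), where structure is the set of genuinely causal variables."""
    total = 1.0
    for causes, effect in DATA:
        active = sum(causes[c] for c in structure)
        p_effect = 1 - (1 - BACKGROUND) * (1 - STRENGTH) ** active
        total *= p_effect if effect else 1 - p_effect
    return total

structures = [frozenset(s) for r in range(len(CAUSES) + 1)
              for s in itertools.combinations(CAUSES, r)]

# Exact posterior over structures (uniform prior).
weights = {s: likelihood(s) for s in structures}
z = sum(weights.values())
exact = {s: w / z for s, w in weights.items()}

# Approximation: importance sampling with the prior as proposal and few samples.
random.seed(0)
samples = [random.choice(structures) for _ in range(20)]
sample_weights = [likelihood(s) for s in samples]
approx = {s: sum(w for smp, w in zip(samples, sample_weights) if smp == s)
             / sum(sample_weights) for s in structures}

for s in structures:
    print(sorted(s), round(exact[s], 3), round(approx[s], 3))
```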
Making AI Trustworthy in Sociotechnical Systems
Supervisor: Nadin Kökciyan
Goal: To develop computational tools that support humans and organizations in their ethical decision-making.
In recent years, AI systems have been deployed widely in both low-stakes settings (e.g., automated captioning) and high-stakes settings (e.g., medical diagnoses or loan applications). These automated decisions can sometimes result in significant harm, and when they do, both organizations and users struggle. Organizations may be unable to explain their decision-making process, either because (i) they do not fully understand it themselves, or (ii) they do not want to reveal sensitive information. The users harmed by such AI systems, on the other hand, need answers to their questions, yet in current systems they have no means to contest automated decisions. This project aims to operationalize accountability to make AI systems trustworthy via intelligent tools, such as chatbots, that could facilitate communication between humans and organizations.