Action and Decision Making topics

A list of potential topics for PhD students in the area of Action and Decision Making.

Adapting behaviour to the discovery of unforeseen possibilities

Supervisor: Alex Lascarides, Subramanian Ramamoorthy

To design, implement and evaluate a model of agents whose intrinsic preferences change as they learn about unforeseen states and options in the decision or game problems in which they are engaged.

Most models of rational action assume that all possible states and actions are pre-defined and that preferences change only when beliefs do. But many decision and game problems lack these features: an agent may start playing without knowing the hypothesis space, and instead discovers unforeseen states and options as he plays. For example, an agent may start by preferring meat to fish; but when he discovers saffron for the first time, he finds that he likes it enormously and that it goes better with fish than with meat, so his preferences change to preferring fish to meat as long as saffron is available. In effect, an agent may find that the language he can use to describe his decision or game problem changes as he plays it (in this example, state descriptions are refined via the introduction of a new variable, saffron). The aim of this project is to design, implement and evaluate a model of action and decision making that supports reasoning about newly discovered possibilities and options. This involves a symbolic component, which reasons about how a game changes (both beliefs and preferences) as one adds or removes random variables or their range of values, and a probabilistic component, which reasons about how these changes to the description of the game affect Bayesian calculations of optimal behaviour.
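
As a minimal illustration of the probabilistic component (all utilities and probabilities below are invented for the example), the following Python sketch shows how refining the state description with a newly discovered variable, saffron, can flip the expected-utility-maximising choice:

    # Hypothetical numbers throughout; this only illustrates how refining the
    # state description with a new variable can change the optimal action.

    def expected_utility(action_utilities, state_probs):
        # Sum utility over the discrete states the agent can distinguish.
        return sum(state_probs[s] * u for s, u in action_utilities.items())

    # Before discovering saffron there is a single undifferentiated state.
    probs_before = {"any": 1.0}
    utilities_before = {
        "meat": {"any": 0.7},
        "fish": {"any": 0.5},
    }

    # After discovery, states are refined by a new random variable: saffron.
    probs_after = {"saffron": 0.8, "no_saffron": 0.2}
    utilities_after = {
        "meat": {"saffron": 0.7, "no_saffron": 0.7},  # saffron does not help meat
        "fish": {"saffron": 0.9, "no_saffron": 0.5},  # saffron goes well with fish
    }

    for name, probs, utils in [("before", probs_before, utilities_before),
                               ("after", probs_after, utilities_after)]:
        best = max(utils, key=lambda a: expected_utility(utils[a], probs))
        print(name, best)  # before -> meat, after -> fish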

Make hyperlinks safe to click on 

Supervisor: Kami Vaniea

Security experts tell users not to click on "dangerous" URLs, such as URLs in emails, raw IP addresses, or URLs that have obviously been through a URL shortener. However, most users cannot read a URL, making it very challenging for them to detect "dangerous" ones, let alone identify similar-looking UTF-8 characters, redirects, or cloaked redirects. Computers are better than humans at reading URLs, but computers have a limited sense of context and no ability to reason about privacy risks.

The purpose of this project is to determine whether a given link is safe to click on and to explain why to the user. The bulk of the work in this project will be in understanding how links work on the internet and finding ways to programmatically evaluate them for both privacy and security risks. The evaluation could use machine-learning approaches or rule-based evaluations, if they can be shown to work well. The project would also involve the development of a small user interface to convey important information about the link to the user.
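
As a sketch of the rule-based end of this design space, the following Python fragment flags a few of the warning signs mentioned above and produces user-facing explanations. The heuristics and the shortener list are illustrative, not a complete or authoritative risk model:

    # Minimal rule-based link checking using only the standard library.
    import ipaddress
    from urllib.parse import urlparse

    KNOWN_SHORTENERS = {"bit.ly", "t.co", "goo.gl", "tinyurl.com"}  # example list

    def explain_risks(url):
        """Return human-readable reasons why a URL may be unsafe to click."""
        reasons = []
        host = urlparse(url).hostname or ""
        if urlparse(url).scheme != "https":
            reasons.append("connection is not encrypted (no https)")
        try:
            ipaddress.ip_address(host)
            reasons.append("uses a raw IP address instead of a domain name")
        except ValueError:
            pass  # host is a normal domain name
        if host in KNOWN_SHORTENERS:
            reasons.append("shortened link hides the real destination")
        if host.startswith("xn--") or ".xn--" in host:
            reasons.append("internationalised domain may imitate another site")
        return reasons

    print(explain_risks("http://bit.ly/abc123"))
    # ['connection is not encrypted (no https)',
    #  'shortened link hides the real destination']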

Usability of software updates 

Supervisor: Kami Vaniea

One of the best ways to protect a computer is to update its software; experts recommend this practice above all other protection behaviours. Yet many users choose to delay or completely avoid updating software. The reason is simple: users do not use their computers just for security, they use them for many tasks, many of which might be disrupted by an unanticipated update to the software.

The purpose of this project is to make updating software more usable: to investigate what makes updates so problematic for users right now, and to devise technological and user-interface methods for decreasing the disruption updates cause. This research could be carried out with end users, system administrators (managed systems need updating too), or with software developers who design the updates.

Static software analysis feedback

Supervisor: Kami Vaniea

Writing a secure program is incredibly challenging, especially in an era when most programs incorporate libraries and packages written by third parties. Simple mobile applications such as flashlight apps are frequently malicious not because of the code the programmer wrote, but because of the advertising code the developer included. This project would work to bridge the gap between static analysis of applications and the humans who are expected to determine whether a given mobile application is "safe" to use.

The purpose of this project is to build a tool that enables a low-skill software analyst, or a software developer, to ask questions of an app to determine whether it meets a set of qualifications. The research questions addressed would include understanding the workflows of these types of stakeholders, understanding the types of questions that can be answered through code analysis, and finding ways to make the output of code analysis practically usable by a novice developer. This project would focus primarily on the human-computer interaction component of the problem; however, a student would be expected to develop a good working understanding of code analysis.
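
As a toy sketch of "asking a question of an app", the fragment below answers one analyst question ("does this code talk to the network?") over Python source using the standard ast module. It assumes the analysed app is Python source; a real tool for mobile apps would analyse bytecode or binaries instead, and the module list is illustrative:

    import ast

    NETWORK_MODULES = {"socket", "urllib", "requests", "http"}  # illustrative

    def uses_network(source_code):
        """Report which network-related modules the code imports, if any."""
        tree = ast.parse(source_code)
        found = set()
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                found |= {a.name.split(".")[0] for a in node.names}
            elif isinstance(node, ast.ImportFrom) and node.module:
                found.add(node.module.split(".")[0])
        return sorted(found & NETWORK_MODULES)

    app = "import requests\nresp = requests.get('https://example.com')"
    print(uses_network(app))  # ['requests'] -> evidence the app uses the network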

Usable API design for SSL/TLS

Supervisor: Kami Vaniea

When used correctly, the SSL/TLS protocol enables two computers to communicate securely across a network. Unfortunately, SSL/TLS implementations, such as OpenSSL, can be challenging to use correctly, particularly for novice mobile application developers. The rise of the app development model has opened computer programming to people who have minimal or no training in security, yet they still need to be able to write correct code that uses security features such as SSL/TLS.

The purpose of this project is to evaluate the current usability of security APIs and to design a new API that follows standard human-computer interaction principles. In particular, the new API needs to minimise errors and assist users in recovering from errors when they do occur.
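
One direction such a design might take is a misuse-resistant surface with safe defaults and no switch for disabling verification. The Python sketch below illustrates the idea; the function name and interface are invented for this example:

    # Sketch of a misuse-resistant TLS client call: certificate and hostname
    # verification are always on, with no flag to turn them off.
    import socket
    import ssl

    def open_secure_channel(host, port=443):
        """Open a TLS connection with verification enforced by default."""
        context = ssl.create_default_context()  # verifies certs and hostnames
        sock = socket.create_connection((host, port))
        return context.wrap_socket(sock, server_hostname=host)

    with open_secure_channel("example.com") as channel:
        print(channel.version())  # e.g. 'TLSv1.3'

The design choice here is that the error-prone steps (choosing a verification mode, matching the hostname) are not exposed at all, so the developer cannot get them wrong.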

Discovering hidden causes

Supervisor: Chris Lucas

In order to explain, predict, and influence events, human learners must infer the presence of causes that cannot be directly observed. For example, we understand the behaviour of other people by appealing to mental states, we can infer that a new disease is spreading when we see several individuals with novel symptoms, and we can speculate about why a device or computer program is prone to crashing. The aim of this project is to better understand how humans search for and discover hidden causes, using Bayesian models and behavioural experiments.
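
To make the disease example concrete, here is a toy Bayesian model (all probabilities invented) showing how observing several individuals with the same novel symptom raises the posterior probability of a hidden common cause:

    # Posterior over a hidden cause ("new disease") given the symptom in n people.
    prior_disease = 0.01          # prior that a new disease is spreading
    p_symptom_if_disease = 0.6    # symptom rate if the disease exists
    p_symptom_otherwise = 0.02    # baseline symptom rate

    def posterior_disease(n_with_symptom):
        joint_h1 = prior_disease * p_symptom_if_disease ** n_with_symptom
        joint_h0 = (1 - prior_disease) * p_symptom_otherwise ** n_with_symptom
        return joint_h1 / (joint_h1 + joint_h0)

    for n in [1, 2, 3]:
        print(n, round(posterior_disease(n), 3))  # 0.233, 0.901, 0.996
    # The hidden cause becomes the best explanation after only a few observations.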

Approximate inference in human causal learning

Supervisor: Chris Lucas

A fundamental problem in causal learning is understanding what relationships hold among a large set of variables, and in general this problem is intractable. Nonetheless, humans are able to learn efficiently about the causal structure of the world around them, often making the same inferences that one would expect of an ideal or rational learner. How we achieve this performance is not yet well understood: we rely on approximate inferences that deviate in certain systematic ways from what an ideal observer would do, but those deviations are still being catalogued and there are few detailed hypotheses about the underlying processes. This project is concerned with exploring these processes and developing models that reproduce human performance, including error patterns, in complex causal learning problems, with the aim of understanding and correcting for human errors and building systems that are of practical use.
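
As a minimal sketch of one approximate-inference hypothesis (all numbers invented), the fragment below estimates a posterior over a single causal link by rejection sampling; with few samples the estimate deviates from the exact Bayesian answer, one candidate explanation for systematic human error patterns:

    import random

    random.seed(0)
    prior_cause = 0.3      # prior that a causal link A -> B exists
    p_b_if_cause = 0.9     # P(B | A, link exists)
    p_b_otherwise = 0.2    # P(B | A, no link)

    def exact_posterior():
        joint_h1 = prior_cause * p_b_if_cause
        joint_h0 = (1 - prior_cause) * p_b_otherwise
        return joint_h1 / (joint_h1 + joint_h0)

    def sampled_posterior(n_samples):
        # Rejection sampling: keep hypotheses consistent with observing B.
        kept = []
        while len(kept) < n_samples:
            link = random.random() < prior_cause
            b = random.random() < (p_b_if_cause if link else p_b_otherwise)
            if b:
                kept.append(link)
        return sum(kept) / n_samples

    print(exact_posterior())        # ~0.659, the ideal-observer answer
    print(sampled_posterior(5))     # noisy, resource-bounded estimate
    print(sampled_posterior(5000))  # approaches the exact answer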