AIAI Seminar - 29/03/21 - Cillian Brewitt, Nick Hoernle & Ibrahim Ahmed

 

Speaker: Cillian Brewitt

 

Title: Verifiable Goal Recognition for Autonomous Driving using Decision Trees

 

Abstract:

It is useful for autonomous vehicles to be able to infer the goals of other vehicles (goal recognition) in order to interact with them safely and predict their future trajectories. Goal recognition methods must be fast enough to run in real time and must make accurate inferences. As autonomous driving is safety-critical, it is also important to have methods which are human-interpretable and whose safety can be formally verified. Existing goal recognition methods for autonomous vehicles fail to satisfy all four objectives of being fast, accurate, interpretable, and verifiable. We propose Goal Recognition with Interpretable Trees (GRIT), a goal recognition system for autonomous vehicles which achieves these objectives. GRIT makes use of decision trees trained on vehicle trajectory data. Evaluation on two vehicle trajectory datasets demonstrates the inference speed and accuracy of GRIT compared to an ablation and two deep learning baselines. We show that the learned trees are human-interpretable and demonstrate how properties of GRIT can be formally verified using an SMT solver.
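
For a flavour of the verification step, here is a minimal sketch of how a learned decision tree over trajectory features could be checked with the Z3 SMT solver. The tree below is a hand-written stand-in, and the feature names, thresholds, probabilities, and the property itself are illustrative assumptions, not taken from GRIT.

```python
# Minimal sketch (not the GRIT implementation): encode a small decision tree
# for P(goal = turn-left | features) as an SMT expression and verify a
# property by showing that its negation is unsatisfiable.
from z3 import Real, Bool, If, Solver, And, Not, unsat

dist_to_junction = Real("dist_to_junction")   # metres to the next junction
in_left_lane = Bool("in_left_lane")           # indicator feature

# Hand-written stand-in for a learned tree (probabilities are illustrative).
p_turn_left = If(in_left_lane,
                 If(dist_to_junction < 20.0, 0.9, 0.6),
                 If(dist_to_junction < 20.0, 0.2, 0.1))

# Assumed property: in the left lane and within 20 m of the junction,
# the tree assigns at least 0.5 probability to the turn-left goal.
property_holds = p_turn_left >= 0.5

s = Solver()
s.add(dist_to_junction >= 0.0)                      # domain constraint
s.add(And(in_left_lane, dist_to_junction < 20.0))   # property precondition
s.add(Not(property_holds))                          # search for a violation

if s.check() == unsat:
    print("Property verified: no counterexample exists.")
else:
    print("Counterexample found:", s.model())
```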

 

Bio: I am a PhD student, supervised by Dr. Stefano Albrecht. I am interested in interpretable planning and prediction for autonomous vehicles.

 

 

Speaker: Ibrahim Ahmed

 

Title: Secure Authentication and Key Establishment Using Abstract Multi-Agent Interaction

 

Abstract:

We introduce a novel quantum-safe protocol for authentication and key establishment based on abstract multi-agent interaction. In a multi-agent system, autonomous agents abide by behavioural models and intelligently make decisions regarding their goals and their interactions with other agents. Our method treats the communicating parties as agents and makes use of their behaviour during interactions for authentication and key establishment. We empirically validate the method's ability to authenticate legitimate users while detecting different types of adversarial attacks. We also show how reinforcement learning techniques can be used to train agents to achieve more sample-efficient authentication.
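
As a very rough illustration of the general idea (and emphatically not the protocol from the talk), the toy sketch below lets two parties that share a secret behavioural model authenticate each other by checking behavioural consistency over an abstract interaction, then derive a session key from the interaction transcript. The HMAC-based stand-in policy, the challenge format, and the key derivation are all assumptions made for this sketch.

```python
# Toy sketch of the general idea only: a shared secret behavioural model is
# used to check that the other party behaves consistently, and the interaction
# transcript seeds a session key.  All concrete choices here are illustrative.
import hashlib
import hmac
import random

class Agent:
    def __init__(self, shared_secret: bytes):
        self.secret = shared_secret

    def policy(self, observation: int) -> int:
        """Deterministic stand-in behavioural model keyed by the shared secret."""
        digest = hmac.new(self.secret, observation.to_bytes(4, "big"),
                          hashlib.sha256).digest()
        return digest[0] % 4          # one of four abstract actions

def interact(prover: Agent, verifier: Agent, rounds: int = 16):
    """Verifier challenges the prover and checks behavioural consistency."""
    transcript = b""
    for _ in range(rounds):
        challenge = random.randrange(2**16)        # abstract observation
        action = prover.policy(challenge)
        if action != verifier.policy(challenge):   # behaviour does not match
            return False, None
        transcript += challenge.to_bytes(4, "big") + bytes([action])
    # Both sides can compute the same session key from secret + transcript.
    session_key = hmac.new(verifier.secret, transcript, hashlib.sha256).digest()
    return True, session_key

alice = Agent(b"shared behavioural model parameters")
bob = Agent(b"shared behavioural model parameters")
ok, key = interact(alice, bob)
if ok:
    print("authenticated; session key prefix:", key.hex()[:16])
else:
    print("authentication failed")
```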

 

Bio: I am a third-year PhD student under the supervision of Dr. Stefano Albrecht. I am currently working on a project relating to cryptography and multi-agent systems. My interests lie in the areas of security and machine learning.

 

 

Speaker: Nick Hoernle

 

Title: Domain Constraints for Structured Network Output

 

Abstract:

A modern goal in AI is to integrate symbolic reasoning with neural architectures. I present a perspective on this integration problem that uses expert knowledge to constrain the output domain of a network. Standard approaches append specific terms to a loss function which encode an expert's prior beliefs. Instead, I demonstrate how to reparameterize the output of the network such that the constraints, if satisfiable, are followed by construction. I argue that this technique can be automated using standard knowledge compilation techniques and solvers. I show the efficacy of this approach on a wide variety of tasks, and I present some preliminary results that help to demonstrate why it is preferable to the standard setting of modifying a loss function.
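
As a toy illustration of the contrast (a sketch under assumed details, not the speaker's implementation), the snippet below compares penalising violations of an ordering constraint y1 <= y2 in the loss with reparameterising the network output so the constraint holds by construction.

```python
# Sketch: loss-penalty vs. reparameterisation for a toy constraint y1 <= y2.
# The architecture and constraint are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PenalisedNet(nn.Module):
    """Unconstrained outputs; the constraint is only encouraged via a loss term."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 2))

    def forward(self, x):
        return self.body(x)                       # may violate y1 <= y2

    @staticmethod
    def constraint_penalty(y):
        return F.relu(y[:, 0] - y[:, 1]).mean()   # penalise y1 > y2

class ReparameterisedNet(nn.Module):
    """Output rewritten so y1 <= y2 holds for every parameter setting."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(4, 32), nn.ReLU(), nn.Linear(32, 2))

    def forward(self, x):
        z = self.body(x)
        y1 = z[:, 0]
        y2 = y1 + F.softplus(z[:, 1])             # y2 >= y1 by construction
        return torch.stack([y1, y2], dim=1)

x = torch.randn(8, 4)
y_pen = PenalisedNet()(x)
y_rep = ReparameterisedNet()(x)
print("penalty term (soft approach):", PenalisedNet.constraint_penalty(y_pen).item())
print("violations after reparameterisation:", (y_rep[:, 0] > y_rep[:, 1]).sum().item())
```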

 

 

Mar 29 2021



Online