15 February 2021 - Rafael Karampatsis

 

Speaker: Rafael Karampatsis

 

Title: Towards explainable AI

Abstract: Explainable AI (XAI) refers to methods and techniques in the application of AI such that the results of the solution can be understood by humans. Although current AI systems have become extremely powerful in their decisions and predictions, we do not have a clear understanding of either how they operate or how they arrive at their outputs. As a result, we cannot explain what they have actually learned and, by extension, we cannot trust their decisions against any ethical or moral standard. In this seminar, I'll briefly talk about a couple of projects I am involved in concerning imposing constraints on the output of neural networks, as well as attempts to impose logical constraints on what a network should learn for a specific problem. Finally, I'll briefly discuss whether there is potential to use adversarial attacks as a means to extract explanations for why a neural network chose to perform a certain action, or which part of the input should be held accountable for it.

 

Bio: I got my PhD in Data Science from the University of Edinburgh in 2020, and my thesis revolved around machine learning for software engineering, specifically the problem of automatic bug detection. Currently, I am a research associate/postdoc working with Dr Vaishak Belle on unifying logical methods with probabilistic modelling and learning techniques, including deep learning.

 


AIAI Seminar talk given by Rafael Karampatsis

Online