ANC Workshop - William Toner, Ajitha Rajan
Tuesday, 25th April 2023
Classifying in the presence of label noise by fitting to the noisy distribution - William Toner
Abstract: When training a neural network classifier on a dataset containing label noise, there is a danger of overfitting to this noise. Employing losses which are inherently robust to label noise can help mitigate this. Many such robust losses are ostensibly based on convincing theoretical robustness guarantees. In practice, however, robustness to label noise has little to do with these theoretical results: robust losses gain most of their robustness by being hard to optimise. Gradients vanish during training, which effectively amounts to early stopping. The downside of this approach is that these losses can struggle when the dataset complexity or noise level varies. We propose instead a more principled approach to handling label noise. We observe that the generalised risk may be bounded from below by the entropy of the noising distribution. Thus, instead of minimising a given loss, we propose training against an 'entropy budget'.
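The sketch below is a minimal illustration of the 'entropy budget' idea described in the abstract, not the speaker's implementation: assuming symmetric label noise at a known rate over K classes, the entropy of the noising distribution gives a floor on the achievable cross-entropy on the noisy labels, and training is stopped once the empirical loss reaches that floor rather than minimised further. The noise rate, class count, toy model, and synthetic data are all assumptions for illustration.

```python
# Hedged sketch: stop training on noisy labels once the empirical cross-entropy
# reaches the entropy of the assumed noising distribution (the "budget"),
# instead of minimising the loss all the way down and fitting the noise.
import math
import torch
import torch.nn as nn

K = 10     # number of classes (assumed)
ETA = 0.2  # assumed symmetric label-noise rate

# Entropy of the symmetric noising distribution: with probability 1 - ETA the
# label is kept, otherwise it is flipped uniformly to one of the K - 1 others.
entropy_budget = -(1 - ETA) * math.log(1 - ETA) - ETA * math.log(ETA / (K - 1))

# Toy model and synthetic noisy data, purely for illustration.
model = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, K))
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

x = torch.randn(512, 32)
y = torch.randint(0, K, (512,))  # stands in for noisy labels

for epoch in range(200):
    optimiser.zero_grad()
    loss = criterion(model(x), y)
    # Train against the entropy budget: once the loss on the noisy labels
    # falls to the budget, further minimisation would only fit the noise.
    if loss.item() <= entropy_budget:
        print(f"stopping at epoch {epoch}: "
              f"loss {loss.item():.3f} <= budget {entropy_budget:.3f}")
        break
    loss.backward()
    optimiser.step()
```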
Explainable AI for Antigen Presentation - Ajitha Rajan
Abstract: In this talk, I will provide a very brief overview of the different kinds of research in my group, paying particular attention to one of them: explainable AI for antigen presentation. The Major Histocompatibility Complex (MHC) class I pathway supports the detection of cancer and viruses by the immune system: it presents parts of proteins (peptides) from inside a cell on the cell's membrane surface, enabling visiting immune cells that detect non-self peptides to terminate the cell. The ability to predict whether a peptide will be presented on MHC class I molecules has profound implications for vaccine design. Numerous deep learning-based predictors of peptide presentation on MHC-I molecules exist and achieve high levels of accuracy. However, these MHC-I predictors are treated as black-box function approximators, mapping a given input peptide and MHC allele to a classification output. To build transparency and trust in these predictors, it is crucial to understand the rationale behind their decisions through human-interpretable justifications, such as the input features the predictor relies on. In this talk, I will present two explainable AI (XAI) techniques that help interpret the outputs of MHC-I predictors in terms of input features. In our experiments, we explain the outputs of four state-of-the-art MHC-I predictors over a large dataset of peptides and MHC alleles. Additionally, we evaluate the reliability of the explanations from the two XAI techniques by comparing them against ground truth and checking their robustness.
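As a hedged illustration of what "explaining a predictor's output in terms of input features" can look like, the sketch below applies a generic gradient-times-input attribution to a stand-in peptide presentation model. It is not one of the two XAI techniques evaluated in the talk, and the toy predictor, one-hot encoding, and example 9-mer peptide are all assumptions.

```python
# Hedged sketch: generic gradient x input attribution for a toy peptide
# predictor, assigning an importance score to each residue position.
import torch
import torch.nn as nn

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
PEPTIDE_LEN = 9

def one_hot(peptide: str) -> torch.Tensor:
    """Encode a peptide as a (length, 20) one-hot tensor."""
    enc = torch.zeros(len(peptide), len(AMINO_ACIDS))
    for i, aa in enumerate(peptide):
        enc[i, AMINO_ACIDS.index(aa)] = 1.0
    return enc

# Stand-in for a trained MHC-I presentation predictor (binary output).
predictor = nn.Sequential(
    nn.Flatten(), nn.Linear(PEPTIDE_LEN * len(AMINO_ACIDS), 32),
    nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid()
)

peptide = "SIINFEKLY"  # illustrative 9-mer (assumed)
x = one_hot(peptide).unsqueeze(0).requires_grad_(True)

score = predictor(x).squeeze()  # predicted presentation probability
score.backward()                # gradients w.r.t. the one-hot input

# Gradient x input, collapsed to one importance value per residue position.
attribution = (x.grad * x).sum(dim=-1).squeeze(0)
for pos, (aa, val) in enumerate(zip(peptide, attribution.tolist()), start=1):
    print(f"position {pos} ({aa}): {val:+.4f}")
```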
Event type: Workshop
Date: Tuesday, 25th April 2023
Time: 11:00
Location: G.03
Speaker(s): William Toner, Ajitha Rajan
Chair/Host: Oisin Mac Aodha