AIAI Seminar - 27 March 2023 - Talks by Lauren Delong, Patrick Kage and Bhargavi Ganesh

 

Speaker: Lauren Delong

Title: Neurosymbolic AI for Reasoning on Graph Structures: A Survey

Abstract:

In this talk, I will discuss our latest survey paper, which can be found here: https://arxiv.org/abs/2302.07200

Neurosymbolic AI is an increasingly active area of research which aims to combine symbolic reasoning methods with deep learning to generate models with both high predictive performance and some degree of human-level comprehensibility. As knowledge graphs are becoming a popular way to represent heterogeneous and multi-relational data, methods for reasoning on graph structures have attempted to follow this neurosymbolic paradigm. Traditionally, such approaches have utilized either rule-based inference or generated representative numerical embeddings from which patterns could be extracted. However, several recent studies have attempted to bridge this dichotomy in ways that facilitate interpretability, maintain performance, and integrate expert knowledge. Within this article, we survey a breadth of methods that perform neurosymbolic reasoning tasks on graph structures. To better compare the various methods, we propose a novel taxonomy by which we can classify them. Specifically, we propose three major categories: (1) logically-informed embedding approaches, (2) embedding approaches with logical constraints, and (3) rule-learning approaches. Alongside the taxonomy, we provide a tabular overview of the approaches and links to their source code, if available, for more direct comparison. Finally, we discuss the applications on which these methods were primarily used and propose several prospective directions toward which this new field of research could evolve.

 

Speaker: Patrick Kage

Title: Strategies for Effective Learning under Corrupted or Inaccurate Labelings

Abstract:

A major challenge facing practical applications of deep learning is the difficulty of procuring high-quality labelings for novel data. In scientific domains where subject-matter experts are required, or with extremely large datasets, acquiring accurate labelings in sufficient volume to train neural models can be prohibitively expensive, leading to datasets with either missing labels or labels that fail to accurately capture the ground-truth information in the data. Our research aims to create a set of strategies for effectively training classifiers under these conditions, with a focus on training small networks with minimal computational requirements by borrowing from (and bridging) the existing literature on weak supervision and semi-supervision. In this talk, I will outline the state of the art, the current research focus, and future directions and applications.

 

Speaker: Bhargavi Ganesh

Title: Managing responsibility gaps in AI

Abstract:

My talk will explore the issue of responsibility gaps, theorized in the philosophy literature as the problem of society bearing the cost of an outcome without anyone ultimately being held responsible. I will explore this theory as it has been applied to AI and its many applications. I will discuss how this gap emerges and the ways that regulatory interventions intend to address the many governance issues introduced by responsibility gaps. I will conclude with an example from healthcare diagnostic systems and provide some reflections on the implications for the governance of these systems.

AIAI Seminar hosted by Lauren Delong, Patrick Kage and Bhargavi Ganesh

G.03, Informatics Forum