Monday, 22nd April 2024, 3pm - Ekaterina Komendantskaya, Marco Casadio, Tanvi Dinkar: Seminar

Title: Mind Your Language: How to Make LLM Applications Trustworthy


Abstract:

Natural Language Processing (NLP) applications, including applications built on Large Language Models (LLMs), are now widespread and are starting to attract legal attention. For instance, several pieces of legislation have been proposed that require a chatbot to correctly disclose its non-human identity when prompted by the user to do so. As another example, LLMs are expected to be deployed as backends in medical applications, with consequent implications for the safety of their users. Ensuring the safety and reliability of such applications is becoming a primary concern.

Deep Neural Network verification is a promising approach that stems from formal methods and has made significant progress in recent years in machine learning domains other than NLP and LLMs. Can this methodology be effectively applied to NLP applications?

Within the research project AISEC, an interdisciplinary team of NLP and verification researchers undertook a large-scale study of this research question. In this talk, we will give an overview of the major pitfalls we discovered, as well as the methods we proposed to mitigate their negative effects. We will highlight positive results and outline promising future directions.


Bio:

Ekaterina Komendantskaya is a Professor in Computer Science at the University of Southampton and at Heriot-Watt University. She is an expert in methods linking AI and Machine Learning on the one hand, and Logic and Programming Languages on the other. She leads the Lab for AI and Verification (www.laiv.uk). She has received more than £19.5M of funding from EPSRC/UKRI, NCSC, and SICSA (including CDT grants). She is currently leading the £3M EPSRC project "AISEC: AI Secure and Explainable by Construction" and is preparing to start a novel training programme in the new CDT "DAIR: Dependable and Deployable AI for Robotics" in Edinburgh.

Tanvi Dinkar is a postdoctoral researcher at Heriot-Watt University, working on safety in conversational AI. She received her PhD in Computer Science from Telecom Paris in 2022. Her research focuses on dialogue and spoken communication, and in particular on the robustness of NLP models to inputs arising from speech.

Marco Casadio is a PhD student at Heriot-Watt University. His interests span verification and machine learning, and his research focuses on applying and adapting verification techniques to Natural Language Processing problems and systems.


This event is a TAS Node seminar, co-organised by ILCC and by the UKRI Centre for Doctoral Training in Natural Language Processing (https://nlp-cdt.ac.uk).

IF G.03