The Manchester Centre for AI Fundamentals and Manchester's ELLIS Unit are co-hosting a series of seminars featuring expert researchers working in the fundamentals of AI.
Title: Uncertainty in NLP & beyond: quantification, interpretation, evaluation
Abstract:
As language models grow in popularity, size, and application across a wide range of tasks, they are becoming ubiquitous in modern society. This, in turn, raises questions of reliability, trust, and interpretability. We know models don’t always “know what they don’t know” and may end up generating seemingly convincing answers that are entirely fabricated. Hence, obtaining reliable confidence estimates, i.e. being able to quantify the uncertainty over their predictions, is a key step towards reliable language models and more responsible AI solutions.
This talk will discuss the challenges of uncertainty estimation for natural language processing, emphasising issues such as multiple sources of uncertainty and limited access to model parameters (black-box models), as well as questions of interpretation and evaluation. I will focus on generation and evaluation tasks, using machine translation as the main paradigm, and discuss how the conformal prediction framework can be leveraged to provide meaningful confidence intervals with statistical guarantees, while also allowing us to calibrate our confidence to obtain more interpretable and fair uncertainty representations.
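For readers unfamiliar with conformal prediction, the sketch below illustrates the basic split-conformal recipe behind such statistical guarantees. It is a generic Python illustration, not the specific method presented in the talk; the quality-estimation setting and all names are hypothetical stand-ins.

```python
import numpy as np

# Minimal split conformal prediction sketch (illustrative only).
# Assumption: a quality-estimation model produces a point estimate of
# translation quality, and we hold out a calibration set with gold scores.

def conformal_quantile(cal_preds, cal_labels, alpha=0.1):
    """Conformal quantile of absolute residuals on calibration data."""
    scores = np.abs(cal_labels - cal_preds)          # nonconformity scores
    n = len(scores)
    # Finite-sample corrected quantile level for >= 1 - alpha coverage.
    q_level = np.ceil((n + 1) * (1 - alpha)) / n
    return np.quantile(scores, min(q_level, 1.0), method="higher")

def predict_interval(pred, qhat):
    """Interval covering the true score with probability >= 1 - alpha."""
    return pred - qhat, pred + qhat

# Toy usage with synthetic data standing in for quality-estimation outputs.
rng = np.random.default_rng(0)
cal_labels = rng.uniform(0, 1, 500)                  # gold quality scores
cal_preds = cal_labels + rng.normal(0, 0.05, 500)    # model predictions
qhat = conformal_quantile(cal_preds, cal_labels, alpha=0.1)
print(predict_interval(0.72, qhat))                  # e.g. roughly (0.64, 0.80)
```

The guarantee holds under exchangeability of calibration and test data; the width of the interval (here governed by the quantile of the residuals) reflects how uncertain the underlying model is.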
Chrysoula (Chryssa) Zerva is an Assistant Professor in Artificial Intelligence at IST and a researcher at Instituto de Telecomunicações (IT) in Lisbon. She is a member of the European Laboratory for Learning & Intelligent Systems (ELLIS) and of LUMLIS, the Lisbon Unit for Learning & Intelligent Systems. She obtained her PhD in 2019 from the University of Manchester with a thesis on "Automated Identification of Textual Uncertainty". In 2019 she also received the EPSRC Doctoral Prize Fellowship, and in 2021 she joined IT as a postdoc on the DeepSPIN project.
She is a co-PI in the Centre for Responsible AI project, a PRR initiative for trustworthy, sustainable, fair, and transparent AI. She is also part of the UTTER project, where she focuses on uncertainty-aware, adaptable, and context-aware models. Her research centres on elucidating uncertainty in machine learning, and in NLP in particular. She also explores topics in explainability, fairness, and quality estimation in multilingual and multimodal setups.