AI-Fun & ELLIS Invited Speaker Series | Pablo Moreno Muñoz
Dates: 8 October 2025
Times: 11:00 - 12:00
What is it: Seminar
Organiser: Faculty of Science and Engineering
Who is it for: University staff, External researchers, Alumni, Current University students
This month’s AI-Fun and ELLIS Invited Speaker lecture will take place in Engineering Building B, room 2B.003. The room is easy to find: from the second floor (where Engineering Building B and the Nancy Rothwell Building join), follow the signs, taking a left turn and then a right; the room is on the left. Alternatively, from the ground floor, take the stairs to the second floor and follow the signs, turning right and then right again; the room is on the left.
October’s speaker is Pablo Moreno Muñoz from Universitat Pompeu Fabra (UPF), Barcelona.
Bio: Pablo is a research fellow in the Artificial Intelligence & Machine Learning Group at Universitat Pompeu Fabra (UPF), Barcelona, and a recipient of a Junior Leader grant from Fundación “La Caixa”. Pablo is also a member of the ELLIS society.
Talk Title: A probabilistic view of self-supervised learning: Challenges and opportunities
Abstract: Self-supervised learning (SSL) methods have been shown to produce models that generalize remarkably well to new settings, with notable success in both computer vision (CV) and natural language processing (NLP) tasks. In essence, SSL comprises a large family of learning algorithms that capture meaningful representations from unlabeled data through auxiliary data augmentations. In one of its simplest versions, widely known as *masking* or simply *masked pre-training*, SSL methods induce random missing values into the observations (i.e., patches or dimensions), forcing the model to reconstruct the missing items by optimizing its parameters while conditioning on the remaining data values. Although this sort of self-induced data scarcity and recursive reconstruction has shown promising results from the very beginning (masked pre-training for BERT, masked autoencoders for CV), the power and elegance of conditional missing-data imputation is not new to those interested in probabilistic modelling. While unsupervised and supervised learning problems are at the core of state-of-the-art probabilistic methods, there are still important links to be uncovered between SSL and many well-studied approaches. In this talk, I will show that these connections do indeed exist, ranging from probabilistic representation learning to classic cross-validation, as well as a *peculiar* result related to Bayesian methods. I will also show the opportunities that arise from applying this style of SSL algorithm to some old-fashioned models. Last but not least, I will sketch a path that touches on certain information-theoretic principles, in search of new explanations and challenges.
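For readers unfamiliar with masked pre-training as described in the abstract, the following is a minimal illustrative sketch (not the speaker’s method): random dimensions of each observation are hidden, and a small reconstruction model is trained to impute them while conditioning on the visible values. The data, the linear model, and all hyperparameters here are hypothetical choices made purely for the example.

```python
# Minimal sketch of masked pre-training on toy data (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

# Toy unlabeled data: 1000 observations with 16 correlated dimensions.
latent = rng.normal(size=(1000, 4))
X = latent @ rng.normal(size=(4, 16)) + 0.1 * rng.normal(size=(1000, 16))

mask_rate = 0.25          # fraction of dimensions hidden per observation
W = np.zeros((16, 16))    # linear reconstruction model, kept simple on purpose
lr = 0.01

for step in range(500):
    # Induce random missing values (the self-supervised "masking" augmentation).
    mask = rng.random(X.shape) < mask_rate      # True where a value is hidden
    X_corrupt = np.where(mask, 0.0, X)          # masked entries replaced by zero

    # Reconstruct all dimensions from the corrupted input.
    X_hat = X_corrupt @ W

    # The loss is computed only on the masked entries: impute what was hidden,
    # conditioning on the values that remain visible.
    err = (X_hat - X) * mask
    loss = (err ** 2).sum() / mask.sum()

    # Gradient descent on the squared reconstruction error
    # (the constant factor 2 is folded into the learning rate).
    grad = X_corrupt.T @ err / mask.sum()
    W -= lr * grad

    if step == 0 or step == 499:
        print(f"step {step:3d}: masked-reconstruction MSE = {loss:.3f}")
```

Because the corrupted input is zero exactly where the loss is evaluated, the model cannot simply copy the masked values; it is forced to predict them from the remaining dimensions, which is the essence of the conditional imputation view discussed in the talk.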
Travel and Contact Information
Find event
2B.003
Engineering Building B
Upper Brooke Street
Manchester