AI Security: Language Models, Data Encryption, Software Verification
Dates: 12 February 2025
Times: 14:00 - 15:00
What is it: Seminar
Organiser: Department of Computer Science
How much: Free
Who is it for: University staff, Current University students
Speaker: Dr. Edoardo Manino
Neural networks are steadily being integrated into safety-critical systems. Unfortunately, we still lack a full suite of algorithms and tools to guarantee their safety. In this talk, I will present a few open challenges in AI safety and security: consistent behaviour in language models, machine learning over encrypted data, model compression with error guarantees, and bug-free floating-point software. I will argue that formal methods are the key to addressing these challenges, provided we can settle on an unambiguous specification.
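To make the floating-point challenge concrete, here is a minimal illustrative sketch (not material from the talk): in IEEE-754 arithmetic, addition is not associative, so a specification written over the real numbers is ambiguous when applied to floating-point code, and a verifier needs the rounding semantics spelled out.

    # Illustrative sketch only, not from the talk: IEEE-754 addition
    # is not associative, so a real-number specification is ambiguous
    # for floating-point software.
    a, b, c = 1e16, -1e16, 1.0

    print((a + b) + c)  # 1.0: a and b cancel exactly, then c is added
    print(a + (b + c))  # 0.0: the 1.0 is lost to rounding in (b + c)

    # Real-number algebra predicts both lines print the same value,
    # which is exactly the kind of gap a formal specification must close.

This is one small instance of the broader point of the talk: without an unambiguous specification of the intended semantics, "bug-free" is not even well defined.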
Speaker
Dr. Edoardo Manino
Role: Lecturer (Assistant Professor) in AI Security
Organisation: The University of Manchester
Biography: Edoardo Manino is a Lecturer (Assistant Professor) in AI Security at The University of Manchester. He has a lifelong interest in AI algorithms, from symbolic AI to machine learning. He spent most of his research career at Russell Group institutions in the UK, funded by EPSRC and the Alan Turing Institute. His background is in Bayesian machine learning, the topic of his 2020 PhD from the University of Southampton. In recent years, he has been interested in all variations of provably safe machine learning, from pen-and-paper proofs on tractable models to automated testing and verification of deep neural networks and large language models. He has a strong record of cross-disciplinary publications, spanning human computation, software engineering, hardware design, signal processing, network science and game theory.
Travel and Contact Information
Kilburn Building
Manchester