Computer Science Research Talk: Exposing Previously Undetectable Faults in Deep Neural Networks
Dates: 5 May 2021
Times: 14:00 - 15:00
What is it: Seminar
Organiser: Department of Computer Science
Who is it for: University staff, Adults, Current University students
You are welcome to attend the forthcoming Research Talk in Computer Science (online).
Join Zoom Meeting
Speaker: Isaac Dunn
Host: Dr Lucas Cordeiro
Title: Exposing Previously Undetectable Faults in Deep Neural Networks
Abstract: Existing methods for testing DNNs solve the oracle problem by constraining the raw features (e.g. image pixel values) to be within a small distance of a dataset example for which the desired DNN output is known. But this limits the kinds of faults these approaches are able to detect. In this paper, we introduce a novel DNN testing method that is able to find faults in DNNs that other methods cannot. The crux is that, by leveraging generative machine learning, we can generate fresh test cases that vary in their high-level features (for images, these include object shape, location, texture, and colour). We demonstrate that our approach is capable of detecting deliberately injected faults as well as new faults in state-of-the-art DNNs, and that in both cases, existing methods are unable to find these faults.
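To make the idea in the abstract concrete, the sketch below is a minimal, hypothetical illustration of the testing loop it describes: a (toy) generator maps a latent vector to an input plus the high-level features it encodes, an oracle is defined on those high-level features (so freshly generated inputs still come with a trusted expected output), and the search samples fresh latents until the model under test disagrees with the oracle. All names and models here are illustrative stand-ins, not the speaker's actual system, which uses generative machine learning on real images.

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(z):
    """Toy 'generative model': latent -> (image, high-level features)."""
    size, brightness = np.tanh(z)              # features in (-1, 1)
    scale = 1.0 + 0.5 * (brightness + 1.0)     # rendering factor in (1, 2)
    image = np.full((4, 4), size * scale)      # trivial constant "image"
    return image, (size, brightness)

def oracle(features):
    """Desired output, defined on high-level features, not raw pixels."""
    size, _ = features
    return int(size > 0)

def dnn_under_test(image):
    """Mostly-correct model with a deliberately injected fault."""
    mean = image.mean()
    if mean < -1.5:        # fault region: large dark objects
        return 1           # wrong label here by construction
    return int(mean > 0)   # otherwise agrees with the oracle

def find_fault(n_trials=1000):
    """Sample fresh latents; return features of the first disagreement."""
    for _ in range(n_trials):
        z = rng.normal(size=2)
        image, features = generator(z)
        if dnn_under_test(image) != oracle(features):
            return features  # a fault-exposing test case
    return None

fault = find_fault()
print("fault-exposing (size, brightness):", fault)
```

Under these toy assumptions, a tester that only perturbs pixels within a small distance of existing dataset examples would miss the injected fault unless some example already sits near the fault region, whereas sampling fresh inputs that vary in high-level features reaches it directly.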
Bio: Isaac Dunn is a third-year PhD candidate at the University of Oxford. Their research focuses on improving our understanding of the limitations of current machine learning models, with a view to making improvements and working towards genuinely trustworthy ML systems.
Role: PhD student
Organisation: Department of Computer Science