Peter Loxley - Dynamic programming with sparse codes: investigating a new computational role for sparse representations of natural image sequences
Starts: 14:00, 24 Sep 2019
Ends: 15:00, 24 Sep 2019
What is it: Seminar
Organiser: Department of Mathematics
Who is it for: University staff, External researchers, Current University students
Speaker: Dr Peter Loxley
Join us for this research seminar, part of the SQUIDS (Statistics, quantification of uncertainty, inverse problems and data science) seminar series. ***Please note unusual time and location***
Abstract: Dynamic programming (DP) is a general algorithmic approach used in optimal control and Markov decision processes that balances the desire for low present costs against the undesirability of high future costs when choosing a sequence of controls to apply over time. Interest in this field has grown since Google DeepMind's algorithms beat humans and world-champion programs at Atari games, and at games such as chess, shogi, and Go. But why do these algorithms work so well? In many image-based tasks, the early-layer weights of trained deep neural networks often resemble the neural receptive-field profiles found in the mammalian visual system. From modelling efforts in the neuroscience and signal-processing communities, we know this architecture generates efficient (low bit-rate) representations of natural images called sparse codes. In this work, I investigate the computational role of sparse codes by applying DP to solve the optimal control problem of tracking an object (a dragonfly) over a sequence of natural images. By comparing speed of learning, memory capacity, interference, generalization, and fault tolerance for different codes, I will show some distinct computational advantages of sparse codes.
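To illustrate the trade-off the abstract describes, here is a minimal sketch of dynamic programming (value iteration) on a toy deterministic Markov decision process. The states, costs, transitions, and discount factor are illustrative assumptions for exposition only, not the image-tracking problem from the talk.

```python
import numpy as np

def value_iteration(costs, transitions, gamma=0.9, tol=1e-8):
    """Compute the optimal cost-to-go J and a greedy policy.

    costs[s, a]       -- immediate (present) cost of action a in state s
    transitions[s, a] -- successor state reached from s under action a
    gamma             -- discount factor weighting future costs
    """
    n_states, _ = costs.shape
    J = np.zeros(n_states)
    while True:
        # Bellman backup: present cost plus discounted future cost-to-go
        Q = costs + gamma * J[transitions]
        J_new = Q.min(axis=1)
        if np.max(np.abs(J_new - J)) < tol:
            break
        J = J_new
    policy = Q.argmin(axis=1)  # action minimising total cost in each state
    return J, policy

# Toy 3-state chain (hypothetical numbers): action 0 stays put (cheap
# now, costly forever), action 1 advances towards the absorbing goal
# state 2 (costly now, free later).
costs = np.array([[1.0, 2.0],
                  [1.0, 2.0],
                  [0.0, 0.0]])
transitions = np.array([[0, 1],
                        [1, 2],
                        [2, 2]])
J, policy = value_iteration(costs, transitions)
```

Even though advancing costs more immediately (2 vs. 1), the optimal policy advances in both non-goal states, because staying forever accumulates a large discounted future cost: this is exactly the present-vs-future balance DP resolves.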
Dr Peter Loxley
Role: Lecturer - School of Science and Technology
Organisation: University of New England
Biography: My background is in statistical physics and nonlinear dynamics (I was originally a theoretical condensed-matter physicist). My current interests include algorithms, neural coding and information theory, and probabilistic models in machine learning. I completed postdocs at the University of Sydney and at the Center for Nonlinear Studies at Los Alamos National Laboratory. I am now at the University of New England (Australia), where I teach in the mathematics and computer science disciplines. I am currently on sabbatical at UC Berkeley and the University of Manchester (UK).
Travel and Contact Information
Frank Adams 2
Alan Turing Building