BEGIN:VCALENDAR
PRODID:-//Columba Systems Ltd//NONSGML CPNG/SpringViewer/ICal Output/3.3-
 M3//EN
VERSION:2.0
CALSCALE:GREGORIAN
METHOD:PUBLISH
BEGIN:VEVENT
DTSTAMP:20260204T115828Z
DTSTART:20260211T130000Z
DTEND:20260211T140000Z
SUMMARY:SQUIDS-Statistics Joint Seminar: Automatic Tuning for Gradient-ba
 sed Bayesian Inference
UID:{http://www.columbasystems.com/customers/uom/gpp/eventid/}j1ki-ml7z4o
 z1-5geexh
DESCRIPTION:Speaker: Professor Christopher Nemeth (Lancaster University)
 \n\nAbstract: In Bayesian inference\, the central computational task is 
 to approximate a posterior distribution—often by designing dynamics whos
 e stationary law is the posterior\, or by directly minimising a variatio
 nal objective such as a KL divergence. A unifying way to view many of th
 ese approaches is as optimisation over probability measures\, where one 
 seeks to minimise a functional F(\\mu) on a Wasserstein space (most nota
 bly F(\\mu)=\\mathrm{KL}(\\mu\\|\\pi) for a target posterior \\pi)\, wit
 h close connections to Langevin-type samplers and particle-based variati
 onal methods. A persistent practical obstacle is that time-discretised W
 asserstein gradient flows typically require careful step-size tuning: to
 o small yields prohibitively slow mixing and convergence\, while too lar
 ge can destabilise the iterates and undermine theoretical guarantees. Wo
 rse still\, the “optimal” fixed step sizes suggested by non-asymptotic a
 nalyses usually depend on unknown problem quantities (properties of the 
 posterior\, the minimiser\, and the evolving iterate law)\, making princ
 ipled tuning difficult and often forcing practitioners into expensive tr
 ial-and-error approaches.\n\nThis talk presents FUSE (Functional Upper B
 ound Step-Size Estimator): a principled\, adaptive\, tuning-free family 
 of step-size schedules tailored to two canonical discretisations of Wass
 erstein gradient flows—the forward-flow and forward Euler schemes. The r
 esulting methodology yields tuning-free variants of widely used gradient
 -based samplers and particle optimisers\, including the unadjusted Lange
 vin algorithm (ULA)\, stochastic gradient Langevin dynamics (SGLD)\, mea
 n-field Langevin dynamics\, Stein variational gradient descent (SVGD)\, a
 nd variational gradient descent (VGD)\, and more broadly applies to sto
 chastic optimisation problems on the space of measures. Under mild condi
 tions (notably geodesic convexity and locally bounded stochastic gradient
 s)\, the theory recovers the performance of optimally tuned methods up t
 o logarithmic factors\, in both nonsmooth and smooth regimes. Empiricall
 y\, across representative sampling and learning benchmarks\, the propose
 d algorithms achieve performance comparable to the best hand-tuned basel
 ines—without any step-size tuning.
STATUS:TENTATIVE
TRANSP:TRANSPARENT
CLASS:PUBLIC
LOCATION:Frank Adams Room 2\, Alan Turing Building\, Manchester
END:VEVENT
END:VCALENDAR
