
Efficient MCMC Sampling with Dimension-Free Convergence Rate using ADMM-type Splitting


Speaker

Daniel Paulin

Affiliation

University of Edinburgh

Date

Friday, 27 May 2022

Time

12:00-13:00

Location

Function Space, UCL Centre for Artificial Intelligence, 1st Floor, 90 High Holborn, London WC1V 6BH

Link

https://ucl.zoom.us/j/97245943682

Event series

DeepMind/ELLIS CSML Seminar Series

Abstract

Performing exact Bayesian inference for complex models is computationally intractable. Markov chain Monte Carlo (MCMC) algorithms can provide reliable approximations of the posterior distribution but are expensive for large data sets and high-dimensional models. A standard approach to mitigate this complexity consists in using subsampling techniques or distributing the data across a cluster. However, these approaches are typically unreliable in high-dimensional scenarios. We focus here on a recent alternative class of MCMC schemes exploiting a splitting strategy akin to the one used by the celebrated alternating direction method of multipliers (ADMM) optimization algorithm. These methods appear to provide empirically state-of-the-art performance but their theoretical behaviour in high dimensions is currently unknown. In this paper, we propose a detailed theoretical study of one of these algorithms known as the split Gibbs sampler. Under regularity conditions, we establish explicit convergence rates for this scheme using Ricci curvature and coupling ideas. We support our theory with numerical illustrations. This is joint work with Maxime Vono (Criteo AI Lab) and Arnaud Doucet (Oxford).
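The abstract names the split Gibbs sampler but does not spell out its splitting construction. The following is a minimal sketch, not the speaker's code: under the usual splitting, a target pi(x) proportional to exp(-f(x) - g(x)) is augmented with an auxiliary variable z and a coupling parameter rho, giving pi_rho(x, z) proportional to exp(-f(x) - g(z) - ||x - z||^2 / (2 rho^2)), and one alternates sampling x | z and z | x. Here f and g are taken to be quadratic so both conditionals are Gaussian and exact; the dimension, potentials, and value of rho are illustrative assumptions, not quantities from the talk.

# Sketch of a split Gibbs sampler for a toy Gaussian target (illustrative only).
# f(x) = ||x - mu_f||^2 / (2 sig_f^2),  g(z) = ||z||^2 / (2 sig_g^2);
# the augmented target couples x and z through ||x - z||^2 / (2 rho^2).
import numpy as np

rng = np.random.default_rng(0)

d = 50          # dimension (illustrative)
rho = 0.3       # splitting/coupling parameter (illustrative); smaller rho -> less bias
mu_f, sig_f, sig_g = np.ones(d), 1.0, 2.0

def sample_x_given_z(z):
    # x | z is Gaussian with precision 1/sig_f^2 + 1/rho^2
    prec = 1.0 / sig_f**2 + 1.0 / rho**2
    mean = (mu_f / sig_f**2 + z / rho**2) / prec
    return mean + rng.standard_normal(d) / np.sqrt(prec)

def sample_z_given_x(x):
    # z | x is Gaussian with precision 1/sig_g^2 + 1/rho^2
    prec = 1.0 / sig_g**2 + 1.0 / rho**2
    mean = (x / rho**2) / prec
    return mean + rng.standard_normal(d) / np.sqrt(prec)

x = np.zeros(d)
z = np.zeros(d)
samples = []
for it in range(5000):
    x = sample_x_given_z(z)   # first half of the Gibbs sweep
    z = sample_z_given_x(x)   # second half of the Gibbs sweep
    samples.append(x)

samples = np.asarray(samples[1000:])   # discard burn-in
print("estimated posterior mean (first 3 coordinates):", samples.mean(axis=0)[:3])

In this toy example the exact posterior mean of each coordinate is 0.8, and the split chain targets a rho-perturbed version of it; shrinking rho reduces the bias at the cost of stronger coupling between the two conditional updates.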

Biography

Daniel Paulin obtained his PhD in mathematics at the National University of Singapore in 2014. He held postdoctoral positions at NUS, working with Alexandre Thiery and Ajay Jasra, and at the University of Oxford, working with Arnaud Doucet and George Deligiannidis. Since 2019, he has been a Lecturer at the University of Edinburgh. His research interests are mainly in applied probability, computational statistics, and optimization.