PIPLA [mostly MCMC’nar]

The first “mostly MCMC” seminar (Season 2) had our new Ocean postdoc Tim Johnston, freshly graduated from the University of Edinburgh, involved in both talks, with proximal approximations for discontinuity! The first talk was given by Francesca Crucinio (formerly Warwick and formerly CREST, to point out a potential COI!!), about the Proximal Interacting Particle Langevin algorithm (PIPLA) developed with her coauthors Paula Cordero Encinar, Deniz Akyildiz, Tim Johnston, and Mark Girolami, concerned with maximising likelihoods with latent variables (i.e., an EM setting). While offering one of many stochastic versions of EM, incl. simulated annealing, the solution they adopt is very close to our SAME (2002) method, duplicating the latent variables N times to get near the marginal MAP (which, as we noted, differs from the joint MAP). They start from an interacting particle system with (unadjusted) Langevin dynamics, discretised over time, but the value of N does not move with the iterations, which steps away from the simulated annealing motivation, thus requiring an evaluation of the error for a given N and possibly further runs with larger values of N. PIPLA is an extension of the above to non-differentiable targets, by using a proximity map, in continuation of MY-ULA [for Moreau–Yosida] by Pereyra (2016), yet again fixing both N and the proximal parameter λ, with non-asymptotic convergence results requiring strong assumptions on the target.
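To fix ideas, here is a minimal sketch of the kind of scheme described above, on a hypothetical toy model of my own choosing (not the authors' experiments): N latent particles sharing a single parameter θ, updated by unadjusted Langevin steps, with the non-differentiable term replaced by its Moreau–Yosida envelope, whose gradient involves the proximity map (here soft-thresholding). Both N and λ are fixed, as in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy model, for illustration only:
#   y | x ~ N(x, 1),  x | theta ~ Laplace(theta, 1)
# so log p(y, x | theta) = -(y - x)^2/2 - |x - theta| + const.
# The |x - theta| term is non-differentiable; it is smoothed by its
# Moreau-Yosida envelope, whose gradient is (z - prox(z)) / lam,
# with prox the proximity map of lam * |.| (soft-thresholding).

def soft_threshold(z, lam):
    """Proximity map of lam * |.|"""
    return np.sign(z) * np.maximum(np.abs(z) - lam, 0.0)

def my_grad_abs(z, lam):
    """Gradient of the Moreau-Yosida envelope of |.| at z."""
    return (z - soft_threshold(z, lam)) / lam

def pipla_sketch(y, N=100, gamma=1e-2, lam=1e-1, iters=5000):
    """One reading of a proximal interacting particle Langevin scheme:
    N latent particles x interact through a single shared theta."""
    x = rng.normal(size=N)   # N duplicated latent variables
    theta = 0.0
    for _ in range(iters):
        z = x - theta
        # gradient of the smoothed joint log-density in each x_i
        grad_x = (y - x) - my_grad_abs(z, lam)
        # gradient in theta, averaged over the N particles
        grad_theta = np.mean(my_grad_abs(z, lam))
        # unadjusted (discretised) Langevin updates
        x = x + gamma * grad_x + np.sqrt(2 * gamma) * rng.normal(size=N)
        theta = theta + gamma * grad_theta \
                + np.sqrt(2 * gamma / N) * rng.normal()
    return theta

print(pipla_sketch(y=2.0))  # theta should drift toward the observation
```

Note how the noise on θ is scaled by 1/N: averaging over the N particles concentrates θ near the marginal MAP, which is the SAME-like effect mentioned above, while N stays fixed throughout the run.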

In his talk, Tim started with interesting (and novel for me) arguments for proving strong convergence (Wasserstein, multilevel MC, unbiased MCMC), proceeding to establish (again under favourable assumptions) an almost √n convergence speed for the proximal scheme, with no regularity assumption on the drift besides boundedness.
