gradient flow for projected Langevin dynamics

Daniel Lacker (Columbia U) gave a talk at the probability seminar of Paris Dauphine this week, which I happened to attend, on a recent paper, Projected Langevin dynamics and a gradient flow for entropic optimal transport, written with Giovanni Conforti and Soumik Pal. The talk was nicely progressive and I could hence follow most of it. The core idea is to study Langevin-type diffusion dynamics that sample from an entropy-regularised optimal transport plan, i.e., to look for an optimal distribution (in the sense of solving an entropy minimisation problem within a Wasserstein space, with regularisation) obtained via a gradient flow equation (as eg in variational inference) that couples two SDEs recentred by conditional expectation terms. The expectations in the equations are estimated by Nadaraya-Watson estimators (reminding me of SMC), with no theoretical derivation of an optimal bandwidth, and the authors achieve quantitative bounds on the convergence, namely exponential convergence, energy decay, and new logarithmic Sobolev inequalities. From the talk and a quick glance at the paper, it is unclear to me whether there are direct algorithmic consequences, since the SDEs need to be discretised, while the expectation approximations are costly, being repeated at each iteration of the discretised SDE.
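To make the computational concern concrete, here is a crude sketch of what such a scheme could look like once discretised: an Euler-Maruyama loop over a pair of coupled one-dimensional SDEs whose drifts are recentred by conditional expectations, each re-estimated by Nadaraya-Watson at every step. This is only an illustration under my own assumptions (drift form, bandwidth, step size, noise scale are all made up, not taken from the paper), but it shows where the per-iteration cost sits.

```python
import numpy as np

rng = np.random.default_rng(0)

def nw_estimate(x_query, x_data, y_data, h):
    """Nadaraya-Watson estimate of E[Y | X = x] with a Gaussian kernel.
    The bandwidth h is a tuning parameter; no optimal choice is derived."""
    w = np.exp(-0.5 * ((x_query[:, None] - x_data[None, :]) / h) ** 2)
    w /= w.sum(axis=1, keepdims=True)
    return w @ y_data

# Hypothetical set-up: N particles, Euler-Maruyama discretisation, with
# recentring drifts pulling each coordinate towards the conditional mean
# of the other (all constants are illustrative assumptions).
N, h, dt, eps = 200, 0.3, 1e-2, 0.5
X = rng.normal(size=N)
Y = rng.normal(size=N)
for _ in range(500):
    # E[Y|X] and E[X|Y], re-estimated at each iteration: this O(N^2)
    # kernel-smoothing step is the costly part noted above
    mYX = nw_estimate(X, X, Y, h)
    mXY = nw_estimate(Y, Y, X, h)
    X = X + dt * (mYX - X) / eps + np.sqrt(2 * dt) * rng.normal(size=N)
    Y = Y + dt * (mXY - Y) / eps + np.sqrt(2 * dt) * rng.normal(size=N)
```

Each iteration thus costs O(N²) kernel evaluations per conditional expectation, which is what makes a naive discretisation expensive compared to a plain Langevin sampler.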
