Archive for Paris

incoming mostly Monte Carlo [14 April, PariSanté campus]

Posted in pictures, Statistics, University life on April 9, 2026 by xi'an

The next Mostly Monte Carlo seminar will take place this very Friday, 10/04/26, at PariSanté Campus, with Shiva Darshan and Pierre Monmarché speaking on the following topics:
15h: Shiva Darshan, Maximal-reflection couplings on manifolds: some specific examples
Explicit Markovian couplings can be used to build Markov chain Monte Carlo methods such as unbiased MCMC or coupling-based control variates. For sampling from probability measures supported on Euclidean space, one typically uses a synchronous coupling, a maximal-reflection coupling (also known as a discrete-time sticky coupling), or some variant of the two. For probability measures supported on Riemannian manifolds, the situation is less clear-cut. While the Kendall-Cranston coupling of Brownian motions on manifolds has been successfully applied in theoretical works, it is ill-suited for building explicit algorithms. In this talk, we will discuss some of the obstacles to extending Euclidean maximal-reflection couplings to manifolds and present some special cases for which these obstacles can be easily overcome. With applications to Stereographic MCMC in mind, we detail particular couplings of random walks on the sphere.
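For contrast with the manifold case, the Euclidean maximal-reflection coupling for Gaussian random-walk proposals is easy to write down. A minimal sketch (my own names and toy setting, not the speaker's code), assuming a common scale sigma and distinct means:

```python
import numpy as np

def max_reflection_coupling(mu1, mu2, sigma, rng):
    """Maximal-reflection coupling of N(mu1, sigma^2 I) and N(mu2, sigma^2 I):
    either the two draws meet exactly, or the second draw is the mirror image
    of the first about the hyperplane separating the two means."""
    xi = rng.standard_normal(len(mu1))
    x = mu1 + sigma * xi                  # marginally N(mu1, sigma^2 I)
    z = (mu1 - mu2) / sigma
    # meet with probability min(1, phi(xi + z) / phi(xi))  [maximality]
    log_ratio = -0.5 * np.sum((xi + z) ** 2) + 0.5 * np.sum(xi ** 2)
    if np.log(rng.uniform()) < log_ratio:
        y = x.copy()                      # the chains meet
    else:
        e = z / np.linalg.norm(z)
        y = mu2 + sigma * (xi - 2 * (e @ xi) * e)   # reflected draw
    return x, y
```

Within a coupled MCMC algorithm, mu1 and mu2 would be the current states of the two chains and sigma the random-walk scale; both outputs keep the correct marginals while maximising the meeting probability.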
16h: Pierre Monmarché, A post-sampling reweighting method for multi-modal target measures
Even when the modes are identified and sampled locally with MCMC methods, a difficulty in sampling multi-modal measures is correctly estimating the relative probabilities of each of these modes, which requires observing many transitions between them (which are rare events). We will present an approach based on variational inference which exploits the local samples, aiming only at estimating the relative weights between them. When the modes are well separated, this amounts to some entropy estimation.
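As a naive baseline for the weight-estimation problem above (not the speaker's variational method, and all names are mine): fit a Gaussian to each mode's local samples and estimate the mode's unnormalised mass by importance sampling from the fit:

```python
import numpy as np

def mode_mass(local_samples, log_pi, rng, n_is=5000):
    """Estimate the unnormalised mass of one mode of pi: fit a Gaussian q to
    the local MCMC samples, then use importance sampling Z = E_q[pi(X)/q(X)]."""
    mu = local_samples.mean(axis=0)
    d = local_samples.shape[1]
    cov = np.cov(local_samples, rowvar=False) + 1e-9 * np.eye(d)
    L = np.linalg.cholesky(cov)
    z = rng.standard_normal((n_is, d))
    xs = mu + z @ L.T                     # draws from the fitted Gaussian q
    # log-density of q at the proposed points
    log_q = (-0.5 * np.sum(z ** 2, axis=1)
             - 0.5 * d * np.log(2.0 * np.pi)
             - np.sum(np.log(np.diag(L))))
    log_w = np.array([log_pi(x) for x in xs]) - log_q
    m = log_w.max()
    return np.exp(m) * np.mean(np.exp(log_w - m))   # stabilised mean weight
```

The relative weight of mode k is then its mass divided by the sum over modes, provided the modes are separated enough that each Gaussian fit sees essentially one mode.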

Bob’s talk at PariSanté

Posted in Books, Statistics, University life on March 25, 2026 by xi'an

We had a wonderful time (and an unusually large audience) at the mostly Monte Carlo seminar last week as Pierre del Moral and Bob Carpenter both presented on exciting recent developments of theirs! Pierre talked about Kantorovich contraction of Markov semigroups, which sounds rather daunting!, but actually covers fairly general and generic convergence results, using tools like potentials and Lyapunov contractions, reminding me of the early days of MCMC and the papers of Gareth Roberts (University of Warwick), Jeff Rosenthal, Richard Tweedie and others.

Bob then spoke about the latest version of NUTS, the within-orbit adaptive NUTS (WALNUTS) sampler, which adapts the step size at every leapfrog step in order to conserve the Hamiltonian and keep the path stable enough. The adaptation is facilitated by incorporating this step size as an extra parameter with an attached conditional distribution, an approach the authors call Gibbs self-tuning (GIST): tuning parameters are coupled with the chain and conditionally Gibbs-sampled at each iteration of Hamiltonian Monte Carlo. This has been done in the past, incl. in some of my papers (e.g., Andrieu & Robert, 2004), but I could not cite a particular reference during the seminar.
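To make the Gibbs self-tuning idea concrete, here is a toy, self-contained sketch (every choice here is my own illustrative assumption, not WALNUTS: a 1-D standard normal target, a lognormal step-size conditional depending only on position, five leapfrog steps), in which the Metropolis test carries the ratio of step-size conditionals:

```python
import numpy as np

def gist_hmc_toy(n_iter=20000, seed=0):
    """Toy GIST-style HMC for a 1-D standard normal target: the leapfrog step
    size is Gibbs-sampled from a state-dependent lognormal at each iteration,
    and the Metropolis test includes the ratio of step-size conditionals.
    Since the conditional depends only on position, the momentum flip in the
    proposal drops out of the correction."""
    rng = np.random.default_rng(seed)
    grad = lambda x: -x                    # gradient of log N(0, 1)
    logp = lambda x: -0.5 * x * x

    def mu_eps(x):                         # state-dependent lognormal location
        return np.log(0.5 / (1.0 + 0.1 * abs(x)))

    def log_cond(eps, x):                  # lognormal(eps; mu_eps(x), 0.3^2), up to constants
        return -0.5 * ((np.log(eps) - mu_eps(x)) / 0.3) ** 2

    x = 0.0
    chain = np.empty(n_iter)
    for t in range(n_iter):
        p = rng.standard_normal()
        eps = np.exp(mu_eps(x) + 0.3 * rng.standard_normal())  # Gibbs step for eps
        # leapfrog integrator, 5 steps
        xn, pn = x, p + 0.5 * eps * grad(x)
        for _ in range(5):
            xn = xn + eps * pn
            pn = pn + eps * grad(xn)
        pn = pn - 0.5 * eps * grad(xn)
        # Metropolis test with the GIST correction for the tuning conditional
        log_a = (logp(xn) - 0.5 * pn * pn) - (logp(x) - 0.5 * p * p) \
                + log_cond(eps, xn) - log_cond(eps, x)
        if np.log(rng.uniform()) < log_a:
            x = xn
        chain[t] = x
    return chain
```

The extra term in log_a is what keeps the chain exact despite the state-dependent tuning; dropping it would bias the stationary distribution.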

Further light reflections that came to mind during Bob’s talk:

  • with NUTS, if cycling is feasible in a finite time, we could wait for a second passage at the starting point and then get back halfway (with the difficulty of detecting this second passage)
  • changing the kinetic matrix at each leapfrog jump is actually Riemannian HMC (and with cubic cost!)
  • the doubling mechanism in both the original NUTS and in biased progressive NUTS is wasteful of simulations
  • but so is (surprise, surprise!) finding adaptive mass matrices for WALNUTS at a reasonable cost

SEINE AI

Posted in pictures, Statistics, University life on March 23, 2026 by xi'an

Ten days ago I took part in the SEINE AI 2026 workshop in Jouy-en-Josas, near Paris (homestead of HEC), organised by the Huawei Paris Research Center, in which I was invited to speak. I felt sort of an outlier given the deeply machine-learning, entrepreneurial orientation of the meeting (with its theme being Building the Agentic Future of ICT) and given that I chose to present our most recent Bayesian adversarial privacy paper. Hence, I stood within a game-theoretic, Bayesian, formal landscape, presumably losing most of the audience and keeping them away from their lunch!

Other speakers included Simon Lucas from Queen Mary London on Simulation-based AI, which I had trouble distinguishing from building a statistical model by goodness of fit (using bandits for the updates), while focussing on competing on some computer-game challenges. And Volker Tresp from LMU München on a tensor brain model that he opposes to a Bayesian brain (with a related paper entitled Bayes or Heisenberg: Who(se) rules?, which we discussed in general terms over lunch, namely Bayesian learning vs. quantum updating). And Michal Valko from INRIA Paris (and other companies), who went full blast against the Bradley-Terry model!, with a title of Nash and Nemirovski walk into a bar! and a half-time technique approximating Nash equilibria that reminded me of leapfrog. A most entertaining talk that further provided a game-theoretic transition to mine.

As an aside, I played yesterday with ChatGPT composing my talk slides out of our arXiv document, and it proved a disaster, with hallucinations of results and concepts not in the paper, and a complete mess in handling graphs: it first created generic, fake, unrelated pictures, then inserted actual graphs haphazardly throughout the slides. I obviously did not use the sorry result, as the workshop did not seem the ideal place for this sort of prank! The actual version only recycles a few of its summarising slides. (With ye Norse farce proper colour choice!)


under wraps, if not enough for Nature

Posted in Books, Mountains, pictures, Travel, University life on March 21, 2026 by xi'an

In its 01 January 2026 issue, Nature covers a current exhibit at the Musée de l’Homme, Paris, on mummies (or momies in French), incl. an Assassin’s Creed interactive device! With a complaint that the exhibit discloses too much about the individuals behind (or before?) the mummies, incl. age, cause of death and sometimes a scan… I find the complaint rather weird in that the individuals have been mummified for hundreds or thousands of years, mostly by cultures that have themselves vanished. (Note: As an atheist, I do not believe in an absolute “sanctity” of corpses and hope my dead body will be put to use for organ donations and medical student practice. The more so because people often have less concern for the living, just like anti-abortion activists rarely care about the children born from mothers denied a right to abortion.) Part of the article’s message is actually about de-colonising museums, even though transferring mummies back to where they were found does not include time travel to recreate the conditions under which the (hopefully) dead individuals were processed. (Note: As a universalist, I do not see much rationale in deeming multiple-generation descendants (which ones?) or related ethnic groups to have more say about handling these remains.)

Which also brings to mind a puzzling, caricatural “Perspective” article in the 07 January 2026 issue of Nature, arguing that conservation (towards protecting endangered species) is driven by “Western science”, colonialist, racist, and marginalizing indigenous communities. As the article is acknowledged as inspired by the Black Lives Matter movement and was submitted in 2021, I am surprised it ever got accepted, given its focus on ideology rather than (universal) science, e.g., when referring to Michel Foucault’s theories as essential to conservation theory and practice or in opposing trophy hunting bans as providing income for communities.
The nadir is the play on RACE (for rights, agency, challenge, and education) as the acronym for the supported model of conservation. (Note: As a frequent traveller, I do realise the tension between the conservation of endangered animal populations and the survival needs of local communities. During our last trip to India, we had a hugely educational conversation with a Kerala farmer family, who complained about the damage from and the dangers of local elephants to their crops, as well as monkeys on their cocoa plantation, to the point that they were considering giving up that crop.)

mostly Monte Carlo [13/03]

Posted in Statistics, Travel, University life on March 10, 2026 by xi'an

A new episode of our mostly Monte Carlo seminar, very soon coming near you (if in Paris):

On Friday 13/03/26, from 3–5pm at PariSanté Campus

15h00: Pierre Del Moral (INRIA, Bordeaux)

On the Kantorovich contraction of Markov semigroups

We present a novel operator theoretic framework to study the contraction properties of Markov semigroups with respect to a general class of Kantorovich semi-distances, which notably includes Wasserstein distances. This rather simple contraction cost framework combines standard Lyapunov techniques with local contraction conditions. Our results can be applied to both discrete time and continuous time Markov semigroups, and we illustrate their wide applicability in the context of (i) Markov transitions on models with boundary states, including bounded domains with entrance boundaries, (ii) operator products of a Markov kernel and its adjoint, including two-block-type Gibbs samplers, (iii) iterated random functions and (iv) diffusion models, including overdamped Langevin diffusion with convex at infinity potentials.
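Schematically, and in my own notation rather than the authors', the combination of drift and local contraction described in the abstract reads:

```latex
\[
  PV \le \lambda V + b \quad (\lambda < 1), \qquad
  \mathcal{W}(\delta_x P, \delta_y P) \le \rho\, d(x,y)
  \ \text{for } x, y \text{ in a sublevel set of } V,
\]
\[
  \Longrightarrow\quad
  \mathcal{W}(\mu P^n, \nu P^n) \le C\, \kappa^n\, \mathcal{W}(\mu, \nu)
  \quad \text{for some } \kappa < 1,
\]
```

i.e., a Lyapunov drift outside a sublevel set plus contraction inside it yields geometric contraction of the full semigroup in the Kantorovich semi-distance.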

16h00: Bob Carpenter (Flatiron Institute, New York)

GIST, WALNUTS, and Continuous Nutpie: mass-matrix and step-size adaptation for Hamiltonian Monte Carlo

I will introduce Gibbs self tuning (GIST), our new technique for coupling tuning parameters and conditionally Gibbs-sampling them per iteration in Hamiltonian Monte Carlo. Then I will turn to the within-orbit adaptive NUTS (WALNUTS) sampler, which adapts the step size at every leapfrog step in order to conserve the Hamiltonian. Empirical evaluations on varying multi-scale target distributions, including Neal’s funnel and the Stock-Watson stochastic volatility time-series model, demonstrate that WALNUTS achieves substantial improvements in sampling efficiency and robustness. I will review the Nutpie mass-matrix adaptation scheme, which is designed to minimize Fisher divergence by estimating the mass matrix as the geometric midpoint (aka barycenter) between the inverse covariance of the draws and the covariance of the scores of the draws. Then I will describe a continuously adapting version that adapts per iteration by continuously discounting the past rather than updating in fixed blocks. I will also show how the Adam optimizer outperforms dual averaging for step-size adaptation. I will conclude by considering a lock-free multi-threading implementation that automatically monitors adaptation and sampling for convergence and automatic stopping.
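The geometric-midpoint idea in the abstract can be caricatured in the diagonal case (my own minimal sketch, not the Nutpie code): take, coordinate by coordinate, the geometric mean of the inverse variance of the draws and the variance of the scores.

```python
import numpy as np

def diag_midpoint_mass(draws, scores):
    """Diagonal caricature of a Nutpie-style mass-matrix estimate: the
    elementwise geometric midpoint between the inverse variance of the
    draws and the variance of the scores (gradients of the log density).
    For a Gaussian target both factors equal the precision, so the
    midpoint recovers it exactly."""
    inv_var_draws = 1.0 / draws.var(axis=0)
    var_scores = scores.var(axis=0)
    return np.sqrt(inv_var_draws * var_scores)
```

The full-matrix version replaces the elementwise square root with a matrix geometric mean of the two positive-definite estimates, which is what "barycenter" refers to in the abstract.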