The third and final day of the (main) conference started with Emtiyaz Khan’s plenary talk on adaptive Bayesian intelligence. Or, imho, [adaptive [Bayesian]] intelligence, with the brackets indicating redundancy since intelligence necessarily includes adaptivity and [intelligent] adaptivity necessarily proceeds in a Bayesian way! Focussing first on the Bayesian learning rule via variational Bayes (with a stress on Kingma and Ba’s 2014 Adam optimisation algorithm, the “most cited paper” [in machine learning]), where learning boils down to gradient steps (due to the exponential family structure), themselves versions of Taylor (or Laplace) approximations. With an interesting vision of Bayesian updating as accounting for prediction mismatch. (I missed the connection with Roberta in IMDb appearing in one slide!)
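As an aside, here is a minimal Python sketch of the Adam update mentioned in the talk (the adam_step helper and the toy quadratic objective are my own illustration, not taken from the slides):

    import numpy as np

    def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
        # exponential moving averages of the gradient and of its square
        m = beta1 * m + (1 - beta1) * grad
        v = beta2 * v + (1 - beta2) * grad ** 2
        m_hat = m / (1 - beta1 ** t)   # bias correction, first moment
        v_hat = v / (1 - beta2 ** t)   # bias correction, second moment
        return theta - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

    # toy run: minimise ||theta - 1||^2, far simpler than the variational setting of the talk
    theta, m, v = np.zeros(3), np.zeros(3), np.zeros(3)
    for t in range(1, 2001):
        grad = 2 * (theta - 1.0)
        theta, m, v = adam_step(theta, grad, m, v, t, lr=0.05)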
The following session offered no dilemma [sorry, Alex, Axel, Chris, Robert, Sumeet, Victor!] since it included the federated learning session I organised, with Louis Aslett, Conor Hassan, and Jean-Michel Marin as speakers. Louis’ talk was on confidential [homomorphic] accept-reject algorithms to learn from other sources while preserving (differential?) privacy, part of which was developed during the Les Houches workshops I organised this Spring and the one before. Exploiting the additive features of log-likelihoods and exponential variates, and adopting a testing perspective on privacy. Conor motivated his model with the Australian cancer atlas project Kerrie Mengersen and others have been developing over the years. The federated approach relies on variational approximations that return the same answer as an exact resolution, but more efficiently. (From a privacy perspective, I wonder at the impact of variational approximations on protecting the data, which boils down to a choice of (sufficient) statistics for the exponential families behind those approximations.) For more complicated models, incorporating spatial dependence unfortunately prohibits full Bayesian inference. Jean-Michel commented on the richness of methods for simulation-based inference, incl. model choice. His focus was on using sequential neural likelihood estimation and sequential importance sampling to approximate the evidence, as in the Read Paper of Del Moral et al. (2006). Mentioning a neural version of the harmonic mean estimator by Spurio Mancini et al. (2023)! I wondered at the degree of (Rao-Blackwell) recycling involved in the computation, Jean-Michel’s answer being that AMIS is soon coming [to a theatre near you!].
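Back to Louis’ additivity trick, here is a toy Python sketch of my own (not his actual protocol, and without the homomorphic encryption layer): in a plain rejection sampler targeting a posterior, the log acceptance ratio only involves the sum of local log-likelihood contributions, so each site only ever needs to release (or encrypt) a scalar, and the comparison with a negative exponential (equivalently, a log-uniform) variate goes through unchanged.

    import numpy as np

    rng = np.random.default_rng(0)

    # hypothetical setting: K sites, each privately holding N(mu, 1) observations
    K, n = 5, 20
    data = [rng.normal(1.3, 1.0, size=n) for _ in range(K)]

    def local_loglik(x, mu):
        # the only quantity a site ever reports (encrypted/aggregated in practice)
        return -0.5 * np.sum((x - mu) ** 2)

    # envelope constant taken at the pooled MLE, itself a function of additive local sums
    mu_hat = sum(x.sum() for x in data) / sum(x.size for x in data)
    log_M = sum(local_loglik(x, mu_hat) for x in data)

    # rejection sampler: propose from a flat-ish N(0, 10^2) prior,
    # accept mu with probability L(mu)/L(mu_hat)
    samples = []
    while len(samples) < 100:
        mu = rng.normal(0.0, 10.0)
        log_alpha = sum(local_loglik(x, mu) for x in data) - log_M
        if rng.exponential() > -log_alpha:   # same event as log U < log_alpha
            samples.append(mu)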

The afternoon sessions did not offer any reprieve in the choice of topics! I first went to Approximate Methods for Accelerated Sampling, with Rong Tang evaluating the informativeness of summary statistics through a divergence evaluation. Using autoencoders to replace the intractable posterior, with sliced maximum mean discrepancy (MMD) and (pseudo?) score matching losses as divergences (reminding me of indirect inference and synthetic likelihood). Yun Yang discussed a variational proposal to estimate the number of components in a mixture model. Surprising, given the multimodal structure of mixture posteriors. And the overall irregularity of (evil!) mixture models. But I could not figure out from the talk the form of the approximation.
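Since the sliced MMD came up, here is a small sketch of the standard unbiased MMD² estimator with a Gaussian kernel (the function names and the toy Gaussian “summaries” are mine); the sliced version would average the same quantity over random one-dimensional projections of the summary statistics.

    import numpy as np

    def gaussian_kernel(a, b, bw=1.0):
        # pairwise Gaussian kernel matrix between the rows of a and b
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * bw ** 2))

    def mmd2_unbiased(x, y, bw=1.0):
        # unbiased estimate of the squared maximum mean discrepancy
        m, n = len(x), len(y)
        kxx, kyy, kxy = gaussian_kernel(x, x, bw), gaussian_kernel(y, y, bw), gaussian_kernel(x, y, bw)
        np.fill_diagonal(kxx, 0.0)
        np.fill_diagonal(kyy, 0.0)
        return kxx.sum() / (m * (m - 1)) + kyy.sum() / (n * (n - 1)) - 2 * kxy.mean()

    # toy check on "summaries" drawn from two nearby Gaussians
    rng = np.random.default_rng(1)
    s_obs = rng.normal(0.0, 1.0, size=(300, 2))
    s_sim = rng.normal(0.3, 1.0, size=(300, 2))
    print(mmd2_unbiased(s_obs, s_sim))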

On the food scene, I tasted a nice and spicy Peranakan rice vermicelli dish called Mee Siam yesterday in a campus restaurant, which sustained me for the rest of the day, including the ABC s/webinar. And another spicy hot pot today at NUS, to catch up on veggies, while missing the local chili crab specialty on that trip.


Minus one day at
A wee stressful trip, since the races in Caen cancelled all buses and delayed the taxi enough to miss the train to Paris by 30 seconds. Catching the next available one left me less than one hour between the arrival of the train (delayed by construction work on the rail line) and boarding the flight at Charles de Gaulle airport, but fortunately the RER trains in Paris were running fine and there were no queues at the airport, so I made it in time with a bit of post-marathon jogging! (Only to be delayed at departure by one hour by stormy conditions over Germany and Austria.) All this exercise proved helpful to sleep soundly and lengthily in the plane!