
Bayesian contributed session as the first round of the third day (with a choice of five parallel sessions featuring Bayesian topics, actually easier to pick among than the eight parallel sessions of the following 10:30 slot!!!), with a talk by Tahir Ekin on adversarial outlier detection that could connect with our Oceaner© privacy concerns. Then one involving spike & slab (a theme figuring prominently throughout this special day!!) in mixed response models by Sameer Deshpande, seeking an (unBayesian!) MAP for a latent variable model by Monte Carlo EM. Followed by a talk by Yunyi Shen on completely random measures for estimating the (distribution of the) number of species in heterogeneous populations. Next, Valentin Zulj on (frequentist rather than) Bayesian stacking, estimating optimal weights for model averaging (which should be posterior probabilities in a pure Bayesian mindframe; a minimal sketch of the stacking objective follows this paragraph), including a score function that could lead to generalised Bayesian inference on said weights. Finishing with a talk by Chaegeun Song on correcting Bayesian credible sets towards (frequentist, again!!!) exact coverage for classification, which reminded me of my very first paper with George on correcting frequentist confidence for Binomial observations. I could not really engage with it, as seeking a specific coverage level did not seem relevant, imho, but I appreciated the wheel plot representation.
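To make the stacking notion more concrete, here is a minimal sketch in the spirit of Yao et al.'s stacking of predictive distributions, not of Zulj's actual method: simplex weights are estimated by maximising a held-out log score. The density matrix `dens` and the toy numbers are made up for illustration.

```python
# Minimal stacking sketch: pick simplex weights maximising the held-out
# log score of the weighted predictive mixture. Not the speaker's method,
# just the standard stacking objective on a made-up density matrix.
import numpy as np
from scipy.optimize import minimize

def stacking_weights(dens):
    """dens[i, k] = predictive density of model k at held-out point i."""
    n, K = dens.shape
    neg_log_score = lambda w: -np.sum(np.log(dens @ w + 1e-300))
    res = minimize(
        neg_log_score,
        np.full(K, 1.0 / K),                      # start at uniform weights
        bounds=[(0.0, 1.0)] * K,                  # weights stay in [0, 1]
        constraints=({"type": "eq", "fun": lambda w: w.sum() - 1.0},),
    )
    return res.x

rng = np.random.default_rng(0)
dens = rng.uniform(0.01, 1.0, size=(100, 3))      # 3 models, 100 points
print(stacking_weights(dens))                     # weights summing to one
```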
My second morn session was about modern (what else?!) sampling algorithms, although I spent the first dozen minutes wondering whether or not I had entered the wrong room. Until Tianhao Wang focussed on Thompson sampling for bandits (sketched below for the simplest Bernoulli case). It did prove far enough from my interests for my (sleep-deprived) attention to drift too quickly. Only the talk by Yuchen Wu on a spike & slab (as suits the day!) challenge captured enough of this wandering attention. Crossing further into my realm of primary topics by considering a target distribution that is a product of distributions. But I did not get from her presentation how a product measure decomposition induced higher efficiency (and did not find answers within the arXived preprint). Unless it exploited specific features of the target, like conditional independence between the components. The last talk was by Brice Huang on sampling low-temperature Gibbs measures using stochastic localisation.
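For readers less familiar with the bandit setting, here is a minimal sketch of Thompson sampling for a Bernoulli bandit with conjugate Beta updates; the arm probabilities are invented for illustration, and the example is of course far simpler than the regimes Wang considered.

```python
# Minimal Thompson sampling sketch for a Bernoulli bandit with Beta priors.
# The arm success probabilities below are made up for illustration.
import numpy as np

rng = np.random.default_rng(42)
true_p = np.array([0.3, 0.5, 0.7])       # unknown to the learner
K = len(true_p)
alpha = np.ones(K)                        # Beta(1, 1) prior on each arm
beta = np.ones(K)

for _ in range(10_000):
    theta = rng.beta(alpha, beta)         # one posterior draw per arm
    arm = int(np.argmax(theta))           # play the arm with the largest draw
    reward = rng.binomial(1, true_p[arm])
    alpha[arm] += reward                  # conjugate Beta update
    beta[arm] += 1 - reward

print("posterior means:", alpha / (alpha + beta))
print("pulls per arm:", alpha + beta - 2)
```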
After coming upon a row of food trucks across from the conference centre and being unfairly lured by an Ethiopian injera picture into a terrible wrap, I returned for the Skeptical about AI session, just a few minutes late, only to find that accessing the session was impossible! Quite sad to miss the presentations and the arguments (even though I had heard a previous talk by Genevera Allen when visiting Rutgers two years ago). As a second best, I then joined the recent (of course!) Advances in Bayesian Computation (aka ABC?!) session, with a medley of topics, including a data subset versus data sketching model reduction by Sudipto Saha. Which could have consequences for our privacy strategies. And marginal evidence estimation for the Bayesian Lasso by Christopher Hans, avoiding data completion. And another latent variable model with a sequential variational Bayes approach by Bao Anh Vu, using at one point Cappé et al.'s (2005) EM-based approximation to the log-likelihood gradient. Finishing with a back-to-the-future talk by Luke Duttweiler on MCMC convergence diagnostics. Comparing several chains via proximity maps that themselves require some preliminary knowledge about the MCMC kernel (a kernel-free classic is sketched below for contrast). (Nice title though, “the traceplot thickens”!)
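Since Duttweiler's proximity maps require preliminary knowledge of the kernel, here is, for contrast, a minimal sketch of the classical kernel-free multi-chain diagnostic, Gelman and Rubin's (1992) potential scale reduction factor, computed on simulated chains; the shifted Gaussians merely mimic imperfect mixing.

```python
# Minimal sketch of the Gelman-Rubin potential scale reduction factor,
# a classical multi-chain diagnostic (not the proximity-map approach).
import numpy as np

def gelman_rubin(chains):
    """chains: array of shape (m, n), m chains of length n."""
    m, n = chains.shape
    B = n * chains.mean(axis=1).var(ddof=1)   # between-chain variance
    W = chains.var(axis=1, ddof=1).mean()     # within-chain variance
    var_hat = (n - 1) / n * W + B / n         # pooled variance estimate
    return np.sqrt(var_hat / W)               # R-hat, close to 1 if mixed

rng = np.random.default_rng(1)
# four "chains" from slightly shifted Gaussians, mimicking poor mixing
chains = rng.normal(loc=[[0], [0.1], [-0.1], [0.5]], size=(4, 5000))
print("R-hat:", gelman_rubin(chains))
```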
The crux of the day was however the 2024 COPSS Awards ceremony, with several friends featuring among the recipients: Daniele Durante for the Emerging Leader Award, Regina Liu for the Elizabeth L. Scott Award, and Veronika Rockova for the Presidents’ Award. Congrats!!!

My last day (#4) at the workshop, as I had to return to Paris earlier. A rather theoretical morning again, with Morgane Austern on (probabilistic) concentration inequalities for transport distances, far from my comfort zone if lively, and Jason Xu on replacing non-convex penalisation factors with distances to the corresponding manifold, which I found most interesting if not directly helpful for simulating over