The next OWABI webinar will take place on 30 April, at 1pm Coventry time (2pm in Paris, 8am in Columbus, Ohio) and will feature
Oksana A. Chkrebtii (Ohio State University)
Generative models and those with computationally intractable likelihoods are widely used to describe complex systems in the natural sciences, social sciences, and engineering. Fitting these models to data requires likelihood-free inference methods that explore the parameter space without explicit likelihood evaluations, relying instead on sequential simulation, which comes at the cost of computational efficiency and requires extensive tuning. We develop an alternative framework called kernel-adaptive synthetic posterior estimation (KASPE) that uses deep learning to directly reconstruct the mapping between the observed data and a finite-dimensional parametric representation of the posterior distribution, trained on a large number of simulated datasets. We provide theoretical justification for KASPE and a formal connection to the likelihood-based approach of expectation propagation. Simulation experiments demonstrate KASPE’s flexibility and its performance relative to existing likelihood-free methods, including approximate Bayesian computation, in challenging inferential settings involving posteriors with heavy tails, multiple local modes, and inference over the parameters of a nonlinear dynamical system.
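Since the KASPE paper is not linked above, here is only a toy sketch (in Python, using PyTorch) of the amortized idea it shares with neural posterior estimation: a network trained on simulated (parameter, dataset) pairs outputs a finite-dimensional parametric posterior approximation. The conjugate Normal model, the architecture, and the Gaussian posterior family below are my own assumptions, not the authors' kernel-adaptive construction.

# Toy sketch of amortized posterior estimation, in the spirit of KASPE
# (not the authors' implementation): a network maps a dataset to the
# parameters of a Gaussian posterior approximation, trained on simulations.
# The conjugate Normal model and the network size are assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)
n_obs = 20  # observations per simulated dataset

def simulate(n_sets):
    """Draw theta from a N(0, 1) prior and simulate y | theta ~ N(theta, 1)."""
    theta = torch.randn(n_sets, 1)
    y = theta + torch.randn(n_sets, n_obs)
    return theta, y

# The network outputs (mu, log_sigma): a parametric representation of the posterior.
net = nn.Sequential(nn.Linear(n_obs, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    theta, y = simulate(256)
    mu, log_sigma = net(y).chunk(2, dim=1)
    # Gaussian negative log-density of the true theta under the predicted
    # posterior (up to a constant); minimizing its expectation over
    # simulations recovers the data-to-posterior mapping.
    loss = (log_sigma + 0.5 * ((theta - mu) / log_sigma.exp()) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# At inference time a single forward pass yields the approximate posterior
# for the observed data: no likelihood evaluations, no ABC rejections.
theta_true, y_obs = simulate(1)
mu, log_sigma = net(y_obs).chunk(2, dim=1)
print(f"true theta {theta_true.item():.2f}, "
      f"posterior approx N({mu.item():.2f}, {log_sigma.exp().item():.2f}^2)")

In this conjugate toy example the exact posterior is available in closed form, so one can check that the network indeed recovers it; the point of the approach is that the same training loop applies when the likelihood is intractable and only the simulator is available.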
Bob then spoke about the latest version of NUTS, the within-orbit adaptive leapfrog NUTS (WALNUTS) sampler, which adapts the step size at every leapfrog step so as to keep the Hamiltonian approximately conserved and the trajectory numerically stable. The adaptation works by treating the step size as an extra parameter with its own conditional distribution, a scheme the authors call Gibbs self-tuning (GIST): tuning parameters are coupled with the state of the chain and Gibbs-sampled from their conditional at each iteration of Hamiltonian Monte Carlo. This has been done in the past, including in some of my own papers (e.g., Andrieu & Robert, 2004), although I could not recall a particular reference during the seminar.
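To make the GIST mechanism concrete, here is a minimal sketch (in Python) of Gibbs self-tuning for the leapfrog step size in plain HMC. The quartic target, the discrete grid of step sizes, and the fixed number of leapfrog steps are illustrative assumptions of mine, and this is not WALNUTS itself, which adapts within the NUTS orbit; the essential point is the extra ratio of step-size conditionals in the acceptance probability.

# Minimal sketch of Gibbs self-tuning (GIST) for the leapfrog step size
# in HMC; not the WALNUTS algorithm. Target, step-size grid, and number
# of leapfrog steps are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def neg_log_pi(x):          # target: pi(x) proportional to exp(-x**4 / 4)
    return x**4 / 4.0

def grad_neg_log_pi(x):
    return x**3

EPS_GRID = np.array([0.05, 0.1, 0.2, 0.4])

def step_size_pmf(x):
    """State-dependent step-size distribution: favour small steps where
    the local curvature 3*x**2 is large, to better conserve the Hamiltonian."""
    w = np.exp(-EPS_GRID**2 * 3.0 * x**2)
    return w / w.sum()

def leapfrog(x, p, eps, n_steps=10):
    p = p - 0.5 * eps * grad_neg_log_pi(x)
    for _ in range(n_steps - 1):
        x = x + eps * p
        p = p - eps * grad_neg_log_pi(x)
    x = x + eps * p
    p = p - 0.5 * eps * grad_neg_log_pi(x)
    return x, -p            # momentum flip makes the map an involution

x, chain = 1.0, []
for _ in range(5000):
    p = rng.normal()
    # Gibbs step: draw the tuning parameter from its conditional given the state.
    pmf = step_size_pmf(x)
    k = rng.choice(len(EPS_GRID), p=pmf)
    x_new, p_new = leapfrog(x, p, EPS_GRID[k])
    # GIST acceptance: the usual Hamiltonian ratio, times the ratio of the
    # step-size conditionals at the proposed and current states.
    log_ratio = (neg_log_pi(x) + 0.5 * p**2) - (neg_log_pi(x_new) + 0.5 * p_new**2)
    log_ratio += np.log(step_size_pmf(x_new)[k]) - np.log(pmf[k])
    if np.log(rng.uniform()) < log_ratio:
        x = x_new
    chain.append(x)

print("posterior mean ~", np.mean(chain), " (target mean is 0)")

Because the leapfrog map with a momentum flip is an involution for a fixed step size, the correction ratio p(eps | x')/p(eps | x) in the acceptance probability is all that is needed to preserve the target, which is what lets GIST condition the tuning parameter on the current state.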
Further light reflections that came to mind during Bob’s talk:

