Managed to get back from the Pentland hills in time for the Wednesday afternoon session, which proved most interesting as it was close to my research interests!
Nicola Branchini presented his work with Victor Elvira (a close friend and coauthor, incidentally one of the organisers of the workshop!) on improving self-normalised importance sampling by interpreting it as a ratio of estimators based on two samples (which may be the same) and attempting to optimise the joint distribution of said samples. The starting assumption is having (good) marginal importance functions, which means the goal here is optimising a copula distribution targeting the ratio as the quantity of interest. Optimality is however defined in terms of the asymptotic variance of the ratio, which remains an approximation. The idea is nonetheless quite interesting and shows potential for connecting with bridge sampling and… AMIS! As an aside, the talk considered cases when the margins are multivariate, which requires a generalisation of Sklar's theorem.

Simo Särkkä then demonstrated how highly parallel processors like GPUs can accommodate Bayesian filters and smoothers in state space models not requiring simulation, gaining a reduction in complexity from O(T) to O(log T). I had not really thought of parallel processing in recent years, hence was quite pleased to hear of this resolution based on so-called associative scans, and to see that implementations are already available in Julia/CUDA.
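To illustrate the associative-scan principle behind the O(log T) reduction (a generic sketch of my own, not the filtering-specific operator from the talk, which composes parametrised conditional Gaussians): any associative operator can be scanned with logarithmic parallel depth, here shown on a toy all-prefix computation.

```python
def associative_scan(op, elems):
    """Recursive-doubling scan: O(log n) parallel depth for an
    associative `op`. Runs sequentially here, but every application
    of `op` within one recursion level is independent of the others,
    hence parallelisable on a GPU."""
    elems = list(elems)
    n = len(elems)
    if n < 2:
        return elems
    # combine adjacent pairs (all in parallel on suitable hardware)
    reduced = [op(elems[2 * i], elems[2 * i + 1]) for i in range(n // 2)]
    # scan the half-size problem recursively
    odd_scan = associative_scan(op, reduced)
    # interleave: odd positions are ready, even ones need one more `op`
    out = [elems[0]]
    for i in range(1, n):
        if i % 2 == 1:
            out.append(odd_scan[i // 2])
        else:
            out.append(op(odd_scan[i // 2 - 1], elems[i]))
    return out
```

With op the sum, this recovers cumulative sums; in the filtering application the scan elements instead encode the per-observation update of the Kalman recursions.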
This was followed by a highly enjoyable poster session, including chats about ABC-SMC for discovery rates, infinite dimensional diffusions, Pareto smoothed importance sampling, &tc, with posters by Hugo Marival (coauthor of our recent importance Monte Carlo paper) and Shreya Roy (a student at U of Warwick). With sunny views of Arthur's Seat (and plenty of people at the top), contrary to the above! Followed by a private dinner party occupying half of a new South Indian restaurant nearby, which proved quite tasty, local, and definitely enjoyable.

For my last morning in town, albeit unrelated to the posted abstract, Pierre Del Moral spoke about noisy versions of the ensemble Kalman filter on linear diffusions that allow for stable solutions under strong enough conditions, encompassing an impressive corpus of work over the past ten years. Alex Beskos presented antithetic multilevel methods for diffusions, which improve upon the discretisation error, even though I did not fully get the whole idea (partly due to dozing off from time to time, a consequence of my last rounds of Arthur's Seat in the very early morn).
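My (hedged) understanding of the antithetic coupling, sketched in the Giles and Szpruch spirit for a scalar Euler scheme, assuming it reflects the setting of the talk (function names and the identity test function are mine): the coarse path uses the summed Brownian increments of the fine path, while a second, antithetic fine path swaps each pair of increments; averaging the two fine paths cancels the leading-order coupling error.

```python
import numpy as np

def antithetic_level_difference(drift, diffusion, x0, T, nf, rng):
    """One sample of the antithetic multilevel correction
        0.5 * (f(X_fine) + f(X_anti)) - f(X_coarse),
    with f the identity on the terminal value and nf (even) fine steps."""
    hf = T / nf                      # fine step; the coarse step is 2*hf
    dW = rng.normal(0.0, np.sqrt(hf), size=nf)
    xf = xa = xc = x0
    for k in range(0, nf, 2):
        # fine path: two Euler steps with increments (dW[k], dW[k+1])
        xf = xf + drift(xf) * hf + diffusion(xf) * dW[k]
        xf = xf + drift(xf) * hf + diffusion(xf) * dW[k + 1]
        # antithetic fine path: same increments, swapped within the pair
        xa = xa + drift(xa) * hf + diffusion(xa) * dW[k + 1]
        xa = xa + drift(xa) * hf + diffusion(xa) * dW[k]
        # coarse path: one Euler step with the summed increment
        xc = xc + drift(xc) * 2 * hf + diffusion(xc) * (dW[k] + dW[k + 1])
    return 0.5 * (xf + xa) - xc
```

As a sanity check, with zero drift and constant diffusion all three paths accumulate the same Brownian sum and the correction vanishes; it is only for state-dependent coefficients that the antithetic averaging earns its keep.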
Daniel Paulin presented a novel unbiased method based on kinetic Langevin dynamics that combines advanced splitting methods with enhanced gradients, avoiding the Metropolis correction via coupling and a multilevel Monte Carlo approach, achieving unbiasedness by telescoping, albeit involving an avalanche of acronyms in the leapfrog/Gibbs steps. And Adam Johansen (U of Warwick) spoke on several recent papers extending his divide-and-conquer filtering methods, introduced in a 2017 JCGS paper, following a decomposition of the state variable into low-dimensional components like branches and leaves of a tree.
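As an aside on the telescoping device delivering unbiasedness, here is the standard multilevel identity plus a randomised truncation à la Rhee and Glynn (a generic recollection, not claimed to be the exact construction of the talk):

```latex
% standard multilevel telescoping, X_\ell the level-\ell discretisation
\mathbb{E}[f(X_L)] = \mathbb{E}[f(X_0)]
  + \sum_{\ell=1}^{L} \mathbb{E}\!\left[ f(X_\ell) - f(X_{\ell-1}) \right]
% drawing a random level N \sim p and returning the single weighted term
Z = \frac{f(X_N) - f(X_{N-1})}{p(N)}
% yields an unbiased estimator of \lim_{\ell} \mathbb{E}[f(X_\ell)],
% provided the coupled level differences decay fast enough for the
% (infinite) telescoping sum to converge.
```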