Archive for HMM

All About that Bayes stroll

Posted in pictures, Statistics, University life on February 9, 2024 by xi'an

For all Bayesians and sympathisers in the Paris area, an upcoming All about that Bayes seminar¹ by Elisabeth Gassiat (Institut de Mathématiques d’Orsay) on 13 February, 16h00, on Campus Pierre & Marie Curie, SCAI:

A stroll through hidden Markov models

Hidden Markov models are latent variables models producing dependent sequences. I will survey recent results providing guarantees for their use in various fields such as clustering, multiple testing, nonlinear ICA or variational autoencoders.


¹Incidentally, I came across an unrelated All about that Bayes YouTube video, a talk given by Kristin Lennox (Lawrence Livermore National Laboratory). And then found a myriad of talks and courses using that pun.

simulation based composite likelihood

Posted in Statistics on December 29, 2023 by xi'an

Lorenzo Rimella, Chris Jewell, and Paul Fearnhead have recently arXived a paper entitled Simulation Based Composite Likelihood, where they consider a composite likelihood approximation for running inference on HMM parameters, in the specific scenario of HMMs on finite state spaces X of high dimension N, with a huge cost of order card(X)^{2N} when computing the likelihood by the forward algorithm:

“Inference for high-dimensional hidden Markov models is challenging due to the exponential-in-dimension computational cost of the forward algorithm.”

The authors make an assumption (2) of total factorisation across dimensions for both the current hidden and the current observed terms, given the previous hidden states, which is very strong, if not resulting in a complete separation into independent component-wise HMMs. This helps however in deriving a Monte Carlo approximation of the likelihood of one component of the HMM sequence, the full likelihood being then approximated in a composite (likelihood) manner by the product of these component marginals. The remaining difficulty of computing the marginals of the component-wise observed (pseudo-) Markov chains is attenuated

“by fixing the state of all but one component n of the latent process, [since] we can leverage the factorisation and calculate probabilities related to the time-trajectory of the remaining [latent] state”

but it requires simulation of the hidden chain, overall of order O(PTN²card(X)²) where P is the number of MCMC simulations, which can be improved by a factor N by removing a feedback step through a further marginal likelihood approximation. This interestingly falls into the prediction-correction pattern usual in sequential simulation. All this demonstrates craftsmanship of a high order, even though the issue of using an approximate composite likelihood does not seem to be addressed.
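To see why the factorisation assumption is so consequential, here is a minimal toy sketch (entirely my own setup, not the authors' code; matrices A, B and all names are hypothetical): under total factorisation across the N components, the exact likelihood separates into component-wise forward passes of cost O(T card(X)²) each, whereas the joint forward recursion over the product state space pays the card(X)^{2N} price.

```python
import itertools
import numpy as np

# Hypothetical toy factorial HMM: N components, each a K-state chain sharing
# a transition matrix A and an emission matrix B, with uniform initial law.
A = np.array([[0.9, 0.1], [0.2, 0.8]])   # K x K transition matrix
B = np.array([[0.7, 0.3], [0.4, 0.6]])   # K x M emission matrix
K, N = 2, 2
obs = [(0, 1), (1, 1), (0, 0)]           # T=3 observations, one symbol per component

def forward_joint(A, B, obs, N, K):
    """Forward algorithm on the product space: O(T card(X)^{2N})."""
    states = list(itertools.product(range(K), repeat=N))
    alpha = np.array([np.prod([B[s[n], obs[0][n]] for n in range(N)]) / K ** N
                      for s in states])
    for t in range(1, len(obs)):
        new = np.empty(len(states))
        for j, sj in enumerate(states):
            emit = np.prod([B[sj[n], obs[t][n]] for n in range(N)])
            new[j] = emit * sum(alpha[i] * np.prod([A[si[n], sj[n]] for n in range(N)])
                                for i, si in enumerate(states))
        alpha = new
    return alpha.sum()

def forward_component(A, B, obs_n, K):
    """Forward algorithm for one component: O(T card(X)^2)."""
    alpha = np.array([B[k, obs_n[0]] / K for k in range(K)])
    for t in range(1, len(obs_n)):
        alpha = B[:, obs_n[t]] * (A.T @ alpha)
    return alpha.sum()

full = forward_joint(A, B, obs, N, K)
composite = np.prod([forward_component(A, B, [o[n] for o in obs], K)
                     for n in range(N)])
```

Under complete factorisation the two quantities coincide exactly, which is precisely why the assumption borders on splitting the model into independent component-wise HMMs; the paper's contribution lies in the intermediate regime where components still interact through the latent dynamics.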


ISBA 2021.3

Posted in Kids, Mountains, pictures, Running, Statistics, Travel, University life, Wines on July 1, 2021 by xi'an

Now on the third day which again started early with a 100% local j-ISBA session. (After a group run to and around Mont Puget, my first real run since 2020!!!) With a second round of talks by junior researchers from master to postdoc level. Again well-attended. A talk about Bayesian non-parametric sequential taxonomy by Alessandro Zito used the BayesANT acronym, which reminded me of the new wave group Adam and the Ants I was listening to forty years ago, in case they need a song as well as a logo! (Note that BayesANT is also used for a robot using Bayesian optimisation!) And more generally a wide variety of themes. Thanks to the j-organisers of this 100% live session!

The next session was on PDMPs, which I helped organise, with Manon Michel speaking from Marseille, exploiting the symmetry around the gradient, which is distribution-free! Then, remotely, Kengo Kamatani, speaking from Tokyo, who expanded the high-dimensional scaling limit to the Zig-Zag sampler, exhibiting an argument against small refreshment rates, and Murray Pollock, from Newcastle, who exposed quite clearly the working principles of the Restore algorithm, including why coupling from the past was available in this setting. A well-attended session despite the early hour (in the USA).

Another session of interest for me [which I attended by myself as everyone else was at lunch in CIRM!] was the contributed C16 on variational and scalable inference that included a talk on hierarchical Monte Carlo fusion (with my friends Gareth and Murray as co-authors), Darren’s call to adopt functional programming in order to save Bayesian computing from extinction, normalising flows for modularisation, and Dennis’ adversarial solutions for Bayesian design, avoiding the computation of the evidence.

Wes Johnson’s lecture was about stories of setting prior distributions based on experts’ opinions. Which reminded me of the short paper Kaniav Kamary and I wrote about ten years ago, in response to a paper on the topic in the American Statistician. And I could not understand the discrepancy between two Bayes factors based on Normal versus Cauchy priors, until I was told they had mistakenly been used repeatedly.

Rushing out of dinner, I attended both the non-parametric session (live with Marta and Antonio!) and the high-dimension computational session on Bayesian model choice (mute!). A bit of a schizophrenic moment, but allowing me to get a rough picture in both areas. At once. Including an adaptive MCMC scheme for selecting models by Jim Griffin. Which could be run directly over the model space. With my ever-going wondering at the meaning of neighbour models.

ISBA 20.2.21

Posted in Kids, Mountains, pictures, Running, Statistics, Travel, University life, Wines on June 30, 2021 by xi'an

A second day which started earlier and more smoothly with a 100% local j-ISBA session. (Not counting the invigorating swim in Morgiou!) With talks by junior researchers from master to postdoc level, as this ISBA mirror meeting was primarily designed for them, so that they could all present their work, towards gaining more visibility for their research and facilitating more interactions with the participants. CIRM also insisted on this aspect of the workshop, which was well-attended.

I alas had to skip the poster session [and the joys of gather.town] despite skipping lunch [BND], due to organisational constraints. Then attended the Approximate Bayesian computation section, including one talk by Geoff Nicholls on confidence estimation for ABC, following upon the talk given by Kate last evening. And one by Florian Maire on learning the bound in accept-reject algorithms on the go, as in Caffo et al. (2002), which I found quite exciting and opening new possibilities, esp. if the Markov chain thus produced can be recycled towards unbiasedness without getting the constant right! For instance, by Rao-Blackwellisation, multiple mixtures, black box importance sampling, whatever. (This also reminded me of the earlier Goffinet et al. 1996.)

Followed by another Bayesian (modeling and) computation session. With my old friend Peter Müller talking about mixture inference with dependent priors (and a saturated colour scheme!), Matteo Ruggiero [who could not make it to CIRM!] on computable Bayesian inference for HMMs. Providing an impressive improvement upon particle filters for approximating the evidence. Also bringing the most realistic Chinese restaurant with conveyor belt! And Mingyuan Zhou using optimal transport to define distance between distributions. With two different conditional distributions depending on which marginal is first fixed. And a connection with GANs (of course!).

And it was great to watch and listen to my friend Alicia Carriquiry talking on forensic statistics and the case for (or not?!) Bayes factors. And remembering Dennis Lindley. And my friend Jim Berger on frequentism versus Bayes! Consistency seems innocuous as most Bayes procedures are. Empirical coverage is another kind of consistency, isn’t it?

A remark I made when re-typing the program for CIRM is that there are surprisingly few talks about COVID-19 overall, maybe due to the program being mostly set for ISBA 2020 in Kunming. Maybe because we are more cautious than the competition…?!

And, at last, despite a higher density of boars around the CIRM facilities, no one got hurt yesterday! Unless one counts the impact of the French defeat at the Euro 2021 on the football fans here…

interdependent Gibbs samplers

Posted in Books, Statistics, University life on April 27, 2018 by xi'an

Mark Kozdoba and Shie Mannor just arXived a paper on an approach to accelerate a Gibbs sampler. With title “interdependent Gibbs samplers“. In fact, it presents rather strong similarities with our SAME algorithm. More of the same, as Adam Johansen (Warwick) entitled one of his papers! The paper indeed suggests multiplying replicas of latent variables (e.g., a hidden path for an HMM) in an artificial model. And as in our 2002 paper, with Arnaud Doucet and Simon Godsill, the focus here is on maximum likelihood estimation (of the genuine parameters, not of the latent variables). Along with the argument that the resulting pseudo-posterior is akin to a posterior with a powered likelihood. And a link with the EM algorithm. And an HMM application.

“The generative model consist[s] of simply sampling the parameters, and then sampling m independent copies of the paths”

If anything this proposal is less appealing than SAME because it aims directly at the powered likelihood, rather than utilising an annealed sequence of powers that allows for a primary exploration of the whole parameter space before entering the trapping vicinity of a mode. Which makes me fail to catch the authors’ argument that this improves Gibbs sampling, as a more acute mode has, on the contrary, the dangerous feature of preventing visits to other modes. Hence the relevance of resorting to some form of annealing.
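To make the powered-likelihood effect concrete, here is a minimal sketch on a toy two-component Gaussian mixture (entirely my own setup, not the paper's example; all names and values are hypothetical): replicating the latent allocations m times within the Gibbs sampler yields a pseudo-posterior behaving like the likelihood raised to the power m, hence concentrating ever more sharply around a mode as m grows.

```python
import numpy as np

# Hypothetical toy model: x_i ~ 0.5 N(0,1) + 0.5 N(mu,1), latent allocations
# z_i, flat-ish N(0, tau2) prior on mu. A SAME-style sampler keeps m
# independent replicas of the latents, tempering the posterior on mu.
rng = np.random.default_rng(0)
mu_true, n = 3.0, 200
comp = rng.integers(0, 2, n)
x = rng.normal(comp * mu_true, 1.0)

def phi(x, m):
    """N(m, 1) density."""
    return np.exp(-0.5 * (x - m) ** 2) / np.sqrt(2.0 * np.pi)

def same_gibbs(x, m, iters=2000, tau2=100.0, rng=None):
    """Gibbs sampler with m replicated copies of the latent allocations."""
    rng = rng or np.random.default_rng(0)
    mu, out = 0.0, []
    for _ in range(iters):
        # allocation probabilities for the second component (equal weights)
        p = phi(x, mu) / (phi(x, 0.0) + phi(x, mu))
        # m independent replicas of the latent allocations, shape (m, n)
        z = rng.random((m, x.size)) < p
        c = z.sum()               # total count allocated over all replicas
        s = (z * x).sum()         # total allocated sum over all replicas
        # conjugate normal update: precision grows linearly with m
        prec = c + 1.0 / tau2
        mu = rng.normal(s / prec, 1.0 / np.sqrt(prec))
        out.append(mu)
    return np.array(out[iters // 2:])   # discard burn-in
```

Comparing the spread of the draws for m = 1 against m = 10 exhibits the shrinkage of the pseudo-posterior around the mode, and also why, without an annealed sequence of powers, a large m risks trapping the chain near whichever mode it first reaches.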

As already mused upon in earlier posts, I find it most amazing that this technique has been re-discovered so many times, both in statistics and in adjacent fields. The idea of powering the likelihood with independent copies of the latent variables is obviously natural (since a version pops up every other year, always under a different name), but earlier versions should eventually saturate the market!