Archive for normalising constant
ECMLE on CRAN
Posted in R, Statistics, University life with tags Bayesian model comparison, CRAN, ECMLE package, elliptical covering, github, HPD region, marginal likelihood, normalising constant, R, R package, statistical evidence on March 27, 2026 by xi'an

When Is Generalized Bayes Bayesian?
Posted in Statistics, University life, Books with tags ABC, Bayesian model choice, decision theory, coherence, normalising constant, loss functions, Bayes factors, marginal likelihood, generalised Bayesian inference on February 13, 2026 by xi'an
I spotted this title in the new arXiv postings on Monday. When Is Generalized Bayes Bayesian? A Decision-Theoretic Characterization of Loss-Based Updating, by Kenichiro McAlinn & Kōsaku Takanashi, discusses the decision-theoretic consequences of generalized Bayes approaches based on loss functions and shows that decisions based on a loss-based posterior coincide with those of an ordinary Bayes posterior if and only if the loss is essentially a negative log-likelihood (leading to a belief posterior). This is not very surprising in that, otherwise, there is no Bayesian update delivering the generalised Bayes pseudo-posteriors (a result that can be traced back to Catoni in 2007). The authors also demonstrate that generalized marginal likelihoods do not deliver evidence for decision posteriors, and hence that Bayes factors are not well-defined in this context, which reminds me of our warning about ABC model choice. However, the reason here is more mundane, as it is due to the decision posterior failing to identify the normalising constant Z(x) outside belief posteriors. The paper concludes with a coherence table, reproduced above.
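A minimal sketch of the obstruction, in my own notation rather than the paper's: write the loss-based (generalized) posterior and its associated "marginal" as

```latex
\pi_\ell(\theta\mid x) \;\propto\; \pi(\theta)\,e^{-\ell(\theta,x)},
\qquad
Z_\ell(x) \;=\; \int \pi(\theta)\,e^{-\ell(\theta,x)}\,\mathrm{d}\theta.
```

Only when $\ell(\theta,x)=-\log f(x\mid\theta)$ does $Z_\ell(x)$ reduce to the marginal likelihood $m(x)=\int \pi(\theta)f(x\mid\theta)\,\mathrm{d}\theta$, a proper density in $x$; for a generic loss, $e^{-\ell(\theta,x)}$ does not integrate to one over $x$, so $Z_\ell(x)$ is defined only up to an arbitrary $x$-dependent factor and ratios of such quantities across models carry no evidential meaning.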
bridging ratio estimators
Posted in Books, Statistics, University life with tags annealed importance sampling, evidence, finite mixtures, INRAE, marginal likelihood, Monte Carlo methods, Monte Carlo Statistical Methods, normalising constant, seminar, simulated annealing, Université Paris Dauphine on June 3, 2025 by xi'an

Xuriouser & Xuriouser
Posted in Books, Statistics with tags accept, Bayesian computational methods, cross validated, Metropolis-Hastings algorithms, Monte Carlo Statistical Methods, normalising constant, principle of proportionality, screenshot, simulation on May 8, 2024 by xi'an

MCMC without evaluating the target [aatB-mMC joint seminar, 24 April]
Posted in pictures, Statistics, Travel, University life with tags All about that Bayes, Balard, Bayesian computational methods, doubly intractable problems, exchange algorithm, intractable likelihood, MCMC, mostly Monte Carlo seminar, normalising constant, PariSanté campus, Porte de Versailles, Rutgers University, sampling, seminar on April 11, 2024 by xi'an
On 24 April 2024, Guanyang Wang (Rutgers University, visiting ESSEC) will give a joint All about that Bayes – mostly Monte Carlo seminar on
MCMC when you do not want to evaluate the target distribution
In sampling tasks, it is common for target distributions to be known only up to a normalizing constant. However, in many situations, evaluating even the unnormalized distribution can be costly or infeasible. This issue arises in scenarios such as sampling from Bayesian posteriors for large datasets and from 'doubly intractable' distributions. We provide a way to unify various MCMC algorithms, including several minibatch MCMC algorithms and the exchange algorithm. This framework not only simplifies the theoretical analysis of existing algorithms but also leads to new ones. Similar frameworks exist in the literature, but they concentrate on different objectives.
The talk takes place at 4pm CEST, in room 8 at PariSanté Campus, Paris 15.
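As a point of contrast with the talk's setting, here is a minimal random-walk Metropolis-Hastings sketch in R (my own illustration, not code from the talk) showing the standard situation the abstract starts from: the acceptance ratio only involves the unnormalised target, so the normalising constant cancels; Wang's framework addresses the harder case where even this unnormalised evaluation is costly or infeasible.

```r
# Random-walk Metropolis-Hastings on an unnormalised target.
# log_target omits the -log(sqrt(2*pi)) constant of the N(0,1) density:
# it cancels in the acceptance ratio, so the chain is unaffected.
set.seed(42)
log_target <- function(x) -x^2 / 2   # unnormalised N(0,1) log-density

n <- 1e4
chain <- numeric(n)
chain[1] <- 0
for (t in 2:n) {
  prop <- chain[t - 1] + rnorm(1)    # symmetric random-walk proposal
  # acceptance step: only the ratio of unnormalised densities is needed
  if (log(runif(1)) < log_target(prop) - log_target(chain[t - 1])) {
    chain[t] <- prop
  } else {
    chain[t] <- chain[t - 1]
  }
}
mean(chain)   # close to 0
sd(chain)     # close to 1
```

Replacing `log_target` with, say, a posterior requiring a full pass over a large dataset at every iteration is exactly where this vanilla scheme breaks down and minibatch or exchange-type alternatives become relevant.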