Bayesian Inference: Theory, Methods, Computations [book review]
Bayesian Inference: Theory, Methods, Computations by Silvelyn Zwanzig and Rauf Ahmad, both from Uppsala University, is a recent book published by Chapman & Hall/CRC Press. About 300 pages long (plus appendices), it covers the core aspects of Bayesian inference, namely the decision-theoretic motivations, its asymptotic validation, the specifics of estimation and testing, and the computational approximations (MC, MCMC, ABC, VB), with entries on prior specification and Normal linear models. And some R codes. It is (and feels like it was) constructed from Master’s and PhD courses at Uppsala University, with a rigorous mathematical presentation and many examples, some related to biostatistics. Drawings from the first author’s daughter are included in most chapters, to this reviewer’s bemusement.

From a further personal viewpoint, the book reads rather close to my own (Bayesian) choice of a Bayesian textbook, which is hardly surprising since several chapters are inspired by my own The Bayesian Choice, as acknowledged therein, as well as by the more recent Statistical Decision Theory: Estimation, Testing, and Selection by Liese & Miescke (2008) and Introduction to the Theory of Statistical Inference by Liero & Zwanzig (2011). Witness, for instance, an example of prior construction for capture-recapture experiments on lizards as analysed by my PhD student Dupuis (1995) [with a curious switch of attribution to the authors on p.263] and also included in The Bayesian Choice (with drawing 2.9 incorrect in that the lizards there bear marks on their backs, instead of the code adopted by the ecologists, namely cutting one specific phalange for each capture).
Other minor quandaries: the usual issue of quoting the wrong edition when crediting a method, as when citing Jeffreys (1946) for inventing non-informative priors [p.53]; failing to point out the parameterisation invariance of intrinsic losses [p.95]; considering that Bayes factors are only relevant for gathering evidence against the null hypothesis [p.216]; recommending BIC and DIC (!) [pp.232-6]; advocating sampling importance resampling (SIR) for approximate sampling from the target while omitting its infinite-variance issues [p.253]; defining annealing as using “several trial distributions” [p.261]; and a mistake in ABC-MCMC [p.274], since simulated data falling too far from the observed data should lead to a repetition of the current value of the chain rather than a pure rejection.
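To make the last point concrete, here is a minimal Python sketch (a toy normal-mean example of my own devising, not code from the book) of the standard ABC-MCMC step in the spirit of Marjoram et al. (2003): when the pseudo-data simulated under the proposed parameter falls outside the tolerance, the proposal is rejected and the chain repeats its current value, so the iteration still counts towards the chain, preserving the stationary distribution.

```python
import math
import random

def abc_mcmc(y_obs, n_iter=5000, eps=0.5, sigma_prop=1.0, seed=42):
    """Toy ABC-MCMC for the mean theta of a N(theta, 1) model, N(0, 10) prior.

    The key step: when the pseudo-data z simulated under the proposal lands
    farther than eps from the observation, the proposal is rejected and the
    CURRENT value is repeated in the chain -- the iteration is never
    discarded outright.
    """
    rng = random.Random(seed)
    # N(0, 10) log-prior density, up to an additive constant
    log_prior = lambda t: -t * t / (2 * 10.0 ** 2)
    theta = 0.0
    chain = []
    for _ in range(n_iter):
        prop = theta + rng.gauss(0.0, sigma_prop)   # symmetric random-walk proposal
        z = rng.gauss(prop, 1.0)                    # pseudo-data under the proposal
        if abs(z - y_obs) < eps:                    # within the ABC tolerance
            # Metropolis-Hastings ratio reduces to the prior ratio here
            if math.log(rng.random()) < log_prior(prop) - log_prior(theta):
                theta = prop
        # otherwise: reject and repeat theta, keeping the chain length intact
        chain.append(theta)
    return chain

chain = abc_mcmc(y_obs=2.0)
print(len(chain), sum(chain) / len(chain))
```

The chain has exactly n_iter entries whatever the rejection rate; dropping too-far simulations instead of repeating the current value would silently bias the resulting sample.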
All in all, a reasonable textbook with some recent input, but still lacking in originality, if I may subjectively say so.
[Disclaimer about potential self-plagiarism: this post or an edited version of it could possibly appear in my Books Review section in CHANCE.]