
Archive for Monte Carlo methods
London Meeting on Computational Statistics 28-29 April 2026, plus a UCL lecture by Lester Mackey
Posted in Books, pictures, Statistics, Travel, University life with tags Augustus de Morgan, Britain, computational statistical physics, gradient flows, London, London Mathematical Society, Monte Carlo methods, Monte Carlo Statistical Methods, simulation-based inference, The Shard, UCL, UCL Institute for Mathematical and Statistical Sciences, United Kingdom, University College London, variational inference on February 6, 2026 by xi'an
registration opens for ISBA 2026 Satellite at the Institute of Statistical Mathematics
Posted in Mountains, pictures, Travel, University life with tags advanced MCMC, alcoholism, free registration, Fuji-san, information geometry, Institute of Statistical Mathematics, ISBA, ISBA 2026, Japan, Monte Carlo methods, PDMP samplers, privacy, satellite workshop, Tachikawa, Tokyo, variational methods on January 15, 2026 by xi'an
André or Jean Ville (1910-1989)
Posted in Books, pictures, Travel, University life with tags 25w5482, Abraham Wald, Alain Trognon, André Ville, Andrey Markov, École Normale Supérieure, Berlin, Bernard Bru, BIRS-CMI, Bull computers, Chennai, Edmond Malinvaud, Emile Borel, George Darmois, Glenn Shafer, history of Monte Carlo, history of statistics, India, Jean Ville, Jean-Paul Sartre, Karl Popper, Kurt Gödel, martingales, Maurice Fréchet, minimax strategy, Monte Carlo methods, Paris, Paul Lévy, plane crash, Richard von Mises, signal theory, Simone de Beauvoir, Sorbonne, supermartingale, two-player game, Université de Paris, Vienna, Ville's inequality, Wolfgang Doeblin, WW II on August 12, 2025 by xi'an
Throughout the workshop in Chennai floated (!) the figure of Jean/André Ville, inventor of martingales, whose inequality generalises Markov's. He is not such a well-known figure in France (at least to me!), despite having led a rather exceptional life: a visiting scholar in Berlin (at the Maison académique de Berlin, along with a certain Jean-Paul Sartre) and Vienna in the 1930s; his wife being, in Berlin, one of the many (disposable and despised) lovers of JP Sartre, to whom an open-minded or clueless Ville later sent his thèse d'université on martingales and collectives, a much more substantial piece of work than the current PhD; working with German and Austrian mathematicians and logicians such as Popper, Gödel, and Wald (who, what a coincidence!, died in 1950 in a plane crash in India, on a flight that had left from Chennai), and being impressed enough by the latter to take an economics degree at the Sorbonne when back in Paris, establishing a minimax result for a two-player zero-sum matrix game; his counter-example to von Mises' Kollektiv, which earned him the nickname of King of Counterexamples in the Viennese mathematics seminar; and operating the first (Bull) computer at the Université de Paris. (Glenn Shafer wrote a detailed account of his youth, on which this post is based, up to his thesis defence, which took place but a few days before France mobilised for war, the same war in which his colleague Wolfgang Doeblin would kill himself the following year to avoid capture. Bernard Bru, Edmond Malinvaud and Alain Trognon were among the people who helped.) After the war, he worked several years as a prépa maths teacher, then for the French State electricity company on signal theory and Monte Carlo methods, before returning to the Université de Paris as a professor in 1957.
A modern introduction to probability and statistics [book review]
Posted in Books, R, Statistics, Travel, University life with tags animals, Bayesian statisticians, Bengaluru, bias, book reviews, capture-recapture, CHANCE, coin tossing, computational statistics, computer age, Domesday Book, Guillaume le Conquérant, hypothesis testing, Ig Nobel Prize, introductory textbooks, maximum likelihood estimation, Monte Carlo methods, OUP, Oxford University Press, p-values, statistical computing, Sussex on July 12, 2025 by xi'an
In the plane to Bengaluru, I read through the book A modern introduction to probability and statistics, by Graham Upton—whose Measuring Animal Abundance I reviewed for CHANCE a while ago—, which is based on the earlier Understanding Statistics, written jointly with Ian Cook. (Not to be confused with A modern introduction to probability and statistics by Dekking et al.) The subtitle is understanding statistical principles in the computer age. Sorry, in the age of the computer. While the cover is most pleasant (and modern), as noticed by an AF flight attendant, the contents are very very standard and could have been written decades ago since the main concession to “the” computer age is the inclusion of a few R commands at the end of most chapters. There are even a few distribution tables here and there (in case “the” computer is not available). But there is no other connection with computational statistics or statistical computing.
The classicism of the contents and the intended audience mean there is little therein to either object to or criticise. The mixture of elementary probability and basic statistics in a single textbook always feels awkward to me and I think I would have trouble teaching solely from this material. Apart from the glaring typo on the variance of the sum of two correlated random variables on page 87, missing the factor 2 in front of the covariance, while correct(ed) on p97 (and the inevitable "the the" typo, spotted once), my main criticisms are the potential confusion between samples and populations in the early chapters, when some statistics are used as motivational examples, as for instance in a (hidden) Monte Carlo stabilisation to the limiting values (p57), way before the Law of Large Numbers is introduced; the variable mileage in mathematical rigour (while I am uncertain that first-year students can handle integrals and derivatives); the textbook examples; and the share of the book spent on descriptive statistics and even more on the "classical" tests, with no critical perspective on using point nulls or p-values. The book concludes with a four-page (benevolent) chapter on Bayesian statistics that is superfluous imho, or even counterproductive, since in my experience a rushed introduction to Bayesian principles almost always results in a rejection of said principles. Plus, the illustration with coin tossing is not particularly helpful, since Andrew maintains that one can load a die but cannot bias a coin. (A similar reservation applies to the half-page coverage (p289) of pseudo-random generation and Monte Carlo principles for computing p-values.)
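For the record, the identity the page 87 typo mangles is Var(X+Y) = Var(X) + Var(Y) + 2 Cov(X,Y), factor 2 included. A minimal simulation check (my own Python sketch, not the book's R code):

```python
import numpy as np

rng = np.random.default_rng(42)

# Draw a large sample of correlated pairs (X, Y) with known covariance matrix
cov = np.array([[1.0, 0.6],
                [0.6, 2.0]])
x, y = rng.multivariate_normal([0.0, 0.0], cov, size=1_000_000).T

# Var(X + Y) = Var(X) + Var(Y) + 2 Cov(X, Y) = 1 + 2 + 2 * 0.6 = 4.2
theoretical = cov[0, 0] + cov[1, 1] + 2 * cov[0, 1]
empirical = np.var(x + y)
print(theoretical, empirical)
```

Dropping the factor 2, as the p87 typo does, would predict 3.6 instead of the 4.2 the simulation recovers.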
Minor (mostly idiosyncratic) remarks follow: the CLT coming prior to the LLN; the n-1 in the sample standard deviation; little to no model criticism (ntbcf goodness of fit); a missed opportunity, when mentioning the varying probability of a day being a birthday (p31), to contrast with the BDA cover story, and another missed opportunity to cite the 2024 Ig Nobel Prize for coin tossing around the LLN; an unclear definition of random variables (p53) and a potentially confusing introduction of Poisson distributions through an informal reference to Poisson processes (with no reason why the years of accession of the kings of Sussex and England till Guillaume, making a return on p178 with the Domesday Book, in 1066 should follow such a process, as suggested in Figure 3.5); a surprising definition of the constant e as the special case of exp(x) when x=1, via its series expansion (p70); omitting proofs of the laws of sums of iid rv's by introducing moment generating functions rather late; another obscure reference to a 16th century German treatise on surveying as a precursor of the CLT (p131); a proof of the normalising constant of the Normal density that will most likely escape most first-year students; an introduction of the t, F, and χ² distributions with no mention of their respective densities (pp141-147); never defining the joint Normal density; insisting on unbiasedness without noting that maximum likelihood estimators (with the strange motivation that maximum likelihood "makes the next sample of n observations most likely to resemble the data in the current sample", p228) are almost always biased; and an abundance of footnotes that may prove of little interest to the youngest readers.
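As for the Monte Carlo computation of p-values the book only skims on p289, the principle fits in a few lines: simulate the null distribution of the test statistic and count how often it exceeds the observed value. A hedged Python sketch (mine, not the book's; the data, 62 heads in 100 tosses, is an invented example):

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented data: 62 heads in n = 100 tosses; test H0: fair coin (p = 0.5)
n, observed_heads = 100, 62
stat_obs = abs(observed_heads - n / 2)  # two-sided distance from expectation

# Simulate the null distribution of |heads - n/2| under H0
sims = rng.binomial(n, 0.5, size=100_000)
stat_sim = np.abs(sims - n / 2)

# Monte Carlo p-value, with the usual +1 correction avoiding an exact zero
p_value = (1 + np.sum(stat_sim >= stat_obs)) / (1 + len(stat_sim))
print(p_value)
```

The +1 in numerator and denominator keeps the estimated p-value positive, a standard safeguard when the observed statistic falls beyond all simulated ones.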
[Disclaimer about potential self-plagiarism as usual: this post or an edited version will eventually appear in my Books Review section in CHANCE.]

