Two UBC faculty, Harlan Campbell and Paul Gustafson, wrote a paper entitled "Defining a Credible Interval Is Not Always Possible with 'Point-Null' Priors: A Lesser-Known Correlate of the Jeffreys-Lindley Paradox" in Bayesian Analysis (2024, 19, Number 3, pp. 925–984), which was presented and discussed at the BA webinar yesterday. I missed the call for discussion, on a topic I would very much have liked to discuss and an analysis I strongly disagree with. Fortunately, several of the discussants, in the webinar and in the printed version, advanced some of my points (as, e.g., Bertrand Clarke in the above slide screenshot from the on-line video).
I find the paper somewhat lacking in its links with the history of the topic, with no mention of Berger & Sellke (1987), which came as a counterpoint to Casella & (the other) Berger (1987), opposing one-sided to two-sided tests. Or of matching priors, which connect credible and confidence intervals to higher orders. But the central issue with the apparent contradiction between rejecting the point-null hypothesis and returning a credible interval that contains the null is that the construction proceeds from a model-averaged posterior. Which fundamentally contradicts the construct of a pair of priors attached to each model towards selecting the fittest one. And requires a far-from-innocent choice of respective prior weights for both models, an ill-defined notion I have repeatedly criticised here and elsewhere. Model averaging clashes with model selection in both decision-theoretic and modelling terms. In model averaging terms, the disappearance of the opposition exhibited by the authors in the predictive distribution, as shown by discussants Held and Pawel, is unsurprising. And makes the spike-and-slab prior far from a necessity. Contrary to the model selection case, where it proves unavoidable. And for which a merged credible interval does not make sense (to me at least), since it should be constructed once one (and only one) of the two models is chosen. At this point, the fact that the other model was ever considered should not impact subsequent inference. And within that perspective I do not see the relevance of agnostic (ignoring the model choice operation) 5% confidence or credible regions.
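To make the clash concrete, here is a minimal numerical sketch (in Python; the sample size, observed effect, and prior scale are all hypothetical) of the phenomenon the authors exploit: under a spike-and-slab prior for a normal mean, the posterior probability of the point null drops below 5%, yet the atom at zero straddles the 2.5% posterior quantile, so any equal-tailed 95% credible interval has the "rejected" null as its lower endpoint, and no interval can put exactly 2.5% of mass in each tail.

```python
import numpy as np
from scipy import stats

# Hypothetical setting: H0: theta = 0 vs H1: theta ~ N(0, tau^2), equal prior
# weights, known sigma = 1, sample mean xbar from n observations.
n, sigma, tau = 100, 1.0, 1.0
xbar = 0.33                       # z = xbar/(sigma/sqrt(n)) = 3.3, two-sided p ~ 0.001

s2 = sigma**2 / n                 # sampling variance of xbar
# Bayes factor BF01 = m0(xbar)/m1(xbar); both marginals are Gaussian in xbar
m0 = stats.norm.pdf(xbar, 0, np.sqrt(s2))
m1 = stats.norm.pdf(xbar, 0, np.sqrt(tau**2 + s2))
w = m0 / (m0 + m1)                # posterior probability of H0 (equal prior odds)

# slab posterior under H1: theta | x, H1 ~ N(mu1, v1)
mu1 = xbar * tau**2 / (tau**2 + s2)
v1 = tau**2 * s2 / (tau**2 + s2)

def F(t):
    # model-averaged posterior cdf: atom of mass w at 0 plus the Gaussian slab
    return w * (t >= 0) + (1 - w) * stats.norm.cdf(t, mu1, np.sqrt(v1))

print(f"P(H0 | x) = {w:.3f}")                          # ~0.044, below the 5% bar
print(f"F(0-) = {F(-1e-9):.4f}, F(0) = {F(0.0):.4f}")  # cdf jumps across 0.025 at 0
# Since F(0-) < 0.025 < F(0), the 2.5% quantile sits on the atom: any equal-tailed
# 95% interval starts at 0 and thus contains the very null it just "rejected".
```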
“…considers the regime of a fixed true parameter value as n increases [and] of a fixed p-value…” (p928)
As regards the connection with the Jeffreys-Lindley (or Lindley-Jeffreys) so-called paradox, on which I have already written a lot (or even too much!), many of the earlier objections resurface. Like the measure-theoretic difficulty of including an atom, i.e., a value with a point mass, within a continuous interval. Which isolates this atom from every other value in the interval (and of course creates discontinuities). Or fixing the p-value forever after (when n goes to infinity), as in the graph below (p929). Or treating an improper prior with no more caution than a proper one. Especially when these are "created" by the decision problem itself.
[Graph from p. 929 of the paper, with the p-value held fixed as n increases.]
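For readers preferring numbers to the graph, a minimal sketch (again in Python, assuming the standard two-sided normal setup with τ = 1 and equal prior weights) of this fixed-p-value regime: the z-value is frozen at 1.96 while n grows, and the Bayes factor swings towards the point null like √n.

```python
import numpy as np
from scipy import stats

# Jeffreys-Lindley effect: keep the p-value fixed (z = 1.96, p ~ 0.05) while n
# grows, under H0: theta = 0 vs H1: theta ~ N(0, tau^2) with tau = 1 (assumed).
z, sigma, tau = 1.96, 1.0, 1.0
for n in [10, 100, 1_000, 10_000, 100_000]:
    s2 = sigma**2 / n
    xbar = z * np.sqrt(s2)        # data tuned so the p-value never moves
    bf01 = stats.norm.pdf(xbar, 0, np.sqrt(s2)) \
         / stats.norm.pdf(xbar, 0, np.sqrt(tau**2 + s2))
    print(f"n = {n:>6}  BF01 = {bf01:8.2f}")  # grows like sqrt(n): evidence *for* H0
```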
[A quote from Jaynes about improper priors]