Testing whether a learning procedure is calibrated in JMLR
The article “Testing whether a learning procedure is calibrated” by Jon Cockayne, Matthew Graham, Chris Oates, Onur Teymur, and myself has just appeared in its final form in the Journal of Machine Learning Research. This article is part of our research on the theoretical foundations of probabilistic numerics and uncertainty quantification, as we seek to explore what it means for the uncertainty associated with a computational result to be “well calibrated”.
J. Cockayne, M. M. Graham, C. J. Oates, T. J. Sullivan, and O. Teymur. “Testing whether a learning procedure is calibrated.” Journal of Machine Learning Research 23(203):1–36, 2022.
Abstract. A learning procedure takes as input a dataset and performs inference for the parameters \(\theta\) of a model that is assumed to have given rise to the dataset. Here we consider learning procedures whose output is a probability distribution, representing uncertainty about \(\theta\) after seeing the dataset. Bayesian inference is a prime example of such a procedure, but one can also construct other learning procedures that return distributional output. This paper studies conditions for a learning procedure to be considered calibrated, in the sense that the true data-generating parameters are plausible as samples from its distributional output. A learning procedure whose inferences and predictions are systematically over- or under-confident will fail to be calibrated. On the other hand, a learning procedure that is calibrated need not be statistically efficient. A hypothesis-testing framework is developed in order to assess, using simulation, whether a learning procedure is calibrated. Several vignettes are presented to illustrate different aspects of the framework.
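To give a rough feel for the kind of simulation-based assessment the abstract alludes to, here is a minimal sketch, not the test developed in the paper, of one classical way to probe calibration: repeatedly draw a “true” parameter from the prior, simulate data, run the learning procedure, and check whether the true parameter looks like a draw from the procedure’s distributional output (here via probability integral transform values and a Kolmogorov–Smirnov test of uniformity). The model, the `learning_procedure` function, and the `inflate` parameter are all illustrative assumptions, not part of the paper.

```python
# Illustrative sketch only (not the paper's hypothesis test): a simulation-based
# calibration check for a learning procedure in a conjugate Gaussian model,
# where the exact Bayesian posterior is available in closed form.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Prior: theta ~ N(0, tau^2); likelihood: y_i | theta ~ N(theta, sigma^2).
tau, sigma, n_obs = 1.0, 0.5, 20

def learning_procedure(y, inflate=1.0):
    """Return mean and std of a Gaussian 'posterior' for theta.

    inflate = 1 recovers exact Bayesian inference; inflate < 1 gives an
    over-confident procedure, inflate > 1 an under-confident one.
    """
    prec = 1.0 / tau**2 + len(y) / sigma**2
    m = (y.sum() / sigma**2) / prec
    s = inflate / np.sqrt(prec)
    return m, s

def calibration_pvalue(inflate, n_trials=500):
    pits = np.empty(n_trials)
    for k in range(n_trials):
        theta_star = rng.normal(0.0, tau)              # "true" parameter
        y = rng.normal(theta_star, sigma, size=n_obs)  # simulated dataset
        m, s = learning_procedure(y, inflate)
        # Probability integral transform: uniform on [0, 1] if calibrated.
        pits[k] = stats.norm.cdf(theta_star, loc=m, scale=s)
    return stats.kstest(pits, "uniform").pvalue

print("exact Bayes      p =", calibration_pvalue(inflate=1.0))  # large p-value
print("over-confident   p =", calibration_pvalue(inflate=0.5))  # tiny p-value
print("under-confident  p =", calibration_pvalue(inflate=2.0))  # tiny p-value
```

Under this toy setup, the exactly Bayesian procedure passes the uniformity check, while the systematically over- or under-confident variants are flagged, which is the qualitative behaviour a calibration test should exhibit; the paper develops this idea rigorously and in much greater generality.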
Published on Friday 5 August 2022 at 14:50 UTC #publication #prob-num #cockayne #graham #oates #teymur