### Preprint: Probabilistic numerics retrospective

Chris Oates and I have just uploaded a preprint of our paper “A modern retrospective on probabilistic numerics” to the arXiv.

**Abstract.** This article attempts to cast the emergence of probabilistic numerics as a mathematical-statistical research field within its historical context and to explore how its gradual development can be related to modern formal treatments and applications. We highlight in particular the parallel contributions of Sul'din and Larkin in the 1960s and how their pioneering early ideas have reached a degree of maturity in the intervening period, mediated by paradigms such as average-case analysis and information-based complexity. We provide a subjective assessment of the state of research in probabilistic numerics and highlight some difficulties to be addressed by future work.

Published on Tuesday 15 January 2019 at 12:00 UTC #preprint #prob-num

### Preprint: Optimality of probabilistic numerical methods

Chris Oates, Jon Cockayne, Dennis Prangle, Mark Girolami, and I have just uploaded a preprint of our paper “Optimality criteria for probabilistic numerical methods” to the arXiv.

**Abstract.** It is well understood that Bayesian decision theory and average case analysis are essentially identical. However, if one is interested in performing uncertainty quantification for a numerical task, it can be argued that the decision-theoretic framework is neither appropriate nor sufficient. To this end, we consider an alternative optimality criterion from Bayesian experimental design and study its implied optimal information in the numerical context. This information is demonstrated to differ, in general, from the information that would be used in an average-case-optimal numerical method. The explicit connection to Bayesian experimental design suggests several distinct regimes in which optimal probabilistic numerical methods can be developed.

Published on Tuesday 15 January 2019 at 11:00 UTC #preprint #prob-num

### Implicit Probabilistic Integrators in NeurIPS

The paper “Implicit probabilistic integrators for ODEs” by Onur Teymur, Han Cheng Lie, Ben Calderhead, and me has now appeared in *Advances in Neural Information Processing Systems 31 (NIPS 2018)*.
This paper forms part of an expanding body of work on the mathematical convergence analysis of probabilistic solvers for initial value problems, in this case covering implicit methods, namely probabilistic versions of the multistep Adams–Moulton methods.

O. Teymur, H. C. Lie, T. J. Sullivan & B. Calderhead. “Implicit probabilistic integrators for ODEs” in *Advances in Neural Information Processing Systems 31 (NIPS 2018)*, ed. N. Cesa-Bianchi, K. Grauman, H. Larochelle & H. Wallach. 2018. http://papers.nips.cc/paper/7955-implicit-probabilistic-integrators-for-odes

**Abstract.** We introduce a family of implicit probabilistic integrators for initial value problems (IVPs), taking as a starting point the multistep Adams–Moulton method. The implicit construction allows for dynamic feedback from the forthcoming time-step, in contrast to previous probabilistic integrators, all of which are based on explicit methods. We begin with a concise survey of the rapidly expanding field of probabilistic ODE solvers. We then introduce our method, which builds on and adapts the work of Conrad et al. (2016) and Teymur et al. (2016), and provide a rigorous proof of its well-definedness and convergence. We discuss the problem of the calibration of such integrators and suggest one approach. We give an illustrative example highlighting the effect of the use of probabilistic integrators — including our new method — in the setting of parameter inference within an inverse problem.
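The paper's construction is a probabilistic multistep Adams–Moulton method; as a much simpler illustration of the underlying idea of a randomised implicit integrator, here is a minimal sketch of a backward-Euler step perturbed by step-size-scaled Gaussian noise, in the spirit of Conrad et al. (2016). All function names and parameter choices below are illustrative, and this is not the algorithm analysed in the paper.

```python
import numpy as np

def prob_backward_euler(f, y0, t0, t1, n, sigma, rng=None):
    """Toy randomised backward-Euler solver for the scalar IVP y' = f(t, y).

    Each implicit step is solved by fixed-point iteration and then perturbed
    by Gaussian noise whose scale shrinks with the step size, so that samples
    of the solver's output reflect discretisation uncertainty.
    """
    rng = np.random.default_rng() if rng is None else rng
    h = (t1 - t0) / n
    ts = t0 + h * np.arange(n + 1)
    ys = np.empty(n + 1)
    ys[0] = y0
    for k in range(n):
        y = ys[k]  # initial guess for the implicit equation
        for _ in range(50):  # fixed-point iteration for y = ys[k] + h f(t_{k+1}, y)
            y = ys[k] + h * f(ts[k + 1], y)
        # Perturb the accepted step; the h**1.5 scaling keeps the noise
        # subordinate to the local truncation error as h shrinks.
        ys[k + 1] = y + sigma * h ** 1.5 * rng.standard_normal()
    return ts, ys

# Ensemble of randomised solutions of y' = -y, y(0) = 1, on [0, 1]:
rng = np.random.default_rng(0)
samples = [prob_backward_euler(lambda t, y: -y, 1.0, 0.0, 1.0, 50,
                               sigma=0.5, rng=rng)[1][-1]
           for _ in range(200)]
# The spread of the samples around exp(-1) quantifies the numerical uncertainty.
```

The point of such a construction, as in the paper, is that the output is a distribution over trajectories rather than a single numerical answer, so downstream inference (e.g. in an inverse problem) can account for discretisation error.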

Published on Thursday 13 December 2018 at 12:00 UTC #publication #nips #neurips #prob-num

### Random Bayesian inverse problems in JUQ

The article “Random forward models and log-likelihoods in Bayesian inverse problems” by Han Cheng Lie, Aretha Teckentrup, and me has now appeared in its final form in the *SIAM/ASA Journal on Uncertainty Quantification*, volume 6, issue 4.
This paper quantifies the effect of approximating the likelihood in a Bayesian inverse problem by a random surrogate, as frequently happens in applications, and shows that the perturbed posterior distribution is close to the exact one in Hellinger distance.
The analysis covers general randomisation models, thereby expanding upon the earlier investigation of Stuart and Teckentrup (2017) in the Gaussian process setting.

H. C. Lie, T. J. Sullivan & A. L. Teckentrup. “Random forward models and log-likelihoods in Bayesian inverse problems.” *SIAM/ASA Journal on Uncertainty Quantification* **6**(4):1600–1629, 2018. doi:10.1137/18M1166523

**Abstract.** We consider the use of randomised forward models and log-likelihoods within the Bayesian approach to inverse problems. Such random approximations to the exact forward model or log-likelihood arise naturally when a computationally expensive model is approximated using a cheaper stochastic surrogate, as in Gaussian process emulation (kriging), or in the field of probabilistic numerical methods. We show that the Hellinger distance between the exact and approximate Bayesian posteriors is bounded by moments of the difference between the true and approximate log-likelihoods. Example applications of these stability results are given for randomised misfit models in large data applications and the probabilistic solution of ordinary differential equations.
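To make the flavour of this stability result concrete, here is a minimal, self-contained sketch (not taken from the paper): a one-dimensional Bayesian posterior on a grid, its counterpart under a randomly perturbed log-likelihood standing in for a stochastic surrogate, and the Hellinger distance between the two. The function names, grid, and perturbation model are all illustrative assumptions.

```python
import numpy as np

def normalise(dens, dx):
    """Normalise an unnormalised density sampled on a uniform grid."""
    return dens / (dens.sum() * dx)

def hellinger(p, q, dx):
    """Hellinger distance between two densities on the same uniform grid."""
    return np.sqrt(0.5 * ((np.sqrt(p) - np.sqrt(q)) ** 2).sum() * dx)

grid = np.linspace(-5.0, 5.0, 2001)
dx = grid[1] - grid[0]
prior = np.exp(-0.5 * grid ** 2)            # N(0, 1) prior, up to a constant
exact_ll = -0.5 * (grid - 1.0) ** 2         # Gaussian log-likelihood for a datum at 1.0

# Random perturbation of the log-likelihood, mimicking a stochastic surrogate:
rng = np.random.default_rng(1)
noisy_ll = exact_ll + 0.01 * rng.standard_normal(grid.shape)

p_exact = normalise(prior * np.exp(exact_ll), dx)
p_approx = normalise(prior * np.exp(noisy_ll), dx)

d = hellinger(p_exact, p_approx, dx)
# d is small, on the order of the size of the log-likelihood perturbation,
# which is the qualitative behaviour the paper's bounds guarantee.
```

A perturbation of the log-likelihood of size 0.01 here yields a Hellinger distance of comparable (small) magnitude; the paper makes this quantitative, bounding the distance by moments of the log-likelihood error.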

Published on Monday 10 December 2018 at 12:00 UTC #publication #inverse-problems #juq #prob-num

### ProbNum 2018

Next week Chris Oates and I will host the SAMSI–Lloyds–Turing Workshop on Probabilistic Numerical Methods at the Alan Turing Institute, London, housed in the British Library. The workshop is being held as part of the SAMSI Program on Quasi-Monte Carlo and High-Dimensional Sampling Methods for Applied Mathematics.

The accuracy and robustness of numerical predictions that are based on mathematical models depend critically upon the construction of accurate discrete approximations to key quantities of interest. The exact error due to approximation will be unknown to the analyst, but worst-case upper bounds can often be obtained. This workshop aims, instead, to further the development of Probabilistic Numerical Methods, which provide the analyst with a richer, probabilistic quantification of the numerical error in their output, thus providing better tools for reliable statistical inference.

This workshop has been made possible by the generous support of SAMSI, the Alan Turing Institute, and the Lloyd's Register Foundation Data-Centric Engineering Programme.

Published on Friday 6 April 2018 at 07:00 UTC #event #prob-num #samsi