Tim Sullivan



A modern retrospective on probabilistic numerics

Preprint: Probabilistic numerics retrospective

Chris Oates and I have just uploaded a preprint of our paper “A modern retrospective on probabilistic numerics” to the arXiv.

Abstract. This article attempts to cast the emergence of probabilistic numerics as a mathematical-statistical research field within its historical context and to explore how its gradual development can be related to modern formal treatments and applications. We highlight in particular the parallel contributions of Sul'din and Larkin in the 1960s and how their pioneering early ideas have reached a degree of maturity in the intervening period, mediated by paradigms such as average-case analysis and information-based complexity. We provide a subjective assessment of the state of research in probabilistic numerics and highlight some difficulties to be addressed by future work.

Published on Tuesday 15 January 2019 at 12:00 UTC #preprint #prob-num #oates

Optimality criteria for probabilistic numerical methods

Preprint: Optimality of probabilistic numerical methods

Chris Oates, Jon Cockayne, Dennis Prangle, Mark Girolami, and I have just uploaded a preprint of our paper “Optimality criteria for probabilistic numerical methods” to the arXiv.

Abstract. It is well understood that Bayesian decision theory and average case analysis are essentially identical. However, if one is interested in performing uncertainty quantification for a numerical task, it can be argued that the decision-theoretic framework is neither appropriate nor sufficient. To this end, we consider an alternative optimality criterion from Bayesian experimental design and study its implied optimal information in the numerical context. This information is demonstrated to differ, in general, from the information that would be used in an average-case-optimal numerical method. The explicit connection to Bayesian experimental design suggests several distinct regimes in which optimal probabilistic numerical methods can be developed.
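The "essentially identical" claim at the start of the abstract can be made concrete: in average-case analysis, a numerical method is scored by its expected loss under a prior on the unknown, which is precisely a Bayes risk. A schematic version of this identity, in our own illustrative notation (not taken from the paper), with \(\mu\) a prior on a function class \(\mathcal{F}\), \(I\) the information operator (e.g. point evaluations), \(Q\) the quantity of interest, and \(L\) a loss:

```latex
% Average-case error of a method A acting on information I(f) about
% an unknown f ~ mu, targeting a quantity of interest Q(f):
\[
  e_{\mathrm{avg}}(A)
  = \int_{\mathcal{F}} L\bigl( A(I(f)), Q(f) \bigr) \, \mu(\mathrm{d}f)
  = \mathbb{E}_{f \sim \mu}\bigl[ L\bigl( A(I(f)), Q(f) \bigr) \bigr] .
\]
% The right-hand side is exactly the Bayes risk of A viewed as a
% decision rule, so average-case-optimal methods are Bayes rules.
```

The paper's point is then that a Bayesian-experimental-design criterion for choosing the information \(I\) can, in general, select different information than the average-case-optimal choice.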

Published on Tuesday 15 January 2019 at 11:00 UTC #preprint #prob-num #oates #cockayne #prangle #girolami

Equivalence of weak and strong modes of measures on topological vector spaces

Preprint: Weak and strong modes

Han Cheng Lie and I have just uploaded a revised preprint of our paper, “Equivalence of weak and strong modes of measures on topological vector spaces”, to the arXiv. This addresses a natural question in the theory of modes (or maximum a posteriori estimators, in the case of the posterior measure of a Bayesian inverse problem) in an infinite-dimensional space \(X\). Such modes can be defined either strongly (à la Dashti et al. (2013), via a global maximisation) or weakly (à la Helin & Burger (2015), via a dense subspace \(E \subset X\)). The question is: when are strong and weak modes equivalent? The answer turns out to be rather subtle: under reasonable uniformity conditions, the two kinds of modes are indeed equivalent, but finite-dimensional counterexamples exist when the uniformity conditions fail.

Abstract. A strong mode of a probability measure on a normed space \(X\) can be defined as a point \(u \in X\) such that the mass of the ball centred at \(u\) uniformly dominates the mass of all other balls in the small-radius limit. Helin and Burger weakened this definition by considering only pairwise comparisons with balls whose centres differ by vectors in a dense, proper linear subspace \(E\) of \(X\), and posed the question of when these two types of modes coincide. We show that, in a more general setting of metrisable vector spaces equipped with non-atomic measures that are finite on bounded sets, the density of \(E\) and a uniformity condition suffice for the equivalence of these two types of modes. We accomplish this by introducing a new, intermediate type of mode. We also show that these modes can be inequivalent if the uniformity condition fails. Our results shed light on the relationships among various notions of maximum a posteriori estimator in non-parametric Bayesian inference.
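For readers unfamiliar with the definitions, the two notions compared above can be written schematically as follows. The notation is ours, following the description in the abstract: \(\mu\) is the measure and \(B_r(u)\) the ball of radius \(r\) centred at \(u\); this is one common formulation, and the paper should be consulted for the precise versions used there.

```latex
% Strong mode (in the spirit of Dashti et al., 2013): u is a strong
% mode of mu if the ball at u asymptotically dominates all balls:
\[
  \lim_{r \downarrow 0}
  \frac{\mu\bigl(B_r(u)\bigr)}{\sup_{v \in X} \mu\bigl(B_r(v)\bigr)} = 1 .
\]
% Weak mode (in the spirit of Helin & Burger, 2015): only pairwise
% comparisons along a dense subspace E of X are required, i.e. for
% every shift h in E,
\[
  \limsup_{r \downarrow 0}
  \frac{\mu\bigl(B_r(u + h)\bigr)}{\mu\bigl(B_r(u)\bigr)} \le 1 .
\]
```

Since the weak definition quantifies over the smaller set of comparisons \(u + h\) with \(h \in E\), every strong mode is weak; the substance of the paper is the converse direction and the uniformity condition under which it holds.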

Published on Monday 9 July 2018 at 08:00 UTC #publication #preprint #inverse-problems #modes #map-estimator #lie

Random forward models and log-likelihoods in Bayesian inverse problems

Preprint: Random Bayesian inverse problems

Han Cheng Lie, Aretha Teckentrup, and I have just uploaded a preprint of our latest article, “Random forward models and log-likelihoods in Bayesian inverse problems”, to the arXiv. This paper considers the effect of approximating the likelihood in a Bayesian inverse problem by a random surrogate, as frequently happens in applications, with the aim of showing that the perturbed posterior distribution is close to the exact one in a suitable sense. We consider general randomisation models, and thereby expand upon the previous investigations of Stuart and Teckentrup (2017) in the Gaussian setting.

Abstract. We consider the use of randomised forward models and log-likelihoods within the Bayesian approach to inverse problems. Such random approximations to the exact forward model or log-likelihood arise naturally when a computationally expensive model is approximated using a cheaper stochastic surrogate, as in Gaussian process emulation (kriging), or in the field of probabilistic numerical methods. We show that the Hellinger distance between the exact and approximate Bayesian posteriors is bounded by moments of the difference between the true and approximate log-likelihoods. Example applications of these stability results are given for randomised misfit models in large data applications and the probabilistic solution of ordinary differential equations.
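The flavour of the stability results can be sketched as follows. This is a schematic form in our own notation, with \(\Phi\) the exact negative log-likelihood, \(\Phi^N\) its random approximation, \(\mu\) and \(\mu^N\) the corresponding posteriors, \(\mu_0\) the prior, and \(C\) a problem-dependent constant; see the paper for the precise hypotheses, exponents, and the several variants actually proved.

```latex
% Schematic stability bound: the Hellinger distance between the exact
% posterior and the randomised posterior is controlled by moments of
% the log-likelihood error, with the expectation taken over the
% randomness in the surrogate:
\[
  d_{\mathrm{H}}\bigl( \mu, \mu^N \bigr)
  \le C \left( \int_X \mathbb{E}
  \bigl| \Phi(u) - \Phi^N(u) \bigr|^2 \, \mu_0(\mathrm{d}u)
  \right)^{1/2} .
\]
```

Bounds of this shape are what licence replacing an expensive forward model by a cheap stochastic emulator: if the surrogate's log-likelihood error has small moments, the resulting posterior is provably close to the exact one.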

Published on Tuesday 19 December 2017 at 08:30 UTC #publication #preprint #bayesian #inverse-problems #lie #teckentrup

Exact active subspace Metropolis–Hastings, with applications to the Lorenz-96 system

Preprint: Active subspace Metropolis-Hastings

Ingmar Schuster, Paul Constantine, and I have just uploaded a preprint of our latest article, “Exact active subspace Metropolis–Hastings, with applications to the Lorenz-96 system”, to the arXiv. This paper reports on our first investigations into the acceleration of Markov chain Monte Carlo methods using active subspaces as compared to other adaptivity techniques, and is supported by the DFG through SFB 1114 Scaling Cascades in Complex Systems.

Abstract. We consider the application of active subspaces to inform a Metropolis–Hastings algorithm, thereby aggressively reducing the computational dimension of the sampling problem. We show that the original formulation, as proposed by Constantine, Kent, and Bui-Thanh (SIAM J. Sci. Comput., 38(5):A2779–A2805, 2016), possesses asymptotic bias. Using pseudo-marginal arguments, we develop an asymptotically unbiased variant. Our algorithm is applied to a synthetic multimodal target distribution as well as a Bayesian formulation of a parameter inference problem for a Lorenz-96 system.
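The pseudo-marginal argument invoked in the abstract rests on a general trick: if an intractable target density is replaced inside the Metropolis–Hastings acceptance ratio by a non-negative unbiased estimator, and the noisy estimate at the current state is recycled rather than recomputed, the chain still targets the exact distribution. A minimal generic sketch of that mechanism on a toy Gaussian target (entirely our own illustration, not the authors' active-subspace algorithm):

```python
import numpy as np

def pseudo_marginal_mh(log_density_estimate, x0, n_iters, step=0.8, seed=None):
    """Pseudo-marginal Metropolis-Hastings with a symmetric random-walk proposal.

    log_density_estimate(x, rng) must return the log of a non-negative,
    unbiased estimate of the (unnormalised) target density at x.  The
    noisy log-density of the *current* state is recycled across
    iterations, which is what makes the sampler exact.
    """
    rng = np.random.default_rng(seed)
    x, logp = x0, log_density_estimate(x0, rng)
    samples = np.empty(n_iters)
    for i in range(n_iters):
        x_new = x + step * rng.standard_normal()
        logp_new = log_density_estimate(x_new, rng)   # fresh estimate at proposal
        if np.log(rng.uniform()) < logp_new - logp:   # accept/reject on estimates
            x, logp = x_new, logp_new
        samples[i] = x
    return samples

# Toy target: standard normal, corrupted by multiplicative noise
# W = exp(N(-s^2/2, s^2)), so that E[W] = 1 and the density estimate
# exp(-x^2/2) * W remains unbiased.
def noisy_log_density(x, rng, s=0.3):
    return -0.5 * x**2 + (s * rng.standard_normal() - 0.5 * s**2)

samples = pseudo_marginal_mh(noisy_log_density, 0.0, 20000, seed=0)
```

Because the density estimator is non-negative and unbiased, the invariant distribution is still the exact \(N(0,1)\) target, so the long-run sample mean and variance match 0 and 1 up to Monte Carlo error. The same recycling idea underlies the correction of the asymptotic bias identified in the original active-subspace formulation.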

Published on Friday 8 December 2017 at 08:00 UTC #publication #preprint #mcmc #sfb1114 #schuster #constantine