### Random Bayesian inverse problems in JUQ

The article “Random forward models and log-likelihoods in Bayesian inverse problems” by Han Cheng Lie, Aretha Teckentrup, and myself has now appeared in its final form in the *SIAM/ASA Journal on Uncertainty Quantification*, volume 6, issue 4.
This paper considers the effect of approximating the likelihood in a Bayesian inverse problem by a random surrogate, as frequently happens in applications, with the aim of showing that the perturbed posterior distribution is close to the exact one in a suitable sense.
This article considers general randomisation models, and thereby expands upon the previous investigations of Stuart and Teckentrup (2017) in the Gaussian setting.

H. C. Lie, T. J. Sullivan & A. L. Teckentrup. “Random forward models and log-likelihoods in Bayesian inverse problems.” *SIAM/ASA Journal on Uncertainty Quantification* **6**(4):1600–1629, 2018. doi:10.1137/18M1166523

**Abstract.** We consider the use of randomised forward models and log-likelihoods within the Bayesian approach to inverse problems. Such random approximations to the exact forward model or log-likelihood arise naturally when a computationally expensive model is approximated using a cheaper stochastic surrogate, as in Gaussian process emulation (kriging), or in the field of probabilistic numerical methods. We show that the Hellinger distance between the exact and approximate Bayesian posteriors is bounded by moments of the difference between the true and approximate log-likelihoods. Example applications of these stability results are given for randomised misfit models in large data applications and the probabilistic solution of ordinary differential equations.
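
Schematically, and glossing over the precise hypotheses (which involve moment conditions on the randomisation and on the prior; see the paper for the exact statements), the stability results have the following shape:

\[
d_{\mathrm{H}}\bigl(\mu, \mu^{N}\bigr) \leq C \, \Bigl( \mathbb{E}\bigl[ \lvert \Phi(u) - \Phi^{N}(u) \rvert^{2} \bigr] \Bigr)^{1/2},
\]

where \(\mu\) and \(\mu^{N}\) denote the exact and approximate posteriors, \(\Phi\) and \(\Phi^{N}\) the exact and randomised negative log-likelihoods, and \(C\) is a constant independent of the approximation. The symbols here are for orientation only; the paper proves several variants, with expectations taken over both the randomisation and the prior.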

Published on Monday 10 December 2018 at 12:00 UTC #publication #inverse-problems #juq #prob-num

### Weak and strong modes in Inverse Problems

The paper “Equivalence of weak and strong modes of measures on topological vector spaces” by Han Cheng Lie and myself has now appeared in *Inverse Problems*.
This paper addresses a natural question in the theory of modes (or maximum a posteriori estimators, in the case of the posterior measure of a Bayesian inverse problem) in an infinite-dimensional space \(X\).
Such modes can be defined either *strongly* (à la Dashti et al. (2013), via a global maximisation) or *weakly* (à la Helin & Burger (2015), via a dense subspace \(E \subset X\)).
The question is, when are strong and weak modes equivalent?
The answer turns out to be rather subtle:
under reasonable uniformity conditions, the two kinds of modes are indeed equivalent, but finite-dimensional counterexamples exist when the uniformity conditions fail.

H. C. Lie & T. J. Sullivan. “Equivalence of weak and strong modes of measures on topological vector spaces.” *Inverse Problems* **34**(11):115013, 2018. doi:10.1088/1361-6420/aadef2

(See also H. C. Lie & T. J. Sullivan. “Erratum: Equivalence of weak and strong modes of measures on topological vector spaces (2018 *Inverse Problems* **34** 115013).” *Inverse Problems* **34**(12):129601, 2018. doi:10.1088/1361-6420/aae55b)

**Abstract.** A strong mode of a probability measure on a normed space \(X\) can be defined as a point \(u \in X\) such that the mass of the ball centred at \(u\) uniformly dominates the mass of all other balls in the small-radius limit. Helin and Burger weakened this definition by considering only pairwise comparisons with balls whose centres differ by vectors in a dense, proper linear subspace \(E\) of \(X\), and posed the question of when these two types of modes coincide. We show that, in a more general setting of metrisable vector spaces equipped with non-atomic measures that are finite on bounded sets, the density of \(E\) and a uniformity condition suffice for the equivalence of these two types of modes. We accomplish this by introducing a new, intermediate type of mode. We also show that these modes can be inequivalent if the uniformity condition fails. Our results shed light on the relationships among various notions of maximum a posteriori estimator in non-parametric Bayesian inference.
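
For orientation, the two definitions can be sketched as follows (this is a paraphrase; see the paper for the exact setting and conventions). A point \(u \in X\) is a *strong mode* of \(\mu\) if

\[
\lim_{r \to 0} \frac{\mu(B_{r}(u))}{\sup_{v \in X} \mu(B_{r}(v))} = 1,
\]

and a *weak mode* with respect to the subspace \(E \subset X\) if, for every shift \(h \in E\),

\[
\limsup_{r \to 0} \frac{\mu(B_{r}(u + h))}{\mu(B_{r}(u))} \leq 1,
\]

i.e. translating the ball at \(u\) by any admissible shift does not asymptotically increase its mass.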

Published on Saturday 22 September 2018 at 12:00 UTC #publication #inverse-problems #modes #map-estimator

### Summer School / Workshop on Computational Mathematics and Data Science

Next month, 22–24 August 2018, along with Matt Dunlop (Helsinki), Tapio Helin (Helsinki), and Simo Särkkä (Aalto), I will be giving guest lectures at a Summer School / Workshop on Computational Mathematics and Data Science at the University of Oulu, Finland.

While the other lecturers will treat aspects such as machine learning using deep Gaussian processes, filtering, and MAP estimation, my lectures will tackle the fundamentals of the Bayesian approach to inverse problems in the function-space context, as increasingly demanded by modern applications.

**“Well-posedness of Bayesian inverse problems in function spaces: analysis and algorithms”**

The basic formalism of the Bayesian method is easily stated, and appears in every introductory probability and statistics course: the posterior probability is proportional to the prior probability times the likelihood. However, for inference problems in high or even infinite dimension, the Bayesian formula must be carefully formulated and its stability properties mathematically analysed. The paradigm advocated by Andrew Stuart and collaborators since 2010 is that one should study the infinite-dimensional Bayesian inverse problem directly and delay discretisation until the last moment. These lectures will study the role of various choices of prior distribution and likelihood and how they lead to well-posed or ill-posed Bayesian inverse problems. If time permits, we will also consider the implications for algorithms, and how Bayesian posteriors are summarised (e.g. by maximum a posteriori estimators).
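
In the function-space formulation (following Stuart's 2010 *Acta Numerica* survey), “posterior \(\propto\) prior \(\times\) likelihood” is made precise as a Radon–Nikodym derivative of the posterior \(\mu^{y}\) with respect to the prior \(\mu_{0}\):

\[
\frac{\mathrm{d}\mu^{y}}{\mathrm{d}\mu_{0}}(u) = \frac{\exp(-\Phi(u; y))}{Z(y)}, \qquad Z(y) = \int_{X} \exp(-\Phi(u; y)) \, \mathrm{d}\mu_{0}(u),
\]

where \(\Phi\) is the negative log-likelihood (misfit). Well-posedness then amounts to showing that \(Z(y) > 0\) and that \(\mu^{y}\) depends continuously on the data \(y\) in a suitable metric on probability measures, e.g. the Hellinger distance.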

Published on Saturday 21 July 2018 at 07:30 UTC #event #inverse-problems

### Preprint: Weak and strong modes

Han Cheng Lie and I have just uploaded a revised preprint of our paper, “Equivalence of weak and strong modes of measures on topological vector spaces”, to the arXiv.
This addresses a natural question in the theory of modes (or maximum a posteriori estimators, in the case of the posterior measure of a Bayesian inverse problem) in an infinite-dimensional space \(X\).
Such modes can be defined either *strongly* (à la Dashti et al. (2013), via a global maximisation) or *weakly* (à la Helin & Burger (2015), via a dense subspace \(E \subset X\)).
The question is, when are strong and weak modes equivalent?
The answer turns out to be rather subtle:
under reasonable uniformity conditions, the two kinds of modes are indeed equivalent, but finite-dimensional counterexamples exist when the uniformity conditions fail.

**Abstract.** A strong mode of a probability measure on a normed space \(X\) can be defined as a point \(u \in X\) such that the mass of the ball centred at \(u\) uniformly dominates the mass of all other balls in the small-radius limit. Helin and Burger weakened this definition by considering only pairwise comparisons with balls whose centres differ by vectors in a dense, proper linear subspace \(E\) of \(X\), and posed the question of when these two types of modes coincide. We show that, in a more general setting of metrisable vector spaces equipped with non-atomic measures that are finite on bounded sets, the density of \(E\) and a uniformity condition suffice for the equivalence of these two types of modes. We accomplish this by introducing a new, intermediate type of mode. We also show that these modes can be inequivalent if the uniformity condition fails. Our results shed light on the relationships among various notions of maximum a posteriori estimator in non-parametric Bayesian inference.

Published on Monday 9 July 2018 at 08:00 UTC #publication #preprint #inverse-problems #modes #map-estimator

### Preprint: Random Bayesian inverse problems

Han Cheng Lie, Aretha Teckentrup, and I have just uploaded a preprint of our latest article, “Random forward models and log-likelihoods in Bayesian inverse problems”, to the arXiv. This paper considers the effect of approximating the likelihood in a Bayesian inverse problem by a random surrogate, as frequently happens in applications, with the aim of showing that the perturbed posterior distribution is close to the exact one in a suitable sense. This article considers general randomisation models, and thereby expands upon the previous investigations of Stuart and Teckentrup (2017) in the Gaussian setting.

**Abstract.** We consider the use of randomised forward models and log-likelihoods within the Bayesian approach to inverse problems. Such random approximations to the exact forward model or log-likelihood arise naturally when a computationally expensive model is approximated using a cheaper stochastic surrogate, as in Gaussian process emulation (kriging), or in the field of probabilistic numerical methods. We show that the Hellinger distance between the exact and approximate Bayesian posteriors is bounded by moments of the difference between the true and approximate log-likelihoods. Example applications of these stability results are given for randomised misfit models in large data applications and the probabilistic solution of ordinary differential equations.
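
As a toy numerical illustration of the phenomenon the abstract describes (entirely hypothetical, not taken from the paper): a one-dimensional Gaussian inverse problem on a grid, in which the exact log-likelihood is perturbed by small pointwise noise, as a crude stand-in for a random surrogate. The Hellinger distance between the two posteriors remains of the order of the perturbation.

```python
import numpy as np

rng = np.random.default_rng(0)
u = np.linspace(-5.0, 5.0, 2001)   # parameter grid
du = u[1] - u[0]
prior = np.exp(-0.5 * u**2)        # unnormalised N(0, 1) prior density

def posterior(log_lik):
    """Normalised posterior density on the grid (Riemann sum)."""
    p = prior * np.exp(log_lik)
    return p / (p.sum() * du)

y = 1.0                                  # observed datum; forward model G(u) = u
exact_ll = -0.5 * (y - u)**2 / 0.1       # exact log-likelihood, noise variance 0.1
noisy_ll = exact_ll + 0.01 * rng.standard_normal(u.size)  # perturbed surrogate

p_exact = posterior(exact_ll)
p_noisy = posterior(noisy_ll)

# Hellinger distance: d_H^2 = (1/2) * integral of (sqrt(p) - sqrt(q))^2
d_H = np.sqrt(0.5 * ((np.sqrt(p_exact) - np.sqrt(p_noisy))**2).sum() * du)
print(f"Hellinger distance: {d_H:.4f}")  # small, comparable to the perturbation size
```

The perturbation here is i.i.d. grid-pointwise, which is far simpler than the Gaussian-process and probabilistic-numerics surrogates treated in the paper, but it makes the moment-bound intuition concrete: shrinking the 0.01 noise scale shrinks the Hellinger distance proportionally.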

Published on Tuesday 19 December 2017 at 08:30 UTC #publication #preprint #inverse-problems