### Preprint: Weak and strong modes

Han Cheng Lie and I have just uploaded a preprint of our latest paper, “Equivalence of weak and strong modes of measures on topological vector spaces” to the arXiv.
This addresses a natural question in the theory of modes (or maximum a posteriori estimators, in the case of posterior measures for Bayesian inverse problems) in infinite-dimensional spaces, which are defined either *strongly* (à la Dashti et al. (2013), via a global maximisation) or *weakly* (à la Helin & Burger (2015), via a dense subspace):
when are strong and weak modes equivalent?

**Abstract.** Modes of a probability measure on an infinite-dimensional Banach space \(X\) are often defined by maximising the small-radius limit of the ratio of measures of norm balls. Helin and Burger weakened the definition of such modes by considering only balls with centres in proper subspaces of \(X\), and posed the question of when this restricted notion coincides with the unrestricted one. We generalise these definitions to modes of arbitrary measures on topological vector spaces, defined by arbitrary bounded, convex neighbourhoods of the origin. We show that a coincident limiting ratios condition is necessary and sufficient for the equivalence of these two types of modes, and that this condition is satisfied in a wide range of topological vector spaces.
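In rough paraphrase (not a verbatim quotation of either paper), with \(B_{r}(u)\) the norm ball of radius \(r\) centred at \(u\) and \(E \subset X\) a dense subspace, the two notions being compared are:

\[
\lim_{r \to 0} \frac{\mu(B_{r}(u^{\star}))}{\sup_{u \in X} \mu(B_{r}(u))} = 1 \qquad \text{(strong mode, Dashti et al.),}
\]

\[
\limsup_{r \to 0} \frac{\mu(B_{r}(u^{\star} + h))}{\mu(B_{r}(u^{\star}))} \leq 1 \quad \text{for all } h \in E \qquad \text{(weak mode, Helin \& Burger).}
\]

A strong mode is always a weak mode; the question is when the converse holds.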

Published on Wednesday 9 August 2017 at 05:00 UTC #publication #preprint #inverse-problems

### Heavy-tailed stable priors in Inverse Problems and Imaging

The final version of “Well-posed Bayesian inverse problems and heavy-tailed stable quasi-Banach space priors” has now been published online in *Inverse Problems and Imaging*; the print version will be available in October.

Published on Wednesday 19 July 2017 at 15:30 UTC #publication #inverse-problems

### Inverse Problems Summer School at the Alan Turing Institute

From 29 August to 1 September 2017, the Alan Turing Institute will host a summer school on Mathematical Aspects of Inverse Problems organised by Claudia Schillings (Mannheim) and Aretha Teckentrup (Edinburgh and Alan Turing Institute), two of my former colleagues at the University of Warwick. The invited lecturers are:

- Masoumeh Dashti (Sussex): Bayesian Approach to Inverse Problems
- Michela Ottobre (Maxwell Institute, Edinburgh): Hamiltonian Monte Carlo
- Carola-Bibiane Schönlieb and Clarice Poon (Cambridge): A Comparison of Variational Methods and Deep Neural Networks for Inverse Problems
- Alison Fowler (Reading): The Value of Observations for Numerical Weather Prediction

Published on Friday 23 June 2017 at 09:00 UTC #event #inverse-problems

### Ingmar Schuster Joins the UQ Group

It is a pleasure to announce that Ingmar Schuster will join the UQ research group as a postdoctoral researcher with effect from 15 June 2017. He will be working on project A06 “Enabling Bayesian uncertainty quantification for multiscale systems and network models via mutual likelihood-informed dimension reduction” as part of SFB 1114 Scaling Cascades in Complex Systems.

Published on Thursday 15 June 2017 at 08:00 UTC #group

### Preprint: Computing with dense kernel matrices at near-linear cost

Florian Schäfer, Houman Owhadi and I have just uploaded a preprint of our latest paper, “Compression, inversion, and approximate PCA of dense kernel matrices at near-linear computational complexity” to the arXiv. This paper builds upon the probabilistic-numerical ideas of “gamblets” (elementary gambles on the solution of a PDE) introduced by Owhadi (2016) to provide near-linear-cost \(\varepsilon\)-approximate compression, inversion and principal component analysis of dense kernel matrices, the entries of which come from Green's functions of suitable differential operators.

**Abstract.** Dense kernel matrices \(\Theta \in \mathbb{R}^{N \times N}\) obtained from point evaluations of a covariance function \(G\) at locations \(\{x_{i}\}_{1 \leq i \leq N}\) arise in statistics, machine learning, and numerical analysis. For covariance functions that are Green's functions of elliptic boundary value problems and approximately equally spaced sampling points, we show how to identify a subset \(S \subset \{ 1,\dots, N \} \times \{ 1,\dots,N \}\), with \(\#S = O(N \log(N)\log^{d}(N/\varepsilon))\), such that the zero fill-in block-incomplete Cholesky decomposition of \(\Theta_{i,j} 1_{(i,j) \in S}\) is an \(\varepsilon\)-approximation of \(\Theta\). This block-factorisation can provably be obtained in \(O(N \log^{2}(N)(\log(1/\varepsilon)+\log^{2}(N))^{4d+1})\) complexity in time. Numerical evidence further suggests that element-wise Cholesky decomposition with the same ordering constitutes an \(O(N \log^{2}(N) \log^{2d}(N/\varepsilon))\) solver. The algorithm only needs to know the spatial configuration of the \(x_{i}\) and does not require an analytic representation of \(G\). Furthermore, an approximate PCA with optimal rate of convergence in the operator norm can be easily read off from this decomposition. Hence, by using only subsampling and the incomplete Cholesky decomposition, we obtain at nearly linear complexity the compression, inversion and approximate PCA of a large class of covariance matrices. By inverting the order of the Cholesky decomposition we also obtain a near-linear-time solver for elliptic PDEs.
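To illustrate the zero fill-in idea in miniature (this is *not* the paper's algorithm — there, the sparsity set \(S\) and the elimination ordering come from a multiresolution construction; here a simple banded pattern and an exponential kernel in 1D stand in for them), a minimal NumPy sketch:

```python
import numpy as np

def incomplete_cholesky(theta, pattern):
    """Zero fill-in incomplete Cholesky: factor entries outside
    `pattern` are forced to zero, so no new nonzeros are created."""
    n = theta.shape[0]
    L = np.zeros_like(theta)
    for j in range(n):
        for i in range(j, n):
            if not pattern[i, j]:
                continue
            s = theta[i, j] - L[i, :j] @ L[j, :j]
            L[i, j] = np.sqrt(s) if i == j else s / L[j, j]
    return L

# Dense kernel matrix from an exponential covariance on sorted 1D
# points; its Cholesky factor has rapidly decaying off-diagonals.
n = 50
x = np.linspace(0.0, 1.0, n)
theta = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.02)

# Full pattern recovers the exact Cholesky factor ...
full = np.ones_like(theta, dtype=bool)
L_full = incomplete_cholesky(theta, full)
err_full = np.linalg.norm(L_full @ L_full.T - theta)

# ... while a narrow band (a stand-in for the set S) already gives
# a good approximation at a fraction of the nonzeros.
band = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :]) <= 3
L_band = incomplete_cholesky(theta, band)
err_band = np.linalg.norm(L_band @ L_band.T - theta) / np.linalg.norm(theta)
```

The point of the paper is that, for kernels arising from Green's functions of elliptic operators, a pattern of only \(O(N \log(N)\log^{d}(N/\varepsilon))\) entries suffices in any dimension, with a provable \(\varepsilon\) error bound.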

Published on Tuesday 13 June 2017 at 07:00 UTC #publication #preprint #prob-num