Tim Sullivan

Junior Professor in Applied Mathematics:
Risk and Uncertainty Quantification

The Alan Turing Institute

Inverse Problems Summer School at the Alan Turing Institute

From 29 August–1 September 2017, the Alan Turing Institute will host a summer school on Mathematical Aspects of Inverse Problems organised by Claudia Schillings (Mannheim) and Aretha Teckentrup (Edinburgh and Alan Turing Institute), two of my former colleagues at the University of Warwick. The invited lecturers are:

Published on Friday 23 June 2017 at 09:00 UTC #event #inverse-problems

Ingmar Schuster

Ingmar Schuster Joins the UQ Group

It is a pleasure to announce that Ingmar Schuster will join the UQ research group as a postdoctoral researcher with effect from 15 June 2017. He will be working on project A06 “Enabling Bayesian uncertainty quantification for multiscale systems and network models via mutual likelihood-informed dimension reduction” as part of SFB 1114 Scaling Cascades in Complex Systems.

Published on Thursday 15 June 2017 at 08:00 UTC #group

Compression, inversion, and approximate PCA of dense kernel matrices at near-linear computational complexity

Preprint: Computing with dense kernel matrices at near-linear cost

Florian Schäfer, Houman Owhadi and I have just uploaded a preprint of our latest paper, “Compression, inversion, and approximate PCA of dense kernel matrices at near-linear computational complexity”, to the arXiv. This paper builds upon the probabilistic-numerical ideas of “gamblets” (elementary gambles upon the solution of a PDE) introduced by Owhadi (2016) to provide near-linear-cost \(\varepsilon\)-approximate compression, inversion, and principal component analysis of dense kernel matrices, the entries of which come from Green's functions of suitable differential operators.

Abstract. Dense kernel matrices \(\Theta \in \mathbb{R}^{N \times N}\) obtained from point evaluations of a covariance function \(G\) at locations \(\{x_{i}\}_{1 \leq i \leq N}\) arise in statistics, machine learning, and numerical analysis. For covariance functions that are Green's functions of elliptic boundary value problems and approximately equally spaced sampling points, we show how to identify a subset \(S \subset \{ 1,\dots, N \} \times \{ 1,\dots,N \}\), with \(\#S = O(N \log(N)\log^{d}(N/\varepsilon))\), such that the zero fill-in block-incomplete Cholesky decomposition of \(\Theta_{i,j} 1_{(i,j) \in S}\) is an \(\varepsilon\)-approximation of \(\Theta\). This block-factorisation can provably be obtained in \(O(N \log^{2}(N)(\log(1/\varepsilon)+\log^{2}(N))^{4d+1})\) complexity in time. Numerical evidence further suggests that element-wise Cholesky decomposition with the same ordering constitutes an \(O(N \log^{2}(N) \log^{2d}(N/\varepsilon))\) solver. The algorithm only needs to know the spatial configuration of the \(x_{i}\) and does not require an analytic representation of \(G\). Furthermore, an approximate PCA with optimal rate of convergence in the operator norm can be easily read off from this decomposition. Hence, by using only subsampling and the incomplete Cholesky decomposition, we obtain at nearly linear complexity the compression, inversion and approximate PCA of a large class of covariance matrices. By inverting the order of the Cholesky decomposition we also obtain a near-linear-time solver for elliptic PDEs.

Published on Tuesday 13 June 2017 at 07:00 UTC #publication #preprint #prob-num

Probabilistic numerical methods for PDE-constrained Bayesian inverse problems

Probabilistic numerics for PDE-constrained inverse problems in MaxEnt

Jon Cockayne, Chris Oates, Mark Girolami and I have just had our paper “Probabilistic numerical methods for PDE-constrained Bayesian inverse problems” published in the Proceedings of the 36th International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering. This paper complements our more extensive work “Probabilistic meshless methods for partial differential equations and Bayesian inverse problems” and gives a more concise presentation of the main ideas, aimed at a general audience.

J. Cockayne, C. Oates, T. J. Sullivan & M. Girolami. “Probabilistic Numerical Methods for PDE-constrained Bayesian Inverse Problems” in Proceedings of the 36th International Workshop on Bayesian Inference and Maximum Entropy Methods in Science and Engineering, ed. G. Verdoolaege. AIP Conference Proceedings 1853:060001-1–060001-8, 2017. doi:10.1063/1.4985359

Published on Friday 9 June 2017 at 11:40 UTC #publication #prob-num #inverse-problems

Hanne Kekkonen

UQ Talks: Hanne Kekkonen

In two weeks, Hanne Kekkonen (University of Warwick) will give a talk on “Large noise in variational regularisation”.

Time and Place. Monday 12 June 2017, 11:00–12:00, ZIB Seminar Room 2006, Zuse Institute Berlin, Takustraße 7, 14195 Berlin

Abstract. We consider variational regularisation methods for inverse problems with large noise, which is in general unbounded in the image space of the forward operator. We introduce a Banach space setting that allows us to define a reasonable notion of solutions for more general noise in a larger space, provided one has sufficient mapping properties of the forward operator. As an example we study the particularly important cases of one- and p-homogeneous regularisation functionals. As a natural further step we study stochastic noise models, and in particular white noise, for which we derive error estimates in terms of the expectation of the Bregman distance. As an example we study the total variation prior. This is joint work with Martin Burger and Tapio Helin.
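As context for the talk, here is a minimal finite-dimensional sketch of variational (here Tikhonov, i.e. 2-homogeneous) regularisation, in which a quadratic penalty tames the noise amplification of a smoothing forward operator. The operator, noise level, and regularisation parameter are illustrative choices, not taken from the talk.

```python
import numpy as np

# Minimal sketch of variational regularisation in R^n: minimise
# ||A u - y||^2 + alpha * ||u||^2 for a smoothing (hence
# ill-conditioned) forward operator A and noisy data y.
rng = np.random.default_rng(1)
n = 100
A = np.tril(np.ones((n, n))) / n                  # discrete integration operator
u_true = np.sin(2 * np.pi * np.linspace(0.0, 1.0, n))
y = A @ u_true + 0.01 * rng.standard_normal(n)    # data with additive noise

u_naive = np.linalg.solve(A, y)                   # unregularised inversion
alpha = 1e-3
u_reg = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)  # normal equations

err = lambda u: np.linalg.norm(u - u_true) / np.linalg.norm(u_true)
print(f"naive error {err(u_naive):.2f}, regularised error {err(u_reg):.2f}")
```

The talk's setting replaces the quadratic penalty by general one- and p-homogeneous functionals and allows noise that is unbounded in the image space of the forward operator, with errors measured in the Bregman distance rather than the norm.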

Published on Wednesday 31 May 2017 at 16:00 UTC #event #uq-talk #inverse-problems
