Tim Sullivan

Welcome!

I am Associate Professor in Predictive Modelling in the Mathematics Institute and School of Engineering at the University of Warwick. I am also a Turing Fellow at the Alan Turing Institute. I have wide interests in uncertainty quantification in the broad sense, understood as the meeting point of numerical analysis, applied probability and statistics, and scientific computation. On this site you will find information about how to contact me, my research, publications, and teaching activities.

Randomised one-step time integration methods for deterministic operator differential equations

Randomised integration for deterministic operator differential equations in Calcolo

The article “Randomised one-step time integration methods for deterministic operator differential equations” by Han Cheng Lie, Martin Stahn, and myself has just appeared in its final form in Calcolo. In this paper, we extend the analysis of Conrad et al. (2016) and Lie et al. (2019) to the case of evolutionary systems in Banach spaces or even Gel′fand triples, this being the right setting for many evolutionary partial differential equations.

H. C. Lie, M. Stahn, and T. J. Sullivan. “Randomised one-step time integration methods for deterministic operator differential equations.” Calcolo 59(1):13, 33pp., 2022. doi:10.1007/s10092-022-00457-6

Abstract. Uncertainty quantification plays an important role in applications that involve simulating ensembles of trajectories of dynamical systems. Conrad et al. (Stat. Comput., 2017) proposed randomisation of deterministic time integration methods as a strategy for quantifying uncertainty due to time discretisation. We consider this strategy for systems that are described by deterministic, possibly non-autonomous operator differential equations defined on a Banach space or a Gel′fand triple. We prove pathwise and expected error bounds on the random trajectories, given an assumption on the local truncation error of the underlying deterministic time integration and an assumption that the absolute moments of the random variables decay with the time step. Our analysis shows that the error analysis for differential equations in finite-dimensional Euclidean space carries over to infinite-dimensional settings.
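The randomisation strategy itself is easy to illustrate. The following minimal Python sketch (my own illustration, not code from the paper) perturbs each step of a deterministic explicit Euler method for a toy scalar ODE with a zero-mean Gaussian random variable whose scale shrinks with the step size, in the spirit of Conrad et al.; the right-hand side, the perturbation exponent, and all parameter values are assumptions for illustration only.

```python
# Minimal illustrative sketch (not from the paper): a randomised explicit
# Euler method in the spirit of Conrad et al., applied to the toy scalar ODE
# du/dt = -u.  The perturbation scale h**1.5 is one common choice; all
# parameter values here are assumptions for illustration only.
import numpy as np

def f(u):
    return -u  # toy right-hand side with known solution u(t) = u(0) * exp(-t)

def randomised_euler(u0, h, n_steps, rng):
    u = np.empty(n_steps + 1)
    u[0] = u0
    for k in range(n_steps):
        xi = rng.normal(scale=h ** 1.5)       # random perturbation per step
        u[k + 1] = u[k] + h * f(u[k]) + xi    # deterministic Euler step + noise
    return u

rng = np.random.default_rng(0)
# An ensemble of randomised trajectories quantifies time-discretisation uncertainty.
ensemble = [randomised_euler(1.0, 0.1, 50, rng) for _ in range(100)]
print(np.mean([traj[-1] for traj in ensemble]), np.exp(-5.0))
```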

Published on Friday 25 February 2022 at 17:00 UTC #publication #prob-num #lie #stahn

GParareal: A time-parallel ODE solver using Gaussian process emulation

GParareal: A time-parallel ODE solver using Gaussian process emulation

Kamran Pentland, Massimiliano Tamborrino, James Buchanan, Lynton Appel and I have just uploaded a preprint of our latest article, “GParareal: A time-parallel ODE solver using Gaussian process emulation”, to the arXiv. In this paper, we show how a Gaussian process emulator for the difference between coarse/cheap and fine/expensive solvers for a dynamical system can be used to enable rapid and accurate solution of that dynamical system in a way that is parallel in time. This approach extends the now-classical Parareal algorithm in a probabilistic way that allows for efficient use of both runtime and legacy data gathered about the coarse and fine solvers, which may be a critical performance advantage for complex dynamical systems for which the fine solver is too expensive to run in series over the full time domain.

Abstract. Sequential numerical methods for integrating initial value problems (IVPs) can be prohibitively expensive when high numerical accuracy is required over the entire interval of integration. One remedy is to integrate in a parallel fashion, “predicting” the solution serially using a cheap (coarse) solver and “correcting” these values using an expensive (fine) solver that runs in parallel on a number of temporal subintervals. In this work, we propose a time-parallel algorithm (GParareal) that solves IVPs by modelling the correction term, i.e. the difference between fine and coarse solutions, using a Gaussian process emulator. This approach compares favourably with the classic parareal algorithm and we demonstrate, on a number of IVPs, that GParareal can converge in fewer iterations than parareal, leading to an increase in parallel speed-up. GParareal also manages to locate solutions to certain IVPs where parareal fails and has the additional advantage of being able to use archives of legacy solutions, e.g. solutions from prior runs of the IVP for different initial conditions, to further accelerate convergence of the method - something that existing time-parallel methods do not do.
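To make the central idea concrete, here is a heavily simplified Python sketch (my own illustration, not the authors' implementation): the correction F(u) − G(u) between an expensive fine solver F and a cheap coarse solver G is learned from training pairs with a Gaussian process and then added to fresh coarse predictions. The toy ODE, the solvers, the kernel, and all parameter values are assumptions; the full GParareal algorithm embeds such an emulator inside the parareal iteration across time slices.

```python
# Simplified sketch (not the authors' implementation) of the key GParareal
# idea: learn the coarse-to-fine correction F(u) - G(u) with a Gaussian
# process and use it to improve cheap coarse predictions.  The ODE, solvers,
# kernel, and parameters are illustrative assumptions.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def f(u):                          # toy ODE right-hand side
    return -u + np.sin(u)

def step(u0, dt, n_steps):         # explicit Euler over one time slice
    u = u0
    for _ in range(n_steps):
        u = u + dt * f(u)
    return u

T_sub = 0.5                        # length of one time slice

def coarse(u0):                    # cheap solver G: few Euler steps
    return step(u0, T_sub / 4, 4)

def fine(u0):                      # expensive solver F: many Euler steps
    return step(u0, T_sub / 400, 400)

# "Legacy"/runtime data: pairs (u0, F(u0) - G(u0)) from earlier fine runs.
u_train = np.linspace(-2.0, 2.0, 15).reshape(-1, 1)
corr_train = np.array([fine(u) - coarse(u) for u in u_train.ravel()])

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-10)
gp.fit(u_train, corr_train)

# Refine a coarse prediction with the emulated correction, no new fine run needed.
u0_new = 0.7
u_coarse = coarse(u0_new)
u_refined = u_coarse + gp.predict(np.array([[u0_new]]))[0]
print(u_coarse, u_refined, fine(u0_new))
```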

Published on Tuesday 1 February 2022 at 12:00 UTC #preprint #prob-num #pentland #tamborrino #buchanan #appel

Γ-convergence of Onsager–Machlup functionals

Γ-convergence of Onsager–Machlup functionals in Inverse Problems

The articles “Γ-convergence of Onsager–Machlup functionals” (“I. With applications to maximum a posteriori estimation in Bayesian inverse problems” and “II. Infinite product measures on Banach spaces”) by Birzhan Ayanbayev, Ilja Klebanov, Han Cheng Lie, and myself have just appeared in their final form in the journal Inverse Problems.

The purpose of this work is to address a long-standing issue in the Bayesian approach to inverse problems, namely the joint stability of a Bayesian posterior and its modes (MAP estimators) when the prior, likelihood, and data are perturbed or approximated. We show that the correct way to approach this problem is to interpret MAP estimators as global weak modes in the sense of Helin and Burger (2015), which can be identified as the global minimisers of the Onsager–Machlup functional of the posterior distribution, and hence to provide a convergence theory for MAP estimators in terms of Γ-convergence of these Onsager–Machlup functionals. It turns out that posterior Γ-convergence can be assessed in a relatively straightforward manner in terms of prior Γ-convergence and continuous convergence of potentials (negative log-likelihoods). Over the two parts of the paper, we carry out this programme both in generality and for specific priors that are commonly used in Bayesian inverse problems, namely Gaussian and Besov priors (Lassas et al., 2009; Dashti et al., 2012).
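To fix ideas in the Gaussian case (standard background rather than a statement of the papers' precise results): if the prior is a Gaussian measure \(\mu_0 = \mathcal{N}(0, C)\) with Cameron–Martin space \(E\), and the posterior has density proportional to \(\exp(-\Phi(u))\) with respect to \(\mu_0\), then the Onsager–Machlup functional of the posterior is

\[ I(u) = \Phi(u) + \tfrac{1}{2} \| u \|_{E}^{2} \quad \text{for } u \in E, \]

and its global minimisers are the MAP estimators. A sequence of functionals \(I_n\) Γ-converges to \(I\) if, for every \(u\) and every sequence \(u_n \to u\), \(\liminf_{n \to \infty} I_n(u_n) \geq I(u)\), and for every \(u\) there exists a recovery sequence \(u_n \to u\) with \(\limsup_{n \to \infty} I_n(u_n) \leq I(u)\). Together with equicoercivity, Γ-convergence yields convergence of minimisers, which is exactly what is needed for convergence of MAP estimators.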

B. Ayanbayev, I. Klebanov, H. C. Lie, and T. J. Sullivan. “Γ-convergence of Onsager–Machlup functionals: I. With applications to maximum a posteriori estimation in Bayesian inverse problems.” Inverse Problems 38(2):025005, 32pp., 2022. doi:10.1088/1361-6420/ac3f81

Abstract (Part I). The Bayesian solution to a statistical inverse problem can be summarised by a mode of the posterior distribution, i.e. a MAP estimator. The MAP estimator essentially coincides with the (regularised) variational solution to the inverse problem, seen as minimisation of the Onsager–Machlup functional of the posterior measure. An open problem in the stability analysis of inverse problems is to establish a relationship between the convergence properties of solutions obtained by the variational approach and by the Bayesian approach. To address this problem, we propose a general convergence theory for modes that is based on the Γ-convergence of Onsager–Machlup functionals, and apply this theory to Bayesian inverse problems with Gaussian and edge-preserving Besov priors. Part II of this paper considers more general prior distributions.

B. Ayanbayev, I. Klebanov, H. C. Lie, and T. J. Sullivan. “Γ-convergence of Onsager–Machlup functionals: II. Infinite product measures on Banach spaces.” Inverse Problems 38(2):025006, 35pp., 2022. doi:10.1088/1361-6420/ac3f82

Abstract (Part II). We derive Onsager–Machlup functionals for countable product measures on weighted \(\ell^{p}\) subspaces of the sequence space \(\mathbb{R}^\mathbb{N}\). Each measure in the product is a shifted and scaled copy of a reference probability measure on \(\mathbb{R}\) that admits a sufficiently regular Lebesgue density. We study the equicoercivity and Γ-convergence of sequences of Onsager–Machlup functionals associated to convergent sequences of measures within this class. We use these results to establish analogous results for probability measures on separable Banach or Hilbert spaces, including Gaussian, Cauchy, and Besov measures with summability parameter \( 1 \leq p \leq 2 \). Together with Part I of this paper, this provides a basis for analysis of the convergence of maximum a posteriori estimators in Bayesian inverse problems and most likely paths in transition path theory.

Published on Wednesday 5 January 2022 at 12:00 UTC #publication #inverse-problems #modes #map-estimators #ayanbayev #klebanov #lie

Dimension-independent Markov chain Monte Carlo on the sphere

Dimension-independent MCMC on spheres

Han Cheng Lie, Daniel Rudolf, Björn Sprungk and I have just uploaded a preprint of our latest article, “Dimension-independent Markov chain Monte Carlo on the sphere”, to the arXiv. In this paper, motivated by problems such as Bayesian binary classification over continuous spaces, for which the parameter space is naturally an infinite-dimensional sphere of functions, we consider MCMC methods for inference on spheres of Hilbert spaces. In particular, we construct MCMC methods that have dimension-independent performance in terms of their acceptance probability, spectral gap, etc.; we also show how more naive approaches may lack basic properties such as Markovianity and reversibility, and how even sophisticated geometric MCMC approaches can still suffer from the curse of dimension.

Abstract. We consider Bayesian analysis on high-dimensional spheres with angular central Gaussian priors. These priors model antipodally-symmetric directional data, are easily defined in Hilbert spaces and occur, for instance, in Bayesian binary classification and level set inversion. In this paper we derive efficient Markov chain Monte Carlo methods for approximate sampling of posteriors with respect to these priors. Our approaches rely on lifting the sampling problem to the ambient Hilbert space and exploit existing dimension-independent samplers in linear spaces. By a push-forward Markov kernel construction we then obtain Markov chains on the sphere, which inherit reversibility and spectral gap properties from samplers in linear spaces. Moreover, our proposed algorithms show dimension-independent efficiency in numerical experiments.
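As a schematic illustration of the lifting idea (a finite-dimensional sketch under my own toy assumptions, not the paper's exact push-forward kernel construction): run a pCN chain in the ambient space with respect to the Gaussian measure that induces the angular central Gaussian prior, evaluate the likelihood only through the normalised state, and push the samples forward to the sphere via \( x \mapsto x / \|x\| \). Since the pCN acceptance probability involves only the likelihood, the sketch hints at why dimension-robust behaviour can carry over to the chain on the sphere.

```python
# Schematic finite-dimensional sketch (illustrative assumptions, not the
# paper's exact algorithm): sample a posterior on the unit sphere with an
# angular central Gaussian prior by running pCN in the ambient space and
# pushing the samples forward through normalisation x -> x / ||x||.
import numpy as np

rng = np.random.default_rng(0)
d = 100                                    # ambient dimension (illustrative)

def log_likelihood(theta):                 # toy likelihood on the sphere
    return -50.0 * (theta[0] - 0.5) ** 2

beta = 0.2                                 # pCN step-size parameter
x = rng.standard_normal(d)                 # ambient Gaussian state
samples = []
for _ in range(5000):
    prop = np.sqrt(1 - beta ** 2) * x + beta * rng.standard_normal(d)
    # pCN acceptance ratio involves only the (lifted) likelihood on the sphere
    log_alpha = (log_likelihood(prop / np.linalg.norm(prop))
                 - log_likelihood(x / np.linalg.norm(x)))
    if np.log(rng.uniform()) < log_alpha:
        x = prop
    samples.append(x / np.linalg.norm(x))  # push forward to the sphere
```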

Published on Wednesday 22 December 2021 at 12:00 UTC #preprint #mcmc #lie #rudolf #sprungk

The linear conditional expectation in Hilbert space

Linear conditional expectation in Hilbert space in Bernoulli

The article “The linear conditional expectation in Hilbert space” by Ilja Klebanov, Björn Sprungk, and myself has just appeared in its final form in the journal Bernoulli. In this paper, we study the best approximation \(\mathbb{E}^{\mathrm{A}}[U|V]\) of the conditional expectation \(\mathbb{E}[U|V]\) of a \(\mathcal{G}\)-valued random variable \(U\) conditional upon an \(\mathcal{H}\)-valued random variable \(V\), where “best” means \(L^{2}\)-optimality within the class \(\mathrm{A}(\mathcal{H}; \mathcal{G})\) of affine functions of the conditioning variable \(V\). This approximation is a powerful one and lies at the heart of the Bayes linear approach to statistical inference, but its analytical properties, especially for \(U\) and \(V\) taking values in infinite-dimensional spaces \(\mathcal{G}\) and \(\mathcal{H}\), are only partially understood — which this article aims to rectify.
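For orientation, in the familiar finite-dimensional, nondegenerate case (standard Bayes linear material rather than the infinite-dimensional theory developed in the paper), the linear conditional expectation takes the explicit form

\[ \mathbb{E}^{\mathrm{A}}[U|V] = \mathbb{E}[U] + \operatorname{Cov}(U, V) \operatorname{Cov}(V)^{-1} \bigl( V - \mathbb{E}[V] \bigr) . \]

Much of the difficulty addressed in the article lies in making rigorous sense of, and regularising, the analogue of this formula when \(\operatorname{Cov}(V)\) is a possibly non-invertible operator on an infinite-dimensional Hilbert space \(\mathcal{H}\).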

I. Klebanov, B. Sprungk, and T. J. Sullivan. “The linear conditional expectation in Hilbert space.” Bernoulli 27(4):2267–2299, 2021. doi:10.3150/20-BEJ1308

Abstract. The linear conditional expectation (LCE) provides a best linear (or rather, affine) estimate of the conditional expectation and hence plays an important rôle in approximate Bayesian inference, especially the Bayes linear approach. This article establishes the analytical properties of the LCE in an infinite-dimensional Hilbert space context. In addition, working in the space of affine Hilbert–Schmidt operators, we establish a regularisation procedure for this LCE. As an important application, we obtain a simple alternative derivation and intuitive justification of the conditional mean embedding formula, a concept widely used in machine learning to perform the conditioning of random variables by embedding them into reproducing kernel Hilbert spaces.

Published on Wednesday 25 August 2021 at 08:00 UTC #publication #tru2 #bayesian #rkhs #mean-embedding #klebanov #sprungk