Tim Sullivan

Welcome!

I am Junior Professor in Applied Mathematics with Specialism in Risk and Uncertainty Quantification at the Freie Universität Berlin and Research Group Leader for Uncertainty Quantification at the Zuse Institute Berlin. I have wide interests in uncertainty quantification in the broad sense, understood as the meeting point of numerical analysis, applied probability and statistics, and scientific computation. On this site you will find information about how to contact me, as well as my research, publications, and teaching activities.

Preprint: A rigorous theory of conditional mean embeddings

Ilja Klebanov, Ingmar Schuster, and I have just uploaded a preprint of our recent work “A rigorous theory of conditional mean embeddings” to the arXiv. In this work we take a close mathematical look at the method of conditional mean embedding. In this approach to non-parametric inference, a random variable \(Y \sim \mathbb{P}_{Y}\) in a set \(\mathcal{Y}\) is represented by its kernel mean embedding, the reproducing kernel Hilbert space element

\( \displaystyle \mu_{Y} = \int_{\mathcal{Y}} \psi(y) \, \mathrm{d} \mathbb{P}_{Y} (y) \in \mathcal{G}, \)

and conditioning with respect to an observation \(x\) of a related random variable \(X \sim \mathbb{P}_{X}\) in a set \(\mathcal{X}\) with RKHS \(\mathcal{H}\) is performed using the formula

\( \displaystyle \mu_{Y|X = x} = \mu_Y + (C_{XX}^{\dagger} C_{XY})^\ast \, (\varphi(x) - \mu_X) . \)

Here \(\psi \colon \mathcal{Y} \to \mathcal{G}\) and \(\varphi \colon \mathcal{X} \to \mathcal{H}\) are the feature maps and the \(C\)'s denote the appropriate centred (cross-)covariance operators of the embedded random variables \(\psi(Y)\) in \(\mathcal{G}\) and \(\varphi(X)\) in \(\mathcal{H}\).
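As a purely illustrative sketch of how such embeddings are used in practice, the standard regularised empirical estimator replaces the pseudo-inverse above with Tikhonov regularisation and requires only Gram matrices of the samples. The kernels, data, and regularisation parameter below are illustrative choices, not quantities from the paper.

```python
import numpy as np

def gauss_kernel(A, B, ell=1.0):
    """Gaussian (RBF) kernel matrix between the rows of A and B."""
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-0.5 * sq_dists / ell**2)

rng = np.random.default_rng(0)
n = 500
X = rng.uniform(-2.0, 2.0, size=(n, 1))              # samples of X
Y = np.sin(X) + 0.1 * rng.standard_normal((n, 1))    # related samples of Y

# Empirical CME: mu_{Y|X=x} is estimated by sum_i beta_i(x) psi(y_i), with
# weights beta(x) = (K_X + n * lam * I)^{-1} k_X(x).  The regularisation
# parameter lam is a tuning choice standing in for the pseudo-inverse.
lam = 1e-3
K_X = gauss_kernel(X, X)
x = np.array([[1.0]])                                # conditioning point
beta = np.linalg.solve(K_X + n * lam * np.eye(n), gauss_kernel(X, x))

# For g (approximately) in the RKHS G, E[g(Y) | X = x] is the corresponding
# weighted sum over the samples; here g(y) = y for illustration.
print("estimate:", (Y.T @ beta).item(), " truth: sin(1) =", np.sin(1.0))
```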

Our article aims to provide rigorous mathematical foundations for this attractive but apparently naïve approach to conditional probability, and hence to Bayesian inference.

Abstract. Conditional mean embeddings (CME) have proven themselves to be a powerful tool in many machine learning applications. They allow the efficient conditioning of probability distributions within the corresponding reproducing kernel Hilbert spaces (RKHSs) by providing a linear-algebraic relation for the kernel mean embeddings of the respective probability distributions. Both centered and uncentered covariance operators have been used to define CMEs in the existing literature. In this paper, we develop a mathematically rigorous theory for both variants, discuss the merits and problems of either, and significantly weaken the conditions for applicability of CMEs. In the course of this, we demonstrate a beautiful connection to Gaussian conditioning in Hilbert spaces.

Published on Tuesday 3 December 2019 at 07:00 UTC #publication #preprint #mathplus #tru2 #rkhs #mean-embedding #klebanov #schuster

Postdoc position: Analysis of MAP estimators

There is still an opening for a full-time two-year postdoctoral research position in the UQ group at the Freie Universität Berlin. This position will be associated with the project “Analysis of maximum a posteriori estimators: Common convergence theories for Bayesian and variational inverse problems” funded by the DFG.

This project aims to advance the state of the art in rigorous mathematical understanding of MAP estimators in infinite-dimensional statistical inverse problems. In particular, the research in this project will connect the “small balls” approach of Dashti et al. (2013) to the calculus of variations and hence properly link the variational and fully Bayesian points of view on inverse problems. This is an exciting opportunity for someone well-versed in the calculus of variations and tools such as Γ-convergence to make an impact on fundamental questions of non-parametric statistics and inverse problems or, vice versa, for someone with a statistical inverse problems background to advance the rigorous state of the art for such methods.
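For orientation: in the small-balls approach, a MAP estimator is, roughly speaking, a point \(u^{\ast}\) whose small posterior balls asymptotically carry maximal probability,

\( \displaystyle \lim_{\delta \to 0} \frac{\mu^{y} (B_{\delta}(u^{\ast}))}{\sup_{u} \mu^{y} (B_{\delta}(u))} = 1 , \)

where \(\mu^{y}\) is the posterior measure and \(B_{\delta}(u)\) the ball of radius \(\delta\) centred on \(u\). For Gaussian priors, Dashti et al. (2013) show that such points minimise an Onsager–Machlup functional of the form \(I(u) = \Phi(u) + \tfrac{1}{2} \| u \|_{E}^{2}\), with \(\Phi\) the negative log-likelihood and \(E\) the Cameron–Martin space of the prior; it is this variational characterisation that the project seeks to connect rigorously to Γ-convergence.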

Prospective candidates are encouraged to contact me with informal enquiries. Formal applications are to be sent by post or email, by 23 December 2019, under the heading MAP-Analysis, and should include a cover letter, a scientific CV including list of publications and research statement, and the contact details of two professional references.

Published on Monday 25 November 2019 at 08:00 UTC #group #job #fu-berlin #inverse-problems #dfg #map-estimators

Bayesian probabilistic numerical methods in SIAM Review

The 2019 Q4 issue of SIAM Review will carry an article by Jon Cockayne, Chris Oates, Mark Girolami, and myself on the Bayesian formulation of probabilistic numerical methods, i.e. the interpretation of deterministic numerical tasks such as quadrature and the solution of ordinary and partial differential equations as (Bayesian) statistical inference tasks.

J. Cockayne, C. J. Oates, T. J. Sullivan, and M. Girolami. “Bayesian probabilistic numerical methods.” SIAM Review 61(4):756–789, 2019. doi:10.1137/17M1139357

Abstract. Over forty years ago average-case error was proposed in the applied mathematics literature as an alternative criterion with which to assess numerical methods. In contrast to worst-case error, this criterion relies on the construction of a probability measure over candidate numerical tasks, and numerical methods are assessed based on their average performance over those tasks with respect to the measure. This paper goes further and establishes Bayesian probabilistic numerical methods as solutions to certain inverse problems based upon the numerical task within the Bayesian framework. This allows us to establish general conditions under which Bayesian probabilistic numerical methods are well defined, encompassing both the nonlinear and non-Gaussian contexts. For general computation, a numerical approximation scheme is proposed and its asymptotic convergence established. The theoretical development is extended to pipelines of computation, wherein probabilistic numerical methods are composed to solve more challenging numerical tasks. The contribution highlights an important research frontier at the interface of numerical analysis and uncertainty quantification, and a challenging industrial application is presented.
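To give a concrete flavour of the quadrature case, here is a minimal Bayesian quadrature sketch; the Brownian-motion kernel, nodes, and integrand are illustrative choices rather than the paper's general construction. Under a zero-mean Gaussian process prior on the integrand, the integral is itself a Gaussian random variable, and conditioning on a few evaluations gives its posterior mean and variance in closed form.

```python
import numpy as np

# Bayesian quadrature sketch: infer the integral of f over [0, 1] from a
# few evaluations, under a zero-mean GP prior with Brownian-motion kernel
# k(s, t) = min(s, t), chosen because its integrals are closed-form.
def k(s, t):
    return np.minimum(s[:, None], t[None, :])

f = lambda t: np.sin(np.pi * t)          # integrand; true integral = 2/pi
nodes = np.linspace(0.1, 0.9, 5)         # evaluation points
obs = f(nodes)

K = k(nodes, nodes)                      # Gram matrix of the nodes
z = nodes - nodes**2 / 2                 # z_i = int_0^1 min(x_i, s) ds
w = np.linalg.solve(K, z)                # quadrature weights K^{-1} z

post_mean = w @ obs                      # posterior mean of the integral
post_var = 1.0 / 3.0 - w @ z             # 1/3 = double integral of k
print(f"estimate {post_mean:.4f} +/- {np.sqrt(post_var):.4f}; truth {2/np.pi:.4f}")
```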

Published on Thursday 7 November 2019 at 07:00 UTC #publication #bayesian #siam-review #prob-num #cockayne #girolami #oates

Probabilistic numerics retrospective in Statistics and Computing

The special issue of Statistics and Computing dedicated to probabilistic numerics carries the article “A modern retrospective on probabilistic numerics” by Chris Oates and myself, which gives a historical overview of the field and some perspectives on future challenges.

C. J. Oates and T. J. Sullivan. “A modern retrospective on probabilistic numerics.” Statistics and Computing 29(6):1335–1351, 2019. doi:10.1007/s11222-019-09902-z

Abstract. This article attempts to place the emergence of probabilistic numerics as a mathematical–statistical research field within its historical context and to explore how its gradual development can be related both to applications and to a modern formal treatment. We highlight in particular the parallel contributions of Sul′din and Larkin in the 1960s and how their pioneering early ideas have reached a degree of maturity in the intervening period, mediated by paradigms such as average-case analysis and information-based complexity. We provide a subjective assessment of the state of research in probabilistic numerics and highlight some difficulties to be addressed by future works.

Published on Tuesday 5 November 2019 at 12:30 UTC #publication #stat-comp #prob-num #oates

Strong convergence rates of probabilistic integrators for ODEs in Statistics and Computing

The special issue of Statistics and Computing dedicated to probabilistic numerics carries the article “Strong convergence rates of probabilistic integrators for ordinary differential equations” by Han Cheng Lie, Andrew Stuart, and myself on the convergence analysis of randomised perturbation-based time-stepping methods for the solution of ODE initial value problems.

H. C. Lie, A. M. Stuart, and T. J. Sullivan. “Strong convergence rates of probabilistic integrators for ordinary differential equations.” Statistics and Computing 29(6):1265–1283, 2019. doi:10.1007/s11222-019-09898-6

Abstract. Probabilistic integration of a continuous dynamical system is a way of systematically introducing discretisation error, at scales no larger than errors introduced by standard numerical discretisation, in order to enable thorough exploration of possible responses of the system to inputs. It is thus a potentially useful approach in a number of applications such as forward uncertainty quantification, inverse problems, and data assimilation. We extend the convergence analysis of probabilistic integrators for deterministic ordinary differential equations, as proposed by Conrad et al. (2016), to establish mean-square convergence in the uniform norm on discrete- or continuous-time solutions under relaxed regularity assumptions on the driving vector fields and their induced flows. Specifically, we show that randomised high-order integrators for globally Lipschitz flows and randomised Euler integrators for dissipative vector fields with polynomially bounded local Lipschitz constants all have the same mean-square convergence rate as their deterministic counterparts, provided that the variance of the integration noise is not of higher order than the corresponding deterministic integrator. These and similar results are proven for probabilistic integrators where the random perturbations may be state-dependent, non-Gaussian, or non-centred random variables.
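As a concrete example of the class of integrators analysed, here is a minimal sketch of a randomised Euler method in the spirit of Conrad et al. (2016); the noise scale (standard deviation \(\sigma h^{3/2}\), i.e. variance of order \(h^{2p+1}\) with \(p = 1\)) and the test problem are illustrative assumptions.

```python
import numpy as np

# Randomised Euler integrator: each step adds a centred Gaussian
# perturbation with standard deviation of order h^{3/2}, so that the noise
# variance h^3 = h^{2p+1} (p = 1 for Euler) does not degrade the
# deterministic first-order mean-square convergence rate.
def randomised_euler(f, u0, h, T, sigma=1.0, rng=None):
    rng = rng or np.random.default_rng()
    n_steps = int(round(T / h))
    u = np.empty(n_steps + 1)
    u[0] = u0
    for j in range(n_steps):
        xi = sigma * h**1.5 * rng.standard_normal()  # state-independent noise
        u[j + 1] = u[j] + h * f(u[j]) + xi
    return u

# Example: du/dt = -u, u(0) = 1; exact solution exp(-t).  The observed RMS
# error at T = 1 roughly halves with h, matching the deterministic rate.
f = lambda u: -u
for h in [0.1, 0.05, 0.025]:
    errs = [abs(randomised_euler(f, 1.0, h, 1.0)[-1] - np.exp(-1.0))
            for _ in range(200)]
    print(f"h = {h:5.3f}: RMS error at T=1 approx {np.sqrt(np.mean(np.square(errs))):.5f}")
```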

Published on Tuesday 5 November 2019 at 12:00 UTC #publication #stat-comp #prob-num #lie #stuart