Tim Sullivan

Geodesic analysis in Kendall's shape space with epidemiological applications

Preprint: Geodesic analysis in Kendall’s shape space

Esfandiar Nava-Yazdani, Christoph von Tycowicz, Hans-Christian Hege, and I have just uploaded an updated preprint of our work “Geodesic analysis in Kendall's shape space with epidemiological applications” (previously entitled “A shape trajectories approach to longitudinal statistical analysis”) to the arXiv. This work is part of the ECMath / MATH+ project CH-15 “Analysis of Empirical Shape Trajectories”.

Abstract. We analytically determine Jacobi fields and parallel transports and compute geodesic regression in Kendall's shape space. Using the derived expressions, we can fully leverage the geometry via Riemannian optimization and thereby reduce the computational expense by several orders of magnitude. The methodology is demonstrated by performing a longitudinal statistical analysis of epidemiological shape data. As an example application we have chosen 3D shapes of knee bones, reconstructed from image data of the Osteoarthritis Initiative (OAI). Comparing subject groups with incident and developing osteoarthritis versus normal controls, we find clear differences in the temporal development of femur shapes. This paves the way for early prediction of incident knee osteoarthritis, using geometry data alone.
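
The basic geometric primitive here is easy to illustrate. Below is a minimal NumPy sketch (my own, not the paper's implementation, and with illustrative function names) of the geodesic, i.e. Procrustes, distance between two landmark configurations in Kendall's shape space: centre and normalise each configuration onto the pre-shape sphere, align by the optimal rotation, and take the resulting spherical distance.

```python
import numpy as np

def preshape(x):
    """Map a k x m landmark configuration to Kendall's pre-shape sphere:
    centring removes translation, normalising removes scale."""
    x = x - x.mean(axis=0)
    return x / np.linalg.norm(x)

def shape_distance(x, y):
    """Geodesic (Procrustes) distance in Kendall's shape space between
    two k x m landmark configurations: align by the optimal rotation,
    then take the great-circle distance on the pre-shape sphere."""
    p, q = preshape(x), preshape(y)
    u, s, vt = np.linalg.svd(p.T @ q)        # orthogonal Procrustes problem
    d = np.ones_like(s)
    d[-1] = np.sign(np.linalg.det(u @ vt))   # restrict to proper rotations
    return np.arccos(np.clip(np.sum(s * d), -1.0, 1.0))

# Usage: two nearby configurations of 10 landmarks in 3D.
rng = np.random.default_rng(0)
x = rng.standard_normal((10, 3))
print(shape_distance(x, x + 0.05 * rng.standard_normal((10, 3))))
```

The paper's contribution goes well beyond this primitive (closed-form Jacobi fields, parallel transport, and Riemannian optimisation for geodesic regression), but the distance above is the building block that those tools operate on.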

Published on Monday 1 July 2019 at 08:00 UTC #publication #preprint #ch15 #shape-trajectories #nava-yazdani #von-tycowicz #hege

Comments on the article “A Bayesian conjugate gradient method”

Preprint: Comments on A Bayesian conjugate gradient method

I have just uploaded a preprint of “Comments on the article ‘A Bayesian conjugate gradient method’” to the arXiv. This note discusses the recent paper “A Bayesian conjugate gradient method” in Bayesian Analysis by Jon Cockayne, Chris Oates, Ilse Ipsen, and Mark Girolami, and is an invitation to a rejoinder from the authors.

Abstract. The recent article “A Bayesian conjugate gradient method” by Cockayne, Oates, Ipsen, and Girolami proposes an approximately Bayesian iterative procedure for the solution of a system of linear equations, based on the conjugate gradient method, that gives a sequence of Gaussian/normal estimates for the exact solution. The purpose of the probabilistic enrichment is that the covariance structure is intended to provide a posterior measure of uncertainty or confidence in the solution mean. This note gives some comments on the article, poses some questions, and suggests directions for further research.
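
For readers unfamiliar with the method under discussion, here is a rough NumPy sketch of a BayesCG-style iteration, reconstructed from the description above rather than taken from the paper, so details such as the normalisation of the search directions may differ from the authors' algorithm. Under the prior \(x \sim N(x_{0}, \Sigma_{0})\) for the solution of \(A x = b\), each step updates the mean along a search direction and removes the explored direction from the covariance.

```python
import numpy as np

def bayes_cg(A, b, x0, Sigma0, num_iter):
    """Sketch of a BayesCG-style iteration (my reconstruction, not the
    authors' reference implementation).  Prior x ~ N(x0, Sigma0);
    returns the posterior mean and covariance after num_iter steps."""
    x = np.asarray(x0, dtype=float).copy()
    Sigma = np.asarray(Sigma0, dtype=float).copy()
    r = b - A @ x                            # residual of the current mean
    s = r.copy()                             # first search direction
    for _ in range(num_iter):
        u = Sigma0 @ (A.T @ s)               # Sigma0 A^T s
        w = A @ u                            # (A Sigma0 A^T) s
        E = s @ w                            # squared energy norm of s
        alpha = (s @ r) / E
        x = x + alpha * u                    # posterior mean update
        r = r - alpha * w
        Sigma = Sigma - np.outer(u, u) / E   # rank-one covariance downdate
        beta = -(r @ w) / E                  # keep A Sigma0 A^T-conjugacy
        s = r + beta * s
    return x, Sigma
```

In exact arithmetic the search directions remain \(A \Sigma_{0} A^{\mathsf{T}}\)-conjugate, and for symmetric positive-definite \(A\) the choice \(\Sigma_{0} = A^{-1}\) reduces the mean recurrence to classical conjugate gradients; the covariance is the object whose uncertainty-quantification properties the note examines.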

Published on Wednesday 26 June 2019 at 08:00 UTC #publication #preprint #prob-num

Compression, inversion, and approximate PCA of dense kernel matrices at near-linear computational complexity

Preprint: Computing with dense kernel matrices at near-linear cost

Florian Schäfer, Houman Owhadi, and I have just uploaded a revised and improved version of our preprint “Compression, inversion, and approximate PCA of dense kernel matrices at near-linear computational complexity” to the arXiv. This paper shows how a surprisingly simple algorithm, the zero fill-in incomplete Cholesky factorisation, applied with respect to a cleverly chosen sparsity pattern, enables near-linear-complexity compression, inversion, and approximate PCA of square matrices of the form

\( \Theta = \begin{bmatrix} G(x_{1}, x_{1}) & \cdots & G(x_{1}, x_{N}) \\ \vdots & \ddots & \vdots \\ G(x_{N}, x_{1}) & \cdots & G(x_{N}, x_{N}) \end{bmatrix} \in \mathbb{R}^{N \times N} , \)

where \(\{ x_{1}, \dots, x_{N} \} \subset \mathbb{R}^{d}\) is a data set and \(G \colon \mathbb{R}^{d} \times \mathbb{R}^{d} \to \mathbb{R}\) is a covariance kernel function. Such matrices play a key role in, for example, Gaussian process regression and RKHS-based machine learning techniques.
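
The factorisation itself is classical; the paper's contribution lies in the ordering of the points and the choice of sparsity pattern. As a point of reference, here is a dense, quadratic-cost sketch (mine, for illustration only) of the zero fill-in incomplete Cholesky factorisation with respect to a prescribed pattern. The near-linear complexity claimed in the paper additionally requires the maximin-type ordering and pattern constructed there, together with sparse data structures.

```python
import numpy as np

def kernel_matrix(points, G):
    """Assemble Theta[i, j] = G(x_i, x_j) for an array of points."""
    n = len(points)
    return np.array([[G(points[i], points[j]) for j in range(n)]
                     for i in range(n)])

def ichol0(Theta, pattern):
    """Zero fill-in incomplete Cholesky factorisation: a lower-triangular
    L supported on `pattern` (a set of index pairs (i, j) with i >= j)
    such that L @ L.T approximates Theta.  Entries outside the pattern
    are never filled in, which is what keeps the cost low for sparse
    patterns.  For an arbitrary pattern the sqrt below can break down;
    the point of the paper is that, for the right ordering and pattern,
    it does not, and the result is an epsilon-approximation of Theta."""
    n = Theta.shape[0]
    L = np.zeros_like(Theta, dtype=float)
    for j in range(n):
        L[j, j] = np.sqrt(Theta[j, j] - L[j, :j] @ L[j, :j])
        for i in range(j + 1, n):
            if (i, j) in pattern:
                L[i, j] = (Theta[i, j] - L[i, :j] @ L[j, :j]) / L[j, j]
    return L
```

With the full lower-triangular pattern this reduces to the exact Cholesky factorisation; the surprise is how small the pattern can be made while retaining an accurate factor.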

Abstract. Dense kernel matrices \(\Theta \in \mathbb{R}^{N \times N}\) obtained from point evaluations of a covariance function \(G\) at locations \(\{ x_{i} \}_{1 \leq i \leq N}\) arise in statistics, machine learning, and numerical analysis. For covariance functions that are Green's functions of elliptic boundary value problems and homogeneously distributed sampling points, we show how to identify a subset \(S \subset \{ 1 , \dots , N \}^2\), with \(\# S = O ( N \log (N) \log^{d} ( N /\varepsilon ) )\), such that the zero fill-in incomplete Cholesky factorisation of the sparse matrix \(\Theta_{ij} 1_{( i, j ) \in S}\) is an \(\varepsilon\)-approximation of \(\Theta\). This factorisation can provably be obtained in complexity \(O ( N \log( N ) \log^{d}( N /\varepsilon) )\) in space and \(O ( N \log^{2}( N ) \log^{2d}( N /\varepsilon) )\) in time; we further present numerical evidence that \(d\) can be taken to be the intrinsic dimension of the data set rather than that of the ambient space. The algorithm only needs to know the spatial configuration of the \(x_{i}\) and does not require an analytic representation of \(G\). Furthermore, this factorisation straightforwardly provides an approximate sparse PCA with optimal rate of convergence in the operator norm. Hence, by using only subsampling and the incomplete Cholesky factorisation, we obtain, at nearly linear complexity, the compression, inversion, and approximate PCA of a large class of covariance matrices. By inverting the order of the Cholesky factorisation we also obtain a solver for elliptic PDEs with complexity \(O ( N \log^{d}( N /\varepsilon) )\) in space and \(O ( N \log^{2d}( N /\varepsilon) )\) in time.

Published on Tuesday 26 March 2019 at 12:00 UTC #publication #preprint #prob-num #schaefer #owhadi

A modern retrospective on probabilistic numerics

Preprint: Probabilistic numerics retrospective

Chris Oates and I have just uploaded a preprint of our paper “A modern retrospective on probabilistic numerics” to the arXiv.

Abstract. This article attempts to cast the emergence of probabilistic numerics as a mathematical-statistical research field within its historical context and to explore how its gradual development can be related to modern formal treatments and applications. We highlight in particular the parallel contributions of Sul'din and Larkin in the 1960s and how their pioneering early ideas have reached a degree of maturity in the intervening period, mediated by paradigms such as average-case analysis and information-based complexity. We provide a subjective assessment of the state of research in probabilistic numerics and highlight some difficulties to be addressed by future works.

Published on Tuesday 15 January 2019 at 12:00 UTC #preprint #prob-num #oates

Optimality criteria for probabilistic numerical methods

Preprint: Optimality of probabilistic numerical methods

Chris Oates, Jon Cockayne, Dennis Prangle, Mark Girolami, and I have just uploaded a preprint of our paper “Optimality criteria for probabilistic numerical methods” to the arXiv.

Abstract. It is well understood that Bayesian decision theory and average case analysis are essentially identical. However, if one is interested in performing uncertainty quantification for a numerical task, it can be argued that the decision-theoretic framework is neither appropriate nor sufficient. To this end, we consider an alternative optimality criterion from Bayesian experimental design and study its implied optimal information in the numerical context. This information is demonstrated to differ, in general, from the information that would be used in an average-case-optimal numerical method. The explicit connection to Bayesian experimental design suggests several distinct regimes in which optimal probabilistic numerical methods can be developed.
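
In rough notation of my own (not the paper's), the contrast is the following. Writing \(\mu\) for the prior on the unknown \(f\), \(S(f)\) for the quantity of interest, \(I\) for the information operator (e.g. a vector of \(n\) evaluation functionals), \(b\) for an estimator based on that information, and \(\mu_{I(f)}\) for the posterior after observing \(I(f)\), average-case analysis and Bayesian experimental design select the information as

\( I^{\mathrm{AC}} \in \operatorname{arg\,min}_{I} \inf_{b} \mathbb{E}_{f \sim \mu} \bigl[ \| S(f) - b(I(f)) \|^{2} \bigr] \quad \text{versus} \quad I^{\mathrm{ED}} \in \operatorname{arg\,max}_{I} \mathbb{E}_{f \sim \mu} \bigl[ D_{\mathrm{KL}} ( \mu_{I(f)} \, \| \, \mu ) \bigr] , \)

i.e. the expected loss of the best estimator versus the expected information gain of the observation; the point of the abstract is that these two optima differ in general.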

Published on Tuesday 15 January 2019 at 11:00 UTC #preprint #prob-num #oates #cockayne #prangle #girolami