### #preprint

### UQ for Si using interatomic potentials

Iain Best, James Kermode, and I have just uploaded a preprint of our paper “Uncertainty quantification in atomistic simulations of silicon using interatomic potentials” to the arXiv.

**Abstract.** Atomistic simulations often rely on interatomic potentials to access greater time- and length-scales than those accessible to first principles methods such as density functional theory (DFT). However, since a parameterised potential typically cannot reproduce the true potential energy surface of a given system, we should expect a decrease in accuracy and an increase in error in quantities of interest calculated from simulations. Quantifying the uncertainty on the outputs of atomistic simulations is thus an important and necessary step, both to give confidence in the results and to provide metrics for improving the simulations themselves. Here, we address this research question by forming ensembles of Atomic Cluster Expansion (ACE) potentials, and using Conformal Prediction with DFT training data to provide meaningful, calibrated error bars on several quantities of interest for silicon: the bulk modulus, elastic constants, relaxed vacancy formation energy, and the vacancy migration barrier. We evaluate the effects on uncertainty bounds using a range of different potentials and training sets.
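A minimal sketch of the split conformal prediction idea used here, assuming a normalised residual score. All names and numbers below are illustrative stand-ins, not the paper's data: in practice the predictions and spreads would come from an ensemble of fitted ACE potentials, and the reference values from DFT.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical calibration set for one quantity of interest (e.g. bulk modulus):
n_cal = 200
dft_reference = rng.normal(100.0, 5.0, size=n_cal)           # "true" DFT values
ensemble_mean = dft_reference + rng.normal(0.0, 1.0, n_cal)  # ensemble-mean predictions
ensemble_std = np.abs(rng.normal(1.0, 0.2, n_cal))           # ensemble spreads


def conformal_scale(y_true, y_pred, y_std, alpha=0.1):
    """Return q such that [y_pred - q*y_std, y_pred + q*y_std] covers the
    truth with probability >= 1 - alpha (split conformal, normalised score)."""
    scores = np.abs(y_true - y_pred) / y_std
    n = len(scores)
    # Finite-sample-corrected quantile level.
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    return np.quantile(scores, level)


q = conformal_scale(dft_reference, ensemble_mean, ensemble_std, alpha=0.1)

# Calibrated 90% error bar for a new prediction:
new_pred, new_std = 99.0, 1.1
lower, upper = new_pred - q * new_std, new_pred + q * new_std
```

The key point is that `q` rescales the raw ensemble spread so that the resulting intervals carry a distribution-free coverage guarantee, rather than relying on the ensemble being well calibrated by itself.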

Published on Saturday 24 February 2024 at 12:00 UTC #preprint #kermode #best #uq #interatomic-potentials

### Hille’s theorem for locally convex spaces

I have just uploaded a preprint of the paper “Hille's theorem for Bochner integrals of functions with values in locally convex spaces” to the arXiv. This paper extends the relatively well-known result that Bochner integration commutes with closed operators from the classical Banach space setting to the case of more general locally convex spaces.
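For orientation, the classical Banach-space form of Hille's theorem (the result being generalised) can be stated roughly as follows; the precise hypotheses in the locally convex setting are those of the paper, not this informal sketch.

```latex
% Hille's theorem (classical Banach-space form, stated informally):
% let T be a closed linear operator from X into Y, and let f and Tf
% both be Bochner integrable with respect to \mu. Then
\[
  \int f \,\mathrm{d}\mu \in \operatorname{dom}(T)
  \qquad \text{and} \qquad
  T \int f \,\mathrm{d}\mu = \int T f \,\mathrm{d}\mu .
\]
```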

Published on Thursday 18 January 2024 at 09:00 UTC #preprint #functional-analysis #hille-theorem

### A periodic table of modes and MAP estimators

Ilja Klebanov and I have just uploaded a preprint of our paper “A ‘periodic table’ of modes and maximum a posteriori estimators” to the arXiv.

This paper forms part of the growing body of work on the ‘small balls’ theory of modes for probability measures on metric spaces, which is needed e.g. for the treatment of MAP estimation for Bayesian inverse problems with functional unknowns. Several such notions already exist in the literature: the strong mode, the weak mode, and the generalised strong mode. We take an axiomatic approach to the problem and identify a system of 17 essentially distinct notions of mode, proving implications between them and providing explicit counterexamples to distinguish them. From an axiomatic point of view, all 17 of these seem to be ‘equally good’, suggesting that further research is needed in this area.
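For concreteness, one standard formulation in the ‘small balls’ approach (the strong mode) declares a point \(u^\ast \in X\) a mode of \(\mu\) when small balls centred at \(u^\ast\) asymptotically maximise probability:

```latex
\[
  \lim_{r \searrow 0}
  \frac{\mu\bigl( B_{r}(u^{\ast}) \bigr)}{\sup_{u \in X} \mu\bigl( B_{r}(u) \bigr)}
  = 1 ,
\]
% where B_r(u) denotes the open metric ball of radius r centred at u.
```

The weak and generalised strong modes, and the further notions classified in the paper, arise from varying how the centres and radii in this ratio are allowed to behave.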

**Abstract.**
The last decade has seen many attempts to generalise the definition of modes, or MAP estimators, of a probability distribution \(\mu\) on a space \(X\) to the case that \(\mu\) has no continuous Lebesgue density, and in particular to infinite-dimensional Banach and Hilbert spaces \(X\). This paper examines the properties of and connections among these definitions. We construct a systematic taxonomy – or ‘periodic table’ – of modes that includes the established notions as well as large hitherto-unexplored classes. We establish implications between these definitions and provide counterexamples to distinguish them. We also distinguish those definitions that are merely ‘grammatically correct’ from those that are ‘meaningful’ in the sense of satisfying certain ‘common-sense’ axioms for a mode, among them the correct handling of discrete measures and those with continuous Lebesgue densities. However, despite there being 17 such ‘meaningful’ definitions of mode, we show that none of them satisfy the ‘merging property’, under which the modes of \(\mu|_{A}\), \(\mu|_{B}\), and \(\mu|_{A \cup B}\) enjoy a straightforward relationship for well-separated positive-mass events \( A, B \subseteq X\).

Published on Monday 17 July 2023 at 09:00 UTC #preprint #modes #map-estimators #klebanov

### Unbounded images of Gaussian and other stochastic processes

Tadashi Matsumoto and I have just uploaded a preprint of our note “Images of Gaussian and other stochastic processes under closed, densely-defined, unbounded linear operators” to the arXiv.

The purpose of this note is to provide a self-contained rigorous proof of the well-known formula for the mean and covariance function of a stochastic process — in particular, a Gaussian process — when it is acted upon by an *unbounded* linear operator such as an ordinary or partial differential operator, as used in probabilistic approaches to the solution of ODEs and PDEs.
This result is easy to establish in the case of a bounded operator, but the unbounded case requires a careful application of Hille's theorem for the Bochner integral of a Banach-valued random variable.
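The formulae in question can be stated informally as follows (the notation here is my shorthand, not necessarily the note's): if \(u\) has mean function \(m\) and covariance kernel \(k\), and \(v = Tu\), then

```latex
\[
  \mathbb{E}\bigl[ v(s) \bigr] = (T m)(s) ,
  \qquad
  \operatorname{Cov}\bigl( v(s), v(t) \bigr)
  = \bigl( T^{(s)} T^{(t)} k \bigr)(s, t) ,
\]
% where T^{(s)} applies T to k( . , t) as a function of its first argument
% and T^{(t)} to the second argument.
```

The note's contribution is a rigorous justification of these identities when \(T\) is closed, densely defined, and unbounded.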

**Abstract.**
Gaussian processes (GPs) are widely used tools in spatial statistics and machine learning, and the formulae for the mean function and covariance kernel of a GP \(v\) that is the image of another GP \(u\) under a linear transformation \(T\) acting on the sample paths of \(u\) are well known, almost to the point of being folklore. However, these formulae are often used without rigorous attention to technical details, particularly when \(T\) is an unbounded operator such as a differential operator, which is common in several modern applications. This note provides a self-contained proof of the claimed formulae for the case of a closed, densely-defined operator \(T\) acting on the sample paths of a square-integrable stochastic process. Our proof technique relies upon Hille's theorem for the Bochner integral of a Banach-valued random variable.

Published on Monday 8 May 2023 at 13:00 UTC #preprint #prob-num #gp #matsumoto

### Learning linear operators

Mattes Mollenhauer, Nicole Mücke, and I have just uploaded a preprint of our latest article, “Learning linear operators: Infinite-dimensional regression as a well-behaved non-compact inverse problem”, to the arXiv.

**Abstract.**
We consider the problem of learning a linear operator \(\theta\) between two Hilbert spaces from empirical observations, which we interpret as least squares regression in infinite dimensions. We show that this goal can be reformulated as an inverse problem for \(\theta\) with the undesirable feature that its forward operator is generally non-compact (even if \(\theta\) is assumed to be compact or of \(p\)-Schatten class). However, we prove that, in terms of spectral properties and regularisation theory, this inverse problem is equivalent to the known compact inverse problem associated with scalar response regression. Our framework allows for the elegant derivation of dimension-free rates for generic learning algorithms under Hölder-type source conditions. The proofs rely on the combination of techniques from kernel regression with recent results on concentration of measure for sub-exponential Hilbertian random variables. The obtained rates hold for a variety of practically relevant scenarios in functional regression as well as nonlinear regression with operator-valued kernels and match those of classical kernel regression with scalar response.
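As one concrete instance of the generic learning algorithms covered by this framework (the parametrisation below is my illustration, not the paper's notation), Tikhonov/ridge regularisation estimates the operator from pairs \((x_i, y_i)\) by

```latex
\[
  \hat{\theta}_{\lambda}
  \in \operatorname*{arg\,min}_{\theta}
  \; \frac{1}{n} \sum_{i=1}^{n} \lVert y_i - \theta x_i \rVert^{2}
  + \lambda \, \lVert \theta \rVert_{\mathrm{HS}}^{2} ,
\]
% with the Hilbert-Schmidt norm as the penalty and \lambda > 0 the
% regularisation parameter.
```

The rates in the paper apply to such spectral regularisation schemes despite the non-compactness of the forward operator.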

Published on Thursday 17 November 2022 at 10:00 UTC #preprint #learning #regression #mollenhauer #muecke