### #preprint

### Autoencoders in function space

Justin Bunker, Mark Girolami, Hefin Lambley, Andrew Stuart and I have just uploaded a preprint of our paper “Autoencoders in function space” to the arXiv.

**Abstract.** Autoencoders have found widespread application, in both their original deterministic form and in their variational formulation (VAEs). In scientific applications it is often of interest to consider data that consist of functions; the same perspective is useful in image processing. In practice, discretisation (of differential equations arising in the sciences) or pixellation (of images) renders problems finite dimensional, but conceiving first of algorithms that operate on functions, and only then discretising or pixellating, leads to better algorithms that smoothly operate between different levels of discretisation or pixellation. In this paper function-space versions of the autoencoder (FAE) and variational autoencoder (FVAE) are introduced, analysed, and deployed. Well-definedness of the objective function governing VAEs is a subtle issue, even in finite dimension, and more so on function space. The FVAE objective is well defined whenever the data distribution is compatible with the chosen generative model; this happens, for example, when the data arise from a stochastic differential equation. The FAE objective is valid much more broadly, and can be straightforwardly applied to data governed by differential equations. Pairing these objectives with neural operator architectures, which can thus be evaluated on any mesh, enables new applications of autoencoders to inpainting, superresolution, and generative modelling of scientific data.
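The "evaluated on any mesh" point can be illustrated with a toy sketch. To be clear, this is only an illustration of discretisation invariance, not the FAE/FVAE architecture or any code from the paper; the sine test functions and trapezoidal quadrature are my own choices:

```python
import numpy as np

def quad(y, x):
    """Trapezoidal rule on an arbitrary (possibly non-uniform) grid."""
    dx = np.diff(x)
    return np.sum(0.5 * (y[1:] + y[:-1]) * dx)

def encode(x_grid, u_values, n_latent=4):
    """Map a function, given by its values on any grid, to a latent vector
    by quadrature against fixed test functions (here: sines)."""
    return np.array([
        quad(u_values * np.sin((j + 1) * np.pi * x_grid), x_grid)
        for j in range(n_latent)
    ])

u = lambda x: np.sin(np.pi * x)

coarse = np.linspace(0.0, 1.0, 50)
fine = np.linspace(0.0, 1.0, 500)
z_coarse = encode(coarse, u(coarse))
z_fine = encode(fine, u(fine))
# Because the encoder is defined on functions and only then discretised,
# the two latent vectors agree up to quadrature error, whatever the mesh.
```

The design point is that the map is specified at the level of functions (inner products against test functions), so the mesh enters only through the quadrature rule.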

Published on Monday 5 August 2024 at 12:00 UTC #preprint #bunker #girolami #lambley #stuart #autoencoders

### UQ for Si using interatomic potentials

Iain Best, James Kermode, and I have just uploaded a preprint of our paper “Uncertainty quantification in atomistic simulations of silicon using interatomic potentials” to the arXiv.

**Abstract.** Atomistic simulations often rely on interatomic potentials to access greater time- and length-scales than those accessible to first-principles methods such as density functional theory (DFT). However, since a parameterised potential typically cannot reproduce the true potential energy surface of a given system, we should expect a decrease in accuracy and an increase in error in quantities of interest calculated from simulations. Quantifying the uncertainty on the outputs of atomistic simulations is thus an important and necessary step, both so that there is confidence in the results and so that metrics are available with which to explore improvements to the simulations. Here, we address this challenge by forming ensembles of Atomic Cluster Expansion (ACE) potentials, and using conformal prediction with DFT training data to provide meaningful, calibrated error bars on several quantities of interest for silicon: the bulk modulus, elastic constants, relaxed vacancy formation energy, and the vacancy migration barrier. We evaluate the effects on uncertainty bounds of using a range of different potentials and training sets.
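As a rough illustration of the calibration idea, here is a minimal split conformal prediction sketch on synthetic data. The model, noise level, and data are all invented for illustration; the paper's pipeline uses ACE potentials and DFT data, not this toy:

```python
import numpy as np

rng = np.random.default_rng(0)

def model(x):
    # Hypothetical fitted predictor standing in for a trained potential.
    return 1.5 * x

# Held-out calibration set: "ground-truth" values with observation noise.
x_cal = rng.uniform(0.0, 1.0, 200)
y_cal = 1.5 * x_cal + rng.normal(0.0, 0.1, 200)

# Split conformal prediction: nonconformity scores are absolute residuals.
scores = np.abs(y_cal - model(x_cal))
alpha = 0.1  # target 90% coverage
n = len(scores)
# Finite-sample-corrected empirical quantile of the scores.
q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

# Calibrated prediction interval for a new input: model(x_new) ± q.
x_new = 0.5
lo, hi = model(x_new) - q, model(x_new) + q
```

Under exchangeability of calibration and test data, the interval \( \hat{y} \pm q \) has distribution-free coverage of at least \(1 - \alpha\), which is what makes the resulting error bars "calibrated".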

Published on Saturday 24 February 2024 at 12:00 UTC #preprint #kermode #best #uq #interatomic-potentials

### Hille’s theorem for locally convex spaces

I have just uploaded a preprint of the paper “Hille's theorem for Bochner integrals of functions with values in locally convex spaces” to the arXiv. This paper extends the relatively well-known result that Bochner integration commutes with closed operators from the classical Banach space setting to the case of more general locally convex spaces.
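For context, the classical Banach-space statement being generalised is the following (my paraphrase): if \(T \colon \mathrm{dom}(T) \subseteq X \to Y\) is a closed linear operator between Banach spaces and \(f \colon \Omega \to \mathrm{dom}(T)\) is such that both \(f\) and \(T f\) are Bochner integrable with respect to a measure \(\mu\) on \(\Omega\), then \(\int_{\Omega} f \, \mathrm{d}\mu \in \mathrm{dom}(T)\) and

\[ T \int_{\Omega} f \, \mathrm{d}\mu = \int_{\Omega} T f \, \mathrm{d}\mu . \]

The closedness of \(T\) is what allows the operator to be passed through the limit of the approximating simple-function integrals.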

Published on Thursday 18 January 2024 at 09:00 UTC #preprint #functional-analysis #hille-theorem

### A periodic table of modes and MAP estimators

Ilja Klebanov and I have just uploaded a preprint of our paper “A ‘periodic table’ of modes and maximum a posteriori estimators” to the arXiv.

This paper forms part of the growing body of work on the ‘small balls’ theory of modes for probability measures on metric spaces, which is needed e.g. for the treatment of MAP estimation for Bayesian inverse problems with functional unknowns. There are already several versions in the literature: the strong mode, the weak mode, and the generalised strong mode. We take an axiomatic approach to the problem and identify a system of 17 essentially distinct notions of mode, proving implications between them and providing explicit counterexamples to distinguish them. From an axiomatic point of view, all these 17 seem to be ‘equally good’, suggesting that further research is needed in this area.

**Abstract.**
The last decade has seen many attempts to generalise the definition of modes, or MAP estimators, of a probability distribution \(\mu\) on a space \(X\) to the case that \(\mu\) has no continuous Lebesgue density, and in particular to infinite-dimensional Banach and Hilbert spaces \(X\). This paper examines the properties of and connections among these definitions. We construct a systematic taxonomy – or ‘periodic table’ – of modes that includes the established notions as well as large hitherto-unexplored classes. We establish implications between these definitions and provide counterexamples to distinguish them. We also distinguish those definitions that are merely ‘grammatically correct’ from those that are ‘meaningful’ in the sense of satisfying certain ‘common-sense’ axioms for a mode, among them the correct handling of discrete measures and those with continuous Lebesgue densities. However, despite there being 17 such ‘meaningful’ definitions of mode, we show that none of them satisfy the ‘merging property’, under which the modes of \(\mu|_{A}\), \(\mu|_{B}\), and \(\mu|_{A \cup B}\) enjoy a straightforward relationship for well-separated positive-mass events \( A, B \subseteq X\).
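To make the failing ‘merging property’ concrete: writing \(\mathrm{modes}(\nu)\) for the set of modes of a measure \(\nu\), the kind of relationship one would hope for (my informal paraphrase, not the paper's precise definition) is that, for well-separated events \(A, B \subseteq X\) of positive mass,

\[ \mathrm{modes}(\mu|_{A \cup B}) \subseteq \mathrm{modes}(\mu|_{A}) \cup \mathrm{modes}(\mu|_{B}), \]

with the surviving modes singled out by comparing the small-ball masses that the two restrictions place near their respective modes. Strikingly, none of the 17 ‘meaningful’ definitions guarantees such behaviour.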

Published on Monday 17 July 2023 at 09:00 UTC #preprint #modes #map-estimators #klebanov

### Unbounded images of Gaussian and other stochastic processes

Tadashi Matsumoto and I have just uploaded a preprint of our note “Images of Gaussian and other stochastic processes under closed, densely-defined, unbounded linear operators” to the arXiv.

The purpose of this note is to provide a self-contained rigorous proof of the well-known formula for the mean and covariance function of a stochastic process — in particular, a Gaussian process — when it is acted upon by an *unbounded* linear operator such as an ordinary or partial differential operator, as used in probabilistic approaches to the solution of ODEs and PDEs.
This result is easy to establish in the case of a bounded operator, but the unbounded case requires a careful application of Hille's theorem for the Bochner integral of a Banach-valued random variable.
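For reference, the formulae in question are the following (my notation; the note makes precise the domain and closedness conditions under which they hold). If \(u\) has mean function \(m\) and covariance kernel \(k\), and \(v = Tu\), then

\[ \mathbb{E}[v(s)] = (T m)(s), \qquad \mathrm{Cov}\big( v(s), v(t) \big) = (T_{s} T_{t} k)(s, t), \]

where \(T_{s}\) and \(T_{t}\) denote \(T\) acting on the first and second arguments of \(k\) respectively. For example, with \(T = \mathrm{d}/\mathrm{d}x\), the derivative process has covariance \(\partial^{2} k(s, t) / \partial s \, \partial t\).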

**Abstract.**
Gaussian processes (GPs) are widely used tools in spatial statistics and machine learning, and the formulae for the mean function and covariance kernel of a GP \(v\) that is the image of another GP \(u\) under a linear transformation \(T\) acting on the sample paths of \(u\) are well known, almost to the point of being folklore. However, these formulae are often used without rigorous attention to technical details, particularly when \(T\) is an unbounded operator such as a differential operator, which is common in several modern applications. This note provides a self-contained proof of the claimed formulae for the case of a closed, densely-defined operator \(T\) acting on the sample paths of a square-integrable stochastic process. Our proof technique relies upon Hille's theorem for the Bochner integral of a Banach-valued random variable.

Published on Monday 8 May 2023 at 13:00 UTC #preprint #prob-num #gp #matsumoto