In two weeks Hanne Kekkonen (University of Warwick) will give a talk on “Large noise in variational regularisation”.
Time and Place. Monday 12 June 2017, 11:00–12:00, ZIB Seminar Room 2006, Zuse Institute Berlin, Takustraße 7, 14195 Berlin
Abstract. We consider variational regularisation methods for inverse problems with large noise, which is in general unbounded in the image space of the forward operator. We introduce a Banach space setting that allows one to define a reasonable notion of solutions for more general noise in a larger space, provided the forward operator has sufficient mapping properties. As examples we study the particularly important cases of one- and p-homogeneous regularisation functionals. As a natural further step we study stochastic noise models, in particular white noise, for which we derive error estimates in terms of the expectation of the Bregman distance. As an example we study the total variation prior. This is joint work with Martin Burger and Tapio Helin.
Next week Jon Cockayne (University of Warwick) will give a talk on “Probabilistic Numerics for Partial Differential Equations”.
Time and Place. Friday 14 October 2016, 12:00–13:00, ZIB Seminar Room 2006, Zuse Institute Berlin, Takustraße 7, 14195 Berlin
Abstract. Probabilistic numerics is an emerging field which constructs probability measures to capture the uncertainty arising from the discretisation that is often necessary to solve complex problems numerically. We explore probabilistic numerical methods for partial differential equations (PDEs). We phrase the solution of PDEs as a statistical inference problem and construct probability measures which quantify the epistemic uncertainty in the solution resulting from the discretisation.
We analyse these probability measures in the context of Bayesian inverse problems, parameter inference problems whose dynamics are often constrained by a system of PDEs. Sampling from parameter posteriors in such problems often involves replacing an exact likelihood with an approximate one, in which a numerical approximation is substituted for the true solution of the PDE. Such approximations have been shown to produce biased and overconfident posteriors when error in the forward solver is not tightly controlled. We show how the uncertainty from a probabilistic forward solver can be propagated into the parameter posteriors, thus permitting the use of coarser discretisations while still producing valid statistical inferences.
Jon Cockayne, Chris Oates, Tim Sullivan, and Mark Girolami. “Probabilistic Meshless Methods for Partial Differential Equations and Bayesian Inverse Problems.” arXiv preprint arXiv:1605.07811, 2016.
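In outline, the meshless construction places a Gaussian process prior on the solution u and conditions on the PDE holding at finitely many collocation points. A simplified 1D sketch for the Poisson problem -u'' = f with u(0) = u(1) = 0 (the squared-exponential kernel, length-scale, and collocation grid below are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def k(a, b, l=0.2):
    # squared-exponential kernel: cov(u(a), u(b))
    r = a[:, None] - b[None, :]
    return np.exp(-r**2 / (2 * l**2))

def k_Lb(a, b, l=0.2):
    # cov(u(a), -u''(b)): apply -d^2/db^2 to the kernel
    r = a[:, None] - b[None, :]
    return -np.exp(-r**2 / (2 * l**2)) * (r**2 - l**2) / l**4

def k_LL(a, b, l=0.2):
    # cov(-u''(a), -u''(b)): apply the operator to both arguments
    r = a[:, None] - b[None, :]
    return np.exp(-r**2 / (2 * l**2)) * (r**4 - 6 * r**2 * l**2 + 3 * l**4) / l**8

# collocation points for the PDE -u'' = f, plus boundary points for u = 0
xi = np.linspace(0.1, 0.9, 9)       # interior
xb = np.array([0.0, 1.0])           # boundary
f = np.pi**2 * np.sin(np.pi * xi)   # chosen so the true solution is sin(pi x)

# joint Gram matrix of the observations [-u''(xi); u(xb)]
K = np.block([[k_LL(xi, xi), k_Lb(xb, xi).T],
              [k_Lb(xb, xi), k(xb, xb)]])
obs = np.concatenate([f, np.zeros(2)])
jit = 1e-8 * np.eye(len(obs))       # small jitter for numerical stability

# posterior mean and covariance of u on a prediction grid
xs = np.linspace(0, 1, 101)
Ks = np.hstack([k_Lb(xs, xi), k(xs, xb)])
mean = Ks @ np.linalg.solve(K + jit, obs)
cov = k(xs, xs) - Ks @ np.linalg.solve(K + jit, Ks.T)

print(np.max(np.abs(mean - np.sin(np.pi * xs))))
```

The posterior mean interpolates the PDE constraints, while the posterior covariance `cov` carries the discretisation uncertainty that shrinks as collocation points are added — the quantity propagated into the inverse-problem posteriors above.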
Time and Place. Tuesday 14 June 2016, 12:15–13:15, ZIB Seminar Room 2006, Zuse Institute Berlin, Takustraße 7, 14195 Berlin
Abstract. In an ongoing push to construct probabilistic extensions of classic ODE solvers for application in statistics and machine learning, two recent papers have provided distinct methods that return probability measures instead of point estimates, based on sampling and filtering respectively. While both approaches leverage classical numerical analysis by building on well-studied, seminal solvers, the different constructions of probability measures strike a divergent balance between a formal quantification of epistemic uncertainty and a low computational overhead.
On the one hand, Conrad et al. proposed to randomise existing non-probabilistic one-step solvers by adding suitably scaled Gaussian noise after every step, thereby inducing a probability measure over the solution space of the ODE which contracts to a Dirac measure on the true unknown solution at the convergence rate of the underlying classical numerical method. However, the computational cost of these methods is significantly higher than that of classic solvers.
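A minimal sketch of this randomisation idea, under illustrative assumptions (forward Euler as the base solver, perturbations scaled as h^{3/2}; the scaling constant and test equation are my own choices, not from the talk):

```python
import numpy as np

rng = np.random.default_rng(1)

def f(y):
    return -y  # test ODE y' = -y, y(0) = 1, with solution y(t) = exp(-t)

def randomised_euler(y0, h, n_steps, scale):
    # forward Euler with an additive Gaussian perturbation after each step;
    # scaling the noise like h^{3/2} keeps it at the order of the local error
    y = y0
    path = [y]
    for _ in range(n_steps):
        y = y + h * f(y) + scale * h**1.5 * rng.standard_normal()
        path.append(y)
    return np.array(path)

h, T = 0.01, 1.0
n_steps = round(T / h)
samples = np.array([randomised_euler(1.0, h, n_steps, scale=1.0)[-1]
                    for _ in range(200)])
print(samples.mean(), samples.std())  # an ensemble scattered around exp(-1)
```

Each repeated run gives one draw from the induced measure over solutions, so the cost is that of many classic solves — the overhead the abstract refers to.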
On the other hand, Schober et al. recast the estimation of the solution as state estimation by a Gaussian (Kalman) filter and proved that employing an integrated Wiener process prior returns a posterior Gaussian process whose maximum likelihood (ML) estimate matches the solution of classic Runge–Kutta methods. In an attempt to amend this method's rough uncertainty calibration while sustaining its negligible cost overhead, we propose a novel way to quantify uncertainty in this filtering framework by probing the gradient using Bayesian quadrature.
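To give a flavour of the filtering construction (a simplified sketch with a once-integrated Wiener process prior and a zeroth-order linearisation; the step size, prior scale, and test equation are illustrative assumptions, and the Bayesian quadrature calibration proposed in the talk is omitted):

```python
import numpy as np

def f(y):
    return -y  # test ODE y' = -y, y(0) = 1

# State (y, y') with a once-integrated Wiener process prior:
h, sigma2 = 0.05, 1.0
A = np.array([[1.0, h], [0.0, 1.0]])            # transition over one step
Q = sigma2 * np.array([[h**3 / 3, h**2 / 2],
                       [h**2 / 2, h]])          # process noise covariance
H = np.array([[0.0, 1.0]])                      # observe the derivative y'

m = np.array([1.0, f(1.0)])   # initial mean: y(0) and its exact derivative
P = np.zeros((2, 2))

ys = [m[0]]
for _ in range(20):           # 20 steps of size h, integrating to t = 1
    # predict
    m, P = A @ m, A @ P @ A.T + Q
    # update with the pseudo-observation y'(t) = f(y_pred), noise-free
    z = f(m[0]) - H @ m
    S = H @ P @ H.T
    K = P @ H.T / S
    m = m + (K * z).ravel()
    P = P - K @ H @ P
    ys.append(m[0])

print(abs(ys[-1] - np.exp(-1.0)))  # error of the filtering mean at t = 1
```

The filtering mean tracks the solution at the accuracy of a classic low-order method, while the posterior covariance P carries the (roughly calibrated) uncertainty that the gradient-probing step aims to improve.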
This week Steven Niederer (King's College London) will talk about “Linking physiology and cardiology through mathematical models”.
Time and Place. Thursday 28 April 2016, 11:00–12:00, Room 4027 of the Zuse Institute Berlin, Takustraße 7, 14195 Berlin
Abstract. Much effort has gone into the analysis of cardiac function using mathematical and computational models. To fully realise the potential of these studies requires the translation of these models into clinical applications to aid in diagnosis and clinical planning.
Achieving this goal requires the integration of multiple disparate clinical data sets into a common modelling framework. To this end we have developed a coupled electro-mechanics model of the human heart. This model combines patient-specific anatomical geometry, active contraction, electrophysiology, tissue heterogeneities, and boundary conditions fitted to comprehensive imaging and clinical catheter measurements.
This multi-scale computational model allows us to link subcellular mechanisms to whole-organ function. This provides a novel tool to determine the mechanisms that underpin treatment outcomes and offers the ability to determine hidden variables that provide new metrics of cardiac function. Specifically, we report on the application of these methods in patients receiving cardiac resynchronisation therapy and ablation for atrial fibrillation.
Ingmar Schuster (Université Paris-Dauphine) will give a talk on “Gradient Importance Sampling”.
Time and Place. Friday 11 March 2016, 11:15–12:45, Room 126 of Arnimallee 6 (Pi-Gebäude), 14195 Berlin
Abstract. Adaptive Monte Carlo schemes developed over recent years usually seek to ensure ergodicity of the sampling process, in line with MCMC tradition. This imposes constraints on what is possible in terms of adaptation: in the general case, ergodicity can only be guaranteed if adaptation is diminished at a certain rate. Importance sampling approaches offer a way to circumvent this limitation and to design sampling algorithms that keep adapting. Here I present an adaptive variant of the discretised Langevin algorithm for estimating integrals with respect to some target density, one that uses an importance sampling correction instead of the usual Metropolis–Hastings correction.
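A minimal sketch of the core idea (my own illustrative reduction, not the speaker's full adaptive scheme): draw each proposal from an unadjusted discretised Langevin step, and weight it by target density over proposal density instead of accepting or rejecting. The step size here is deliberately large so that the Gaussian proposal is wider than the standard normal target, which keeps the importance weights well behaved:

```python
import numpy as np

rng = np.random.default_rng(2)

def log_pi(x):
    return -0.5 * x**2    # unnormalised log target: standard normal

def grad_log_pi(x):
    return -x

eps = 1.5                 # large step: proposal must be wider than the target
x = 0.0
xs, logw = [], []
for _ in range(5000):
    # unadjusted Langevin proposal: drift along the gradient, then diffuse
    mu = x + 0.5 * eps**2 * grad_log_pi(x)
    x_new = mu + eps * rng.standard_normal()
    # importance weight pi(x_new) / q(x_new | x) replaces the MH accept step
    logq = -0.5 * ((x_new - mu) / eps) ** 2
    xs.append(x_new)
    logw.append(log_pi(x_new) - logq)
    x = x_new

w = np.exp(np.array(logw) - np.max(logw))
w /= w.sum()
xs = np.array(xs)
print(xs @ w)             # self-normalised estimate of E[X]   (≈ 0)
print((xs**2) @ w)        # self-normalised estimate of E[X^2] (≈ 1)
```

Because the target is accounted for by the weights rather than by an accept/reject step, proposal parameters such as `eps` could in principle keep adapting without endangering consistency — the limitation of diminishing adaptation that the abstract describes does not apply.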