The third SIAM Conference on Uncertainty Quantification (SIAM UQ16) will take place at the SwissTech Convention Center in Lausanne, Switzerland, this week, 5–8 April 2016.
As part of this conference, Mark Girolami and I will organise a mini-symposium on “Over-confidence in numerical predictions: challenges and solutions” (MS138 and MS153), which will feature a wide range of perspectives, including Bayesian and frequentist (in)consistency, probabilistic numerics, and application fields.
Published on Sunday 3 April 2016 at 21:00 UTC #event
Anirban Mukhopadhyay will give a short course on Machine Learning in Image Analysis on 29–31 March 2016 at the Zuse Institute Berlin.
Time and Place. 10:00–12:00 on 29, 30, and 31 March 2016, ZIB Lecture Hall.
For further information, see the course web-page at www.zib.de/MLIA.
Published on Tuesday 22 March 2016 at 10:00 UTC #event
From 12 to 16 September 2016, there will be a summer school on uncertainty quantification, jointly organised by the GAMM Activity Group on Uncertainty Quantification, the International Research and Training Group Munich-Graz, and WIAS. The invited lecturers are:
- Howard Elman (U Maryland): Reduced Basis Methods for Random PDEs
- Lars Grasedyck (RWTH Aachen): Sparse Tensor Approximations
- Claudia Schillings (U Warwick): Ensemble Kalman Filters and MCMC for Bayesian Inversion
- Raul Tempone (KAUST): Multilevel and Multi-index Monte Carlo Methods
- Bruno Sudret (ETH Zürich): Engineering Risk Analysis with UQLab
Contact the organisers at email@example.com for further information.
Published on Thursday 17 March 2016 at 16:00 UTC #event
Ingmar Schuster (Université Paris-Dauphine) “Gradient Importance Sampling”
Time and Place. Friday 11 March 2016, 11:15–12:45, Room 126 of Arnimallee 6 (Pi-Gebäude), 14195 Berlin
Abstract. Adaptive Monte Carlo schemes developed over recent years usually seek to ensure ergodicity of the sampling process, in line with MCMC tradition. This poses constraints on what is possible in terms of adaptation: in the general case, ergodicity can be guaranteed only if adaptation is diminished at a certain rate. Importance sampling approaches offer a way to circumvent this limitation and to design sampling algorithms that keep adapting. Here I present an adaptive variant of the discretized Langevin algorithm for estimating integrals with respect to a target density, which uses an importance sampling correction instead of the usual Metropolis–Hastings correction.
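To make the idea in the abstract concrete, here is a minimal, hypothetical sketch (not the speaker's actual algorithm or code): points are proposed by an unadjusted discretized Langevin step, and rather than accepting or rejecting them Metropolis–Hastings-style, each point is importance-weighted by the ratio of the target density to the Gaussian proposal density of the Langevin transition. All function and parameter names below are illustrative assumptions.

```python
import numpy as np

def langevin_is(log_target, grad_log_target, x0, n_steps, eps, rng=None):
    """Self-normalized importance sampling with unadjusted Langevin proposals.

    Each proposal is drawn from N(x + eps * grad_log_target(x), 2*eps*I),
    i.e. a discretized Langevin step, and weighted by
    target density / proposal density instead of a Metropolis-Hastings
    accept/reject.  Illustrative sketch only.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    d = x.size
    samples, log_w = [], []
    for _ in range(n_steps):
        mean = x + eps * grad_log_target(x)               # Langevin drift
        prop = mean + np.sqrt(2 * eps) * rng.standard_normal(d)
        # log density of the Gaussian proposal N(prop; mean, 2*eps*I)
        log_q = (-np.sum((prop - mean) ** 2) / (4 * eps)
                 - 0.5 * d * np.log(4 * np.pi * eps))
        samples.append(prop)
        log_w.append(log_target(prop) - log_q)            # importance weight
        x = prop                                          # chain keeps moving
    log_w = np.array(log_w)
    w = np.exp(log_w - log_w.max())                       # stabilised weights
    return np.array(samples), w / w.sum()

# Example: estimate the mean of N(3, 1) from its unnormalised log-density.
log_p = lambda x: -0.5 * np.sum((x - 3.0) ** 2)
grad_log_p = lambda x: -(x - 3.0)
xs, w = langevin_is(log_p, grad_log_p, x0=0.0, n_steps=5000, eps=0.5,
                    rng=np.random.default_rng(0))
est = float(np.sum(w * xs[:, 0]))                         # close to 3
```

Because each weight corrects the proposal density exactly, the self-normalized estimate remains consistent even if the proposal parameters (here, `eps`) were adapted at every step, which is the advantage over diminishing-adaptation MCMC that the abstract alludes to.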