Assistant Professor of Applied Mathematics, University of Washington

Details

Statistical and probabilistic methods are promising approaches to solving inverse problems – the process of recovering unknown parameters from indirect measurements. Among these, Bayesian methods provide a principled way to incorporate our existing beliefs about the parameters (the prior model) as well as randomness in the data, and they are the subject of extensive current research. Gaussian prior models dominate Bayesian inverse problems because they yield mathematically simple and computationally efficient formulations of important inverse problems. Unfortunately, these priors fail to capture a range of important properties, including sparsity and natural constraints such as positivity, which motivates the study of non-Gaussian priors. In this talk we introduce the theory of well-posed Bayesian inverse problems with non-Gaussian priors in infinite dimensions. We show that the well-posedness of a Bayesian inverse problem relies on a balance between the growth rate of the forward map and the tail decay of the prior. We then turn to a concrete application of non-Gaussian priors: the recovery of sparse or compressible parameters. We construct new classes of prior measures based on the Gamma distribution and develop a Markov chain Monte Carlo algorithm for exploring the posterior measures that arise from these compressible priors in infinite dimensions.
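To give a feel for the ingredients the abstract mentions – a forward map, a sparsity-promoting non-Gaussian prior, and Markov chain Monte Carlo exploration of the posterior – here is a minimal finite-dimensional sketch. Everything in it is illustrative and hypothetical: the problem is a toy linear model, a simple Laplace prior stands in for the Gamma-based prior measures constructed in the talk, and plain random-walk Metropolis-Hastings stands in for the speaker's infinite-dimensional algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear inverse problem y = A x + noise (illustrative only; the talk's
# setting is infinite-dimensional and uses more sophisticated priors/samplers).
n, m = 20, 8                        # number of measurements, number of unknowns
A = rng.normal(size=(n, m))         # forward map (linear here)
x_true = np.zeros(m)
x_true[[1, 5]] = [2.0, -1.5]        # sparse ground-truth parameter
sigma = 0.1                         # noise standard deviation
y = A @ x_true + sigma * rng.normal(size=n)

def log_posterior(x):
    """Gaussian log-likelihood plus an ell_1-type sparsity-promoting log-prior.
    (A Laplace prior is a simple stand-in for the Gamma-based compressible
    priors discussed in the talk.)"""
    misfit = y - A @ x
    log_lik = -0.5 * np.sum(misfit**2) / sigma**2
    log_prior = -np.sum(np.abs(x))   # heavier tails than a Gaussian; favors sparsity
    return log_lik + log_prior

# Random-walk Metropolis-Hastings exploration of the posterior.
x = np.zeros(m)
lp = log_posterior(x)
samples = []
for _ in range(20000):
    prop = x + 0.05 * rng.normal(size=m)          # symmetric proposal
    lp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < lp_prop - lp:       # Metropolis accept/reject
        x, lp = prop, lp_prop
    samples.append(x)

# Posterior mean estimate after discarding burn-in.
post_mean = np.mean(samples[5000:], axis=0)
```

The posterior mean concentrates near the sparse truth because the likelihood pins down the large entries while the heavy-tailed prior shrinks the rest toward zero – the finite-dimensional shadow of the compressibility behavior the talk develops rigorously.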