Abstract: The numerical solution of inverse problems is an important yet challenging task. Available data is often indirect, noisy, and limited in quantity, making the inverse problem ill-posed. Quantifying the resulting uncertainty in the solution is an essential part of the task. Uncertainty in the solution, in turn, drives uncertainty in predictions. In the Bayesian approach to inverse problems, the solution and the available data are consequently treated as random variables. The goal of computation then is to characterize the posterior distribution, whose density is connected to the prior and likelihood densities, which encode our prior belief about the solution and the data model, respectively.
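Concretely, writing $x$ for the solution and $y$ for the data, the connection mentioned above is the standard Bayes' theorem:

```latex
\pi(x \mid y) \;\propto\; \pi(y \mid x)\,\pi(x),
```

where $\pi(x)$ is the prior density and $\pi(y \mid x)$ is the likelihood.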
In this talk, we discuss hierarchical Bayesian models that promote some linear transformation of the solution (e.g., corresponding to edges in an image) to be sparse. Particular focus will be given to conditionally Gaussian priors, which allow for developing efficient computational inference algorithms (at least for MAP estimates). Finally, we will address some potential future applications and open problems.
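As a rough illustration of why conditionally Gaussian priors are computationally convenient, the sketch below (not the speaker's exact algorithm) computes a MAP estimate for a linear model y = A x + noise with a prior (L x)_i | theta_i ~ N(0, theta_i). For fixed hyperparameters theta, the x-update is just a weighted Tikhonov solve; the theta-update used here is a placeholder for the closed-form update that a specific hyperprior would supply. All parameter names and values are illustrative assumptions.

```python
import numpy as np

def map_alternating(A, L, y, sigma=0.01, beta=1.0, theta_min=1e-8, iters=50):
    """Alternating-minimization sketch of MAP estimation with a
    conditionally Gaussian sparsity-promoting prior on L @ x.
    The theta-update is a simplified, illustrative choice."""
    m, n = A.shape
    theta = np.ones(L.shape[0])
    x = np.zeros(n)
    for _ in range(iters):
        # x-step: minimize ||A x - y||^2 / (2 sigma^2) + sum_i (L x)_i^2 / (2 theta_i),
        # i.e. solve (A^T A / sigma^2 + L^T diag(1/theta) L) x = A^T y / sigma^2.
        H = A.T @ A / sigma**2 + L.T @ (L / theta[:, None])
        x = np.linalg.solve(H, A.T @ y / sigma**2)
        # theta-step (illustrative): larger |(L x)_i| gets a larger variance,
        # so active components are penalized less; theta_min keeps H well posed.
        theta = (L @ x) ** 2 / (2.0 * beta) + theta_min
    return x, theta

# Toy example: recover a piecewise-constant signal from noisy random
# projections, with L a first-difference operator so that L x is sparse.
rng = np.random.default_rng(0)
n = 60
x_true = np.zeros(n)
x_true[20:40] = 1.0
A = rng.standard_normal((40, n)) / np.sqrt(40)
y = A @ x_true + 0.01 * rng.standard_normal(40)
L = np.diff(np.eye(n), axis=0)  # (n-1) x n first-difference matrix
x_map, theta = map_alternating(A, L, y)
```

The alternating structure is the point: each subproblem is quadratic in x because the prior is Gaussian *conditioned on* theta, which is what makes such hierarchical models tractable.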
Bio: Jan Glaubitz is currently a postdoctoral research associate and lecturer in the Department of Mathematics at Dartmouth College, working with Anne Gelb. His research focuses on advancing foundational computational methodologies in numerical conservation laws and Bayesian inverse problems. He enjoys working at the intersection of theoretical numerical analysis (provable approximation, convergence, and stability results), method development (preferably using theoretical insights), and uncertainty quantification (quantifying confidence in computational predictions).
Before joining Dartmouth in 2020, he obtained a Dr. rer. nat. in Mathematics from the Technical University Braunschweig in Germany. His dissertation was on high-order numerical methods and shock capturing for hyperbolic conservation laws.