Can uncertainty be quantified? On confident hallucinations in deep learning-based methods for inverse problems

Ben Adcock
SFU

Deep learning is currently transforming how inverse problems arising in image reconstruction are solved. However, it is increasingly well known that such deep learning-based methods are susceptible to hallucinations. In this talk, I will present a series of theoretical explanations for why hallucinations occur, in both deterministic and statistical estimators. I will conclude by observing that hallucinations can only be avoided by careful design of the forward operator in tandem with the recovery algorithm, and then provide a theoretical framework for how this can be achieved when solving inverse problems using generative models.