2024 Seminars
May 7, 2024
An inverse problem is the task of retrieving an unknown quantity from indirect observations. When the model describing the measurement acquisition is linear, this amounts to the inversion of a linear operator (a matrix, in a discrete formulation), which is usually ill-posed or ill-conditioned. A common strategy to tackle ill-posedness in inverse problems is to use regularizers, which are (families of) operators providing a stable approximation of the inverse map. Model-based regularization techniques often leverage prior knowledge of the exact solution, such as smoothness or sparsity with respect to a suitable representation; on the other hand, in recent years many data-driven methods have been developed in the context of machine learning. These techniques approximate the inverse operator within suitable spaces of parametric functions (e.g., neural networks) and rely on large datasets of paired measurements and ground-truth objects. In this talk, I will focus on hybrid strategies, which aim to blend model-based and data-driven approaches, providing both satisfactory numerical results and sound theoretical guarantees. I will describe a general framework that encompasses many existing techniques within the theory of statistical learning, and report some recent theoretical advances (in the direction of generalization guarantees). I will illustrate the discussion with some relevant examples in the context of medical imaging and, specifically, computed tomography.
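To make the role of regularization concrete, here is a minimal numerical sketch (not material from the talk): it builds a synthetic ill-conditioned linear forward operator and contrasts naive inversion with Tikhonov regularization, a classical model-based regularizer of the kind mentioned in the abstract. The operator, noise level, and parameter alpha are illustrative assumptions.

```python
# Minimal sketch: naive inversion vs. Tikhonov regularization on a
# synthetic ill-conditioned linear inverse problem (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
n = 100

# Forward operator A = U diag(s) V^T with singular values spanning
# 8 orders of magnitude: invertible, but severely ill-conditioned.
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.logspace(0, -8, n)
A = U @ np.diag(s) @ V.T

# Ground-truth object whose coefficients decay in the right singular basis
# of A (mimicking, e.g., smooth objects measured by a smoothing operator).
x_true = V @ (1.0 / (1.0 + np.arange(n)))
y = A @ x_true + 1e-4 * rng.standard_normal(n)   # noisy indirect observations

# Naive inversion: noise components along the small singular values are
# amplified by up to 1e8, so the reconstruction is useless.
x_naive = np.linalg.solve(A, y)

# Tikhonov regularization: x_alpha = argmin_x ||A x - y||^2 + alpha ||x||^2,
# computed via the normal equations (A^T A + alpha I) x_alpha = A^T y.
alpha = 1e-4
x_tikhonov = np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

def rel_err(x):
    return np.linalg.norm(x - x_true) / np.linalg.norm(x_true)

print(f"condition number of A   : {np.linalg.cond(A):.1e}")
print(f"relative error, naive   : {rel_err(x_naive):.1e}")
print(f"relative error, Tikhonov: {rel_err(x_tikhonov):.1e}")
```

In this toy setting the regularization parameter alpha trades data fidelity for stability; the hybrid strategies discussed in the talk replace or complement such hand-crafted penalties with components learned from paired data.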