June 09, 2024
Numerical Analysis Seminar
At 08:00
Belmeloro building
Abstract: In this mini-course, we show how various forms of supervised learning can be recast as optimization problems over suitable function spaces, subject to regularity constraints. Our family of regularization functionals has two components: (1) a regularization operator, which can be composed with an optional projection mechanism (Radon transform), and (2) a (semi-)norm, which may be Hilbertian (RKHS) or sparsity-promoting (total variation). By invoking an abstract representer theorem, we obtain an explicit parametrization of the extremal points of the solution set. The latter translates into a concrete neural architecture and training procedure. We demonstrate the use of this variational formalism on a variety of examples, including several variants of spline-based regression. We also draw connections with classical kernel-based techniques and modern ReLU neural networks. Finally, we show how our framework is applicable to the learning of non-linearities in deep and not-so-deep networks.
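As a small illustration of the Hilbertian (RKHS) case mentioned in the abstract, the representer theorem guarantees that the minimizer of a regularized empirical risk is a finite linear combination of kernel sections centered at the data points. The following sketch shows this with kernel ridge regression; the Gaussian kernel, the toy data, and all parameter values are illustrative assumptions, not the speaker's setup.

```python
import numpy as np

def gaussian_kernel(x, y, gamma=50.0):
    # Pairwise Gaussian kernel matrix K[i, j] = exp(-gamma * (x_i - y_j)^2).
    # The kernel choice and bandwidth are assumptions for this sketch.
    d2 = (x[:, None] - y[None, :]) ** 2
    return np.exp(-gamma * d2)

def fit_krr(x_train, y_train, lam=1e-3, gamma=50.0):
    # Representer theorem: the RKHS minimizer has the form
    # f(x) = sum_i a_i k(x, x_i); the coefficients solve (K + lam*I) a = y.
    K = gaussian_kernel(x_train, x_train, gamma)
    return np.linalg.solve(K + lam * np.eye(len(x_train)), y_train)

def predict(x_new, x_train, a, gamma=50.0):
    # Evaluate the finite kernel expansion at new points.
    return gaussian_kernel(x_new, x_train, gamma) @ a

# Toy 1-D regression problem: noisy samples of a sine wave.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 30)
y = np.sin(2 * np.pi * x) + 0.05 * rng.standard_normal(30)

a = fit_krr(x, y)
residual = np.max(np.abs(predict(x, x, a) - y))
print(residual)  # small training residual, controlled by lam
```

The key point mirrored from the abstract: an infinite-dimensional variational problem over the RKHS collapses, via the representer theorem, to solving one finite linear system in as many unknowns as data points.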
Back to the seminars page of the Department of Mathematics of Bologna