List of seminars in the seminar series
“SEMINARS IN MATHEMATICAL PHYSICS AND BEYOND”
December 12, 2024
Alexander Zlokapa
in the seminar series: SEMINARS IN MATHEMATICAL PHYSICS AND BEYOND
Mathematical physics seminar
In classical algorithms, tools such as the overlap gap property and free energy barrier are used to provide lower bounds for algorithms that are local, stable, or low-degree. In this talk, we review quantum algorithms for Gibbs sampling and show that they face analogous obstructions due to a general quantum bottleneck lemma. When applied to Metropolis-like algorithms and classical Hamiltonians, our result reproduces classical slow mixing arguments. Unlike previous techniques to bound mixing times of quantum Gibbs samplers, however, our bottleneck lemma provides bounds for non-commuting Hamiltonians. We apply it to systems such as random classical CSPs, quantum code Hamiltonians, and the transverse field Ising model. Key to our work are two notions of distance, which we use to measure the locality of quantum samplers and to construct the bottleneck.
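For orientation (this is standard background on classical Markov chains, not a statement from the talk), the classical bottleneck ratio bound already illustrates the mechanism: a set B of configurations with small boundary weight forces a reversible chain P with stationary distribution π to mix slowly,
\[
\Phi(B) \;=\; \frac{\sum_{x \in B,\; y \notin B} \pi(x)\, P(x,y)}{\pi(B)},
\qquad
t_{\mathrm{mix}} \;\ge\; \frac{1}{4\,\Phi(B)}
\quad \text{for any } B \text{ with } \pi(B) \le \tfrac{1}{2}.
\]
The quantum bottleneck lemma discussed in the talk plays the analogous role for quantum Gibbs samplers, with suitable notions of distance replacing the classical boundary between B and its complement.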
December 17, 2024
Francesco Camilli
in the seminar series: SEMINARS IN MATHEMATICAL PHYSICS AND BEYOND
Mathematical physics seminar, interdisciplinary
Matrix denoising is central to signal processing and machine learning. Its analysis when the matrix to infer has a factorised structure, with a rank growing proportionally to its dimension, remains a challenge except when the matrix is rotationally invariant. In that case the information-theoretically optimal estimator, the rotationally invariant estimator, is known and its performance is rigorously controlled. Beyond this setting few results are available. The reason is that the model is not a usual spin system, because of the growing rank, nor a matrix model, because of the lack of rotation symmetry, but rather a hybrid between the two: a "matrix glass". In this talk I shall illustrate our progress towards understanding Bayesian matrix denoising when the hidden signal is a factored matrix XX⊺ that is not rotationally invariant. Monte Carlo simulations suggest the existence of a denoising-factorisation transition separating a phase where denoising with the rotationally invariant estimator remains optimal, due to universality properties of the same nature as in random matrix theory, from one where universality breaks down and better denoising is possible by exploiting the signal's prior and factorised structure, although this is algorithmically hard. We also argue that it is only beyond the transition that factorisation, i.e. estimating X itself, becomes possible up to sign and permutation ambiguities. On the theoretical side, we combine different mean-field techniques to access the minimum mean-square error and mutual information. Interestingly, our alternative method yields equations that can also be obtained with the replica approach of Sakata and Kabashima, which was long deemed wrong. Using numerical insights, we then delimit the portion of the phase diagram where this mean-field theory is reliable, and correct it using universality where it is not. Our ansatz matches the numerics well once finite-size effects are accounted for.
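As background on the terminology (standard material, not part of the abstract), a rotationally invariant estimator denoises a symmetric observation Y by keeping its eigenvectors and only shrinking its eigenvalues:
\[
Y \;=\; \sum_{i=1}^{N} \lambda_i\, u_i u_i^{\top}
\quad \Longrightarrow \quad
\hat{S}_{\mathrm{RIE}} \;=\; \sum_{i=1}^{N} \xi(\lambda_i)\, u_i u_i^{\top},
\]
with a scalar shrinkage function ξ fixed by the spectral statistics of signal and noise. The talk concerns the regime where the hidden signal XX⊺ is not rotationally invariant, which is precisely where such an estimator may cease to be optimal.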
February 7, 2025
Alessandro Ingrosso
in the seminar series: SEMINARS IN MATHEMATICAL PHYSICS AND BEYOND
Mathematical physics seminar
Transfer learning (TL) is a well-established machine learning technique for boosting generalization performance on a specific (target) task using information gained from a related (source) task, and it crucially depends on the ability of a network to learn useful features. I will present recent work that leverages analytical progress in the proportional regime of deep learning theory (i.e. the limit where the size of the training set P and the size of the hidden layers N are taken to infinity while keeping their ratio P/N finite) to develop a novel statistical mechanics formalism for TL in Bayesian neural networks. I'll show how this single-instance Franz-Parisi formalism yields an effective theory for TL in one-hidden-layer fully-connected neural networks. Unlike the (lazy-training) infinite-width limit, where TL is ineffective, in the proportional limit TL occurs thanks to a renormalized source-target kernel that quantifies the relatedness of the two tasks and determines whether TL is beneficial for generalization.
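As a schematic illustration (the generic Franz-Parisi construction written for this setting as an assumption, not a formula taken from the work presented), one samples a reference weight configuration w* from the source-task posterior and computes the constrained free energy of the target task at fixed overlap q with it:
\[
V(q) \;=\; -\lim_{N \to \infty} \frac{1}{N}\, \mathbb{E}_{w^{\ast}}
\ln \int dw\; e^{-\beta\, H_{\mathrm{target}}(w)}\;
\delta\!\left( \frac{w \cdot w^{\ast}}{N} - q \right),
\]
where H_target denotes the training energy on the target data. The shape of V(q) then indicates whether configurations correlated with the source solution are favourable for the target task, which is the sense in which the formalism quantifies task relatedness.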