Place: Online seminar. Please sign up for our mailing list at www.physicsmeetsml.org for the Zoom link. We will also livestream the talk in Chamberlin 5280.
Speaker: Sho Yaida, Meta AI
Abstract: Large neural networks perform extremely well in practice, providing the backbone of modern machine learning. The goal of this talk is to provide a blueprint for analyzing these large models theoretically, from first principles. In particular, we’ll survey how the statistics and dynamics of deep neural networks drastically simplify at large width and become analytically tractable. In so doing, we’ll see that the idealized infinite-width limit is too simple to capture several important aspects of deep learning, such as representation learning. To address these shortcomings, we’ll step beyond the idealized limit and systematically incorporate finite-width corrections.
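To make the large-width simplification concrete, here is a minimal numerical sketch (our own illustration, not material from the talk). Assuming a one-hidden-layer tanh network with Gaussian weights scaled by 1/sqrt(fan_in), the scalar output of a randomly initialized network at a fixed input approaches a Gaussian as the width grows; its excess kurtosis (a simple measure of non-Gaussianity) shrinks roughly like 1/width, which is the leading finite-width correction.

# Minimal sketch (illustrative assumptions: one hidden tanh layer,
# Gaussian weights scaled by 1/sqrt(fan_in), one fixed input x).
import numpy as np

def random_net_output(width, x, rng):
    d = x.shape[0]
    W1 = rng.normal(size=(width, d)) / np.sqrt(d)   # first-layer weights
    W2 = rng.normal(size=width) / np.sqrt(width)    # readout weights
    return W2 @ np.tanh(W1 @ x)                     # scalar output

rng = np.random.default_rng(0)
x = rng.normal(size=16)                             # one fixed input
for width in (4, 64, 1024):
    samples = np.array([random_net_output(width, x, rng)
                        for _ in range(20000)])
    m2 = samples.var()
    m4 = ((samples - samples.mean()) ** 4).mean()
    excess_kurtosis = m4 / m2**2 - 3.0              # 0 for an exact Gaussian
    print(f"width={width:5d}  excess kurtosis={excess_kurtosis:+.3f}")

At small width the kurtosis should be visibly nonzero, falling toward zero roughly in proportion to 1/width, precisely the kind of finite-width effect the talk treats systematically.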