Place: Online Seminar. Please sign up for our mailing list at www.physicsmeetsml.org for the Zoom link. We will also livestream the talk in Chamberlin 5280.
Speaker: Greg Yang, Microsoft Research
Abstract: You can't train GPT-3 on a single GPU, much less tune its hyperparameters (HPs)...or so it seems. I'm here to tell you this is not true: you *can* tune its HPs on a single GPU even if you can't train it that way!
In the first half of this talk, I'll describe how, in the so-called maximal update parametrization (abbreviated µP), narrow and wide neural networks share the same set of optimal HPs. This lets us tune any large model by tuning only a small version of it — we call this *µTransfer*. In particular, this allowed us to tune the 6.7-billion-parameter version of GPT-3 using only 7% of its pretraining compute budget and, with some asterisks, obtain performance comparable to the original GPT-3 model with twice the parameter count.
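To make the µTransfer recipe concrete, here is a minimal sketch assuming Microsoft's open-source `mup` package (github.com/microsoft/mup); the toy MLP, widths, data, and learning-rate grid are invented for illustration, and the package documentation gives the precise API and initialization details that this sketch glosses over.

```python
# Minimal, illustrative sketch of µTransfer, assuming the open-source `mup`
# package. The toy model, widths, data, and learning-rate grid are made up;
# see the mup docs for the full setup (e.g. proper parameter initialization).
import torch
import torch.nn as nn
from mup import MuReadout, set_base_shapes, MuAdam


class MLP(nn.Module):
    def __init__(self, width, d_in=32, d_out=10):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(d_in, width), nn.ReLU(),
            nn.Linear(width, width), nn.ReLU(),
        )
        # MuReadout stands in for the final nn.Linear so the output layer
        # is scaled appropriately under µP.
        self.readout = MuReadout(width, d_out)

    def forward(self, x):
        return self.readout(self.body(x))


def make_model(width):
    model = MLP(width)
    # Tell mup how each weight dimension scales with width by comparing
    # against a narrow "base" model and a slightly wider "delta" model.
    set_base_shapes(model, MLP(width=8), delta=MLP(width=16))
    return model


def train(model, lr, steps=100):
    opt = MuAdam(model.parameters(), lr=lr)  # µP-aware Adam from mup
    for _ in range(steps):
        x = torch.randn(64, 32)          # toy inputs
        y = torch.randint(0, 10, (64,))  # toy labels
        loss = nn.functional.cross_entropy(model(x), y)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return loss.item()


# (1) Tune the learning rate on a cheap, narrow proxy model...
best_lr = min([3e-4, 1e-3, 3e-3, 1e-2],
              key=lambda lr: train(make_model(width=64), lr))
# (2) ...then reuse it unchanged on the wide model: under µP the optimum transfers.
final_loss = train(make_model(width=4096), best_lr)
```

The point of the sketch is the last two steps: the hyperparameter search happens only at small width, and the result is reused verbatim at large width.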
In the second half of this talk, I'll discuss the theoretical reason µP has this special property, as well as its connection to the study of infinite-width neural networks and, more generally, the theory of Tensor Programs.
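As a rough pointer to what "parametrization" means here (my paraphrase; the precise definitions and the exponents that single out µP are in the papers), this line of work studies parametrizations in which all dependence on the width n is made explicit in the weight multipliers, the initialization variance, and the learning rate:

```latex
% Rough sketch (paraphrase) of a width-aware parametrization: the width n
% appears explicitly in the weight multiplier, the initialization variance,
% and the learning rate. The specific exponents that define µP are chosen so
% that the remaining hyperparameters (such as \eta) have a width-independent
% optimum; see the papers for their exact values.
\[
  W^{l} = n^{-a_l}\, w^{l}, \qquad
  w^{l}_{ij} \sim \mathcal{N}\!\left(0,\; n^{-2 b_l}\right), \qquad
  \text{learning rate} = \eta\, n^{-c}.
\]
```

In that sense, µTransfer works because the width dependence is absorbed into the parametrization rather than into the hyperparameters themselves.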
The first half targets general practitioners and empirical researchers in machine learning, while the second half is aimed at those who are more theoretically curious. This talk is based on