abhayesian

Trying to become a shoggoth whisperer

Transformers, for example, seem to do a step of gradient descent in Transformer blocks on an abstracted version of the problem, as a small explicit inference step at runtime, where the learned abstractions do most of the work during pretraining, which is then amortized over all runtimes.

Do you have a reference for this?  I have a hard time believing this is generally true of anything other than toy models trained on toy tasks.  I think you're referencing this paper, which trains a shallow, attention-only transformer with the nonlinearity removed from the attention to perform linear regression.  There are too many dissimilarities between that setting and LLMs to convince me this holds for LLaMA or GPT-4.
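To make concrete just how special that setting is: with linear (softmax-free) attention, in-context linear regression, and weights initialized at zero, one attention layer can reproduce one gradient-descent step exactly. Here's a minimal sketch of that correspondence — my own illustrative construction, not code from the paper, and the names (`eta`, `w_true`, etc.) are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 32                     # input dim, number of in-context examples
w_true = rng.normal(size=d)

X = rng.normal(size=(n, d))      # in-context inputs
y = X @ w_true                   # in-context targets
x_q = rng.normal(size=d)         # query input
eta = 0.1                        # learning rate

# One gradient-descent step on the in-context least-squares loss
# L(w) = (1/2n) * sum_i (w . x_i - y_i)^2, starting from w = 0:
#   w_1 = (eta / n) * sum_i y_i x_i
w_1 = (eta / n) * X.T @ y
pred_gd = w_1 @ x_q

# The same prediction from one softmax-free attention layer with
# keys = inputs, values = targets, query = x_q:
#   pred = (eta / n) * sum_i (x_q . x_i) y_i
attn_scores = X @ x_q            # unnormalized linear-attention scores
pred_attn = (eta / n) * attn_scores @ y

assert np.allclose(pred_gd, pred_attn)   # identical up to floating point
```

The identity only holds because everything is linear and the "learner" starts from zero; with softmax attention, nonzero initialization, depth, MLP blocks, or real text data, the exact correspondence breaks down, which is why I don't think it transfers to frontier LLMs.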