This is a paper by Weiss et al. ("Thinking Like Transformers") introducing the "Restricted Access Sequence Processing Language" (RASP), a computational framework for what transformers can do.
| DL architecture | Computational framework |
| --- | --- |
| Feedforward networks | Circuits |
| RNNs | Finite state machines |
| Transformers | RASP (this paper) |
Any program written in RASP can be implemented straightforwardly by a transformer. Indeed, the authors demonstrate that RASP can be "compiled" to a transformer, and that transformers can be trained to "mimic a RASP solution".
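To make the correspondence concrete, here is a minimal Python sketch (my own illustration, not the authors' compiler) of the two core RASP primitives: `select` builds a boolean attention pattern from a pairwise predicate over key/query sequences, and `aggregate` averages a value sequence over the selected positions. Together, one `select`/`aggregate` pair behaves roughly like a single hard-attention head.

```python
# Illustrative sketch of RASP-style primitives; names and the prefix-mean demo
# are my own, chosen to mirror the paper's select/aggregate operations.
from typing import Callable, Sequence


def select(keys: Sequence, queries: Sequence,
           predicate: Callable) -> list[list[bool]]:
    """Boolean attention pattern: one row per query position, one column per key position."""
    return [[predicate(k, q) for k in keys] for q in queries]


def aggregate(selector: list[list[bool]], values: Sequence) -> list[float]:
    """Uniformly average the selected values at each query position (0 if nothing is selected)."""
    out = []
    for row in selector:
        picked = [v for keep, v in zip(row, values) if keep]
        out.append(sum(picked) / len(picked) if picked else 0.0)
    return out


if __name__ == "__main__":
    values = [3.0, 1.0, 4.0, 1.0, 5.0]
    indices = list(range(len(values)))
    # Attend from each position to all earlier-or-equal positions; the aggregate
    # is then a running (prefix) mean, computed in a single attention-like step.
    prefix = select(indices, indices, lambda k, q: k <= q)
    print(aggregate(prefix, values))  # [3.0, 2.0, 2.666..., 2.25, 2.8]
```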
From the abstract (emphasis mine):
"We map the basic components of a transformer-encoder—attention and feed-forward computation—into simple primitives, around which we form a programming language: the Restricted Access Sequence Processing Language (RASP). We show how RASP can be used to program solutions to tasks that could conceivably be learned by a Transformer, and how a Transformer can be trained to mimic a RASP solution. In particular, we provide RASP programs for histograms, sorting, and Dyck-languages. We further use our model to relate their difficulty in terms of the number of required layers and attention heads: analyzing a RASP program implies a maximum number of heads and layers necessary to encode a task in a transformer. Finally, we see how insights gained from our abstraction might be used to explain phenomena seen in recent works."
One implication of this is that program length (simplicity) in RASP may be a good approximation to the inductive bias of transformers. I will probably use this in future work along the lines of "How complex are myopic imitators?".
If you know of any tricks that transformers use that are not covered by RASP, please leave a comment -- it would be highly useful to me.