A technical note on bilinear layers for interpretability

by Lee Sharkey
8th May 2023

This is a linkpost for https://arxiv.org/abs/2305.03452

Summary

In this short theoretical note (now on arXiv) I examine bilinear layers, which are MLP layers that take the form

$\text{MLP}_{\text{Bilinear}}(x) = (W_1 x) \odot (W_2 x).$
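For concreteness, here is a minimal PyTorch sketch of such a layer. The class name, argument names, and shapes are my own choices for illustration, and I omit the down-projection back to the residual stream that a transformer MLP block would typically also include:

```python
import torch
import torch.nn as nn

class BilinearMLP(nn.Module):
    """MLP_Bilinear(x) = (W1 x) ⊙ (W2 x): two linear maps combined by an
    elementwise product, with no elementwise activation function."""
    def __init__(self, d_in: int, d_hidden: int):
        super().__init__()
        self.W1 = nn.Linear(d_in, d_hidden, bias=False)
        self.W2 = nn.Linear(d_in, d_hidden, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Elementwise (Hadamard) product of the two linear projections
        return self.W1(x) * self.W2(x)

# Example usage (hypothetical sizes):
layer = BilinearMLP(d_in=512, d_hidden=2048)
y = layer(torch.randn(8, 512))  # shape (8, 2048)
```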

When used in language models, they perform better than standard MLPs with elementwise activation functions, though they appear to fall very slightly short of the state of the art.

Despite their competitiveness, bilinear layers are mathematically much easier to analyze: although they are nonlinear functions of their input, they can be expressed using only linear operations and third-order tensors.
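To illustrate the identity behind this claim (a numerical sketch, not the paper's notation): each output coordinate is a quadratic form in the input, obtained by contracting a third-order tensor built from $W_1$ and $W_2$ with $x$ twice:

```python
import torch

d_in, d_hidden = 4, 3
W1 = torch.randn(d_hidden, d_in)
W2 = torch.randn(d_hidden, d_in)
x = torch.randn(d_in)

# Direct bilinear computation: (W1 x) ⊙ (W2 x)
direct = (W1 @ x) * (W2 @ x)

# Equivalent third-order tensor B[i, j, k] = W1[i, j] * W2[i, k],
# so out[i] = sum_{j,k} B[i, j, k] x[j] x[k] -- i.e. the layer is a
# linear function of the (second-order) tensor x ⊗ x.
B = torch.einsum('ij,ik->ijk', W1, W2)
via_tensor = torch.einsum('ijk,j,k->i', B, x, x)

assert torch.allclose(direct, via_tensor, atol=1e-5)
```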

Because they admit this description in terms of linear operations, we can extend 'A Mathematical Framework for Transformer Circuits' (Elhage et al. 2021) beyond attention-only transformers to transformers with both attention and MLP layers.

Just as the analysis of Elhage et al. (2021) helped to reveal QK- and OV-circuits, induction heads, and virtual attention heads, the analyzability of bilinear layers may lend itself to deeper safety insights by allowing us to talk more formally about circuits in large language models.

Additionally, and more speculatively, bilinear layers might offer an alternative path for mechanistic interpretability: understanding the mechanisms of feature construction, rather than enumerating and understanding a (potentially exponentially) large number of features in large models.
