The typical noise on feature $i$ caused by 1 unit of activation from feature $j$, for any pair of features $i$, $j$, is $\sqrt{\frac{\ln m}{n}}$ (derived from the Johnson–Lindenstrauss lemma), where $n$ is the dimension of the activation space and $m$ is the number of features.
1. ... This is a worst-case scenario. I have not calculated the typical case, but I expect it to be somewhat less, though still the same order of magnitude.
Perhaps I'm misunderstanding your claim here, but the "typical" (i.e. RMS) inner product between two independently random unit vectors in $\mathbb{R}^n$ is $\frac{1}{\sqrt{n}}$. So I think the $\sqrt{\ln m}$ factor shouldn't be there, and the rest of your estimates are incorrect.
This means that we can have at most $\sim\sqrt{\frac{n}{\ln m}}$ simultaneously active features
This conclusion gets changed to $\sim\sqrt{n}$.
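A quick numerical sanity check of the $\frac{1}{\sqrt{n}}$ claim (the dimension and the number of sampled pairs below are arbitrary choices, purely for illustration):

```python
import numpy as np

# RMS inner product of two independent random unit vectors in R^n is ~1/sqrt(n);
# the *maximum* over many sampled pairs picks up an extra sqrt(log #pairs) factor,
# which is the Johnson-Lindenstrauss-style worst case.
rng = np.random.default_rng(0)
n = 1024            # ambient dimension (arbitrary)
num_pairs = 10_000  # number of sampled pairs (arbitrary)

u = rng.standard_normal((num_pairs, n))
v = rng.standard_normal((num_pairs, n))
u /= np.linalg.norm(u, axis=1, keepdims=True)
v /= np.linalg.norm(v, axis=1, keepdims=True)

dots = np.einsum("ij,ij->i", u, v)
print("RMS inner product  :", np.sqrt(np.mean(dots ** 2)))  # ~ 1/sqrt(1024) ~ 0.031
print("1/sqrt(n)          :", 1 / np.sqrt(n))
print("max |inner product|:", np.abs(dots).max())            # noticeably larger
```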
Nice work! I'm not sure I fully understand what the "gated-ness" is adding, i.e. what role the Heaviside step function is playing. What would happen if we did away with it? Namely, consider this setup:
Let $f$ and $\hat{x}$ be the encoder and decoder functions, as in your paper, and let $x$ be the model activation that is fed into the SAE.
The usual SAE reconstruction is $\hat{x}(f(x))$, which suffers from the shrinkage problem.
Now, introduce a new learned parameter $t \in \mathbb{R}^{n_{\text{features}}}$, and define an "expanded" reconstruction $y_{\text{expanded}} = \hat{x}(t \odot f(x))$, where $\odot$ denotes elementwise multiplication.
Finally, take the loss to be:
$$L = \|\hat{x}_{\text{copy}}(f(x)) - x\|_2^2 + \|y_{\text{expanded}} - x\|_2^2 + \lambda\,\|f(x)\|_1,$$
where $\hat{x}_{\text{copy}}$ ensures the decoder gets no gradients from the first term. As I understand it, this is exactly the loss appearing in your paper. The only difference in the setup is the lack of the Heaviside step function.
Did you try this setup? Or does it fail for an obvious reason I missed?
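For concreteness, here is a rough PyTorch sketch of the variant I have in mind; the module and parameter names, the initialization, and the bias conventions are my own illustrative choices, not taken from your implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class RescaledSAE(nn.Module):
    """A plain SAE plus a learned per-feature scale t, with no Heaviside gate.
    All names and conventions here are illustrative, not the paper's code."""

    def __init__(self, d_model: int, n_features: int):
        super().__init__()
        self.W_enc = nn.Parameter(0.01 * torch.randn(d_model, n_features))
        self.b_enc = nn.Parameter(torch.zeros(n_features))
        self.W_dec = nn.Parameter(0.01 * torch.randn(n_features, d_model))
        self.b_dec = nn.Parameter(torch.zeros(d_model))
        self.t = nn.Parameter(torch.ones(n_features))  # learned rescaling

    def encode(self, x):  # f(x)
        return F.relu((x - self.b_dec) @ self.W_enc + self.b_enc)

    def decode(self, f, frozen: bool = False):  # x_hat(f), optionally with decoder grads blocked
        W_dec = self.W_dec.detach() if frozen else self.W_dec
        b_dec = self.b_dec.detach() if frozen else self.b_dec
        return f @ W_dec + b_dec


def loss(sae: RescaledSAE, x: torch.Tensor, lam: float = 1e-3):
    f = sae.encode(x)
    recon_copy = sae.decode(f, frozen=True)  # x_hat_copy(f(x)): decoder gets no gradients here
    y_expanded = sae.decode(sae.t * f)       # x_hat(t ⊙ f(x))
    return (
        ((recon_copy - x) ** 2).sum(-1).mean()
        + ((y_expanded - x) ** 2).sum(-1).mean()
        + lam * f.abs().sum(-1).mean()
    )
```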