Great work! Love the push for intuitions especially in the working notes.
My understanding of the superposition hypothesis from the TMS paper has been (feel free to correct me!):
1. When there's no privileged basis, polysemanticity is the default, as there's no reason to expect interpretable neurons.
2. When there is a privileged basis (either because of a nonlinearity on the hidden layer or L1 regularisation), the default is monosemanticity, and superposition pushes towards polysemanticity when there's enough sparsity.
Is it possible that the features here are not sufficiently basis-aligned, so the setup is closer to case 1? As you already commented, demonstrating polysemanticity when the hidden layer has a nonlinearity and m > n would be more principled, imo.
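To make the m > n point concrete, here's a minimal sketch (my own toy illustration, not the paper's actual code; the weights are random, not trained, and all names are made up) of the structural reason superposition forces interference in that regime: with m feature directions packed into n < m hidden dimensions, the Gram matrix W.T @ W has rank at most n, so the feature directions cannot all be orthogonal.

```python
import numpy as np

rng = np.random.default_rng(0)

m, n = 6, 2  # m sparse features squeezed into n hidden dims (m > n)

# Hypothetical untrained weights; the argument below is purely structural.
W = rng.normal(size=(n, m))

def forward(x):
    # Nonlinearity on the hidden layer: the case where a privileged
    # basis exists and monosemanticity would be the default.
    h = np.maximum(W @ x, 0.0)
    return W.T @ h

x = np.zeros(m)
x[0] = 1.0            # a single active (sparse) feature
y = forward(x)

# rank(W.T @ W) <= n < m, so the m feature directions must overlap:
# activating one feature leaks into the readout for others, and the
# model can only tolerate that interference when features are sparse.
G = W.T @ W
print(np.linalg.matrix_rank(G))
```

This only shows the geometric constraint; whether a trained model actually exploits it (i.e., ends up polysemantic) is the empirical question the demonstration you mention would settle.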