Polysemantic Attention Head in a 4-Layer Transformer