The case for unlearning that removes information from LLM weights