ASI existential risk: Reconsidering Alignment as a Goal