This is well-written, but I feel like it falls into the same problem a lot of AI-risk stories do. It follows this pattern:

1. The AI does something clever but bounded.
2. ???
3. The AI can reformat reality at will.
And like, the Step 1 stuff is fascinating and would make a worthy sci-fi story on its own, but the big question everyone has about AI risk is "How does the AI get from Step 1 to Step 3?"
Is it something like the AI-box argument? "If I share my AI breakout strategy, people will think 'I just won't fall for that strategy' instead of noticing the general problem that there are strategies they didn't think of"? I'm not a huge fan of that idea, but I won't argue it further.
I'm not expecting a complete explanation, but I'd like to see a story that doesn't skip directly to "AI can reformat reality at will" without at least one intermediate step. Like, this is the third time I've seen an author pull this trick, and I'm starting to wonder…