Once you have minds, those minds start perceiving differentiation, since they need to extract information from the environment in order to function.
How can there be information for minds to extract, unless the environment already has some kind of structure?
I have a theory that belief in a good God is the main delusion of western religion, and belief in a fundamentally undifferentiated reality is the main delusion of eastern religion.
I see no way around the conclusion that differences are real. Experience is part of reality, and experience contains difference. Also, my experience is objectively distinct from yours: I don't know what you had for breakfast today (or indeed whether you had any); that act was part of your experience and not part of mine.
We can divide up the world in different ways, but the undivided world is already objectively differentiated.
How can self-observation be the cause of my existence as a differentiated being? Don't I have to already exist as a differentiated being, in order to be doing that?
Are you saying my existence is "undifferentiated" from "the wholeness of the world" so long as no one else is observing me or thinking of me?
there are only phenomena
Do I only exist because you "reify" me?
The "alignment problem" humanity has as its urgent task is exactly the problem of aligning cognitive work that can be leveraged to prevent the proliferation of tech that destroys the world. Once you solve that, humanity can afford to take as much time as it needs to solve everything else.
OK, I disagree very much with that strategy. You're basically saying, your aim is not to design ethical/friendly/aligned AI, you're saying your aim is to design AI that can take over the world without killing anyone. Then once that is accomplished, you'll settle down to fi...
You're basically saying, your aim is not to design ethical/friendly/aligned AI [...]
My goal is an awesome, eudaimonistic long-run future. To get there, I strongly predict that you need to build AGI that is fully aligned with human values. To get there, I strongly predict that you need to have decades of experience actually working with AGI, since early generations of systems will inevitably have bugs and limitations and it would be catastrophic to lock in the wrong future because we did a rush job.
(I'd also expect us to need the equivalent of subjective ce...
The "stable period" is supposed to be a period in which AGI already exists, but nothing like CEV has yet been implemented, and yet "no one can destroy the world with AGI". How would that work? How do you prevent everyone in the whole wide world from developing unsafe AGI during the stable period?
Thank you for the long reply. The 2017 document postulates an "acute risk period" in which people don't know how to align, and then a "stable period" once alignment theory is mature.
So if I'm getting the gist of things, rather than focus outright on the creation of a human-friendly superhuman AI, MIRI decided to focus on developing a more general theory and practice of alignment; and then once alignment theory is sufficiently mature and correct, one can focus on applying that theory to the specific crucial case, of aligning superhuman AI with extrapo...
The 2017 document postulates an "acute risk period" in which people don't know how to align, and then a "stable period" once alignment theory is mature.
"Align" is a vague term. Let's distinguish "strawberry alignment" (where we can safely and reliably use an AGI to execute a task like "Place, onto this particular plate here, two strawberries identical down to the cellular but not molecular level.") from "CEV alignment" (where we can safely and reliably use an AGI to carry out a CEV-like procedure.)
Strawberry alignment seems vastly easier than CEV ali...
Eliezer and Nate feel that their past alignment research efforts failed
I find this a little surprising. If someone had asked me what MIRI's strategy is, I would have said that the core of it was still something like CEV, with topics like logical induction and new decision theory paradigms as technical framework issues. I mean, part of the MIRI paradigm has always been that AGI alignment is grounded in how the human brain works, right? The mechanics of decision-making in human brains are the starting point in constructing the mechanics of decision-making i...
Quoting from the "strategic background" summary we shared in 2017:
...[...] In very broad terms, however, our approach to global risk mitigation is to think in terms of desired outcomes, and to ask: “What is the likeliest way that the outcome in question might occur?” We then repeat this process until we backchain to interventions that actors can take today. [...]
1. Long-run good outcomes. Ultimately, we want humanity to figure out the best possible long-run future and enact that kind of future, factoring in good outcomes for all sentient beings. However, ther...
Does that paper actually mention any overall models of the human mind? It has a list of ingredients, but does it say how they should be combined?
Seems like there's a difference between the viability of AI and the ability of AI to shape a randomized environment. To have AI at all, you just need stable circuits, but to have an AI that can shape its environment, you need a physics that allows observation and manipulation... It's remarkable that googling "thermodynamics of the game of life" turns up zero results.
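To make "stable circuits" a bit more concrete, here is a minimal sketch in Python (my own illustration, not anything from the comment above), assuming standard Conway rules: the simplest stable structure is a still life such as the 2x2 block, which maps to itself under the update rule, whereas an isolated cell has no structure at all and vanishes.

```python
from collections import Counter

def step(live):
    """One Game of Life update on a set of live (x, y) cells."""
    # Count live neighbors for every cell adjacent to a live cell.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is live next step if it has 3 live neighbors,
    # or 2 live neighbors and was already live.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# The 2x2 "block" is a still life: a trivially stable circuit.
block = {(0, 0), (0, 1), (1, 0), (1, 1)}
assert step(block) == block

# A lone cell has no stable structure and dies out immediately.
assert step({(0, 0)}) == set()
```

Of course this only shows persistence; the point is that persistence is cheap, while observing and manipulating a noisy environment is a much stronger demand on the underlying physics.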
Do you have an endgame strategy ready?