Suppose there are a hundred copies of you, in different cells. At random, one will be selected - that one is going to be shot tomorrow. A guard notifies that copy that they're going to be shot.
There is a mercy offered, though - there's a memory-eraser-ray handy. The one who knows they're going to be shot is given the option to erase their memory of the warning and everything that followed, putting them in more or less the same information state as any of the other copies.
"Of course!" They cry. "Erase my memory, and I could be any of them - why, when you shoot someone tomorrow, there's a 99% chance it won't even be me!"
Then the next day comes, and they get shot.
Sure. This setup couldn't really be exploited for optimizing the universe. If we take the self-sampling assumption (SSA) as reasonable, inducing amnesia doesn't actually improve outcomes across possible worlds: one out of 100 prisoners still dies.
It can't even be considered "re-rolling the dice" on whether the specific prisoner that you are dies. Under the SSA, there's no such thing as a "specific prisoner": "you" are implemented as all 100 prisoners simultaneously, so regardless of whether you choose to erase your memory, one of those 100 implementations still gets shot.
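A trivial simulation of the setup (purely illustrative) makes the point visible: the amnesia flag never even enters the calculation of who dies.

```python
import random

def run_world(erase_memory: bool) -> int:
    """One possible world: 100 copies, the guard shoots one at random."""
    condemned = random.randrange(100)  # the guard's random selection
    # Erasure only changes what the condemned copy believes overnight;
    # note that erase_memory never affects the outcome. Exactly one dies.
    return 1

trials = 10_000
for erase in (False, True):
    deaths = sum(run_world(erase) for _ in range(trials))
    print(f"erase_memory={erase}: deaths per world = {deaths / trials}")
# Both cases print 1.0: amnesia doesn't change outcomes across worlds.
```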
Well, I'm very forgetful, and I notice that I do happen to be myself so... :p
But yeah, I've bitten this bullet too, in my case, as a way to avoid the Boltzmann brain problem. (Roughly: "you" includes lots of information generated by a lawful universe. Any specific branch has small measure, but if you aggregate over all the places where "you" exist (say your exact brain state, though the real thing that counts might be more or less broad than this), you get more substantial measure from all the simple lawful universes that only needed 10^X coincidences to make you instead of the 10^Y coincidences required for you to be a Boltzmann brain.)
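To spell the comparison out (treating each coincidence as costing about one bit of measure is my simplifying assumption, not part of the original argument):

$$\frac{\mu_{\text{lawful}}}{\mu_{\text{Boltzmann}}} \sim \frac{2^{-10^X}}{2^{-10^Y}} = 2^{\,10^Y - 10^X} \gg 1 \quad \text{for } X < Y,$$

so when you aggregate over all the places "you" exist, the lawful-universe contributions overwhelmingly dominate your total measure.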
I think that what anthropically "counts" is most likely somewhere between conscious experience (I've woken up as myself after anesthesia), and exact state of brain in local spacetime (I doubt thermal fluctuations or path dependence matter for being "me").
It's a standard assumption in anthropic reasoning that, effectively, we simultaneously exist in every place in Tegmark IV that simulates this precise universe (see, e.g., here).
How far does this reasoning go?
Suppose that the universe's state is described by $n$ low-level variables $x_1, \dots, x_n$. However, your senses are "coarse": you can only view and retain the memory of $m$ variables $y_1, \dots, y_m$, where $m \ll n$ and each $y_i$ is a deterministic function of some subset of $x_1, \dots, x_n$.

Consider a high-level state $Y^*$, corresponding to each $y_i$ being assigned some specific value. For any $Y^*$, there's an equivalence class $X^*$ of low-level states precisely consistent with $Y^*$.

Given this, if you observe $Y^*$, is it valid to consider yourself as simultaneously existing in every low-level state in $X^*$?
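Here's a concrete toy version of this setup, sketched in Python (the variable count and the particular coarse map are my illustrative choices):

```python
from itertools import product

n = 4  # low-level binary variables x_1..x_4

def coarse(x):
    # The "coarse senses": y_1 = x_1 XOR x_2, y_2 = x_3 OR x_4
    return (x[0] ^ x[1], x[2] | x[3])

Y_star = (1, 1)  # the high-level state you observe

# X*: the equivalence class of low-level states consistent with Y*
X_star = [x for x in product((0, 1), repeat=n) if coarse(x) == Y_star]
print(X_star)  # every low-level world your observation can't distinguish
```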
Note that, so far, this is isomorphic to the scenario from Nate's post, which treats as equivalent all universes that differ only by the choice of gauge (which is undetectable from "within" the system).
Now let's examine increasingly weird situations based on the same idea.
Scenario 1:
I'm inclined to bite this bullet: yes, you exist in all universes consistent with your high-level observations, even if their low-level states differ.
Scenario 2: if you completely forget a detail, would the set of universes you're embedded in increase? Concretely:
I'm inclined to bite this bullet too, though it feels somewhat strange. Weird implication: you can increase the amount of reality-fluid assigned to you by giving yourself amnesia.[1]
Scenario 3: Now imagine that you're a flawed human being, prone to confabulating/misremembering details, and also you don't hold the entire contents of your memories in your mind all at the same time. If I ask you whether you saw a small red flash 1 minute ago, and you confirm that you did, will you end up in a universe where there's an extra photon, or in a universe where you've confabulated this memory? Or in both?
Scenario 4: Suppose you observe some macro-level event, such as learning that there are 195 countries in the world. Suppose there are similar-ish Everett branches where there are only 194 internationally recognized countries. This difference isn't small enough to get lost in thermal noise. The existence vs. non-existence of an extra country doubtless left countless pieces of side-evidence in your conscious memories, such that AIXI would be able to reconstruct the country's (non-)existence even if you're prone to forgetting or confabulating the exact country-count.
... Or would it? Are you sure that the experiential content you're currently perceiving, and the stuff currently in your working memory, anchor you only to Everett branches that have 195 countries?
Sure, if you went looking through your memories, you'd doubtless uncover some details that would distinguish a branch where you confabulated an extra country from a branch where it really exists. But you weren't doing that before reading the preceding paragraphs. Was the split made only when you started looking? Will you merge again once you unload these memories?
This setup seems isomorphic, in the relevant sense, to the initial setup where you only perceive the high-level variables $y_i$. In this case, we just model you as a system with even more "coarse" senses.[2] Which, in turn, is isomorphic to the standard assumption of simultaneously existing in every place in Tegmark IV that simulates this precise universe.
One move you could make here is to claim that "you" only identify with systems that have some specific personality traits and formative memories. As a trivial example, you could claim that a viewpoint that is consistent with your current perceptions and working-memory contents, but that, upon querying its memories for its name, experiences remembering "Cass" as the answer, is not really "you".
But then, presumably you wouldn't consider "I saw a red flash one minute ago" part of your identity, else you'd consider naturally forgetting such a detail a kind of death. Similarly, even some macro-scale details like "I believe there are 195 countries in the world" are presumably not part of your identity. A you who confabulated an extra country is still you.
Well, I don't think this is necessarily a big deal, even if true. But it's relevant to some agent-foundations work I've been doing, and I haven't seen this angle discussed before.
The way it can matter: Should we expect to exist in universes that abstract well, by the exact same argument that we use to argue that we should expect to exist in "alt-simple" universes?
That is: suppose there's a class of universes in which the information from the "lower levels" of abstraction becomes increasingly less relevant to higher levels. It's still "present" on a moment-to-moment basis, such that an AIXI which retained the full memory of an embedded agent's sensory stream would be able to narrow things down to a universe specified up to low-level details.
But the actual agents embedded in such universes don't have such perfect memories. They constantly forget the low-level details, and presumably "identify with" only high-level features of their identity. For any such agent, is there then an "equivalence class" of agents that are different at the low level (details of memories/identity), but whose high-level features match enough that we should consider them "the same" agent for the purposes of the "anthropic lottery"?
For example, suppose there are two Everett branches that differ by whether you saw a dog run across your yard yesterday. The existence of an extra dog doubtless left countless "microscopic" traces in your total observations over your lifetime: AIXI would be able to tell the universes apart. But suppose our universe is well-abstracting, and this specific dog didn't set off any butterfly effects. The consequences of its existence were "smoothed out", such that its existence vs. non-existence never left any major differences in your perceptions - only various small-scale details that you've since forgotten or that don't matter.
Does it then mean that both universes contain an agent that "counts as you" for the purposes of the "anthropic lottery", such that you should expect to be either of them at random?
If yes, then we should expect ourselves to be agents that exist in a universe that abstracts well, because "high-level agents" embedded in such universes are "supported" by a larger equivalence class of universes (since they draw on reality fluid from an entire pool of "low-level" agents).
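Here's a toy sketch of that counting argument (the universes and abstraction maps are invented for illustration): agents whose high-level identity discards more low-level detail pool reality-fluid from more low-level worlds.

```python
from itertools import product

# Toy model: a "universe" is a 4-bit low-level state; an agent's
# identity is a high-level summary of that state.
universes = list(product((0, 1), repeat=4))  # 16 low-level worlds

def abstracts_well(x):    # high level ignores all but the first bit
    return x[:1]

def abstracts_poorly(x):  # high level retains every low-level bit
    return x

def pool_sizes(high_level):
    # How many low-level worlds "support" each high-level agent?
    pools = {}
    for x in universes:
        key = high_level(x)
        pools[key] = pools.get(key, 0) + 1
    return pools

print(pool_sizes(abstracts_well))    # 2 agents, each backed by 8 worlds
print(pool_sizes(abstracts_poorly))  # 16 agents, each backed by 1 world
```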
So: are there any fatal flaws in this chain of reasoning? Undesirable consequences to biting all of these bullets that I'm currently overlooking?
[1] Please don't actually do that.
[2] As an intuition-booster, imagine that we implemented some abstract system that got only very sparse information about the wider universe - for example, a chess engine. It can't look at its own code, and the only inputs it gets are the moves the players make. If we imagine that there's a conscious agent "within" the chess engine, whose only observations are the chess moves being made, what "reason" does it have to consider itself embedded in our universe specifically, as opposed to any other universe in which chess exists - including universes with alien physics, et cetera?
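A tiny sketch of that intuition (the candidate worlds and labels are invented for illustration): to an observer whose entire sensory stream is the move list, any world that can emit the same legal moves is indistinguishable.

```python
# The observer's entire sensory stream (illustrative):
observations = ["e4", "e5", "Nf3", "Nc6"]

# Candidate "host worlds" and whether they can implement legal chess:
candidate_worlds = [
    ("our universe, engine running on silicon", True),
    ("alien physics, mechanical computer",      True),
    ("a pure mathematical structure",           True),
    ("a world containing no chess",             False),
]

def can_produce(implements_chess: bool, moves: list[str]) -> bool:
    # Any world implementing legal chess can emit any legal move sequence.
    return implements_chess

# The observer's posterior support: every world able to produce its inputs.
consistent = [name for name, chess in candidate_worlds
              if can_produce(chess, observations)]
print(consistent)  # everything except the chess-less world survives
```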