This is a follow-up to a previous post on critical agentialism, to explore the straightforward decision-theoretic consequences. I call this subjective implication decision theory, since the agent is looking at the logical implications of their decision according to their beliefs.

We have already covered observable action-consequences. Since these are falsifiable, they have clear semantics in the ontology. So we will in general assume observable rewards, as in reinforcement learning, while leaving unobservable goals for later work.

Now let's look at a sequence of decision theory problems. We will assume, as before, the existence of some agent that falsifiably believes itself to run on at least one computer, C.

5 and 10

Assume the agent is before a table containing a 5 dollar bill and a 10 dollar bill. The agent will decide which dollar bill to take. Thereafter, the agent will receive a reward signal: 5 if the 5 dollar bill is taken, and 10 if the 10 dollar bill is taken.

The agent may have the following beliefs about action-consequences: "If I take action 5, then I will get 5 reward. If I take action 10, then I will get 10 reward." These beliefs follow directly from the problem description. Notably, the beliefs include beliefs about actions that might not actually be taken; it is enough that these actions are possible for their consequences to be falsifiable.

Now, how do we translate these beliefs about action-consequences into decisions? The most straightforward way to do so is to select the policy that is believed to return the most reward. (This method is ambiguous under conditions of partial knowledge, though that is not a problem for 5 and 10).

This method (which I will call "subjective implication decision theory") yields the action 10 in this case.
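As a minimal sketch of this rule (not part of the original argument; the dictionary representation and function name are illustrative assumptions), the agent's action-consequence beliefs can be written as a mapping from possible actions to the rewards they are believed to imply, and the decision is just an argmax over that mapping:

```python
# A minimal sketch of subjective implication decision theory for 5 and 10.
# The belief encoding and names here are illustrative, not from the post.

def subjective_implication_decide(believed_reward):
    """Pick the action whose believed consequence carries the most reward."""
    return max(believed_reward, key=believed_reward.get)

# "If I take action 5, I get 5 reward. If I take action 10, I get 10 reward."
beliefs_5_and_10 = {"take_5": 5, "take_10": 10}

assert subjective_implication_decide(beliefs_5_and_10) == "take_10"
```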

This is all extremely straightforward. We directly translated the problem description into a set of beliefs about action consequences. And these beliefs, along with the rule of subjective implication decision theory, yield an optimal action.

The difficulty of 5 and 10 comes when the problem is naturalized. The devil is in the details: how to naturalize the problem? The previous post examined a case of both external and internal physics, compatible with free will. There is no obvious obstacle to translating these physical beliefs to the 5 and 10 case: the dollar bills may be hypothesized to follow physical laws, as may the computer C.

Realistically, the agent should assume that the proximate cause of the selection of the dollar bill is not their action, but C's action. Recall that the agent falsifiably believes it runs on C, in the sense that its observations/actions necessarily equal C's.

Now, "I run on C" implies in particular: "If I select 'pick up the 5 dollar bill' at time t, then C does. If I select 'pick up the 10 dollar bill' at time t, then C does." And the assumption that C controls the dollar bill implies: "If C selects 'pick up the 5 dollar bill at time t', then the 5 dollar bill will be held at some time between t and t+k", and also for the 10 dollar bill (for some k that is an upper bound of the time it takes for the dollar bill to be picked up). Together, these beliefs imply: "If I select 'pick up the 5 dollar bill' at time t, then the 5 dollar bill will be held at some time between t and t+k", and likewise for the 10 dollar bill. At this point, the agent's beliefs include ones quite similar to the ones in the non-naturalized case, and so subjective implication decision theory selects the 10 dollar bill.

Twin prisoner's dilemma

Consider an agent that believes itself to run on computer C. It also believes there is another computer, C', which has identical initial state and dynamics to C.

Each computer will output an action; the agent will receive 10 reward if C' cooperates (0 if C' defects), plus 1 reward if C defects (0 if C cooperates).

As in 5 and 10, the agent believes: "If I cooperate, C cooperates. If I defect, C defects." However, this does not specify the behavior of C' as a function of the agent's action.

It can be noted at this point that, because the agent believes C' has identical initial state and dynamics to C, the agent believes (falsifiably) that C' must output the same actions as C on each time step, as long as C and C' receive identical observations. Since, in this setup, observations are assumed to be equal until C receives the reward (with C' perhaps receiving a different reward), these beliefs imply: "If I cooperate, C' cooperates. If I defect, C' defects".

In total we now have: "If I cooperate, C and C' both cooperate. If I defect, C and C' both defect". Thus the agent believes itself to be straightforwardly choosing between a total reward of 10 for cooperation and a total reward of 1 for defection. And so subjective implication decision theory cooperates.
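A sketch of this reasoning, under the same illustrative encoding (the payoff function and action names are assumptions made for the example):

```python
# Illustrative sketch of the twin prisoner's dilemma under subjective
# implication decision theory.

def total_reward(c_action, c_prime_action):
    """10 reward if the twin C' cooperates, plus 1 reward if C defects."""
    return (10 if c_prime_action == "cooperate" else 0) + \
           (1 if c_action == "defect" else 0)

# The agent believes: "If I do a, then C does a and C' does a."
believed_reward = {a: total_reward(a, a) for a in ("cooperate", "defect")}

assert max(believed_reward, key=believed_reward.get) == "cooperate"  # 10 > 1
```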

Note that this comes apart from the conventional interpretation of CDT, which considers interventions on C's action, rather than on "my action". CDT's hypothesized intervention updates C but not C', as C and C' are physically distinct.

Newcomb's problem

This is very similar to the twin prisoner's dilemma. The agent may falsifiably believe: "The Predictor filled box A with $1,000,000 if and only if I will choose only box A." From here it is straightforward to derive that the agent believes: "If I choose to take only box A, then I will have $1,000,000. If I choose to take both boxes, then I will have $1,000." Hence subjective implication decision theory selects only box A.

The usual dominance argument for selecting both boxes does not apply. The agent is not considering interventions on C's action, but rather on "my action", which is falsifiably predicted to be identical with C's action.
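Again, a hedged sketch under the same toy encoding, using the problem's convention above that box A holds the $1,000,000 and taking both boxes yields $1,000:

```python
# Illustrative sketch of Newcomb's problem under subjective implication
# decision theory. The encoding is an assumption for the example.

def believed_payout(my_choice):
    """The agent believes the Predictor filled box A iff I take only box A."""
    box_a = 1_000_000 if my_choice == "one_box" else 0
    box_b = 1_000 if my_choice == "two_box" else 0
    return box_a + box_b

beliefs = {choice: believed_payout(choice) for choice in ("one_box", "two_box")}
assert max(beliefs, key=beliefs.get) == "one_box"  # $1,000,000 > $1,000
```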

Counterfactual mugging

In this problem, a Predictor flips a coin; if the coin is heads, the Predictor asks the agent for $10 (and the agent may or may not give it); if the coin is tails, the Predictor gives the agent $1,000,000 iff the Predictor predicts the agent would have given $10 in the heads case.

We run into a problem when translating this into a critical agential ontology. Since the two branches do not happen in the same world, it is not possible to state the Predictor's accuracy as a falsifiable statement, as it relates two incompatible branches.

To avoid this problem, we will say that the Predictor predicts the agent's behavior ahead of time, before flipping the coin. This prediction is not told to the agent in the heads case.

Now, the agent falsifiably believes the following:

  • If the coin is heads, then the Predictor's prediction is equal to my choice.
  • If the coin is tails, then I get $1,000,000 if the Predictor's prediction is that I'd give $10, otherwise $0.
  • If the coin is heads, then I get $0 if I don't give the Predictor $10, and -$10 if I do give the Predictor $10.

From the last point, it is possible to show that, after the agent observes heads, the agent believes they get $0 if they don't give $10, and -$10 if they do give $10. So subjective implication decision theory doesn't pay.

This may present a dynamic inconsistency, in that the agent's decision does not agree with what they would previously have wished to decide. Let us examine this.

In a case where the agent chooses their action before the coin flip, the agent believes that, if they will pay up, the Predictor will predict this, and likewise for not paying up. Therefore, the agent believes they will get $1,000,000 if they decide to pay up and then the coin comes up tails.

If the agent weights the heads/tails branches evenly, then the agent will decide to pay up. This presents a dynamic inconsistency.
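To make the inconsistency concrete, here is an illustrative sketch contrasting the post-observation decision with the ex-ante decision; the even branch weighting is the assumption from the previous paragraph, and the payoff function is a toy encoding of the problem statement:

```python
# Illustrative sketch of the dynamic inconsistency in counterfactual mugging.
# Dollar amounts follow the problem statement; the encoding is an assumption.

def payoff(coin, pays_up):
    """Believed payoff given the coin outcome and the agent's policy."""
    if coin == "heads":
        return -10 if pays_up else 0
    # tails: the Predictor pays iff it predicted the agent would pay on heads
    return 1_000_000 if pays_up else 0

# After observing heads, only the heads branch matters: don't pay.
post_heads = {pays_up: payoff("heads", pays_up) for pays_up in (True, False)}
assert max(post_heads, key=post_heads.get) is False

# Before the flip, weighting the branches evenly: pay.
ex_ante = {pays_up: 0.5 * payoff("heads", pays_up) + 0.5 * payoff("tails", pays_up)
           for pays_up in (True, False)}
assert max(ex_ante, key=ex_ante.get) is True
```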

My sense is that this inconsistency should be resolved by considering theories of identity other than closed individualism. That is, it seems possible that the abstraction of receiving an observation and taking an action on each time step, while having a linear lifetime, is not a good enough fit for the counterfactual mugging problem to achieve dynamic consistency.

Conclusion

It seems that subjective implication decision theory agrees with timeless decision theory and evidential decision theory on the problems considered, while diverging from causal decision theory and functional decision theory.

I consider this a major advance, in that the ontology is more cleanly defined than the ontology of timeless decision theory, which considers interventions on logical facts. It is not at all clear what it means to "intervene on a logical fact"; the ontology of logic does not natively contain the affordance of intervention. The motivation for considering logical interventions was the belief that the agent is identical with some computation, such that its actions are logical facts. Critical agential ontology, on the other hand, does not say the agent is identical with any computation, but rather that the agent effectively runs on some computer (which implements some computation), while still being metaphysically distinct. Thus, we need not consider "logical counterfactuals" directly; rather, we consider subjective implications, and consider whether these subjective implications are consistent with the agent effectively running on some computer.

To handle cases such as counterfactual mugging in a dynamically consistent way (similar to functional decision theory), I believe it will be necessary to consider agents outside the closed-individualist paradigm, in which one is assumed to have a linear lifetime with memory and observations/actions on each time step. However, I have not yet explored this direction.

[ED NOTE: After the time of writing I realized subjective implication decision theory, being very similar to proof-based UDT, has problems with spurious counterfactuals by default, but can similarly avoid these problems by "playing chicken with the universe", i.e. taking some action it has proven it will not take.]

Comments

I'm kind of tired right now, so I might be missing something obvious, but:

It seems that subjective implication decision theory agrees with timeless decision theory on the problems considered, while diverging from causal decision theory, evidential decision theory, and functional decision theory.

Why do you say that it diverges from evidential decision theory (EDT)? AFAICT on all problems listed it does the same thing as EDT, and the style of reasoning seems pretty similar. Would you mind saying what SIDT would do in XOR blackmail? (I'd try to work this out myself but for the aforementioned tiredness and the fear that I don't quite understand SIDT well enough).

Looking back on this, it does seem quite similar to EDT. I'm actually, at this point, not clear on how EDT and TDT differ, except in that EDT has potential problems in cases where it's sure about its own action. I'll change the text so it notes the similarity to EDT.

On XOR blackmail, SIDT will indeed pay up.