In my previous post, I explained how a set of programs predicting each other using a probabilistic oracle and taking utility-maximizing actions will naturally play a Nash equilibrium. However, this seems suspiciously CDT-like: a program will, for example, defect in a Nash equilibrium when playing the prisoner's dilemma against itself.

Naturally, probabilistic oracle machines are a good way to formalize Newcomblike problems (since we have prediction), so we can explore alternative decision theories in this framework. Assume we have some probabilistic world program $W$ that runs the world and calls the oracle $O$ (which is not considered part of the world), returning an outcome. There may be (probabilistic) agent programs embedded in this world program (as is common in formalizations of UDT). Different agents value outcomes differently: define $U_i(o)$ to be the utility player $i$ assigns to the outcome $o$. I am using $U$ instead of $u$ to differentiate this utility function from the game-theoretic utility function in the previous post.
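To make the setup concrete, here is a minimal stub of the interfaces involved. This is my own illustrative scaffolding, not part of the post's formalism: a real reflective oracle answers the query $(M, p)$ according to whether $M$ returns 1 with probability above or below $p$, whereas this stub just flips a fair coin.

```python
import random

class StubOracle:
    """Illustrative stand-in for a probabilistic oracle O. A genuine reflective
    oracle answers the query (M, p) with 1 when P(M() = 1) > p and with 0 when
    P(M() = 1) < p; this stub simply flips a fair coin."""
    def call(self, machine, p):
        return 1 if random.random() < 0.5 else 0

# A world program is modeled below as a function taking an oracle and returning
# an outcome; a utility function U_i maps outcomes to real numbers.
```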

Assume that player $i$'s output is a single oracle call $O(M_i, 1/2)$ for some machine $M_i$. This is a convenient assumption, and it might be the case that this is enough, but I haven't actually proven this.

Here's an example of player 1 playing the prisoner's dilemma against player 2: the world program computes $a_1 = O(M_1, 1/2)$ and $a_2 = O(M_2, 1/2)$ and returns the outcome $(a_1, a_2)$. Note that $a_1$ and $a_2$ are the results of 2 independent calls to $O$. We assume that action 1 is defection and action 0 is cooperation. As a side note, perhaps I should be using monadic do-notation to show that these calls are independent, but this is simple enough that it doesn't seem necessary.
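A sketch of this world program, building on the stub oracle above (the payoff numbers are illustrative, not from the original post):

```python
# Prisoner's dilemma world program: two independent oracle calls determine the
# two players' actions, and the outcome is just the action pair.
def world_pd(oracle, M1, M2):
    a1 = oracle.call(M1, 0.5)  # player 1's action (1 = defect, 0 = cooperate)
    a2 = oracle.call(M2, 0.5)  # player 2's action
    return (a1, a2)

# Illustrative payoffs for player 1.
def U1_pd(outcome):
    return {(0, 0): 2, (0, 1): 0, (1, 0): 3, (1, 1): 1}[outcome]
```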

Here's an example of player 1 playing Newcomb's problem: the world program makes two independent oracle calls about the same machine, $a = O(M_1, 1/2)$ and $\hat{a} = O(M_1, 1/2)$, and returns the outcome $(a, \hat{a})$. Here, action 1 is 2-boxing and action 0 is 1-boxing; $a$ is the player's actual action while $\hat{a}$ is Omega's prediction of the player's action. Note the strong resemblance between Newcomb's problem and playing the prisoner's dilemma against yourself.
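A sketch of the corresponding world program, with illustrative dollar amounts:

```python
# Newcomb world program: the player's action and Omega's prediction are two
# independent oracle calls about the same machine M1.
def world_newcomb(oracle, M1):
    a = oracle.call(M1, 0.5)       # the player's actual action (1 = two-box)
    a_hat = oracle.call(M1, 0.5)   # Omega's prediction of that action
    return (a, a_hat)

# Illustrative payoffs: the opaque box is full iff Omega predicts one-boxing,
# and two-boxing always adds the small transparent-box payoff.
def U1_newcomb(outcome):
    a, a_hat = outcome
    return (1_000_000 if a_hat == 0 else 0) + (1_000 if a == 1 else 0)
```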

How might agents in this environment act to maximize utility? In the previous post, we had a quantity we wanted to estimate, $\mathbb{E}[u_i(1, a_{-i})] - \mathbb{E}[u_i(0, a_{-i})]$: the difference in expected utility between player $i$ taking action 1 and taking action 0, holding the other players' strategies fixed. We had the probability that a certain machine returns 1 equal to (a rescaling of) this expectation, so a single oracle query about that machine will return a utility-maximizing action for player $i$. The quantities $\mathbb{E}[u_i(1, a_{-i})]$ and $\mathbb{E}[u_i(0, a_{-i})]$ are causal counterfactuals, but maybe we want to replace them with other kinds of counterfactuals.
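One way such a utility-maximizing machine can be realized (this is my reconstruction of the trick, assuming utilities are scaled into $[0, 1]$) is to build a machine whose probability of returning 1 encodes the expected utility difference, and then query the oracle about that machine at $p = 1/2$:

```python
import random

def compare_expectations(sample_u1, sample_u0):
    """One run of a 0/1 machine whose probability of returning 1 is
    (1 + E[u1] - E[u0]) / 2, assuming both utilities lie in [0, 1].
    A reflective oracle queried about this machine at p = 1/2 therefore
    answers "is E[u1] > E[u0]?", randomizing only when the two are equal."""
    x1 = sample_u1()  # one sampled utility of taking action 1
    x0 = sample_u0()  # one sampled utility of taking action 0
    return 1 if random.random() < (1 + x1 - x0) / 2 else 0
```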

To do this, we could define masked oracles $O[(s, p) \mapsto q]$ for a state $s$, probabilities $p$ and $q$, and oracle $O$ as follows:

$$O[(s, p) \mapsto q](s', p') = \begin{cases} 1 \text{ with probability } q & \text{if } (s', p') = (s, p) \\ O(s', p') & \text{otherwise.} \end{cases}$$

This is the same as $O$ except that, when given the query $(s, p)$, it returns 1 with probability $q$. Note that $O[(s, p) \mapsto q]$ might not be a reflective oracle even if $O$ is.
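Transcribed into the stub-oracle sketch from earlier (the notation $O[(s, p) \mapsto q]$ and the class name are mine):

```python
import random

class MaskedOracle:
    """The masked oracle O[(s, p) -> q]: identical to the base oracle except
    that the single query (s, p) is answered 1 with probability q. As noted,
    the result need not be reflective even if the base oracle is."""
    def __init__(self, base, s, p, q):
        self.base, self.s, self.p, self.q = base, s, p, q

    def call(self, machine, p):
        if machine == self.s and p == self.p:
            return 1 if random.random() < self.q else 0
        return self.base.call(machine, p)
```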

For a UDT-style decision theory, we would like to define the machine $M_i$ to compute a utility difference between oracular counterfactuals (similar to logical counterfactuals):

$$\mathbb{E}\!\left[U_i\!\left(W^{O[(M_i, 1/2) \mapsto 1]}()\right)\right] - \mathbb{E}\!\left[U_i\!\left(W^{O[(M_i, 1/2) \mapsto 0]}()\right)\right],$$

where $W^{O'}$ denotes running the world program $W$ with oracle $O'$.

In other words, we try masking the oracle to make our agent return either always 1 or always 0, and then simulate the world twice with these different oracles and get the expected utility difference. It is not hard to confirm that this reasoner succeeds in cooperating on the prisoner's dilemma with itself and one-boxing in Newcomb's problem. However, since the masked oracle need not be reflective, it will not propagate any consequences of the change we made, and we get a few counterintuitive results, such as not cooperating in a prisoner's dilemma with a slightly different but logically equivalent program.
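Here is a hedged sketch of such a machine, reusing the `MaskedOracle` and the single-sample comparison trick from above. It assumes utilities have been rescaled into $[0, 1]$, and `M_self` stands for whatever name the oracle uses to refer to this machine.

```python
import random

def make_udt_machine(world, utility, base_oracle, M_self, p=0.5):
    """Return a 0/1 machine whose probability of outputting 1 is
    (1 + E[U | query forced to 1] - E[U | query forced to 0]) / 2.
    Querying a reflective oracle about it at 1/2 then picks whichever
    action looks better under the masked counterfactuals."""
    def machine():
        # Simulate the world with the query (M_self, p) masked to always 1...
        u1 = utility(world(MaskedOracle(base_oracle, M_self, p, 1.0)))
        # ...and again with it masked to always 0.
        u0 = utility(world(MaskedOracle(base_oracle, M_self, p, 0.0)))
        # Single-sample comparison of the two expected utilities.
        return 1 if random.random() < (1 + u1 - u0) / 2 else 0
    return machine
```

For instance, instantiating `world` as `lambda o: world_newcomb(o, M1)` with `U1_newcomb` rescaled into $[0, 1]$ gives the one-boxing configuration described above; with the stub oracle nothing self-referential actually happens, but it shows where each piece plugs in.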

There's also an issue with Newcomb's problem when we add a step of indirection to Omega's prediction: suppose the world program computes the player's action as $a = O(M_1, 1/2)$ as before, but obtains Omega's prediction as $\hat{a} = O(M_2, 1/2)$, where $M_2$ is a different machine that itself simply computes $O(M_1, 1/2)$.

Since the counterfactual only modifies the query $(M_1, 1/2)$ and not $(M_2, 1/2)$, the player in this case will conclude that 2-boxing is correct.
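A sketch of this indirect world program, reusing the earlier stubs; here `M2` stands for a machine that, by assumption, just computes $O(M_1, 1/2)$:

```python
# Newcomb with one step of indirection: Omega's prediction comes from a query
# about a different machine M2 (which, by assumption, computes O(M1, 1/2)).
# Masking only the query (M1, 1/2) changes `a` but not `a_hat`, so the masked
# counterfactuals make two-boxing look strictly better.
def world_newcomb_indirect(oracle, M1, M2):
    a = oracle.call(M1, 0.5)       # the player's actual action
    a_hat = oracle.call(M2, 0.5)   # Omega's prediction, one level indirect
    return (a, a_hat)
```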

So, maybe we want to get a modified version $O'$ of $O$ such that $O'(M_i, 1/2)$ always returns the counterfactual action, but $O'$ is also reflective, and is probably mostly similar to $O$ when possible. However, we have no guarantee that such an $O'$ actually exists. For example, if $M$ is a machine that always returns 1, we cannot have a reflective oracle in which $O'(M, 1/2) = 0$ (reflectivity forces $O'(M, 1/2) = 1$, since $\mathbb{P}(M() = 1) = 1 > 1/2$).

Perhaps we can construct our machines so that such an oracle always exists. I'm not sure what approach makes sense here. This seems similar to the problem that playing chicken with the universe is meant to solve, but I don't know how well this translates to a context based on probabilistic oracles rather than logic.

If we succeeded in specifying more reasonable modified oracles, we might end up with a nice alternative formalism of UDT that can be computed using Nash equilibria.

1 comment:

One natural approach is to try to specify an algorithm that uses reflective oracles to reason about itself under logical uncertainty (under axioms characterizing the reflective oracle), and then to implement UDT by conditioning on logical facts. I don't know if this adds anything beyond implementing logical uncertainty with a halting oracle. If this doesn't work, there are a few other similar approaches (e.g. restrict to Sigma_1 sentences, or see if the oracle's behavior can motivate the definition of an idealized math intuition module with nicer behavior than our current candidates).

I suspect that something like this could give us clearer insight into whether or not conditioning on logical facts is an acceptable method of taking counterfactuals. Right now that seems hard to settle, because the answer seems dependent on the algorithm used for reasoning under logical uncertainty. I think most people aren't very optimistic about conditioning, but it's still my leading contender (unless I've missed some recent discouraging news).