This post was inspired by Benja's SUDT post. I'm going to describe another simplified model of UDT which is equivalent to Benja's proposal, and is based on standard game theory concepts as described in this Wikipedia article.
First, let's define what a "single-player extensive-form game with chance moves and imperfect information" is:
A "single player extensive-form game" is a tree of nodes. Each leaf node is a utility value. A play of the game starts at the root and ends at some leaf node.
Some non-leaf nodes are "chance nodes", with probabilities assigned to branches going out of that node. All other non-leaf nodes are "decision nodes", where the player can choose which branch to take. (Thanks to badger for helping me fix an error in this part!)
"Imperfect information" means the decision nodes are grouped into "information sets". The player doesn't know which node they're currently at, only which information set it belongs to.
"Imperfect recall" is a special case of imperfect information, where knowing the current information set doesn't even allow the player to figure out which information sets were previously visited, like in the Absent-Minded Driver problem.
We will assume that the player can use "behavioral strategies", which make an independent random choice at each node, rather than "mixed strategies", which randomize over the set of pure strategies for the entire game. See Piccione and Rubinstein's paper for more on this difference. (Thanks to Coscott for pointing out that assumption!)
The behavioral strategy with the highest expected utility will be taken as the solution of the game.
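To make this concrete, here's a rough Python sketch of how such a game could be represented and solved. The representation and the brute-force grid search over behavioral strategies are my own illustrative choices, not part of the standard formalism, and the search assumes each information set has exactly two actions, which is enough for the toy problems below.

```python
import itertools

# A leaf is just a utility value (a number). Chance and decision nodes are dicts.
def chance(branches):
    # branches: list of (probability, subtree) pairs
    return {"kind": "chance", "branches": branches}

def decision(info_set, branches):
    # branches: dict mapping action name -> subtree; info_set ties nodes together
    return {"kind": "decision", "info_set": info_set, "branches": branches}

def expected_utility(node, strategy):
    """Expected utility of a behavioral strategy.

    strategy maps each information set to a dict of action -> probability.
    Because the strategy depends only on the information set, all decision
    nodes in the same set are forced to randomize identically."""
    if not isinstance(node, dict):          # leaf: a plain utility value
        return node
    if node["kind"] == "chance":
        return sum(p * expected_utility(child, strategy)
                   for p, child in node["branches"])
    probs = strategy[node["info_set"]]
    return sum(probs[action] * expected_utility(child, strategy)
               for action, child in node["branches"].items())

def solve(root, info_sets, grid=101):
    """Brute-force search over behavioral strategies, assuming two actions
    per information set and trying each mixing probability on a uniform grid.
    info_sets maps each set name to its pair of action names."""
    best = None
    points = [i / (grid - 1) for i in range(grid)]
    for mix in itertools.product(points, repeat=len(info_sets)):
        strategy = {s: {actions[0]: p, actions[1]: 1 - p}
                    for (s, actions), p in zip(info_sets.items(), mix)}
        eu = expected_utility(root, strategy)
        if best is None or eu > best[0]:
            best = (eu, strategy)
    return best
```

A proper solver would use continuous optimization instead of a grid, but a grid keeps the sketch short and is accurate enough for the examples that follow.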
Now let's try using that to solve some UDT problems:
Absent-Minded Driver is the simplest case, since it's already discussed in the literature as a game of the above form. It's strange that not everyone agrees that the strategy maximizing expected utility is indeed the right one to follow, but let's skip that and move on.
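For instance, using the standard payoffs from Piccione and Rubinstein's version (0 for exiting at the first intersection, 4 for exiting at the second, 1 for driving past both), a short script recovers the familiar answer of continuing with probability 2/3:

```python
# Absent-Minded Driver with the standard payoffs (exit at first intersection = 0,
# exit at second = 4, drive past both = 1). Both intersections are in one
# information set, so the driver continues with the same probability p at each.
def driver_eu(p):
    return (1 - p) * 0 + p * (1 - p) * 4 + p * p * 1

best_p = max((i / 1000 for i in range(1001)), key=driver_eu)
print(best_p, driver_eu(best_p))   # ~0.667 and ~1.333, i.e. p = 2/3, EU = 4/3
```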
Psy-Kosh's non-anthropic problem is more tricky, because it has multiple players. We will model it as a single-player game anyway, putting the decision nodes of the different players in sequence and grouping them together into information sets in the natural way. The resulting game tree is complicated, but the solution is the same as UDT's. As a bonus, we see that our model does not need any kind of anthropic probabilities, because it doesn't specify or use the probabilities of individual nodes within an information set.
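Here's a minimal sanity check, using the payoffs I remember from the original statement ($1000 to charity if the coin is tails and all nine deciders say "yea", $100 if it's heads and the single decider says "yea", $700 if the deciders say "nay"); if I've misremembered the numbers, only the constants below change. Since all deciders' nodes share one information set, the folded single player effectively picks one answer for everyone:

```python
# Psy-Kosh's non-anthropic problem, with the payoffs recalled above.
# All deciders' decision nodes share one information set, so a pure
# global strategy is just "yea" or "nay".
def psy_kosh_eu(say_yea):
    heads = 100 if say_yea else 700   # heads: one decider
    tails = 1000 if say_yea else 700  # tails: nine deciders, same answer
    return 0.5 * heads + 0.5 * tails

print(psy_kosh_eu(True), psy_kosh_eu(False))  # 550.0 vs 700.0 -> "nay" wins
```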
Wei Dai's coordination problem is similar to the previous one, but with multiple players choosing different actions based on different information. If we use the same trick of folding all players into one, and group the decision nodes into information sets in the natural way, we get the right solution again. It's nice to see that our model automatically solves problems that require Wei's "explicit optimization of global strategy".
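For illustration only, here's a toy game with the same structure; the payoffs are made up by me and are not Wei Dai's original numbers. The two copies see different labels, so their decision nodes fall into two different information sets, and the optimal global strategy assigns them different actions:

```python
# A toy coordination game in the spirit of Wei Dai's problem (payoffs invented
# for illustration). Two copies of the player see different labels, "1" and "2",
# so the folded single player has two information sets and may act differently
# in each. Payoff 10 if the copies choose different actions, else 0.
import itertools

def coord_eu(strategy):
    # strategy: dict mapping label -> probability of choosing action "A"
    pA1, pA2 = strategy["1"], strategy["2"]
    return 10 * (pA1 * (1 - pA2) + (1 - pA1) * pA2)

grid = [i / 100 for i in range(101)]
best = max(({"1": p, "2": q} for p, q in itertools.product(grid, grid)),
           key=coord_eu)
print(best, coord_eu(best))  # e.g. {'1': 0.0, '2': 1.0} with EU 10.0
```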
Counterfactual Mugging is even more tricky, because any extensive-form version of it must include a decision node for Omega's simulation of the player. Some people are okay with that, and our model gives the right solution. But others feel that it leads to confusing questions about the nature of observation. For example, what if Omega used a logical coin, and the player could actually check which way the coin came up by doing a long calculation? Paying up is probably the right decision, but our model here doesn't have enough detail.
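Here's the corresponding calculation with the usual numbers from the standard statement (Omega asks for $100 on tails and pays $10,000 on heads if its simulation would have paid); the key modeling choice is that the real decision and the simulated one share an information set, hence the same probability of paying:

```python
# Counterfactual Mugging with the usual numbers ($100 asked on tails,
# $10,000 paid on heads if Omega's simulation of the player would have paid).
# The real decision on tails and the simulated decision on heads sit in the
# same information set, so both use the same probability p of paying.
def mugging_eu(p):
    heads = p * 10000          # Omega pays iff the simulation pays
    tails = p * (-100)         # you hand over $100 with probability p
    return 0.5 * heads + 0.5 * tails

print(mugging_eu(1.0), mugging_eu(0.0))  # 4950.0 vs 0.0 -> always pay
```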
Finally, Agent Simulates Predictor is the kind of problem that cannot be captured by our model at all, because logical uncertainty is the whole point of ASP.
It's instructive to see the difference between the kinds of UDT problems that fit our model and those that require something more. Also, it would be easy to implement the model as a computer program and solve some UDT problems automatically. (Though the exercise wouldn't have much scientific value, because extensive-form games are a well-known idea.) In this way it's a little similar to Patrick's work on modal agents, which made certain problems solvable on the computer by using modal logic instead of enumerating proofs. Now I wonder if other problems that involve logical uncertainty could also be solved by some simplified model?