Acausal Trade
Personal Blog


A putative new idea for AI control; index here.

Other posts in the series: Introduction, Double decrease, Pre-existence deals, Full decision algorithms, Breaking acausal trade, Trade in different types of utility functions, Being unusual, and Summary.

In a previous post, I discussed how one might convince an agent not to engage in acausal trade.

The idea was to reward the agent only for the extra utility that accrued because the agent was turned on (by a stochastic event X, with the agent turned on iff X = 1). Since causally disconnected agents couldn't observe X, they would "offer" the same "deals" whether or not the agent was turned on.

So the agent might be able to get a tremendous boost in utility from an acausal deal, but that boost would happen in the X = 0 world as well as the X = 1 world, so the agent wouldn't count that boost as a benefit.
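As a minimal sketch of why the boost cancels (the function and the numbers below are purely illustrative, not from the original post):

```python
# Toy model: the agent is rewarded only for the *difference* between the
# world where it was turned on (X = 1) and the world where it wasn't
# (X = 0). Causally disconnected traders can't observe X, so any acausal
# boost lands in both worlds and cancels out of the difference.

def differential_reward(base, causal_contribution, acausal_boost):
    u_on = base + causal_contribution + acausal_boost   # X = 1 world
    u_off = base + acausal_boost                        # X = 0 world: same boost
    return u_on - u_off

# A huge acausal boost adds nothing: only the causal contribution counts.
assert differential_reward(10.0, 3.0, acausal_boost=1000.0) == 3.0
```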

That was effective as far as it went, but there was one kind of situation it didn't deal with: what if the agent was simulated? Then the event X would be within the simulation, and the simulating 'lords of the Matrix' would be causally connected to the agent, so the agent would take their preferences into account when acting.

That in itself is still not a problem; but what if the agent had uncertainty about its own location? It might be in the "real" world, or it might be in a simulation made by other entities, causally disconnected from the "real" world. If the agent then acted under that uncertainty, it would in effect be doing a form of acausal trade.

Grounding the world

There is no costless solution, for any such solution must rule out the agent acting as if it were in a simulation, which means that we incur a real cost if we actually are in a simulation.

But if we're willing to pay that cost, then one way of reducing the problem is to ground u in our understanding of physics. So instead of u = "human flourishing", we have:

  • ="human flourishing in a universe that roughly follows the known laws of physics, will last at least this many trillion years, has these restrictions on how fast information moves and how causality works".

The idea is that a simulation that detailed would be indistinguishable from the real world (and the simulated humans therein would be real moral subjects).
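As a toy illustration of such grounding (every feature name and threshold below is an assumption made for this sketch, not something from the post):

```python
# Toy sketch of a "grounded" utility: human flourishing only counts in
# worlds matching our understanding of physics; elsewhere u collapses
# to a constant baseline.

def grounded_utility(world, baseline=0.0):
    physics_ok = (
        world.get("lightspeed_limit", False)          # information has a speed limit
        and world.get("causality_forward", False)     # causality works as expected
        and world.get("lifetime_tyears", 0) >= 1000   # lasts this many trillion years
    )
    if not physics_ok:
        return baseline  # constant utility outside "grounded" worlds
    return world.get("human_flourishing", 0.0)

real = {"lightspeed_limit": True, "causality_forward": True,
        "lifetime_tyears": 5000, "human_flourishing": 0.8}
miracle = {"lightspeed_limit": False, "human_flourishing": 1.0}
assert grounded_utility(real) == 0.8
assert grounded_utility(miracle) == 0.0
```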

Graded miracles

Of course, if u hits a constant when the laws of physics are violated, then the agent will ignore all "miracles", no matter how convincing. The booming voice of god coming from all electrons in the universe would be interpreted as just an unlikely quantum fluke.

We might not want our agent to be so incompetent in those worlds. So one solution would be to multiply u by r(w), where r(w) is a measure of how "realistic" the world w is. For very plausible worlds, r(w) ≈ 1. For miraculous or clearly simulated worlds, r(w) is much lower.

Thus the agent would be capable of functioning in those worlds, once it had accumulated enough evidence that it was in one, but would not expect ahead of time to be in a miraculous world (to avoid Pascal's muggings, it helps if u is bounded, with a bound reasonably easy to approach - e.g. getting close to the maximum is not hard).
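For concreteness, here is a minimal sketch of the weighting (the particular realism scores and the bound on u are assumptions chosen for the example):

```python
# Toy sketch: weight a bounded utility u (here in [0, 1]) by a realism
# score r(w). The agent can still respond to overwhelming evidence of a
# miracle, but miraculous worlds barely register ahead of time.

def realism(world):
    # r(w) ~ 1 for physics-respecting worlds; shrinks sharply per miracle.
    return 0.01 ** world.get("miracles_observed", 0)

def bounded_u(world):
    # Bounded, with the bound easy to approach (blunting Pascal's muggings).
    return min(max(world.get("human_flourishing", 0.0), 0.0), 1.0)

def adjusted_utility(world):
    return realism(world) * bounded_u(world)

plain = {"miracles_observed": 0, "human_flourishing": 0.9}
miraculous = {"miracles_observed": 2, "human_flourishing": 1.0}
assert adjusted_utility(plain) == 0.9       # essentially unweighted
assert adjusted_utility(miraculous) < 0.001  # heavily discounted (~1e-4)
```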

If the agent has no ability to modify r(w) through its own actions, this is equivalent to modifying the prior probabilities of the various simulations versus realistic worlds. We should be careful to ensure that the total probability of all realistic worlds is much higher than that of all the simulated worlds, and that generic events do not cause the ratio between them to change much.
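To spell out the equivalence (a sketch, under the assumption that actions affect utilities within worlds but not which world-hypothesis is true): write p(w) for the prior over worlds and u_a(w) for the utility of world w given action a. Then

$$\arg\max_a \sum_w p(w)\, r(w)\, u_a(w) \;=\; \arg\max_a \sum_w \tilde{p}(w)\, u_a(w), \qquad \tilde{p}(w) = \frac{p(w)\, r(w)}{\sum_{w'} p(w')\, r(w')},$$

since the normalizer does not depend on the action. So maximizing the r-weighted utility is just expected utility maximization under the reweighted prior.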
