Decius
> We can't tell we're in the all-zero universe by examining any finite number of bits.

What does it mean for the all-zero universe to be infinite, as opposed to not being infinite? Finite universes have a finite number of bits of information describing them. (This doesn't actually negate the point that uncomputable utility functions exist; it merely means that utility functions which care whether they are in a mostly-empty vs. perfectly empty universe are a weak example.)
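Here is a minimal sketch of the finite-inspection point, assuming a universe modeled as an infinite bit sequence (both `all_zero_universe` and `almost_zero_universe` are hypothetical stand-ins, not anything from the post): any finite prefix of zeros is consistent both with the all-zero universe and with one whose first nonzero bit simply lies past the bits examined.

```python
# Sketch: no finite inspection distinguishes "all zeros" from "all zeros so far".
# A universe is modeled as a function from bit index -> bit (0 or 1).

def all_zero_universe(i: int) -> int:
    return 0

def almost_zero_universe(i: int) -> int:
    # The first nonzero bit sits beyond any prefix we will feasibly check.
    return 1 if i == 10**100 else 0

def consistent_with_all_zero(universe, n_bits: int) -> bool:
    """True means 'the first n_bits are all zero, so this could still be the all-zero universe'."""
    return all(universe(i) == 0 for i in range(n_bits))

# Both universes pass any feasible finite check, so a utility function that assigns
# them different values cannot be evaluated from finitely many observations.
print(consistent_with_all_zero(all_zero_universe, 1_000_000))     # True
print(consistent_with_all_zero(almost_zero_universe, 1_000_000))  # True
```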


> These preferences are required to be coherent with breaking things up into sums, so U(E) = [U(E∧A)⋅P(E∧A) + U(E∧¬A)⋅P(E∧¬A)] / P(E) -- but we do not define one from the other.

What happens if the author/definer of U(E) is wrong about the probabilities? If U(E) is not defined from, nor defined by, the value of its sums, what bad stuff happens if they aren't equal? Consider a dyslexic telekinetic at a roulette table who places a chip on 6 but thinks he placed it on 9. Proposition A is "I will win if the ball lands in the '9' cup" (or "I have bet on 9", or any similar proposition), and event E is that agent exercising their telekinesis to cause the ball to land in the 9 cup. (Decisions and actions are put in the hypothetical to avoid a passive agent.)
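To make the question concrete, here is a hedged numeric sketch of that scenario (the 35:1 payout and the probabilities below are made-up illustration, not anything from the post): the agent evaluates the right-hand side of the quoted decomposition using its mistaken belief about A, and gets a very different U(E) than it would after correcting that one belief.

```python
# Illustrative numbers only: a 35:1 payout on a single roulette number, with
# E = "the agent nudges the ball into the 9 cup" and A = "the agent's bet wins".

def u_of_e(p_a_given_e: float, u_win: float = 35.0, u_lose: float = -1.0) -> float:
    """Right-hand side of U(E) = [U(E∧A)⋅P(E∧A) + U(E∧¬A)⋅P(E∧¬A)] / P(E),
    rewritten with the conditional probability P(A|E) = P(E∧A)/P(E)."""
    return u_win * p_a_given_e + u_lose * (1.0 - p_a_given_e)

# Believing the chip is on 9, the agent thinks its nudge makes A nearly certain.
believed = u_of_e(p_a_given_e=0.95)    # ~ +33.2
# The chip is actually on 6, so landing the ball in the 9 cup loses the bet.
corrected = u_of_e(p_a_given_e=0.02)   # ~ -0.28

print(believed, corrected)  # correcting one probability flips the sign of U(E)
```

Correcting the single belief about A propagates through every event whose utility is decomposed over A, which is the cascading re-measurement the next paragraph asks about.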

Is that agent merely *mistaken* about the value of U(E), as a result of their error on P(A) and following the appropriate math? Does their error result in a major change in their utility measurement (as distinct from their utility function) when they correct it? Is it considered safe for an agent to justify cascading major changes in utility measurement over many (literally all?) events after updating a probability?


An instantiated entity (one that exists in a world) can only know of events E that are either observations it makes or decisions it makes. I see flaws both with an agent that sets forth actions it believes sufficient to bring about a desired outcome and then feels satisfied that the job is done, and with an agent that seeks spoofable observations about that desired outcome (in particular, the dynamic where an agent seeks evidence that tends to confirm the desirable event E, because that evidence makes it happy, and avoids evidence against E, because that evidence makes it sad).
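A small simulation of that last dynamic, under assumed numbers (the base rate, evidence accuracy, and agent behavior below are my own illustration, not anything from the post): an agent that discards evidence against E ends up with a wildly inflated picture of how well-supported E is, even though the underlying world is identical.

```python
import random

random.seed(0)

P_E_TRUE = 0.3   # assumed base rate of the desirable event E
ACCURACY = 0.8   # assumed chance a piece of evidence points the right way

def draw_evidence(e_true: bool) -> bool:
    """One noisy observation: True means 'evidence for E'."""
    return e_true if random.random() < ACCURACY else not e_true

def observed_support_for_e(discard_disconfirming: bool, n: int = 10_000) -> float:
    """Fraction of retained observations that support E."""
    kept = []
    for _ in range(n):
        e_true = random.random() < P_E_TRUE
        evidence = draw_evidence(e_true)
        if discard_disconfirming and not evidence:
            continue  # the spoofable agent simply avoids evidence against E
        kept.append(evidence)
    return sum(kept) / len(kept)

print(observed_support_for_e(discard_disconfirming=False))  # ~0.38 under these numbers
print(observed_support_for_e(discard_disconfirming=True))   # 1.0: pure self-confirmation
```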