
FixDT is not a very new decision theory, but little has been written about it afaict, and it's interesting. So I'm going to write about it.

TJ asked me to write this article to "offset" not engaging with Active Inference more. The name "fixDT" is due to Scott Garrabrant, and stands for "fixed-point decision theory". Ideas here are due to Scott Garrabrant, Sam Eisenstat, me, Daniel Hermann, TJ, Sahil, and Martin Soto, in roughly that priority order; but heavily filtered through my own lens.

This post may provide some useful formalism for thinking about issues raised in The Parable of Predict-O-Matic.

Self-fulfilling prophecies & other spooky map-territory connections.

A common trope is for magic to work only when you believe in it. For example, in Harry Potter, you can only get to the magical train platform 9¾ if you believe that you can pass through the wall to get there.

A plausible normative-rationality rule, when faced with such problems: if you want the magic to work, you should believe that it will work (and you should not believe it will work, if you want it not to work).

Can we sketch a formal decision theory which handles such problems?

We can't start by imagining that the agent has a prior probability distribution, like we normally would, since the agent would already be stuck -- either it lucked into a prior which believed the magic could work, or, it didn't. 

Instead, the "beliefs" of the agent start out as maps from probability distributions to probability distributions. I'll use "$\Delta$" as the type for probability distributions (little $p$ for a specific probability distribution). So the type of "beliefs", $B$, is a function type: $B := \Delta \to \Delta$ (little $b$ for a specific belief). You can think of these as "map-territory connections": $b(p)$ is a (causal?) story about what actually happens, if we believe $p$. A "normal" prior, where we don't think our beliefs influence the world, would just be a constant function: it always outputs the same distribution no matter what the input is.

Given a belief $b$, the agent then somehow settles on a probability distribution $p$. We can now formalize our rationality criteria:

Epistemic Constraint: The probability distribution $p$ which the agent settles on cannot be self-refuting according to the beliefs. It must be a fixed point of $b$: a $p$ such that $b(p) = p$.

Instrumental Constraint: Out of the options allowed by the epistemic constraint, $p$ should be as good as possible; that is, it should maximize expected utility.

We can also require that $b$ be a continuous function, to guarantee the existence of a fixed point[1], so that the agent is definitely able to satisfy these requirements. This might seem like an arbitrary requirement, from the perspective where $b$ is a story about map-territory connections; why should it be required to be continuous? But remember that $b$ is representing the subjective belief-formation process of the agent, not a true objective story. Continuity can be thought of as a limit to the agent's own self-knowledge.

For example, the self-referential statement X: "$P(X) < \tfrac{1}{2}$" suggests an "objectively true" belief which maps any $p$ with $p(X)$ below 1/2 to a distribution assigning $P(X) = 1$, and maps any $p$ with $p(X)$ at or above 1/2 to one assigning $P(X) = 0$. But this belief has no fixed point; an agent with this belief cannot satisfy the epistemic constraint on its rationality. If we require $b$ to be continuous, we can only approximate the "objectively true" belief function, by rapidly but not instantly transitioning from 1 to 0 as $p(X)$ rises from slightly less than 1/2 to slightly more.
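To make this concrete, here's a toy numerical sketch. Everything in it (the single-event setting, the particular belief functions, the grid search) is an illustrative choice rather than part of the formalism: a distribution over one binary event is just a number in [0, 1], and a belief is a map from [0, 1] to [0, 1].

```python
import numpy as np

# Toy setting: a single binary event, so a "distribution" is just p in [0, 1]
# and a "belief" is a map from [0, 1] to [0, 1].
GRID = np.linspace(0, 1, 10001)

def fixed_points(b, tol=1e-3):
    """Approximate fixed points of a belief b on a grid (epistemic constraint)."""
    return [p for p in GRID if abs(b(p) - p) < tol]

# A self-fulfilling "the magic works iff I believe it will" belief:
# a steep but continuous curve with fixed points near 0, 1/2, and 1.
magic = lambda p: 1.0 / (1.0 + np.exp(-40.0 * (p - 0.5)))
fps = fixed_points(magic)
print("fixed points of `magic` (rounded):", sorted({round(p, 2) for p in fps}))
# Instrumental constraint: among the allowed fixed points, pick the one
# maximizing expected utility (here, utility = probability the magic works).
print("chosen distribution:", max(fps, key=lambda p: p))

# The "objectively true" belief for the statement X: "P(X) < 1/2" is
# discontinuous and has no fixed point at all...
exact_X = lambda p: 1.0 if p < 0.5 else 0.0
print("fixed points of `exact_X`:", fixed_points(exact_X))        # -> []
# ...but a continuous approximation (steep drop near 1/2) has one, near 1/2.
approx_X = lambda p: 1.0 / (1.0 + np.exp(40.0 * (p - 0.5)))
print("fixed point of `approx_X`:", round(np.median(fixed_points(approx_X)), 3))
```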

These "beliefs" are a lot like "trading strategies" from Garrabrant Induction.

We can also replace the continuity requirement with a Kakutani requirement, to get something more like Paul's self-referential probability theory.

"Beliefs" are mathematically nice!

This section isn't even about the decision theory; I suppose it's skippable.

But this notion of "beliefs" is more useful than it may first appear.

First, notice that you can combine beliefs by weighted sum, in much the same way you can combine probability distributions into mixture models: $(\sum_i w_i b_i)(p) := \sum_i w_i \, b_i(p)$. This means we can represent our overall beliefs as a "mixture of hypotheses", just like with probabilities. The weights $w_i$ are analogous to probabilities; but we can also think of them as "wealths" to reflect the Garrabrant Induction idea.

As I mentioned already, we can think of "normal priors" as a special case of beliefs, where the belief is just a constant function, outputting the same probability distribution regardless of input. In this case, weighted sums of beliefs behave exactly like regular weighted sums of probability distributions.

However, while regular probabilistic mixture models only act like "alternative possibilities", belief mixtures can also combine constraints.

Let's focus on two events, $X$ and $Y$. The belief $b_1$ knows that $P(X)$ takes some particular value, say $P(X) = c$, and knows nothing else. So it reacts to a given $p$ by Jeffrey-updating the probabilities so that $P(X) = c$, but the probability distribution is otherwise changed as little as possible. The belief $b_2$ knows that $X = Y$ and nothing else. It reacts to a given $p$ by updating on this, to rule out worlds where the two events differ; but it is agnostic about what exact probabilities the two events should have.

Any mixture of these two beliefs will result in a belief which enforces both constraints; its only fixed points will have $P(X) = c$ and $P(X = Y) = 1$. The set of fixed points will not depend on the relative weight of the two hypotheses; relative weight only comes into play when you mix together inconsistent constraints.
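A quick numerical check of this, with purely illustrative choices (two binary events, the constraint value $c = 1/3$, and plain iteration of the mixture from a generic starting point as a convenient way of locating one of its fixed points):

```python
import numpy as np

# Joint outcomes of two binary events (X, Y), ordered:
# (X&Y, X&~Y, ~X&Y, ~X&~Y). A distribution is a length-4 vector summing to 1.
C = 1 / 3  # illustrative value for the constraint P(X) = C

def b1(p):
    """Knows only that P(X) = C: Jeffrey-update so the X-worlds get total mass
    C and the ~X-worlds get 1 - C, preserving conditional probabilities."""
    a, b, e, f = p
    px = a + b
    return np.array([C * a / px, C * b / px,
                     (1 - C) * e / (1 - px), (1 - C) * f / (1 - px)])

def b2(p):
    """Knows only that X = Y: condition on the two events agreeing."""
    a, b, e, f = p
    z = a + f
    return np.array([a / z, 0.0, 0.0, f / z])

for w1 in (0.2, 0.5, 0.8):
    mixture = lambda p: w1 * b1(p) + (1 - w1) * b2(p)
    p = np.array([0.25, 0.25, 0.25, 0.25])   # generic interior starting point
    for _ in range(200):
        p = mixture(p)                       # iterate to locate a fixed point
    print(f"w1 = {w1}: P(X) = {p[0] + p[1]:.3f}, P(X != Y) = {p[1] + p[2]:.3f}")
# Both constraints hold at the fixed point reached, whatever the weights.
```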

So, belief functions allow us to represent abstract beliefs which are agnostic about some details of the probability distribution, as well as concrete beliefs which are fully detailed, and combine all of these things together with simple arithmetic. You could say that they can represent beliefs at multiple granularities. For this reason, Scott calls these things "multigrain models", which is a much better term for general use than the term "beliefs" I'm using in this essay.

Can this be the whole decision theory?

So we've got a nice generalized notion of "belief", and a proposed decision procedure which takes that generalized notion and chooses the best fixed-point, to handle self-fulfilling prophecies (as well as self-refuting beliefs and other spooky map-territory connections). 

But we still have to make "normal" decisions; that is, we need to take "external" actions, not just decide on probabilities. The standard picture is that probabilities are an input to the action-deciding process. So it sounds like the new pipeline is: beliefs -> FixDT 'decision' -> probabilities -> ordinary 'decision' -> actions. 

This is a bit complex and inelegant. It would be nice if we could "make a decision" just once, instead of twice. So, let's suppose that actions are controlled by self-fulfilling prophecies. For example, if a robot has a motor that can turn on or off, we want to wire it directly to the robot's belief about the motor. Maybe the motor turns on or off with precisely the probability given by the belief. Or perhaps there's a threshold; strong enough beliefs turn the motor on, and otherwise it shuts off.  The details don't matter too much, so long as there's a consistent fixed-point where the motor is on, and a consistent fixed-point where the motor is off. (Although we will explore some problems with this soon.) 
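As a toy sketch of this wiring (the threshold rule and the utilities below are arbitrary choices for illustration):

```python
def motor_wiring(p_on):
    """Assumed wiring: the motor turns on exactly when the settled probability
    that it turns on is at least 1/2."""
    return 1.0 if p_on >= 0.5 else 0.0

# Both "it will run" and "it won't run" are self-consistent prophecies:
for candidate in (0.0, 1.0):
    print(candidate, "self-consistent?", motor_wiring(candidate) == candidate)

# The instrumental constraint breaks the tie, and there is no separate "act"
# step afterwards: the actuator simply reads off the settled probability.
utility = {0.0: 0.0, 1.0: 1.0}               # made-up: running is worth more
settled = max((0.0, 1.0), key=utility.get)
print("settled probability:", settled, "-> motor on?", motor_wiring(settled) == 1.0)
```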

Great! Now we've unified all decisions into one type. All we need is FixDT; once the probabilities have been chosen, all of the decisions are already made. This picture has other advantages, too. The agent no longer needs to have a special category of "actions" which it can take. "Actions" are just things in the world that are influenced by the agent's probabilities. This results in a picture of agency where there's no ontologically special "output" or "action" type! Actuators are just parts of the world which somehow pay attention to the agent.

We can also use the "belief" datatype to unify the notion of input (observation/evidence) with the notion of "hypothesis" -- although this deserves its own write-up. The short version: imagine that a belief $b$ is defined in reference to the world; that is, it modifies probabilities not by guessing, but rather, by looking at the world and reporting what it sees. Under some additional assumptions, $b$'s influence will behave like a Bayesian update in the limit of $b$ having infinite weight with which to influence the probability distribution.
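Here is a toy numerical version of that claim (my own construction, with arbitrary numbers): a three-outcome world, a constant "prior" belief, and an "observation" belief which Jeffrey-updates whatever it is handed onto the observed event E. As the observation belief's weight grows, the mixture's fixed point approaches the prior conditioned on E.

```python
import numpy as np

prior = np.array([0.2, 0.3, 0.5])   # arbitrary numbers; event E = first two outcomes

def b_prior(p):
    """A constant belief: always outputs the prior, ignoring its input."""
    return prior

def b_obs(p):
    """An "observation" belief: it looks at the world, sees that E happened,
    and Jeffrey-updates its input onto E (zeroing the third outcome)."""
    q = np.array([p[0], p[1], 0.0])
    return q / q.sum()

def fixed_point(w_obs, iters=20000):
    """Fixed point of the weighted mixture, located by iteration."""
    p = np.ones(3) / 3
    for _ in range(iters):
        p = (w_obs * b_obs(p) + 1.0 * b_prior(p)) / (w_obs + 1.0)
    return p

print("Bayesian update of the prior on E:", prior[:2] / prior[:2].sum())
for w in (1, 10, 100, 1000):
    print(f"weight {w:>4}: fixed point = {np.round(fixed_point(w), 4)}")
```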

So we've dissolved the usual notions of "input" and "output" -- now we've just got a market of beliefs, "observations" are just things which influence the market, and "actions" are just things which are influenced by the market.

This seems like a great picture. 

  • We've reversed the common picture that we first figure out what we believe, and then figure out what to do. The decision lives inside the computation of probabilities.
  • We can represent something resembling a Lobian handshake in a probabilistic setting: if I believe that your probability of cooperation is tied to mine, I can select a fixed-point with a high probability of cooperation for both of us. And if I'm right in my beliefs, you'll do the same.[2]
  • We don't need to consider "actions" at all. Instead, there are just parts of the environment which react to our chosen probabilities; and we choose our probabilities with this in mind. Me choosing to type these words is no different in kind from a general choosing where to station troops; the fingers react to what I expect them to type, and the troops react to where I expect them to go.

Sadly, this nice picture falls apart when we look at learning-theoretic considerations.

Reasons for pessimism.

For the picture to work out, we need to be able to learn what we can control.

Eliminating the traditional decision-theoretic need for a list of possible actions to choose from doesn't do us much good if we still have to hard-code the beliefs which say that the robot's motors listen to the robot's probabilities in a particular way. Instead, we'd like the robot to be able to notice this for itself. This would also give us reassurance that it is controlling other aspects of the environment as appropriate.

To make discussion of this simple, I'm going to imagine that there is a "true" belief, $b^*$, which tells us the "actual" counterfactual relationship between our probabilities and reality. This is metaphysically questionable, but it makes sense in practice. For example, if I hook up my robot's motor to turn on if the robot's probability of the motor turning on is above some threshold $\theta$, then $b^*$ should map any $p$ for which $p(\text{motor on}) > \theta$ to some $q$ such that $q(\text{motor on}) = 1$.

If it helps, you can think of $b^*$ as a "calibration" function which maps each uncalibrated probability to the probability at which you would be calibrated. Normally, we think of calibration functions as representing underconfidence and overconfidence -- if, when I say "90%", the event actually occurs an average of 80% of the time, then I'm overconfident and should adjust my probabilities downward. The idea here is exactly the same, except that here we're considering a case where the 80% observed frequency we see in the world might be a reaction to the 90% probability -- so if we move down to 80%, the world might move down further, to 70%, or might move up to 100%, etc. (This is why we need to select a fixed point of the calibration function, rather than just naively adjusting in the right direction.)
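A tiny illustration of why a single "adjust in the right direction" step isn't enough (the linear response below is made up purely for illustration):

```python
def b_star(p):
    """Made-up calibration response: report p, observe frequency b_star(p)."""
    return 0.5 + 0.75 * (p - 0.5)

report = 0.9
for _ in range(5):
    observed = b_star(report)
    print(f"report {report:.3f} -> observed frequency {observed:.3f}")
    report = observed            # "naive" adjustment: report what you just saw

# After any single adjustment the agent is still miscalibrated, because the
# world keeps reacting. The only self-consistent report is the fixed point,
# b_star(p) = p, which here is p = 0.5.
```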

Seeing $b^*$ as a calibration function will be more comfortable for a frequentist, who can consider all of this well-defined so long as we can place situations into sequences of random experiments. Causal decision theorists may prefer to think of $b^*$ as giving the true causal relationship between our probabilities and the world.[3]

So, basically, we want beliefs to approximate $b^*$ as we learn. More specifically, our beliefs should approximate the set of fixed points of $b^*$.

This implies some kind of iterated setting, where the agent updates its beliefs over time and selects fixed points repeatedly, rather than just once. I will assume that things look similar to Garrabrant Induction, in that respect. But this is not a formal impossibility proof! I am sketching reasons for pessimism, not formally showing that FixDT will never work. So don't worry about the details -- make up your own assumptions if my reasoning doesn't make sense to you. Let me know if you get it to work!

It would be easy if we could try out different probabilities $p$ and see $b^*(p)$ for each. It would just be a regression problem. The problem is, we don't get to observe probabilities. We only observe what happens.[4]

Imagine that our beliefs $b$ are a weighted mixture of hypotheses $b_i$, and $b^*$ is already one of the $b_i$. (This is usually the easiest case for learning -- the "realizable" case. If this doesn't work, there would seem to be little hope more generally.) How can we reward $b^*$ for getting things right?

Our chosen probabilities $p$ will be a fixed point of $b$, but will not necessarily be a fixed point of every $b_i$ in our mixture. We can reward beliefs which were pushing in the right direction. If $p(X)$ was 1/2, and $b_i(p)(X) < 1/2$, we could say that $b_i$ was trying to pull the probability down. If we then observe that $X$ turned out to be false, then $b_i$ should get rewarded with a higher weight in our mixture.

Now, here's the problem: we can't, in general, reward beliefs which correctly identify fixed points of $b^*$, or punish beliefs which incorrectly rule out $p$'s which are fixed points of $b^*$.

Suppose that $b^*$ has two fixed points, a good one $p_{\text{good}}$ and a bad one $p_{\text{bad}}$. Our only other hypothesis, $b_{\text{drag}}$, is defined as follows: $b_{\text{drag}}(p) := \tfrac{1}{2}(p + p_{\text{bad}})$; that is, it drags things halfway from wherever they are to $p_{\text{bad}}$. This can (with enough weight relative to other hypotheses) completely eliminate $p_{\text{good}}$ as a fixed point of the mixture, leaving only $p_{\text{bad}}$. And $b_{\text{drag}}$ will never lose credibility for doing this, since at $p_{\text{bad}}$ it makes the same prediction as $b^*$ -- which is to say, neither of them wants to make any corrections to the probabilities at that point, so no learning will happen no matter what gets observed.

In general, if we are at some fixed point of $b^*$, then $b^*$ will not be making any correction to that fixed point; so it seems difficult to reward or punish $b^*$. FixDT chooses some probability; then we observe what happens; it seems like we can only reward beliefs which were trying to push the probability towards the thing that happened (and punish those which pulled in the other direction).
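Here is a concrete toy version of this failure (the particular response curve, weights, and likelihood-based reweighting below are illustrative stand-ins for $b^*$, $b_{\text{drag}}$, and a proper scoring rule):

```python
import numpy as np

def b_star(p):
    """"True" response for one binary event: fixed points at 0 (bad),
    0.5 (repulsive), and 1 (good)."""
    return float(np.clip(2 * p - 0.5, 0.0, 1.0))

P_BAD = 0.0

def b_drag(p):
    """The dragging hypothesis: pulls any p halfway toward the bad point."""
    return (p + P_BAD) / 2

weights = {"b_star": 0.4, "b_drag": 0.6}
hypos = {"b_star": b_star, "b_drag": b_drag}

def market(p):
    return sum(w * hypos[name](p) for name, w in weights.items())

# With this much weight on b_drag, the mixture's only fixed point is P_BAD:
grid = np.linspace(0, 1, 10001)
print("market fixed points:", [round(p, 3) for p in grid if abs(market(p) - p) < 1e-9])

def reweight(p_market, outcome):
    """Multiply each weight by the likelihood its prediction gave the outcome."""
    new = {n: w * (hypos[n](p_market) if outcome else 1 - hypos[n](p_market))
           for n, w in weights.items()}
    z = sum(new.values())
    return {n: round(v / z, 3) for n, v in new.items()}

# At the chosen fixed point P_BAD both hypotheses predict the same thing,
# so no observation can move the weights: b_drag never loses credibility.
print("after observing 'false' at p = 0.0:", reweight(0.0, False))

# Only at a probability the market cannot actually settle on (not one of its
# fixed points) do the predictions differ enough for learning to happen:
print("if we could test p = 0.6 and saw 'true':", reweight(0.6, True))
```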

Attraction & Repulsion

Actually, we can distinguish between fixed points of $b^*$ which are attractor points vs those which are repulsive. (More generally, points can be varyingly attractive/repulsive when approached from different directions.)

For example, suppose I wire up the motor response as a steep S-shaped curve: the actual chance that the motor turns on rises sharply with my settled probability that it turns on, agreeing with it only at 0%, 50%, and 100%.

The 50% point will be a fixed point, but it will be repulsive: beliefs very close to the fixed point would map to beliefs a bit further away, so that if we iterated $b^*$, points initially near 50% would shoot away.

Similarly, 100% and 0% are attractive fixed points; probabilities near to them rapidly converge toward them if we iterate $b^*$.
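For instance, taking a steep response curve of the kind just described (the exact function below is an assumed stand-in, since only its shape matters):

```python
import numpy as np

def b_star(p):
    """A steep motor response with fixed points near 0, 0.5, and 1."""
    return 1.0 / (1.0 + np.exp(-40.0 * (p - 0.5)))

def iterate(p, n=30):
    for _ in range(n):
        p = b_star(p)
    return p

# Starting just off the 50% fixed point, iteration shoots away from it and
# settles on one of the attractive fixed points instead:
print(iterate(0.49))   # -> very close to 0
print(iterate(0.51))   # -> very close to 1

# Equivalently: a fixed point is attractive when the response's slope there is
# less than 1 in absolute value, repulsive when it is greater (here ~10 at 0.5).
```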

If the full market's fixed point ends up being close to an attractive point of $b^*$, then reality will respond by being even closer to the attractive point. This suggests that we can learn such points! Beliefs which are pushing toward the fixed point will be increasingly vindicated (in expectation, if we use a proper scoring rule to reward/punish beliefs).

On the other hand, belief in repulsive fixed-points will be correspondingly punished. 

This suggests that we can get some positive learning-theoretic results if we limit our aspirations: perhaps we cannot learn $b^*$ in general, but can learn its attractive fixed points.

(But don't forget that this can be a big disappointment from a decision-theoretic perspective. The attractive fixed-points can be terrible, and the repulsive fixed-points can be wonderful.)

Active inference to the rescue?

Some might say that the problem, here, is that I am using some of the ideas from Active Inference without adopting the full package.

Specifically, FixDT has in common with Active Inference that motor outputs are a function of what the agent believes its motor outputs will be, rather than the more common idea of being a function of expected utility.

But FixDT is trying to get away with this move without the accompanying Active Inference idea of skewing beliefs toward success.

Can we fix FixDT by adding in more ideas from Active Inference? Sort of, but I don't find it very satisfying.

Friendly Actuators?

I observed that attractive fixed-points appear to be learnable, while repulsive fixed-points appear unlearnable. But whether a point is attractive vs repulsive depends on $b^*$, which is to say, it depends on how the environment reacts to our beliefs. For example, we could instead wire up the motor response as a shallow curve which still agrees with my settled probability only at 0%, 50%, and 100%, but which is flatter than the diagonal near 50% and steeper than it near the ends.

The important thing to note, here, is that I've flipped which fixed-points are attractive vs repulsive. This is not very nice for the agent; it means the 50% point is learnable, but properly turning the motors on/off is no longer learnable.

So we could define "friendly actuators" as ones which have been designed so as to be easy for the agent to learn how to use. Is there a systematic way to design friendly actuators?

Well, we could take the idea from Active Inference. Rather than copy the action probabilities from the agent's chosen probabilities (which would make every distribution over actions a fixed point of $b^*$, but neither attractive nor repulsive, and therefore not very learnable), we should instead take the agent's probabilities, bias them toward success, and copy those probabilities. Since action-probabilities will always be shifted toward better outcomes, only optimal actions will be fixed points.

(This prevents us from learning full control; but who cares about failing to learn suboptimal fixed-points? We really only need to be able to learn the ones we actually want to choose.)
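A minimal sketch of such a "friendly actuator" (the bias rule and utilities are arbitrary choices; this is not a claim about how Active Inference itself formalizes the skewing):

```python
import numpy as np

U_ON, U_OFF = 1.0, 0.0   # made-up utilities: turning the motor on is better
BIAS = 0.2               # how strongly the actuator skews toward success

def friendly_actuator(p_on):
    """Copies the agent's probability of 'on', but shifts it toward whichever
    action has higher utility before acting on it."""
    shift = BIAS if U_ON > U_OFF else -BIAS
    return float(np.clip(p_on + shift, 0.0, 1.0))

# The only self-consistent probability is the optimal action taken for sure:
grid = np.linspace(0, 1, 10001)
print("fixed points:", [round(p, 3) for p in grid
                        if abs(friendly_actuator(p) - p) < 1e-9])

# And that fixed point is attractive: iterating the response converges to it.
p = 0.0
for _ in range(10):
    p = friendly_actuator(p)
print("after iteration:", p)
```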

My problem with this idea is that we're introducing "actuator decision theory" -- the actuator is now asked to be intelligent itself, in order to cooperate with the agent. The more we skew towards success, the more desirable fixed-points we make learnable. We might as well go all the way and have the actuator just make the best decision based on the beliefs, then! This returns us to classical decision theory.

Biased Reporting?

A different way to try and import the Active Inference idea is to bias the agent's probabilities themselves, rather than putting that responsibility on the actuators. Again, the idea is to make better outcomes learnable by helping them to be attractive fixed-points. 

For example, imagine Popular News Network (PNN) finds itself regularly reporting on bank runs. Bank runs have become a big problem, and PNN is doing a service to its viewers by reporting on expert predictions about which banks are in the process of collapsing, which banks are on unstable ground and might be next, which banks seem secure, etc.

PNN is not blind to the fact that its reports can actually cause or prevent bank-runs. Thus far, PNN's ethical position has been that they're doing fine so long as they (1) report the truth as accurately as they are able (the epistemic constraint) and (2) when the accuracy constraint allows for multiple possible reports to be fixed-points, they choose whichever report results in the fewest bank-runs (the instrumental constraint). 

However, PNN has noticed that despite their judicious adherence to the above, more and more bank-runs seem to be happening. Their expert analysts have figured out that bank runs are attractive fixed-points, but non-bank-runs are repulsive; the number of bank-runs in a given week roughly tracks however many PNN forecasts, but looking at the details, there are about 5% more on average than whatever is forecast.

As a result, the reporters, bound by honesty, keep sliding in the direction of predicting more bank runs, since the numbers tend to prove their previous forecasts to be underestimates. 

Taking an idea from Active Inference, PNN executives ask reporters to reduce their forecasted numbers by 10% from whatever the honest forecast would be, in the hopes of putting pressure against bank-runs.

I have a couple of problems with this approach.

First, if we violate the epistemic constraint, are the reported numbers really "probabilities" any more? They're just some numbers we made up. By bending epistemic rationality, we lose the nice properties we invented it for. Why invoke probability theory at all, if you're no longer trying to make your probabilities calibrated?[5]

Second, and relatedly: the viewers of PNN can pick up on the biased reporting and adjust the numbers back up by 10%.

This gets us into murky philosophical issues behind FixDT. The idea of FixDT is that the world might somehow react to our probabilities. But how does the world zero in on "our probabilities" to react to them? If we're settled on a specific version of FixDT, we don't care; FixDT just tracks how the world reacts, and chooses fixed-points accordingly. 

But if we're trying to decide between versions of FixDT (or between FixDT and other options), it might start to matter how the world detects our probabilities in order to react. If we violate the Epistemic Constraint and adjust some numbers up by 10%, will the world adjust those numbers back down before reacting to them?

Obviously, it depends. In some cases, the Active Inference idea will work fine. But in many cases of interest, it won't. That's really all I can say, here.

Connection to the "futarchy hack".

Earlier, in my heuristic argument that FixDT can't learn $b^*$, I divided the problem into two parts: (a) we can't reward traders who successfully make fixed points of $b^*$ into fixed points of the market; (b) we can't punish traders who successfully rule out fixed points of $b^*$ as market fixed points.

The second problem is very similar to the untaken actions problem, often called the "futarchy hack" (at least within the in-person LWDT community) because it is a way to control the decisions of a futarchy without risking any money: if you can bet enough money that the option you don't want will be bad for everyone, then that action won't get taken, so you'll simply get your money back. You put your money where your mouth was, but your predictions didn't get empirically tested.

One of the best remedies to this problem (perhaps the best remedy) is Decision Markets (aka BRIA), by Caspar Oesterheld. But I don't have a specific proposal for how to combine that with FixDT.

Future work?

  1. Combining updateless reasoning with FixDT.
  2. Further work on the learning-theoretic issues for FixDT.
  3. Spelling out the "dissolve the notion of evidence" thing I mentioned.
  4. Exploring the combination of BRIA and FixDT.
  5. FixDT can be seen as going up a single meta-level, from probabilities to $\Delta \to \Delta$ maps. But what if the world reacts to your "belief" (your $\Delta \to \Delta$ map)? Can we somehow deal with the implied infinite regress?
  6. FixDT game theory. Perhaps FixDT hierarchical game theory.
  7. Removing talk of "calibration" and $b^*$; motivating similar ideas in less ontologically questionable ways.
  8. Capitalizing on the nice ontology FixDT offers, to somehow further clarify "agent boundaries" stuff, or other issues in embedded agency?
  9. If we squint, we can see the Futarchy Hack as a failure of preference aggregation. We could say "the beliefs may actually have preferences", and treat attempting to rule out a fixed-point as a kind of vote. This is similar to the Active Inference idea, really. We can model Active Inference's way of biasing beliefs toward success by putting in a belief which pushes things toward success (rather than my much grosser, but basically similar, proposal of biasing things toward success after the fixed-point is chosen). Thus we can see "beliefs" as actually having a value component (based on which fixed-points they push things to). Can this get us anywhere??

 

  1. ^

    We also need to assume that the space of probability distributions being considered is compact, to apply Brouwer's fixed point theorem.

  2. ^

    This isn't a super-great "handshake" really -- I think it is little better than what EDT offers by allowing agents to believe that they are correlated with one another. The problem with both pictures is that there isn't a learning-theoretic story showing that agents can converge toward cooperation on such a basis, as far as I know.

  3. ^

    If neither of these pictures is satisfying to you, well... I think many conclusions one can reach by pretending there's a $b^*$ can be defended more carefully by other means, but I fully admit I'm not doing the work here.

  4. ^

    Of course, we only get to observe what happens for some observable things; I can't directly observe whether my beliefs impact eddies in the currents deep within the sun, for example. But I don't even expect that problem to be solvable in principle -- agents just have to make do with some irreducible uncertainty about such things. But it does feel like I should be able to learn the calibration function for motor-control problems, in order for FixDT to be considered a success.

  5. ^

    Or, we could make this point in other ways, if "calibration" is meaningless to you. For example, biased probabilities will no longer maximize expected accuracy.

Comments
SMK

A common trope is for magic to work only when you believe in it. For example, in Harry Potter, you can only get to the magical train platform 9 3/4 if you believe that you can pass through the wall to get there.

Are you familiar with Greaves' (2013) epistemic decision theory? These types of cases are precisely the ones she considers, although she is entirely focused on the epistemic side of things. For example (p. 916):

Leap. Bob stands on the brink of a chasm, summoning up the courage to try and leap across it. Confidence helps him in such situations: specifically, for any value of $x$ between $0$ and $1$, if Bob attempted to leap across the chasm while having degree of belief $x$ that he would succeed, his chance of success would then be $x$. What credence in success is it epistemically rational for Bob to have?

And even more interesting cases (p. 917):

Embezzlement. One of Charlie’s colleagues is accused of embezzling funds. Charlie happens to have conclusive evidence that her colleague is guilty. She is to be interviewed by the disciplinary tribunal. But Charlie’s colleague has had an opportunity to randomize the content of several otherwise informative files (files, let us say, that the tribunal will want to examine if Charlie gives a damning testimony). Further, in so far as the colleague thinks that Charlie believes him guilty, he will have done so. Specifically, if $x$ is the colleague’s prediction for Charlie’s degree of belief that he’s guilty, then there is a chance $x$ that he has set in motion a process by which each proposition originally in the files is replaced by its own negation if a fair coin lands Heads, and is left unaltered if the coin lands Tails. The colleague is a very reliable predictor of Charlie’s doxastic states. After such randomization (if any occurred), Charlie has now read the files; they (now) purport to testify to the truth of $n$ propositions $P_1, \ldots, P_n$. Charlie’s credence in each of the propositions $P_i$, conditional on the proposition that the files have been randomized, is $\tfrac{1}{2}$; her credence in each $P_i$ conditional on the proposition that the files have not been randomized is $1$. What credence is it epistemically rational for Charlie to have in the proposition that her colleague is guilty and in the propositions $P_1, \ldots, P_n$ that the files purport to testify to the truth of?

In particular, Greaves' (2013, §8, pp. 43-49) epistemic version of Arntzenius' (2008) deliberational (causal) decision theory might be seen as a way of making sense of the first part of your theory. The idea, inspired by Skyrms (1990), is that deciding on a credence involves a cycle of calculating epistemic expected utility (measured by a proper scoring rule), adjusting credences, and recalculating utilities until an equilibrium is obtained. For example, in Leap above, epistemic D(C)DT would find any credence permissible. And I guess that the second part of your theory serves as a way of breaking ties.

Yes, thanks for citing it here! I should have mentioned it, really.

I see the Skyrms iterative idea as quite different from the "just take a fixed point" theory I sketch here, although clearly they have something in common. FixDT makes it easier to combine both epistemic and instrumental concerns -- every fixed point obeys the epistemic requirement; and then the choice between them obeys the instrumental requirement. If we iteratively zoom in on a fixed point instead of selecting from the set, this seems harder?

If we try the Skyrms iteration thing, maybe the most sensible thing would be to move toward the beliefs of greatest expected utility -- but do so in a setting where epistemic utility emerges naturally from pragmatic concerns (such as A Pragmatist's Guide to Epistemic Decision Theory by Ben Levinstein). So the agent is only ever revising its beliefs in pragmatic ways, but we assume enough about the environment that it wants to obey both the epistemic and instrumental constraints? But, possibly, this assumption would just be inconsistent with the sort of decision problem which motivates FixDT (and Greaves).

SMK

You might also find the following cases interesting (with self-locating uncertainty as an additional dimension), from this post.

Sleeping Newcomb-1. Some researchers, led by the infamous superintelligence Omega, are going to put you to sleep. During the two days that your sleep will last, they will briefly wake you up either once or twice, depending on the toss of a biased coin (Heads: once; Tails: twice). After each waking, they will put you back to sleep with a drug that makes you forget that waking. The weight of the coin is determined by what the superintelligence predicts that you would say when you are awakened and asked to what degree ought you believe that the outcome of the coin toss is Heads. Specifically, if the superintelligence predicted that you would have a degree of belief  in Heads, then they will have weighted the coin such that the 'objective chance' of Heads is . So, when you are awakened, to what degree ought you believe that the outcome of the coin toss is Heads?

Sleeping Newcomb-2. Some researchers, led by the superintelligence Omega, are going to put you to sleep. During the two days that your sleep will last, they will briefly wake you up either once or twice, depending on the toss of a biased coin (Heads: once; Tails: twice). After each waking, they will put you back to sleep with a drug that makes you forget that waking. The weight of the coin is determined by what the superintelligence predicts your response would be when you are awakened and asked to what degree you ought to believe that the outcome of the coin toss is Heads. Specifically, if Omega predicted that you would have a degree of belief  in Heads, then they will have weighted the coin such that the 'objective chance' of Heads is Then: when you are in fact awakened, to what degree ought you believe that the outcome of the coin toss is Heads?

This post proposes an approach to decision theory in which the notion of "actions" is emergent. Instead of having an ontologically fundamental notion of actions, the agent just has beliefs, and some of them are self-fulfilling prophecies. For example, the agent can discover that "whenever I believe my arm will move up/down, my arm truly moves up/down", and then exploit this fact by moving the arm in the right direction to maximize utility. This works by having a "metabelief" (a mapping from beliefs to beliefs; my terminology, not the OP's) and allowing the agent to choose its belief out of the metabelief's fixed points.

The next natural question is then: can we indeed demonstrate that an agent will learn which part of the world it controls, under reasonable conditions? Abram implies that it should be possible if we only allow choice among attractive fixed points. He then bemoans the need for this restriction and tries to use ideas from Active Inference to fix it, with limited success. Personally, I don't understand what's so bad about staying with the attractive fixed points.

Unfortunately, this post avoids spelling out a sequential version of the decision theory, which would be necessary to actually establish any learning-theoretic result. However, I think that I see how it can be done, and it seems to support Abram's claims. Details follow.

Let's suppose that the agent observes two systems, each of which can be in one of two positions. At each moment of time, it observes an element of $O := O_1 \times O_2$, where $O_1 = O_2 = \{0, 1\}$. The agent believes it can control one of $O_1$ and $O_2$ whereas the other is a fair coin. However, it doesn't know which is which.

In this case, metabeliefs are mappings of type $\Delta O \to \Delta O$. Specifically, we have a hypothesis $h_1$ that asserts $O_1$ is controllable, a hypothesis $h_2$ that asserts $O_2$ is controllable, and the overall metabelief is (say) $\tfrac{1}{2} h_1 + \tfrac{1}{2} h_2$.

The hypothesis $h_1$ is defined by

$h_1(p) := r(p_1) \otimes \mathrm{Ber}(\tfrac{1}{2})$

Here, $p_i$ denotes the marginal of $p$ on $O_i$, $\mathrm{Ber}(\tfrac{1}{2})$ is the fair-coin distribution, and $r$ is some "motor response function", e.g. one which pushes the probability of position $1$ toward $0$ or $1$ according to whether it is below or above $\tfrac{1}{2}$.

Similarly, $h_2$ is defined by

$h_2(p) := \mathrm{Ber}(\tfrac{1}{2}) \otimes r(p_2)$

Now, let $p^*$ be an attractive fixed point of $h_1$ and consider some observed history $o$. If the statistics of $O_1$ in $o$ seem biased towards $p^*$ whereas the statistics of $O_2$ in $o$ seem like a fair coin, then the likelihoods will satisfy $\Pr_{h_1}(o) \gg \Pr_{h_2}(o)$, and hence the updated metabelief will be close to $h_1$, and therefore its chosen fixed point will be close to $p^*$ (since $p^*$ is an attractive fixed point). On the other hand, in the converse situation, the likelihoods will satisfy $\Pr_{h_2}(o) \gg \Pr_{h_1}(o)$, and hence the chosen fixed point will be close to an attractive fixed point of $h_2$. Hence, the agent effectively updates on the observed history and will choose some fixed point which controls the available degrees of freedom correctly.

Notice that all of this doesn't work with repelling fixed points. Indeed, if we used an $r$ whose only attractive fixed point were $\tfrac{1}{2}$, then the metabelief would have a unique fixed point and there would be nothing to choose.

I find these ideas quite intriguing and am likely to keep thinking about them!

I don't think the learnability issues are really a problem. I mean, if doing a handstand with a burning 100 riyal bill between your toes under the full moon is an exception to all physical laws and actually creates utopia immediately, I'll never find out either. Assuming you agree that that's not a problem, why is the scenario you illustrate? In both cases, it's not like you can't find out, you just don't, because you stick to what you believe is the optimal action.

I don't think this would be a significant problem in practice any more than other kinds of humean trolling are. It always seems much more scary in these extremely barebones toy problems, where the connection between the causes and effects we create really is kind of arbitrary. I especially don't think it will be possible to learn the counterfactuals of FDTish cooperation and such in these small settings, no matter the method.

Plus you can still do value-of-information exploration. The repulsive fixed points are not that hard to find if you're looking for them. If you've encircled one and found repulsion all around the edge, you know there must be one in there, and can get there with a procedure that just reverses your usual steps. Combining this with simplicity priors over a larger setting into which the problem is integrated, I don't think its any more worrying than the handstand thing.

You can do exploration, but the problem is that (unless you explore into non-fixed-point regions, violating epistemic constraints) your exploration can never confirm the existence of a fixed point which you didn't previously believe in. However, I agree that the situation is analogous to the handstand example, assuming it's true that you'd never try the handstand. My sense is that the difficulties I describe here are "just the way it is" and only count against FixDT in the sense that we'd be happier with FixDT if somehow these difficulties weren't present. 

I think your idea for how to find repulsive fixed-points could work if there's a trader who can guess the location of the repulsive point exactly rather than approximately, and has the wealth to precisely enforce that belief on the market. However, the wealth of that trader will act like a martingale; there's no reliable profit to be made (even on average) by enforcing this fixed point. Therefore, such a trader will go broke eventually. On the other hand, attractive fixed points allow profit to be made (on average) by approximately guessing their locations.

Repulsive points effectively "drain willpower".

On reflection, I didn't quite understand this exploration business, but I think I can save a lot of it.

>You can do exploration, but the problem is that (unless you explore into non-fixed-point regions, violating epistemic constraints) your exploration can never confirm the existence of a fixed point which you didn't previously believe in.

I think the key here is in the word "confirm". It's true that unless you believe p is a fixed point, you can't just try out p and see the result. However, you can change your beliefs about p based on your results from exploring things other than p. (This is why I call the thing I'm objecting to humean trolling.) And there is good reason to think that the available fixed points are usually pretty dense in the space. For example, outside of the rule that binarizes our actions, there should usually be at least one fixed point for every possible action. Plus, as you explore, your beliefs change, creating new believed-fixed-points for you to explore.

>I think your idea for how to find repulsive fixed-points could work if there's a trader who can guess the location of the repulsive point exactly rather than approximately

I don't think that's needed. If my net beliefs have a closed surface in probability space on which they push outward, then necessarily those beliefs have a repulsive fixed point somewhere within that surface. I can then explore that believed fixed point. Then if it's not a true fixed point, and I still believe in the closed surface, there's a new fixed point in that surface that I can again explore (generally more in the direction I just got pushed away from). This should converge on a true fixed point. The only thing that can go wrong is that I stop believing in the closed surface, and it seems like I should leave open that possibility -- and even then, I might believe in it again after I do some checking along the outside.

>However, the wealth of that trader will act like a martingale; there's no reliable profit to be made (even on average) by enforcing this fixed point. 

This I don't understand at all. If you're in a certain fixed point, shouldn't the traders that believe in it profit from the ones that don't?

This I don't understand at all. If you're in a certain fixed point, shouldn't the traders that believe in it profit from the ones that don't?

This is the fundamental obstacle according to me,  so, unfortunate that I haven't successfully communicated this yet. 

Perhaps I could suggest that you try to prove your intuition here? 

It seems like fixed points could be used to replace the concept of utility, or at least to ground it as an inferred property of more fundamental features of the agent-environment system. The concept of utility is motivated by the observation that agents have preference orderings over different states. Those preference orderings are statements about the relative stability of different states, in terms of the direction in which an agent tends to transition between them. It seems duplicative to have both utilities and fixed points as two separate descriptions of state transition processes in the agent-environment system; utilities look like they could be defined in terms of fixed points.

 

As one preliminary idea for how to do this, you could construct a fully connected graph $G$ in which the vertices are the probability distributions $p$ that satisfy $b(p) = p$. The edges of $G$ are beliefs that represent hypothetical transitions between the fixed points. The graph $G$ would take the place of a preference ordering by describing the tendency of the agent to move between the fixed points if given the option. (You could also model incomplete preferences by not making the graph fully connected.) Performing power iteration with the transition matrix of $G$ would act as a counterpart to moving through the preference ordering.

 

Further exploration of this unification of utilities and fixed points could involve connecting $G$ to the beliefs that are actually, rather than just counterfactually, present in the agent-environment system, to describe what parts of the system the agent can control. Having a way to represent that connection could let us rewrite the instrumental constraint to not rely on a utility function.

An intriguing perspective, but I'm not sure whether I agree. Naively, it would seem that a choice between fixed points in the FixDT setting is just a choice between different probability distributions, which brings us very close to the VNM idea of a choice between gambles. So VNM-like utility theory seems like the obvious outcome.

That being said, I don't really agree with the idea that an agent should have a fixed VNM-like utility function. So I do think some generalization is needed.

SMK

Epistemic Constraint: The probability distribution $p$ which the agent settles on cannot be self-refuting according to the beliefs. It must be a fixed point of $b$: a $p$ such that $b(p) = p$.

Minor: there might be cases in which there is a fixed point $p$, but where the agent doesn't literally converge or deliberate their way to it, right? (Because you are only looking for $b$ to satisfy the conditions of Brouwer/Kakutani, and not, say, Banach, right?) In other words, it might not always be accurate to say that the agent "settles on $p$". EDIT: oh, maybe you are just using "settles on" in the colloquial way.

Yeah, "settles on" here meant however the agent selects beliefs. The epistemic constraint implies that the agent uses exhaustive search or some other procedure guaranteed to produce a fixed point, rather than Banach-style iteration. 

Moving to a Banach-like setting will often make the fixed points unique, which takes away the whole idea of FixDT.

Moving to a setting where the agent isn't guaranteed to converge would mean we have to re-write the epistemic constraint to be appropriate to that setting.