All of Slider's Comments + Replies

Slider

As I understand it, expanding "candy" into A and B while not expanding the other option will make the ratios come out differently.

In probability one can make the assumption of equiprobability: if you have no reason to think one outcome is more likely than another, it might be reasonable to assume they are equally likely.

If we knew what was important and what was not, we would be sure about the optimality. But since we think we don't know it, or might be in error about it, we treat the value as if it could be hiding anywhere. That seems to work in a world where each node is roughly comparably likely to contain value. I guess it comes from the relevant utility functions being defined in terms of states we know about.
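A toy sketch of the partition-dependence point above, with illustrative numbers of my own (not from the original exchange): applying equiprobability before versus after expanding "candy" into two sub-options assigns different probabilities to candy.

```python
# Equiprobability over the coarse partition {candy, chocolate}.
coarse = {"candy": 1 / 2, "chocolate": 1 / 2}

# Expand "candy" into A and B, then apply equiprobability again
# over the finer partition {candy A, candy B, chocolate}.
fine_options = ["candy A", "candy B", "chocolate"]
fine = {option: 1 / len(fine_options) for option in fine_options}

p_candy_coarse = coarse["candy"]                  # 0.5
p_candy_fine = fine["candy A"] + fine["candy B"]  # 2/3
print(p_candy_coarse, p_candy_fine)               # the ratios differ
```

The same event ("candy") gets probability 1/2 under one partition and 2/3 under the other, which is exactly why the choice of which option to expand matters.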

Alex Turner
What do you mean? I'm not currently trying to make claims about which variants we'll actually be likely to specify, if that's what you're asking. Just that, in the reasonably broad set of situations covered by my theorems, the vast majority of variants of every objective function will make power-seeking optimal.
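A minimal sketch of that "vast majority of variants" claim, using a toy environment of my own construction rather than the formal setup of the theorems: from the start state, one action commits to a single terminal state, while the other keeps three terminal states reachable. Counting over all permutations (variants) of a fixed reward vector, the option-preserving action is optimal for most of them.

```python
import itertools

# Toy environment (illustrative, not Turner's formal setup):
# action "left" commits to terminal state a; action "right" keeps
# terminal states {b, c, d} reachable. Reward is assigned per
# terminal state; values here are arbitrary but distinct.
rewards = [3.0, 1.0, 2.0, 0.5]

# Over all permutations of the reward vector, count how often the
# option-preserving action "right" is optimal: that happens exactly
# when the best reward sits somewhere in {b, c, d}.
variants = list(itertools.permutations(rewards))
power_seeking_optimal = sum(
    1 for r_a, r_b, r_c, r_d in variants if max(r_b, r_c, r_d) > r_a
)
print(power_seeking_optimal, len(variants))  # 18 of 24 variants
```

The 3/4 fraction just reflects that the maximum reward lands among the three preserved states in three out of four positions; the theorems make a far more general version of this counting argument.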
Slider

This jumps from mathematical consistency to a kind of opinion when Pareto improvement enters the picture. Sure, if we have a choice between two social policies and everyone prefers one over the other because their personal lot is better, there is no conflict over the ordering. This could be warranted if for some reason we needed consensus to get a "thing passed". However, where there is true conflict it seems to say that a "good" social policy can't be formed.

To be somewhat analogous with "utility monster", construct a "consensus spoiler"...

Abram Demski
Yeah, I like your "consensus spoiler". Maybe needs a better name, though... "Contrarian Monster"?

This way of defining the Consensus Spoiler seems needlessly assumption-heavy, since it assumes not only that we can already compare utilities in order to define this perfect antagonism, but furthermore that we've decided how to deal with cofrences. A similar option with a little less baggage is to define it as having the opposite of the preferences of our social choice function. They just hate whatever we end up choosing to represent the group's preferences. A simpler option is just to define the Contrarian Monster as having opposite preferences from one particular member of the collective. (Any member will do.) This ensures that there can be no Pareto improvements.

Actually, the conclusion is that you can form any social choice function. Everything is "Pareto optimal". If we think of it as bargaining to form a coalition, then there's never any reason to include the Spoiler in a coalition (especially if you use the "opposite of whatever the coalition wants" version).

In fact, there is a version of Harsanyi's theorem which allows for negative weights, to allow for this -- giving an ingroup/outgroup sort of thing. Usually this isn't considered very seriously for definitions of utilitarianism. But it could be necessary in extreme cases. (Although putting zero weight on it seems sufficient, really.)

Pareto-optimality doesn't really give you the tools to mediate conflicts; it's just an extremely weak condition on how you do so, which says essentially that we shouldn't put negative weight on anyone. Granted, the Consensus Spoiler is an argument that Pareto-optimality may not be weak enough, in extreme situations.
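A small sketch of the two claims above, with outcomes and utilities I made up for illustration: adding a member whose utilities are the exact negation of one existing member's eliminates every Pareto improvement (so everything becomes "Pareto optimal"), while a Harsanyi-style weighted sum with zero weight on that member still yields an ordinary group choice.

```python
# Illustrative outcomes and per-member utilities (my numbers).
outcomes = ["x", "y", "z"]
alice = {"x": 1.0, "y": 2.0, "z": 0.0}
bob = {"x": 2.0, "y": 1.5, "z": 0.0}
# The "Contrarian Monster": exactly opposite preferences to Alice's.
monster = {o: -u for o, u in alice.items()}

def pareto_improvements(agents):
    """Pairs (a, b): everyone weakly prefers b to a, someone strictly."""
    pairs = []
    for a in outcomes:
        for b in outcomes:
            if a != b and all(u[b] >= u[a] for u in agents) \
                      and any(u[b] > u[a] for u in agents):
                pairs.append((a, b))
    return pairs

# Without the monster, moving away from z helps everyone; with the
# monster included, no move can help Alice without hurting it.
print(pareto_improvements([alice, bob]))           # [('z', 'x'), ('z', 'y')]
print(pareto_improvements([alice, bob, monster]))  # []

# Harsanyi-style aggregation: social utility as a weighted sum.
# Zero weight on the monster recovers a sensible group choice.
weights = [(alice, 1.0), (bob, 1.0), (monster, 0.0)]
def social(o):
    return sum(w * u[o] for u, w in weights)

best = max(outcomes, key=social)
print(best)  # 'y'
```

With the monster at zero weight the group simply maximizes Alice's and Bob's summed utility; a negative weight would correspond to the ingroup/outgroup variant mentioned above.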
Slider

Vzcnpg vf gur nzbhag V zhfg qb guvatf qvssreragyl gb ernpu zl tbnyf
Ngyrnfg guerr ovt fgebat vaghvgvbaf. N guvat gung unccraf vs vg gheaf gur erfhygf bs zl pheerag npgvbaf gb or jnl jbefr vf ovt vzcnpg. N guvat gung unccraf vs gur srnfvovyvgl be hgvyvgl bs npgvba ng zl qvfcbfny vf punatrq n ybg gura gung vf n ovt qrny (juvpu bsgra zrnaf gung npgvba zhfg or cresbezrq be zhfg abg or cresbezrq). Vs gurer vf n ybg bs fhecevfr ohg gur jnl gb birepbzr gur fhecevfrf vf gb pneel ba rknpgyl nf V jnf nyernql qbvat vf ybj gb ab vzcnpg.

Alex Turner
For ease of reference, I'm going to translate any ROT13 comments into normal spoilers.