2 min read · 17th Nov 2014 · 2 comments


Personal Blog

I've maintained a consistent focus on the approximable when it comes to idealized theories of intelligence. I don't care much how long it takes to converge, but if there *is* no convergent approximation, I figure all bets are off. This is somewhat nonstandard.
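
To make "convergent approximation" concrete, here is a toy sketch (purely illustrative, and mine rather than anything from the literature), with a Python generator standing in for a Turing machine: the halting problem is not decidable, but the guess "does this program halt within n steps?" converges to the right answer as n grows, with no computable bound on how large n needs to be.

```python
# Toy illustration of a convergent approximation (limit computability).
# A "program" is modeled as a zero-argument generator function: each `yield`
# is one computation step, and exhausting the generator means halting.

def halts_within(program, n):
    """Return True iff `program` halts within n steps."""
    gen = program()
    for _ in range(n):
        try:
            next(gen)
        except StopIteration:
            return True   # halted within the step budget
    return False          # still running after n steps


def halting_program():    # halts after 10 steps
    for _ in range(10):
        yield


def looping_program():    # never halts
    while True:
        yield


# As n grows, halts_within(p, n) converges to the true answer "does p halt?",
# but nothing computable tells us how large n must be before it stops changing.
print([halts_within(halting_program, n) for n in (1, 5, 20)])  # [False, False, True]
print([halts_within(looping_program, n) for n in (1, 5, 20)])  # [False, False, False]
```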

There are many levels at which we could focus attention. The real target for algorithms is, of course, practical utility. This has several proxies, including polynomial-time computability and polynomial-time approximability (of various kinds).

Going beyond this, we could focus on computable distributions (i.e., distributions which can be approximated to any desired epsilon by a terminating algorithm); distributions which can be approximated from below, from above, or both; and my typical choice, anything which has a convergent approximation at all.
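
Pinning these down with the standard definitions (this is just my formalization of the levels above, for a real-valued quantity p(x)):

```latex
\begin{align*}
\text{computable:} \quad & \exists\ \text{computable } f \ \text{ with } \ |f(x,\epsilon) - p(x)| \le \epsilon \ \text{ for every rational } \epsilon > 0, \\
\text{approximable from below:} \quad & \exists\ \text{computable } g \ \text{ with } \ g(x,n) \nearrow p(x) \ \text{ as } n \to \infty, \\
\text{approximable from above:} \quad & \exists\ \text{computable } g \ \text{ with } \ g(x,n) \searrow p(x) \ \text{ as } n \to \infty, \\
\text{approximable (limit-computable):} \quad & \exists\ \text{computable } g \ \text{ with } \ \lim_{n\to\infty} g(x,n) = p(x), \ \text{ no rate guaranteed.}
\end{align*}
```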

It is also perfectly reasonable in some cases to go to even broader levels. For example, we might have approximable agents which are indexed by ordinals, such that higher ordinals are strictly better in some sense, but we cannot take the limit to give a single approximable super-agent better than all in the set. Rather than computations that converge to a theoretical ideal, we are now forced to choose some sufficiently high ordinal and approximate that. This kind of agent is effectively one Turing jump higher than mere approximability.

It is also possible to consider agents which encompass the whole arithmetic hierarchy, or the analytic hierarchy, or agents which have perfect knowledge of set theory, and so on.
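
For orientation, here is the standard way these levels line up with the arithmetic hierarchy; the identifications below are textbook facts (Shoenfield's limit lemma and its relativizations), not claims specific to this post:

```latex
\begin{align*}
\Delta^0_1 &: \text{decidable / computable} \\
\Sigma^0_1,\ \Pi^0_1 &: \text{approximable from below / from above} \\
\Delta^0_2 &: \text{approximable (limit-computable)} \ = \ \text{computable relative to the halting oracle } 0' \\
\Delta^0_3 &: \text{one Turing jump higher: limit-computable relative to } 0' \\
&\ \ \vdots \\
\Sigma^1_1,\ \Pi^1_1,\ \dots &: \text{the analytic hierarchy, beyond every arithmetic level.}
\end{align*}
```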

Each of these levels gives rise to a significantly different landscape of possibilities. Our hope with any level we choose is that we can derive useful insights to apply at the practical level. However, the phenomena at each level are significantly different. A path may look promising when examined at one level and impossible when examined at another.

The example of this which I have in mind is the reflective oracle result, which intuitively tells us that it is possible for a rational Bayesian agent to understand environments which include equally-powerful agents, so long as we define a special self-reference call into the agent and allow the agents to take on mixed strategies in a way that makes consistent self-reference possible.
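
Concretely, the consistency condition on the oracle is roughly the following (my paraphrase of the reflective-oracle write-ups; the exact formulation may differ from the version meant here). The oracle O answers queries (M, p), where M is a probabilistic oracle machine and p is a rational:

```latex
O(M, p) = 1 \ \text{ if } \ \Pr\!\left[ M^{O}() = 1 \right] > p,
\qquad\qquad
O(M, p) = 0 \ \text{ if } \ \Pr\!\left[ M^{O}() = 1 \right] < p,
```

with O free to answer either way (possibly at random) when the probability is exactly p. That freedom is what makes consistent answers possible even when M itself queries O.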

The method, however, is not approximable. I don't know exactly what happens when we try to bring it into the realm of approximability; we might find that no such thing is possible there. This might give us a very different picture of what is possible at the practical level.

The level of approximability seems somewhat special to me in that anything less seems to give up quite a bit, and anything more seems difficult to connect with reality. I don't want to make my message here too strong, though; it seems the important thing is to be aware of the different computational levels we can work at and their connection to reality.


2 comments

Sorry for the delayed reply! I like this post, and agree with much of what you're saying. I guess I disagree with the particular line you draw, though. I think there's an interesting line, but it's at "is there a Turing machine which will halt and give the answer to this problem," rather than "is there a Turing machine which will spit out the correct answer for a large enough input (but will spit out wrong answers for smaller inputs, and you don't know what number is large enough)." The latter doesn't seem that qualitatively different to me from "is there a Turing machine which will give you the correct answer if you give it two large enough numbers (but will spit out wrong answers if the numbers aren't large enough, and what is 'large enough' for the second number depends on what the first number you give it is)," which takes you one more level up the arithmetic hierarchy. The line between the first and the second seems more significant to me than the line between the second and the third.
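
Spelled out a bit more formally (just a gloss on the wording above, using the usual limit characterizations):

```latex
\begin{align*}
(1)\ & \text{halts with the answer:} && f(x) \ \text{ is output by a machine that halts on } x \quad (\Delta^0_1), \\
(2)\ & \text{one limit:} && f(x) = \lim_{n\to\infty} g(x, n) \ \text{ for computable } g \quad (\Delta^0_2), \\
(3)\ & \text{two limits:} && f(x) = \lim_{m\to\infty} \lim_{n\to\infty} h(x, m, n) \ \text{ for computable } h \quad (\Delta^0_3),
\end{align*}
```

where in (2) and (3) nothing computable bounds how large the inputs have to be; the claim is that the step from (1) to (2) is the more significant one.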

Regarding the reflective oracle result: yes, the version presented in the post is higher up in the arithmetic hierarchy, but I think the variant discussed in this comment thread with Paul is probably approximable. As I said above, though, I'm not convinced that that's actually the right place to put the line. Also, there's of course an "AIXItl-like version" which is made computable by limiting itself to hypotheses with bounded source code length and computation time. But most importantly in my mind, I don't actually think it would make sense for any actual agent to compute an oracle like this; the point is to define a notion of a perfect Bayesian agent which can reason about worlds containing other perfect Bayesian agents, in the hope that this model will yield useful insights about the real world of logically uncertain agents reasoning about worlds containing other logically uncertain agents. And these logically uncertain agents certainly don't reason about each other by computing out an "oracle" first.

...though: although I'm guessing the variant of the reflective oracle discussed in the comment thread may be approximable, it seems less likely that an approximable version of AIXI could be defined based on it.