All of Morpheus's Comments + Replies

A physicalist hypothesis is a pair (Φ,Θ), where Φ is a finite[4:2] set representing the physical states of the universe and Θ∈□(Γ×Φ) represents a joint belief about computations and physics. [...] Our agent will have a prior over such hypotheses, ranging over different Φ.

I am confused about what the state space Φ is adding to your formalism and how it is supposed to solve the ontology identification problem. Based on what I understood, if I want to use this for inference, I have this prior ξ∈□c(Φ,Θ), and now I can use...

2Vanessa Kosoy
First, the notation ξ∈□c(Φ,Θ) makes no sense. The prior is over hypotheses, each of which is an element of □(Γ×Φ). Θ is the notation used to denote a single hypothesis.

Second, having a prior just over Γ doesn't work, since both the loss function and the counterfactuals depend on 2^Γ×Γ.

Third, the reason we don't just start with a prior over 2^Γ×Γ is that it's important which prior we have. Arguably, the correct prior is the image of a simplicity prior over physicalist hypotheses under the bridge transform. But, come to think of it, it might be about the same as having a simplicity prior over 2^Γ×Γ, where each hypothesis is constrained to be invariant under the bridge transform (thanks to Proposition 2.8). So, maybe we can reformulate the framework to get rid of Φ (but not of the bridge transform). Then again, finding the "ultimate prior" for general intelligence is a big open problem, and maybe in the end we will need to specify it with the help of Φ.

Fourth, I wouldn't say that Φ is supposed to solve the ontology identification problem. The way IBP solves the ontology identification problem is by asserting that 2^Γ×Γ is the correct ontology. And then there are tricks for translating between other ontologies and this ontology (which is what section 3 is about).
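For readers tracking the spaces involved, here is a schematic summary of the objects referenced above, as a sketch in lightly simplified IBP notation; the prior notation ζ, p_i below is illustrative, not from the post:

```latex
% Sketch of the IBP objects discussed above; \Box(X) denotes infra-beliefs
% over X, and signatures are simplified relative to the post's \Box^c.
\[
\Theta \in \Box(\Gamma \times \Phi)
  \quad\text{one physicalist hypothesis: computations } \Gamma \text{ jointly with physics } \Phi
\]
\[
\mathrm{Br}(\Theta) \in \Box(2^{\Gamma} \times \Gamma \times \Phi)
  \quad\text{bridge transform; marginalizing out } \Phi \text{ lands in } \Box(2^{\Gamma} \times \Gamma)
\]
\[
\zeta\bigl((\Phi_i, \Theta_i)\bigr) = p_i
  \quad\text{the prior ranges over whole hypotheses } (\Phi_i,\Theta_i)\text{, not over } \Gamma \text{ alone}
\]
```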

1Donald Hobson
A lot of the human genome does biochemical stuff like ATP synthesis; these genes we share with bananas. A fair bit goes into hands, etc. The number of genes needed to encode the human brain is fairly small. The file size of GPT-3's code is also small.

I don't see the problem. Your learning algorithm doesn't have to be "very" complicated. It has to work. Machine learning models don't consist of millions of lines of code. I do see the concern that one might expect evolution not to be very good at doing that compression, but I find the argument that lots of bits would actually be needed very unconvincing.

2Tassilo Neubauer
Last time I checked, you could not teach a banana basic arithmetic. This works for most humans, so obviously evolution did lots of leg work there.
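
As a quick sanity check on the size comparison in this thread, here is a back-of-the-envelope sketch (the constants are rough public figures; the 2-bits-per-base-pair encoding and the 1% brain-specific fraction are illustrative assumptions, not claims from the thread):

```python
# Rough information-budget comparison for the genome/GPT-3 thread above.
# All constants are approximate; the brain-specific fraction is illustrative.

GENOME_BASE_PAIRS = 3.1e9      # human genome length, ~3.1 billion base pairs
BITS_PER_BASE_PAIR = 2         # 4 possible bases -> log2(4) = 2 bits each

genome_mb = GENOME_BASE_PAIRS * BITS_PER_BASE_PAIR / 8 / 1e6
print(f"whole genome: ~{genome_mb:.0f} MB (an upper bound)")          # ~775 MB

# Most of that is generic biochemistry ("shared with bananas"); only a
# fraction can be specifying brain architecture / the learning algorithm.
brain_fraction = 0.01          # illustrative assumption, not a measurement
print(f"brain-specific budget at 1%: ~{genome_mb * brain_fraction:.1f} MB")

# The GPT-3 analogy: the code defining a learning algorithm is tiny
# compared to the parameters the algorithm learns from data.
gpt3_param_gb = 175e9 * 2 / 1e9    # 175B params at 2 bytes (fp16) each
print(f"GPT-3 learned weights: ~{gpt3_param_gb:.0f} GB, from ~MBs of source code")
```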

Thanks! Exactly what I was looking for :)

One claim I found very surprising:

To make computationalism well-defined, we need to define what it means for a computation to be instantiated or not. Most of the philosophical arguments against computationalism attempt to render it trivial by showing that according to any reasonable definition, all computations are occurring everywhere at all times, or at least there are far more computations in any complex object than a computationalist wants to admit. I won't be reviewing those arguments here; I personally think they fail if we define computation carefully [...]
2Abram Demski
I'd be happy to chat about it some time (PM me if interested). I don't claim to have a fully worked out solution, though. 
0Daniel Kokotajlo
I wrote my undergrad thesis on this problem and tentatively concluded it's unsolvable; if you read it and think you have a solution that might satisfy me, I'd love to hear it! Maybe Chalmers (linked by Jacob) solves it, idk.
0Jacob Pfau
Here's Chalmers defending his combinatorial state automata idea.