All of LukasM's Comments + Replies

Thanks a lot for the link. I'll put it in the reading list (if you don't mind).

I would be interested to hear what you think about the more technical version of the problem. Do you also think that it can have no good solution, or do you think that a solution just won't have the nice philosophical consequences?

Also, I'm excited to know a smart waterfall apologist, and if you're up for it I'd really like to talk more with you about the argument in your thesis once I've thought about it a bit more.


2Daniel Kokotajlo
I'm glad you are interested, and I'd love to hear your thoughts on the paper if you read it. I'd love to talk with you too; just send me an email whenever you'd like and we can Skype or something. What do you mean by "the more technical version of the problem," exactly? My take right now is that algorithmic similarity (and instantiation), at least the versions of it relevant for consciousness, decision theory, and epistemology, will have to be either a brute empirical fact about the world or a subjective fact about the mind of the agent reasoning about it (like priors and utility functions). What it will not be is some reasonably non-arbitrary property/relation with interesting and useful features (like Nash equilibria, centers of mass, and temperature).

I definitely think the computational complexity approach is worth looking into, though I think computational complexity behaves kind of weirdly at low complexities.

I like the view that waterfalls are at least a bit conscious! Definitely goes against my own intuition.

I'm a bit worried that whether there is a low-description-complexity, low-computational-complexity algorithm that decodes a human from a waterfall might depend heavily on how we encode the waterfall as a mathematical object, and that although for "natural" encodings it would be clear that the waterfall is unlike a human, we might need a theory to tell us which encodings are natural and which are not.
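To make that worry concrete, here is a minimal Python sketch (my own illustration, not something from the thread; all names are hypothetical). It shows how the apparent simplicity of the decoder can be an artifact of the encoding: a rigged encoding makes the decoder trivial, while a more natural one forces the decoder to hide all the complexity in a lookup table.

```python
def run_computation(step):
    """Stand-in for the target computation we care about, e.g. the
    t-th state of a simulated mind. Stub: returns a labeled state."""
    return f"mind_state_{step}"

# A "rigged" encoding of the waterfall: relabel the waterfall's states
# so that state t simply *is* the t-th state of the computation.
def rigged_waterfall_encoding(t):
    return run_computation(t)

# Under the rigged encoding, the decoder is trivially simple...
def decode_rigged(encoded_state):
    return encoded_state  # identity map: zero real decoding work

# ...whereas under a more "natural" encoding (say, raw particle data),
# a decoder that recovers mind states must smuggle in a giant lookup
# table: all of the complexity now lives in the decoder itself.
LOOKUP = {f"particles_at_t{t}": run_computation(t) for t in range(100)}

def decode_natural(encoded_state):
    return LOOKUP[encoded_state]  # complexity hidden in the table

for t in range(3):
    assert decode_rigged(rigged_waterfall_encoding(t)) == run_computation(t)
    assert decode_natural(f"particles_at_t{t}") == run_computation(t)
```

The same waterfall, two encodings, two very different decoder complexities: without a theory of which encodings are "natural," the complexity measure can be gamed.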

1Vanessa Kosoy
Not sure what you mean by "computational complexity behaves kind of weirdly at low complexities"? In this case, I would be tempted to try the complexity class L (logarithmic space complexity). The most natural encoding is your "qualia", your raw sense data. This still leaves some freedom in how you represent it, but this freedom has only a very minor effect.
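For readers unfamiliar with L: a logspace algorithm may reread its input as often as it likes but keeps only O(log n) bits of working memory. A minimal Python sketch of what that restriction looks like in practice (my own illustration, not from the thread):

```python
def balanced(stream):
    """Check that a string over {'a', 'b'} contains equally many of each.

    The only working state is one integer counter, which takes
    O(log n) bits for an input of length n -- the hallmark of L.
    """
    count = 0
    for ch in stream:
        count += 1 if ch == 'a' else -1  # never stores the input itself
    return count == 0

print(balanced("aabb"))  # True
print(balanced("abab"))  # True
print(balanced("aab"))   # False
```

The relevance, as I read the suggestion, is that an L-bounded decoder cannot do its work by storing a transformed copy of the whole input in memory, which blocks one family of "cheating" decodings.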

Thank you!

I hadn't thought about Rice's Theorem in this context before but it makes a lot of sense.

I guess I would say that Rice's Theorem tells us that you can't computably classify Turing machines by the functions they compute, but since algorithmic similarity calls for a much finer classification, I don't immediately see how it would apply.
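For reference, the standard statement (my paraphrase, not quoted from the thread): writing $\varphi_e$ for the partial function computed by the $e$-th Turing machine,

$$\text{if } P \text{ is a set of partial computable functions with } \varnothing \subsetneq P \subsetneq \mathcal{PC}, \text{ then } \{\, e \mid \varphi_e \in P \,\} \text{ is undecidable.}$$

Since membership of $\varphi_e$ in $P$ depends only on the function computed, the theorem obstructs extensional classifications; a notion of algorithmic similarity that distinguishes machines computing the same function is intensional, and so falls outside its immediate scope.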

And even if we had an impossibility result of this kind, I don't think it would actually be a deal-breaker, since we don't need the classification to be computable in general for it to be enlightening.



I'm going to post this anyway, since it's blog day and not important-quality-writing day, but I'm not sure this blog has much of a purpose anymore.

I liked the characterization of decision theory and the comment that the problem naively seems trivial from this perspective. I also liked the description of Newcomb's problem as a version of the prisoner's dilemma. So it totally had a purpose!

I have already stated I see the third bullet as an unfair problem.

Should this be "the first bullet"?

1jaek
Fixed