All of Cullen's Comments + Replies

Cullen10

Thanks! I'm a bit confused by this though. Could you point me to some background information on the type of tracking that is done there?

Cullen10

Is there a publicly accessible version of the dataset?

Cullen30

Thanks, done. LW makes it harder than EAF to make sequences, so I didn't realize any community member could do so.

Cullen10

> If some law is so obviously a good idea in all possible circumstances, the AI will do it whether it is law-following or human-preference-following.

As explained in the second post, I don't agree that that's implied if the AI is intent-aligned but not aligned with some deeper moral framework like CEV.

> The question isn't whether there are laws that are better than nothing. It's whether we are better off encoding what we want the AI to do into laws, or into the terms of a utility function; which format (or maybe some other format) is best for encoding our preferences.

... (read more)
1 Donald Hobson
I mean, you could say that if we haven't figured out how to do it well in the last 10,000 years, maybe don't plan on doing it in the next 10. That's kind of being mean though.

If you have a functioning arbitration process, can't you just say "don't do bad things" and leave everything down to the arbitration?

I also kind of feel that adding laws is going in the direction of more complexity, and we really want as simple as possible (i.e., the minimal AI that can sit in a MIRI basement and help them figure out the rest of AI theory, or something).

I was talking about a scenario where the human has never imagined the possibility, and asking if the AI mentions the possibility to the human (knowing the human may change the law to get it). The human says "cure my cancer". The AI reasons that it can:

1. Tell the human of a drug that cures the cancer in the conventional sense.
2. Tell the human about mind uploading, never mentioning the chemical cure.

If the AI picks 2, the human will change the "law" (which isn't the actual law; it's just some text file the AI wants to obey). Then the AI can upload the human, and the human will have a life the AI judges as overall better for them.

You don't want the AI to never mention a really good idea because it happens to be illegal on a technicality. You also don't want all the plans to be "persuade humans to make everything legal, then ...".
Cullen10

(I realized the second H in that blockquote should be an A)

Cullen20

I appreciate your engagement! But I think your position is mistaken for a few reasons:

First, I explicitly define LFAI to be about compliance with "some defined set of human-originating rules ('laws')." I do not argue that AI should follow all laws, which does indeed seem both hard and unnecessary. But I should have been clearer about this. (I did have some clarification in an earlier draft, which I guess I accidentally excised.) So I agree that there should be careful thought about which laws an LFAI should be trained to follow, for the reasons you cite... (read more)

2 Donald Hobson
Sure, some of the failure modes mentioned at the bottom disappear when you do that.

If some law is so obviously a good idea in all possible circumstances, the AI will do it whether it is law-following or human-preference-following.

The question isn't whether there are laws that are better than nothing. It's whether we are better off encoding what we want the AI to do into laws, or into the terms of a utility function; which format (or maybe some other format) is best for encoding our preferences.

If their objective function is something like the CEV of humanity, any extra laws imposed on top of that are entropic. If the AIs have no correlation to human wellbeing in their objectives, the weak correlation given by law-following may be better than nothing. If the AI is already strongly correlated with human wellbeing, then any laws imposed are making the AI worse.

If the human has never imagined mind uploading, does A go up to the human and explain what it is, asking if maybe that law should be changed?