> APS is less understood and poorly forecasted compared to AGI.

I disagree with this. I have come to dislike the term AGI because (a) its meaning is so poorly defined, (b) the concept most people have in mind will never be achieved, nor does it need to be for AI to reach the capability level necessary for catastrophic scenarios, and (c) the concept of "AGI" doesn't get at the part of advanced AI that is relevant for reasoning about x-risk.

In this diagram, the AGI circle captures the concept that AGI is a system that subsumes all human capabilities. Realistically, this will never exactly happen. Is completing that AGI circle really the relevant concept? If we can shift that big Future AI bubble to the left and reduce the overlap with AGI, does that make us safer? No.

Granted, the APS concept is also highly deficient at this point in time. It is also too vague and ambiguous in its current form, especially in terms of how differently people interpret each of the six propositions. But, compared to the AGI term, it is at least a more constructive and relevant one.

I do like the idea of adopting a concept of "general intelligence" that is contrasted with "narrow intelligence." It applies to a system that can operate across a broad spectrum of different domains and tasks, including ones it wasn't specifically designed or trained for, while avoiding brittleness when slightly out-of-distribution. IMHO, GPT-4 crossed that line for the first time this year. I.e., I don't think it can be considered narrow (and brittle) anymore. However, this is not a dangerous line -- GPT-4 is not an x-risk. So "general" is a useful concept, but it misses something relevant to x-risk (probably many things).