Epistemic status: free speculation

How intelligent should we expect AI to be, compared to humans, and how quickly should we expect it to reach this level of intelligence? This is of course an important strategic question, which has a large impact on our AI threat models. At the moment, the most common answers to these questions are "much more intelligent", and "very quickly". In this post, I will describe an arguably realistic scenario in which this would not be the case.

First of all, why should we expect AI to end up much more intelligent than humans? A common argument goes roughly like this: humans are much, much smarter than animals. We can build computers and send things to space; animals can't. This proves that the range of possible intelligence is very wide. Moreover, it would be prima facie very surprising if human intelligence were at the top of this range. Therefore, we should expect it to be possible for AI systems to get much smarter than humans. Moreover, there is no reason to think that AI progress would slow down around human intelligence in particular. Therefore, we should expect AI to quickly far outstrip our intelligence.

This argument relies crucially on the assumption that humans are much smarter than animals. But is this actually true? I'm not entirely convinced. First of all, there have been "feral" humans that grew up surrounded by animals. As far as I know, these humans are not obviously much more intelligent than animals (in terms of their ability to solve problems). This already casts some doubt on the notion that humans are much, much smarter than animals.

It is important to remember that humans, unlike all other species, are able to use complex language. This is a huge confounding factor when we try to compare the intelligence of humans and animals. It is obviously very powerful to be able to exchange complex ideas and build up knowledge intergenerationally. This would probably be enough to give humans a very large advantage, even if our intelligence were otherwise exactly the same as that of other primates.

Therefore, consider the following hypothesis: humans have an innate ability to use complex, combinatorial language, but all other species lack this ability (roughly as Noam Chomsky argues). In addition to this, humans are also somewhat (but not hugely) more intelligent than other primates (say, 1.5x as intelligent).

As far as I can tell, this hypothesis roughly fits all our observations (?). However, if it is the case that the difference between humans and monkeys is mostly due to a one-shot discrete difference (ie language), then this cannot necessarily be repeated to get a similar gain in intelligence a second time. If that is the case, then we should perhaps expect AI to still end up quite a bit more intelligent than humans, but perhaps not to an incomprehensible extent (ie, we end up with AI geniuses, but not AI gods).

There are obviously a lot of ways that this conclusion could be wrong, and a lot of counter-arguments one could offer (eg, in most board games, AI has quickly gone from below human performance, to far above human performance, yadda yadda). However, I don't know of any knock-down arguments, and so I put some weight on something like this being true.

Comments:

My stock counterargument to this: insofar as humans' advantage over other animals stems primarily from our ability to transmit knowledge/memes/etc across individuals and through generations, we should expect AI to have a much larger advantage, because they can do the same thing far, far better. This doesn't even require the AI to be all that "smart" - even just the ability to copy minds directly would allow transmission from "parent" to "child" with far less knowledge-loss than humans can achieve. (Consider, for instance, the notorious difficulty of training a student to the same level as an unusually accomplished human - and then imagine how much more effective it would be to just copy the accomplished human.) And that's just copying. The ability to merge two minds does not necessarily come for free with any AI (as copying does), but will likely be far easier for AI than for humans, and would provide a much more extreme version of what humans achieve by passing information via language. (Consider, for instance, the notorious difficulty of communicating well and avoiding misunderstanding - and then imagine how much easier that would be if two minds or components-of-minds could merge.)

What these arguments suggest is that humans' language capabilities are themselves relatively primitive ways to transmit knowledge/memes/etc between individuals, and it is not in-principle hard to do better once one starts to optimize directly for that goal without the constraints of biological wetware. More generally, we should expect that whatever did give humans such a large advantage (be it language or something else), humans have a relatively primitive version of it, because otherwise humans would have taken off sooner in our evolutionary history. As the saying goes, "humans are as stupid as they can be while still taking over the world, because otherwise they would have done so earlier".

Yes, I agree with this. I mean, even if we assume that the AIs are basically equivalent to human simulations, they still get obvious advantages from the ability to be copy-pasted, to be restored to a checkpoint, to be run at higher clock speeds, to make credible pre-commitments, etc etc. I therefore certainly don't think there is any plausible scenario in which unchecked AI systems wouldn't end up with most of the power on earth. However, there is a meaningful difference between the scenario where their advantages mainly come from overwhelmingly great intelligence, and the scenario where their advantages mainly (or at least in large part) come from other sources. For example, scalable oversight is a more realistic possibility in the latter scenario than in the former. Boxing methods are also more realistic in the latter scenario than in the former, etc.

I think this scenario is still strategically isomorphic to "advantages mainly come from overwhelmingly great intelligence". It's intelligence at the level of a collective, rather than the individual level, but the conclusion is the same. For instance, scalable oversight of a group of AIs which is collectively far smarter than any group of humans is hard in basically the same ways as oversight of one highly-intelligent AI. Boxing the group of AIs is hard for the same reasons as boxing one. Etc.

I think the broad strokes are mostly similar, but that a bunch of relevant details are different.

Yes, a large collective of near-human AI that is allowed to interact freely over a (subjectively) long period of time is presumably roughly as hard to understand and control as a Bostrom/Yudkowsky-esque God in a box. However, in this scenario, we have the option to not allow free interaction between multiple instances, while still being able to extract useful work from them. It is also probably much easier to align a system that is not of overwhelming intelligence, and this could be done before the AIs are allowed to interact. We might also be able to significantly influence their collective behaviour by controlling the initial conditions of their interactions (similarly to how institutions and cultural norms have a substantial long-term impact on the trajectory of a country, for example). It is also more plausible that humans (or human simulations or emulations) could be kept in the loop for a long time period in this scenario. Moreover, if intelligence is bottle-necked by external resources (such as memory, data, CPU cycles, etc) rather than internal algorithmic efficiency, then you can exert more control over the resulting intelligence explosion by controlling those resources. Etc etc.

It is important to remember that humans, unlike all other species, are able to use complex language. This is a huge confounding factor when we try to compare the intelligence of humans and animals. It is obviously very powerful to be able to exchange complex ideas and build up knowledge intergenerationally. This would probably be enough to give humans a very large advantage, even if our intelligence were otherwise exactly the same as that of other primates.


Communication is an aspect of intelligence. It takes place in the brain, not the kidneys. Now, you could argue that communication is a special extra boost above and beyond the normal gains of intelligence, that humans are near the top of the communication sigmoid, and that there are no other special extra boosts out there.

Do monkeys have a mind capable of understanding calculus internally, and just lack a language in which to learn it? (Such that a monkey given perfect communication, but no other increase in intelligence, would be able to learn calculus.)

I am not convinced the question is meaningful. I doubt that "communication" is a clear boundary on the neurochemical level, with a sharp divide between communication neurons and other neurons.

First of all, there have been "feral" humans that grew up surrounded by animals. As far as I know, these humans are not obviously much more intelligent than animals (in terms of their ability to solve problems).


Think of that like a modern supercomputer being used to play Pong.

(Well, not that big of a gap, but you get the picture.)

Animal brains have a relatively simple and limited range of pieces of software they can run. 

Human brains are able to run a much wider range of much more complicated programs. 

In other words, human intelligence shows up in the fact that, with the right training, we are able to do all sorts of complicated things, whereas there are plenty of things most humans can do that animals can't be trained to do.

To actually be able to do useful stuff, humans need the right training, both in the specific technical details and in more general things like the scientific method (with some ability to figure these out from large amounts of trial and error).

Train a human in a nutty cult, and their intelligence is useless. But the point is that humans can be trained to do physics. Not that every human crawls out of the womb doing calculus. 


I agree that if your only examples of humans were feral humans, then you would have no reason to think humans were much smarter. And then you would be very surprised by basically any educated human. 

Or, to put it another way, for a slightly different definition of the word "intelligence": humans can be much more intelligent than animals, given the right environment.

However, if it is the case that the difference between humans and monkeys is mostly due to a one-shot discrete difference (ie language), then this cannot necessarily be repeated to get a similar gain in intelligence a second time.

Perhaps language is a zero-one thing, i.e. language renders a mind "cognitively complete" in the sense that the mind can represent anything about the external world and make any inferences using those representations. But intelligence is not thereby zero-one, because intelligence depends on continuous variables like computational speed, memory, etc.

More concretely, I am sceptical that "we end up with AI geniuses, but not AI gods", because running a genius at 10,000x speed, parallelised over 10,000 cores, with instantaneous access to the internet does (I think) make an AI god. A difference in quantity is a difference in kind.
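A back-of-the-envelope sketch of the scale this implies (purely illustrative; the 10,000x speedup and 10,000 copies are just the figures used above, not estimates of any real system):

```python
# Rough arithmetic for the scenario above: a "genius" run much faster than
# human speed, copied across many parallel instances. The numbers are the
# illustrative figures from the comment, not estimates of any real system.
speedup = 10_000          # subjective speed relative to a single human
copies = 10_000           # parallel instances of the same genius
calendar_years = 1        # wall-clock time elapsed

subjective_years = speedup * copies * calendar_years
print(f"{subjective_years:,} genius-years of thought per calendar year")
# -> 100,000,000 genius-years of thought per calendar year
```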

That said, there might exist plausible threat models which require an AI that doesn't spatiotemporally decompose into less smart AIs. Could you sketch one out?