AGI Skepticism

AGI skepticism involves objections to the possibility of Artificial General Intelligence being developed in the near future. Skeptics include various technology and science luminaries such as Douglas Hofstadter, Gordon Bell, Steven Pinker, and Gordon Moore:

"It might happen someday, but I think life and intelligence are far more complex than the current singularitarians seem to believe, so I doubt it will happen in the next couple of centuries." -- Douglas Hofstadter

A typical argument is that although AGI is possible in principle, there is no reason to expect it in the near future. Typically, this is due to the belief that although there have been great strides in narrow AI, researchers are still no closer to understanding how to build AGI. Distinguished computer scientists such as Gordon Bell and Gordon Moore, as well as cognitive scientists such as Douglas Hofstadter and Steven Pinker, have expressed the opinion that AGI is remote (IEEE Spectrum 2008). Bringsjord et al. (2012) argue outright that a belief in AGI being developed within any time short of a century is fideistic, appropriate within the realm of religion but not within science or engineering.

Some skeptics not only disagree with AGI being near, but also criticize any discussion of AGI risk in the first place. In their view, such discussion diverts attention from more important issues. Daniel Dennett (2012) considers AGI risk an "imprudent pastime" because it distracts our attention from a more immediate threat: being enslaved by the internet. Likewise, the philosopher Alfred Nordmann holds the view that ethical concern is a scarce resource, not to be wasted on unlikely future scenarios (Nordmann 2007, 2009).

Others agree that AGI is still far away and not yet a major concern, but admit that it might be valuable to give the issue some attention. An AAAI presidential panel on long-term AI futures concluded that:

"There was overall skepticism about the prospect of an intelligence explosion as well as of a “coming singularity,” and also about the large-scale loss of control of intelligent systems. Nevertheless, there was a shared sense that additional research would be valuable on methods for understanding and verifying the range of behaviors of complex computational systems to minimize unexpected outcomes. Some panelists recommended that more research needs to be done to better define “intelligence explosion,” and also to better formulate different classes of such accelerating intelligences. Technical work would likely lead to enhanced understanding of the likelihood of such phenomena, and the nature, risks, and overall outcomes associated with different conceived variants." (Horvitz & Selman 2009)

A number of objections have been raised to the possibility of Artificial General Intelligence being developed any time soon. Many of these arguments stem from opponents directly comparing AGI to human cognition. However, human cognition may have little to do with how AGIs are eventually engineered. Objections range from non-materialist models of the mind, to evidence based on the supposed lack of progress made in artificial intelligence over the last 60 years, to philosophical ideas that set fundamental limits on what digital computers can process.

It has been observedthe belief that since the 1950’salthough there have been several cyclesgreat strides in narrow AI, researchers are still no closer to understanding how to build AGI. Distinguished computer scientists such as Gordon Bell and Gordon Moore, as well as cognitive scientists such as Douglas Hofstadter and Steven Pinker, have expressed the opinion that AGI is remote (IEEE Spectrum 2008). Bringsjord et al. (2012) argue outright that a belief in AGI being developed within any time short of large investment (from both government and private enterprise) followed by disappointment caused by unrealistic predictions made by those workinga century is fideistic, appropriate within the realm of religion but not within science or engineering.

Some skeptics not only disagree with AGI being near, but also criticize any discussion of AGI risk in the field. Critics will pointfirst place. In their view, such discussion diverts attention from more important issues. Dennett (2012) considers AGI risk an "imprudent pastime" because it distracts our attention from a more immediate threat: being enslaved by the internet. Likewise, the philosopher Alfred Nordmann holds the view that ethical concern is a scarce resource, not to these failures asbe wasted on unlikely future scenarios (Nordmann 2007, 2009).

Others agree that AGI is still far away and not yet a means to attack the current generation of AGI scientists. This period of lack of progress is referred to as the "A.I winter".

Furthermore, a variety of high profile figures from computer and neuroscience, such as Steven Pinker and Douglas Hofstadter, have suggestedmajor concern, but admit that the complexity of intelligence is far greater than AGI advocates appreciate. Even if computing power continues to increase exponentially this does nothing to help understand how an AGIit might be built.valuable to give the issue some attention. An AAAI presidential panel on long-term AI futures concluded that there was overall skepticism about AGI risk, but that additional research into the topic and related subjects would be valuable (Horvitz & Selman 2009).

Furthermore, a variety of high-profile figures from computer science and neuroscience, such as Steven Pinker and Douglas Hofstadter, have suggested that the complexity of intelligence is far greater than AGI advocates appreciate. Even if computing power continues to increase exponentially, this does nothing to help us understand how an AGI might be built.

The philosopher John Searle, in his thought experiment “The Chinese Room”, proposes a flaw in the functionality of digital computers that would prevent them from possessing a “mind”. In his example he asks you to imagine a computer program that can take part in a conversation in written Chinese by recognizing symbols and responding with suitable “answer” symbols. An English-speaking human could follow the same program rules; they would still be able to carry out a conversation in Chinese, but they would have no understanding of what was being said. Equally, Searle argues, a computer running the same program wouldn’t understand the conversation either. This line of reasoning leads to the conclusion that AGI is impossible because digital computers are incapable of forming models that “understand” general concepts.
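To make the setup concrete, here is a minimal, purely illustrative sketch in Python of the kind of rule-following program Searle describes. The rule table, phrases, and names (RULE_BOOK, chinese_room) are invented for this example; a real conversational program would need a vastly larger rule set, but the principle is the same:

    # A toy version of Searle's rule book: input symbols are matched to
    # output symbols purely by shape. Nothing here models what the
    # symbols mean; the phrases and rules are invented for illustration.
    RULE_BOOK = {
        "你好": "你好！",           # "Hello" -> "Hello!"
        "你会说中文吗？": "会。",    # "Do you speak Chinese?" -> "Yes."
    }

    def chinese_room(symbols: str) -> str:
        # Look the input up in the rule book and return the prescribed
        # reply; a person executing this by hand needs no Chinese at all.
        return RULE_BOOK.get(symbols, "请再说一遍。")  # "Please say that again."

    print(chinese_room("你好"))  # prints "你好！" with no model of greetings

Searle's point is that scaling such a table up, however far, adds more symbol-matching but at no step introduces understanding.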

Stuart Hameroff and Roger Penrose have suggested that cognition in humans may rely on fundamental quantum phenomena unavailable to digital computers. Although quantum phenomena have been studied in the brain, there is no evidence that they would be a barrier to general intelligence.

It has also been observed that since the 1950s there have been several cycles of large investment (from both government and private enterprise) followed by disappointment caused by unrealistic predictions made by those working in the field. Critics point to these failures as a means to attack the current generation of AGI scientists. Such a period of apparent lack of progress is often referred to as an “AI winter”.

See Also

[Artificial General Intelligence]