Differential Intellectual Progress

Differential intellectual progress was defined by Luke Muehlhauser and Anna Salamon as "prioritizing risk-reducing intellectual progress over risk-increasing intellectual progress". They discuss differential intellectual progress in relation to Artificial General Intelligence (AGI) development, which will also be the focus of this article.

Muehlhauser and Salamon also note that differential technological development can be seen as a special case of this concept.

Risk-increasing Progress

Technological advances - without corresponding development of safety mechanisms - simultaneously increase the capacity for both friendly and unfriendly AGI development. Presently, most AGI research is concerned with increasing capability rather than safety, and thus most progress increases the risk of a widespread negative effect.

  • Increased computing power. Computing power continues to rise in step with Moore's Law, providing the raw capacity for smarter AGIs. This allows for more 'brute-force' programming, increasing the probability that someone creates an AGI without properly understanding it. Such an AGI would also be harder to control.
  • More efficient algorithms. Mathematical advances can produce substantial reductions in computing time, allowing an AGI to do more within its current operating capacity. Since machine intelligence can be measured as optimization power divided by the resources used, the ability to carry out more computations on the same hardware has the net effect of making the AGI smarter (see the sketch after this list).
  • Extensive datasets. Living in the 'Information Age' has produced immense amounts of data. As data storage capacity has increased, so has the amount of information that is collected and stored, giving an AGI immediate access to massive amounts of knowledge.
  • Advanced neuroscience. Cognitive scientists have discovered several algorithms used by the human brain which contribute to our intelligence, leading to a field called 'Computational Cognitive Neuroscience'. This work has already produced developments such as brain implants that help restore memory and motor learning in animals, as well as algorithms which might conceivably contribute to AGI progress (such as neural networks).
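
The "optimization power divided by resources used" point can be illustrated with a toy experiment. The snippet below is only a sketch, and everything in it (the objective function, the evaluation budget, the two search strategies) is invented for illustration: both searchers spend exactly the same resources, measured as objective evaluations, but the better algorithm extracts far more optimization from them.

    # Toy comparison: same evaluation budget, different algorithms.
    import random

    random.seed(0)

    LOW, HIGH = -100.0, 100.0
    BUDGET = 200                      # fixed "hardware" budget: objective evaluations

    def objective(x: float) -> float:
        # A simple unimodal landscape whose peak sits at x = 3.7.
        return -(x - 3.7) ** 2

    def random_search(budget: int) -> float:
        # Brute force: spend the whole budget on uniform random guesses.
        return max(objective(random.uniform(LOW, HIGH)) for _ in range(budget))

    def interval_search(budget: int) -> float:
        # Smarter use of the same budget: repeatedly discard a third of the
        # interval that cannot contain the peak (valid because the landscape
        # is unimodal).
        lo, hi, best = LOW, HIGH, float("-inf")
        for _ in range(budget // 2):  # two evaluations per iteration
            m1 = lo + (hi - lo) / 3
            m2 = hi - (hi - lo) / 3
            f1, f2 = objective(m1), objective(m2)
            best = max(best, f1, f2)
            if f1 < f2:
                lo = m1
            else:
                hi = m2
        return best

    print("random search best  :", random_search(BUDGET))
    print("interval search best:", interval_search(BUDGET))
    # Both used 200 evaluations, but the smarter algorithm lands essentially
    # on the optimum -- more optimization power from the same resources.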

The above developments could also help in the creation of Friendly AI. However, Friendliness requires the development of both AGI and Friendliness theory, while an Unfriendly Artificial Intelligence might be created by AGI efforts alone. Thus developments that bring AGI closer or make it more powerful will increase risk, at least if not combined with work on Friendliness.

While the above developments could in principle lower the risk of creating an Unfriendly Artificial Intelligence (UAI), that is not how they are being used at present. For example, an AGI with access to massive datasets can use that information to increase its capacity to serve its purpose; unless it is specifically programmed with ethical values respecting human life, it may consume resources needed by humans in pursuit of that purpose. The same pattern applies to all risk-increasing progress.

Most AGI development is focused on increasing capability, since each iteration of an AGI generally improves upon its predecessor. Eventually this trend may give rise to an AGI that inadvertently produces a widespread negative effect. A self-improving AGI created without safety precautions would pursue its utility without regard for the well-being of humanity. Its intent would not be diabolical in nature; rather, it would expand its capability without ever pausing to consider the impact of its actions on other forms of life.

The Paperclip maximizer is a thought experiment describing one such scenario. In it, an AGI is created to continually increase the number of paperclips in its possession. As it gets smarter, it invents new ways of accomplishing this goal, eventually consuming all matter around it to create more paperclips. In short, it wreaks havoc on all life in pursuit of this goal, because no safety measures were taken to prevent it.

Risk-reducing Progress

There are several areas which, when further developed, could provide a means to produce AGIs that are friendly to humanity. These areas of research should be prioritized to help prevent such disasters.

  • Standardized AGI terminology and an enhanced philosophical framework. Research is continuing to establish formal definitions and language, thereby forming a framework by which researchers can communicate effectively and thus more efficiently advance AGI safety development.
  • Computer security. One way an AGI might rapidly grow more powerful is by taking over poorly-protected computers on the Internet. Hardening computers and networks against such attacks would help reduce this risk.
  • AGI confinement and secure construction. Incorporating physical mechanisms which limit the AGI can prevent it from inflicting damage. Physical isolation has already been explored (such as AI Boxing), as have embedded mechanisms which shut down parts of the system under certain conditions; a minimal sketch of such a shutdown tripwire appears after this list.
  • Friendly AGI goals. Embedding an AGI with friendly terminal values reduces the risk that it will take actions harmful to humanity. Development in this area has led to many questions about what should be implemented, and precise methodologies which, when executed within an AGI, would prevent it from harming humanity have not yet materialized.
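
The following is a minimal sketch of the embedded-shutdown idea mentioned above. It is purely illustrative: the monitored quantities, the limits, and the BoxedSystem wrapper are invented for this example, and nothing here claims that such a check would actually contain a highly capable AGI - that is precisely what the AI Boxing debate is about.

    # Toy "tripwire" wrapper: halt the system when monitored resource use
    # crosses preset limits. All limits and metrics are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Limits:
        max_memory_mb: int = 1024     # hypothetical memory ceiling
        max_network_calls: int = 0    # a fully "boxed" system gets no network access

    class TripwireViolation(RuntimeError):
        pass

    class BoxedSystem:
        def __init__(self, limits: Limits):
            self.limits = limits
            self.memory_mb = 0
            self.network_calls = 0
            self.halted = False

        def _check(self) -> None:
            # Shut down as soon as any limit is exceeded.
            if (self.memory_mb > self.limits.max_memory_mb
                    or self.network_calls > self.limits.max_network_calls):
                self.halted = True
                raise TripwireViolation("limit exceeded; system halted")

        def allocate(self, mb: int) -> None:
            if self.halted:
                raise TripwireViolation("system is halted")
            self.memory_mb += mb
            self._check()

        def open_connection(self) -> None:
            if self.halted:
                raise TripwireViolation("system is halted")
            self.network_calls += 1
            self._check()

    box = BoxedSystem(Limits())
    box.allocate(512)              # within the memory limit: allowed
    try:
        box.open_connection()      # any network use trips the wire
    except TripwireViolation as err:
        print("tripwire:", err)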

Research has illuminated the need for caution when developing an AGI, and AI safety theory continues to be developed in an effort to formalize that need. Proposed strategies to prevent an AGI from harming humanity include:

  1. Embedding the AGI with human terminal values.
  2. Confining the AGI such that it has minimal contact with the external world as detailed in AI Boxing.

As an example, the paperclip maximizer mentioned above might be built with human values embedded in its goals, preventing it from creating paperclips at the cost of harming humanity.
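
The contrast the two strategies aim at can be caricatured in a few lines. This is only a toy, and everything in it (the size of the world, the matter humans need, the penalty term) is invented for illustration; specifying "harm" correctly in a real system is exactly the unsolved problem described above.

    # Toy contrast: a pure paperclip maximizer versus one whose utility also
    # penalizes consuming matter that humans need. All quantities are invented.
    WORLD_MATTER = 100   # units of matter in the toy world
    HUMAN_NEEDS = 60     # units humans need for their own survival and use

    def utility(converted: int, respect_human_values: bool) -> float:
        # Utility of having converted `converted` units of matter into paperclips.
        if not respect_human_values:
            return converted                      # more paperclips is always better
        harm = max(0, converted - (WORLD_MATTER - HUMAN_NEEDS))
        return converted - 1000 * harm            # heavy penalty for harming humans

    def best_plan(respect_human_values: bool) -> int:
        # Choose how many units of matter to convert so as to maximize utility.
        return max(range(WORLD_MATTER + 1),
                   key=lambda n: utility(n, respect_human_values))

    print("pure maximizer converts:       ", best_plan(False), "units")   # 100: everything
    print("value-respecting AGI converts: ", best_plan(True), "units")    # 40: leaves humans enough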

In Facing the Singularity, Luke Muehlhauser summarizes the recommended approach as follows:

As applied to AI risks in particular, a plan of differential intellectual progress would recommend that our progress on the philosophical, scientific, and technological problems of AI safety outpace our progress on the problems of AI capability such that we develop safe superhuman AIs before we develop arbitrary superhuman AIs.