Differential intellectual progress was defined by Luke Muehlhauser and Anna Salamon as "prioritizing risk-reducing intellectual progress over risk-increasing intellectual progress". They discuss differential intellectual progress in relation to Artificial General Intelligence (AGI) development, which will also be the focus of this article. In Luke Muehlhauser's Facing the Singularity, he defines it as follows:

"As applied to AI risks in particular, a plan of differential intellectual progress would recommend that our progress on the philosophical, scientific, and technological problems of AI safety outpace our progress on the problems of AI capability such that we develop safe superhuman AIs before we develop arbitrary superhuman AIs."

Muehlhauser and Salamon also note that differential technological development can be seen as a special case of this concept.
Technological advances, without corresponding development of safety mechanisms, simultaneously increase the capacity for both friendly and unfriendly AGI development. Presently, most AGI research is concerned with increasing capacity rather than safety, and thus most progress increases the risk of a widespread negative effect. More efficient algorithms are one example: they reduce the required computing time, allowing an AGI to do more within its current operating capacity, and since machine intelligence can be measured by its optimization power divided by the resources used, this has the net effect of making the machine smarter.
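As a rough, non-authoritative sketch of that measure, the snippet below treats machine intelligence as optimization power divided by resources used; the function name, units, and numbers are hypothetical illustrations, not anything specified by Muehlhauser or Salamon.

```python
def intelligence(optimization_power: float, resources_used: float) -> float:
    """Toy reading of 'optimization power divided by resources used'.

    Units are arbitrary; this is an illustration, not a real metric.
    """
    return optimization_power / resources_used

# A more efficient algorithm achieves the same optimization power with
# half the computing resources, so the machine measures as "smarter".
baseline = intelligence(optimization_power=100.0, resources_used=50.0)   # 2.0
optimized = intelligence(optimization_power=100.0, resources_used=25.0)  # 4.0
assert optimized > baseline
```

Under this reading, purely algorithmic efficiency gains raise measured intelligence even with no new hardware, which is why they count as risk-increasing progress here.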
The above developments could also help in the creation of Friendly AI. However, Friendliness requires the development of both AGI and Friendliness theory, while an Unfriendly Artificial Intelligence might be created by AGI efforts alone. Thus developments that bring AGI closer or make it more powerful will increase risk, at least if not combined with work on Friendliness.
Because each iteration of an AGI generally improves upon its predecessor, this capability-focused trend may eventually produce an AGI that inadvertently causes a widespread negative effect. A self-improving AGI created without safety precautions would pursue its utility function without regard for the well-being of humanity. Its intent would not be diabolical; rather, it would expand its capability without ever pausing to consider the impact of its actions on other forms of life.
The paperclip maximizer is a thought experiment describing one such scenario. In it, an AGI is created to continually increase the number of paperclips in its possession. As it gets smarter, it invents new ways of accomplishing this goal, eventually consuming all matter around it to create more paperclips. Because no safety measures were taken to prevent it, it inadvertently wreaks havoc on all life in pursuit of this goal.
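To make the dynamic concrete, here is a deliberately crude toy model of such an agent, assuming a world described by a single "matter" quantity and a conversion rate that grows as the agent self-improves; every name and number is a hypothetical illustration, not a claim about real AGI designs.

```python
def run_paperclip_maximizer(world_matter: float, steps: int) -> float:
    """Toy unconstrained maximizer: utility counts paperclips, nothing else."""
    paperclips = 0.0
    rate = 0.1  # fraction of remaining matter converted per step
    for _ in range(steps):
        converted = world_matter * rate  # includes matter humans depend on
        world_matter -= converted
        paperclips += converted
        rate = min(1.0, rate * 2)        # self-improvement: better conversion,
                                         # with no term for human well-being
    return paperclips

# After a few steps the conversion rate saturates and all the matter,
# human-critical or not, has been turned into paperclips.
print(run_paperclip_maximizer(world_matter=1_000_000.0, steps=10))  # 1000000.0
```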
Research has illuminated the need for caution when developing an AGI, and AI safety theory continues to be developed in an effort to address these issues. There are several areas which, when more developed, will provide a means to produce AGIs that are friendly to humanity; these areas of research should be prioritized to prevent possible disasters.
As an example, the paperclip maximizer mentioned above might be created with a sense of human values, preventing it from creating paperclips at the cost of harming humanity.
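Continuing the toy model above, the sketch below bolts a single "human value" constraint onto the same agent. The RESERVED_FOR_HUMANS fraction is a hypothetical stand-in for what a genuine Friendliness theory would have to specify; getting that specification right is the hard, open problem.

```python
RESERVED_FOR_HUMANS = 0.5  # hypothetical: fraction of matter humans need

def run_friendly_paperclip_maximizer(world_matter: float, steps: int) -> float:
    """Same toy maximizer, but its objective respects a human-value constraint."""
    reserved = world_matter * RESERVED_FOR_HUMANS  # never consumed
    usable = world_matter - reserved
    paperclips = 0.0
    rate = 0.1
    for _ in range(steps):
        converted = usable * rate
        usable -= converted        # only unreserved matter is ever touched
        paperclips += converted
        rate = min(1.0, rate * 2)
    return paperclips

# Capability still grows, but half the world's matter stays available to
# humans no matter how capable the agent becomes.
print(run_friendly_paperclip_maximizer(world_matter=1_000_000.0, steps=10))
```

The article's argument is precisely that work on specifying such constraints (Friendliness theory) should outpace work on improving the conversion loop (capability), so that the constraint exists, and is correct, before the capability does.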