(Astronomical) suffering risks, also known as s-risks, are risks of the creation of intense suffering in the far future on an astronomical scale, vastly exceeding all suffering that has existed on Earth so far.

S-risks are an example of existential risk (also known as x-risks) according to Nick Bostrom's original definition, as they threaten to "permanently and drastically curtail [Earth-originating intelligent life's] potential". Most existential risks are of the form "event E happens which drastically reduces the number of conscious experiences in the future". S-risks therefore serve as a useful reminder that some x-risks are scary because they cause bad experiences, and not just because they prevent good ones.

Within the space of x-risks, we can distinguish between x-risks that are s-risks, x-risks that involve human extinction, x-risks that involve both immense suffering and human extinction, and x-risks that involve neither. For example:
|  | extinction risk | non-extinction risk |
| --- | --- | --- |
| suffering risk | Misaligned AGI wipes out humans, simulates many suffering alien civilizations. | Misaligned AGI tiles the universe with experiences of severe suffering. |
| non-suffering risk | Misaligned AGI wipes out humans. | Misaligned AGI keeps humans as "pets," limiting growth but not causing immense suffering. |
A related concept is hyperexistential risk, the risk of "fates worse than death" on an astronomical scale. It is not clear whether all hyperexistential risks are s-risks per se, but arguably all s-risks are hyperexistential, since "tiling the universe with experiences of severe suffering" would likely be a worse fate than death.
There are two EA organizations with s-risk prevention research as their primary focus: the Center on Long-Term Risk (CLR) and the Center for Reducing Suffering. Much of CLR's work is on suffering-focused AI safety and crucial considerations. To a lesser extent, the Machine Intelligence Research Institute and the Future of Humanity Institute have also investigated strategies to prevent s-risks.
Another approach to reducing s-risk is to "expand the moral circle" together with raising concern for suffering, so that future (post)human civilizations and AI are less likely to instrumentally cause suffering to non-human minds such as animals or digital sentience. Sentience Institute works on this value-spreading problem.