Suffering risk
Suffering risks (also known as s-risks) are risks of events that would bring about suffering on an astronomical scale in the far future, vastly exceeding all the suffering that has existed on Earth so far. Many s-risks can be considered a form of existential risk under Bostrom's original definition, as they threaten to "curtail [humanity's] potential". However, it is often useful to distinguish between risks that threaten to prevent future populations from coming into existence (extinction risks) and risks that would create a large amount of suffering (s-risks).
Although the Machine Intelligence Research Institute and the Future of Humanity Institute have investigated strategies to prevent s-risks, the only effective altruism (EA) organization whose primary research focus is s-risk prevention is the Foundational Research Institute (FRI). Much of FRI's work addresses suffering-focused AI safety and crucial considerations. Another approach to reducing s-risk is to "expand the moral circle", so that future (post)human civilizations and AIs are less likely to instrumentally cause suffering to non-human minds, such as animals or digital sentience. Sentience Institute works on this value-spreading problem.
External links
- Reducing Risks of Astronomical Suffering: A Neglected Global Priority (FRI)
- Introductory talk on s-risks (FRI)
- Risks of Astronomical Future Suffering (FRI)
- Suffering-focused AI safety: Why "fail-safe" measures might be our top intervention (FRI)
- Artificial Intelligence and Its Implications for Future Suffering (FRI)
- Expanding our moral circle to reduce suffering in the far future (Sentience Politics)
- The Importance of the Far Future (Sentience Politics)