Singularity paper
A draft page for the upcoming Less Wrong paper on the Singularity. See http://lesswrong.com/lw/1fz/a_less_wrong_singularity_article/ .
Also see: Intelligence Explosion
Basic structure
Abstract
The creation of an artificial general intelligence (AGI) is plausible. Humans are the first and only example we have of a general intelligence, but we have strong empirical and theoretical reasons for believing that we are nowhere near maximally intelligent. A sufficient understanding of intelligence, or of whatever related concept best accounts for human technological prowess, may allow researchers to duplicate the process in a computer. Computer-based AIs could think much faster than humans, expand onto new hardware, and undergo recursive self-improvement. These possibilities suggest that such an AGI, once created, is likely to undergo an intelligence explosion, rapidly increasing its capabilities far beyond the human level.
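As a purely illustrative aside (not part of the draft's argument), the feedback dynamic described above can be captured in a toy model: let capability C grow at rate dC/dt = k·C^r, where the exponent r measures how strongly current capability feeds back into further improvement. Sub-linear feedback (r < 1) gives only polynomial growth, r = 1 gives ordinary exponential growth, and super-linear feedback (r > 1) reaches any finite capability in finite time, a crude stand-in for an intelligence explosion. All parameter values in the sketch below are arbitrary assumptions.

```python
# Toy model of recursive self-improvement: dC/dt = k * C**r, integrated
# with Euler steps.  Illustrative only; k, c0, and the time horizon are
# arbitrary assumptions, not estimates of any real quantity.

def simulate(r, k=0.1, c0=1.0, dt=0.01, t_max=40.0, cap=1e12):
    """Euler-integrate dC/dt = k * C**r; stop early if C passes cap."""
    c, t = c0, 0.0
    while t < t_max and c < cap:
        c += k * (c ** r) * dt
        t += dt
    return t, c

for r in (0.5, 1.0, 1.5):
    t, c = simulate(r)
    print(f"feedback exponent r={r}: capability {c:.3g} at t={t:.1f}")
# r=0.5 grows polynomially, r=1.0 exponentially, and r=1.5 hits the cap
# in finite time.
```

The point of the sketch is only the qualitative regime change around r = 1, not any of the particular numbers.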
The Power of Intelligence
Human civilization has had a massive impact on this planet. Humans have developed language, sophisticated tools, and advanced technology by virtue of our general intelligence: human individuals can learn and invent new skills and teach them to other humans, rather than waiting for new adaptations to develop gradually by natural selection. We are cultural animals, and over time, especially since the scientific and industrial revolutions, our culture has accumulated ever more complex knowledge and a deeper division of labor. Once natural selection produced humans with the general intelligence needed to sustain culture, culture took over, and it operates on a much faster timescale than evolution.
Limitations of human intelligence, ways in which AI could be more powerful and faster
As great an impact as human intelligence has had, there are specific reasons to think it would be greatly surpassed by AI. Almost all our advances over the past several thousand years have been cultural; we have not had time to evolve new brain architecture (cite evopsych paper? The Adapted Mind?). Evolution did not design our brains to build civilizations; civilization is simply something that happened. We can also point to specific ways in which the human brain is lacking, cases where our intuitive judgment goes horribly wrong (cite heuristics and biases).
- The question of FOOM
  - improvement up to human-comparable programming ability
  - improvement at human-comparable programming ability and beyond
  - effects of hardware scale and of minds that can be copied and run quickly, vs. qualitative improvements (see the sketch after this outline)
- Conclusions
- References
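The hardware-versus-software contrast in the last point of the outline above can also be sketched numerically (a hypothetical toy comparison, not from the draft; every figure below is invented): adding copies and serial speedup multiplies output once, while qualitative self-improvement compounds, since each generation of minds helps design a better next generation.

```python
# Toy contrast between the two improvement channels named in the outline
# above.  All figures are invented for illustration.

def hardware_output(base_rate, n_copies, speedup, years):
    # More copies and faster serial speed multiply throughput,
    # but the multiplier is applied once; it does not compound.
    return base_rate * n_copies * speedup * years

def recursive_output(base_rate, gain, generations):
    # Each generation's qualitative improvements feed into designing
    # the next generation, so the rate compounds.
    rate, total = base_rate, 0.0
    for _ in range(generations):
        total += rate
        rate *= gain
    return total

# 1,000 copies at 100x speed for a year: a large but one-time multiplier.
print(hardware_output(1.0, n_copies=1000, speedup=100, years=1))  # 100000.0
# 30 generations, each doubling the design rate: compounding growth.
print(recursive_output(1.0, gain=2.0, generations=30))            # ~1.07e9
```

Even with generous hardware multipliers, the compounding channel dominates after enough generations, which is the intuition behind treating the two as distinct questions in the outline.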
References
List here references to work that can be used for building the argument.
Minds and Machines submission guidelines
Works by Yudkowsky
Less Wrong posts
- Optimization and the Singularity
- Cascades, Cycles, Insight...
- ...Recursion, Magic
- Engelbart: Insufficiently Recursive
- Total Nano Domination
- Singletons Rule OK
- Recursive Self-Improvement
- Hard Takeoff
- Permitted Possibilities, & Locality
- Sustained Strong Recursion
- Disjunctions, Antipredictions, Etc.
- What I Think, If Not Why
Other Yudkowsky material
- What Is the Singularity?
- Why Work Toward the Singularity?
- Three Singularity Schools
- LOGI part 3, Seed AI
- The Power of Intelligence
- Cognitive Biases Potentially Affecting Judgment of Global Risks (PDF), in Global Catastrophic Risks
- Artificial Intelligence as a Positive and Negative Factor in Global Risk (PDF), in Global Catastrophic Risks
- Why We Need Friendly AI
Other sources and references
- Nick Bostrom, "Ethical Issues in Advanced Artificial Intelligence," Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, Vol. 2, ed. I. Smit et al., Int. Institute of Advanced Studies in Systems Research and Cybernetics, 2003.
- Ben Goertzel, "Thoughts on AI Morality," Dynamical Psychology, 2002.
- Ben Goertzel, "The All-Seeing (A)I," Dynamical Psychology, 2004.
- Ben Goertzel, "Encouraging a Positive Transcension," Dynamical Psychology, 2004.
- I. J. Good, "Speculations Concerning the First Ultraintelligent Machine," Advances in Computers, Vol. 6, 1965.
- Stephan Vladimir Bugaj and Ben Goertzel, "Five Ethical Imperatives and their Implications for Human-AGI Interaction."
- J. Storrs Hall, "Engineering Utopia," Artificial General Intelligence 2008: Proceedings of the First AGI Conference, Volume 171, Frontiers in Artificial Intelligence and Applications, ed. P. Wang, B. Goertzel and S. Franklin, 2008.
- Steve Omohundro, "The Basic AI Drives," Artificial General Intelligence 2008: Proceedings of the First AGI Conference, Volume 171, Frontiers in Artificial Intelligence and Applications, ed. P. Wang, B. Goertzel and S. Franklin, 2008.
- Ben Goertzel and Cassio Pennachin (eds.), Artificial General Intelligence, Cognitive Technologies series, Springer, 2007.
- Carl Shulman, Henrik Jonsson, and Nick Tarleton, "Machine Ethics and Superintelligence" (PDF), APCAP09.
- Kaj Sotala, "Evolved altruism, ethical complexity, anthropomorphic trust: three factors misleading estimates of the safety of artificial general intelligence" (PDF), ECAP09.
- Carl Shulman, Henrik Jonsson, and Nick Tarleton, "Which Consequentialism? Machine Ethics and Moral Divergence" (PDF), APCAP09.
- Carl Shulman, "Arms Control and Intelligence Explosions," ECAP09.
- Vernor Vinge, "The Coming Technological Singularity."
- Joel Veness et al., "A Monte Carlo AIXI Approximation" (hat tip Shane Legg).