Superintelligence: Paths, Dangers, Strategies

Oxford University Press, 2014 - Computers - 328 pages
The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. Other animals have stronger muscles or sharper claws, but we have cleverer brains. If machine brains one day come to surpass human brains in general intelligence, then this new superintelligence could become very powerful. As the fate of the gorillas now depends more on us humans than on the gorillas themselves, so the fate of our species then would come to depend on the actions of the machine superintelligence. But we have one advantage: we get to make the first move. Will it be possible to construct a seed AI or otherwise to engineer initial conditions so as to make an intelligence explosion survivable? How could one achieve a controlled detonation? To get closer to an answer to this question, we must make our way through a fascinating landscape of topics and considerations. Read the book and learn about oracles, genies, singletons; about boxing methods, tripwires, and mind crime; about humanity's cosmic endowment and differential technological development; indirect normativity, instrumental convergence, whole brain emulation and technology couplings; Malthusian economics and dystopian evolution; artificial intelligence, and biological cognitive enhancement, and collective intelligence.
 

Contents

1 Past developments and present capabilities ... 1
2 Paths to superintelligence ... 26
3 Forms of superintelligence ... 63
4 The kinetics of an intelligence explosion ... 75
5 Decisive strategic advantage ... 95
6 Cognitive superpowers ... 110
7 The superintelligent will ... 127
8 Is the default outcome doom? ... 140
9 The control problem ... 155
10 Oracles, genies, sovereigns, tools ... 177
11 Multipolar scenarios ... 194
12 Acquiring values ... 226
13 Choosing the criteria for choosing ... 256
14 The strategic picture ... 280
15 Crunch time ... 314
Afterword ... 321
Notes ... 325
Bibliography ... 383
Partial Glossary ... 407

About the author (2014)

Nick Bostrom is Professor in the Faculty of Philosophy at Oxford University and founding Director of the Future of Humanity Institute and of the Programme on the Impacts of Future Technology within the Oxford Martin School. He is the author of some 200 publications, including Anthropic Bias (Routledge, 2002), Global Catastrophic Risks (ed., OUP, 2008), and Human Enhancement (ed., OUP, 2009). He previously taught at Yale, and he was a Postdoctoral Fellow of the British Academy. Bostrom has a background in physics, computational neuroscience, and mathematical logic as well as philosophy.
