
Book Review: Superintelligence - Paths, Dangers, Strategies

FEATURE / 09.04.19 / The Cylance Team

Author: Nick Bostrom
Publisher: Oxford University Press


Swedish philosophy professor Nick Bostrom burst onto the world stage in 2003 with his thought-provoking paper, “Are You Living in a Computer Simulation?” In this book, he turns to artificial intelligence (AI), developing a multilayered discussion of the dangers of its continued advancement.

On the positive side, he recounts a thorough history of AI, beginning with Alan Turing and moving to IBM’s Deep Blue and Watson, but that’s where the book’s real-world grounding stops. In the end, Bostrom offers little practical discussion of AI and what it could do for humanity, and spends most of the book considering perils he presents as inevitable.

Bostrom takes a quasi-quantitative look at potential harm-to-humans scenarios driven by what he calls rapid reinforcement learning. He focuses almost exclusively on when and how artificial intelligence will become smarter than humans, categorized into three paths to superintelligence:

  • Brain emulation, which divides the brain into its billions of neurons and replicates it in a computer
  • Genetic engineering, which uses human embryos to iterate toward greater and greater intelligence
  • Synthetic/code-based AI, in which a computer gets smarter more or less on its own

He discusses how we will handle the crossover point where computers become smarter than humans and debates whether it will be a slow transition over many years or a speedy one over hours, days, or weeks. He also considers ways we might diminish or slow the learning process and the ultimate takeover.

At that point, he tackles the inevitable question: How do we make sure AI doesn’t kill us on purpose — or by accident?

While the risks of reinforcement learning systems are real, many in the scientific community have argued, for several reasons, that the dangers Bostrom describes are more fantasy than reality.

The first reason is that today’s AI consists simply of patterns learned from a narrow band of data, so-called artificial narrow intelligence (ANI). This form of AI represents over 99% of all efforts to leverage machine learning (ML) today.

The other form of AI tries to learn from a general population of data and is called artificial general intelligence (AGI). This form of AI represents less than 1% of all commercial efforts today and is the one with the potential to become smarter than humans. Meanwhile, countless positive uses of ANI exist today, and they are growing by the thousands each year.

Overall, Superintelligence is a sound history of learning systems, but it falls short of offering a realistic picture of AI or its future beyond hyperbolic fear and uncertainty.

This book review was originally featured in BlackBerry Cylance's bi-annual print publication, 'Phi Magazine' - coming soon as a digital download on Cylance.com. 


About The Cylance Team

Cylance’s mission is to protect every computer, user, and thing under the sun. That's why we offer a variety of great tools and resources to help you make better-informed security decisions.