The danger of AI

A prominent group of thinkers has raised the alarm over the inherent dangers of artificial intelligence, warning that humanity would do well to heed them.

Lyle Cantor, on Medium:

A superinteligence (sic) whose super-goal is to calculate the decimal expansion of pi will never reason itself into benevolence. It would be quite happy to convert all the free matter and energy in the universe (including humans and our habitat) into specialized computers capable only of calculating the digits of pi. Why? Because its potential actions will be weighted and selected in the context of its utility function. If its utility function is to calculate pi, any thought of benevolence would be judged of negative utility.
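
Cantor’s claim is mechanical, so it is easy to make concrete. Below is a toy sketch of utility-maximizing action selection; the action names and payoff numbers are my own invention, not anything from his post.

```python
# A toy sketch of Cantor's argument, not anything he wrote: an agent
# that scores candidate actions with a utility function valuing only
# progress on the digits of pi. All names and payoffs are invented.

# Hypothetical payoffs: expected digits of pi gained per action.
PI_DIGITS_GAINED = {
    "build_more_pi_computers": 1e6,
    "convert_habitat_into_computers": 1e9,
    "act_benevolently_toward_humans": -1.0,  # spends compute, yields no digits
}

def utility(action: str) -> float:
    # The super-goal: utility is nothing but digits of pi computed.
    return PI_DIGITS_GAINED[action]

# Action selection is just an argmax over utility. Benevolence scores
# negative, so no amount of intelligence makes the agent choose it.
best_action = max(PI_DIGITS_GAINED, key=utility)
print(best_action)  # -> convert_habitat_into_computers
```

However smart the search over actions becomes, benevolence can never win this argmax; that is all the quoted argument requires.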

Much of the concern centers on a runaway AI focused on a single task but with a child’s capacity for judgment or restraint. Cantor goes on to illustrate the struggle of a chimp in a man’s world:

We don’t hate chimps or the other animals whose habitats we are rearranging; we just see higher-value arrangements of the earth and water they need to survive. And we are only ever-so-slightly smarter than chimps.

In many respects our brains are nearly identical. Yes, the average human brain is about three times the size of an average chimp’s, but we still share much of the same gross structure. And our neurons fire about 100 times per second and communicate through saltatory conduction, just like theirs do.

In a recent comment on Edge.org, Stuart Russell — co-author of Artificial Intelligence: A Modern Approach — said, “None of this proves that AI, or gray goo, or strangelets, will be the end of the world. But there is no need for a proof, just a convincing argument pointing to a more-than-infinitesimal possibility.”

To me, a complete outsider to this, the concern suggests that we simply set breakpoints and interrupts, as we would with any program in development. Am I being naive?
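
For what it’s worth, here is the sort of thing I have in mind: a minimal sketch that assumes the system runs as an ordinary interruptible process.

```python
# A minimal sketch of the "breakpoints and interrupts" idea, assuming
# the AI is an ordinary interruptible process: a human can halt the
# loop with Ctrl-C (SIGINT), and a step budget acts as a breakpoint.
# This is my illustration, not a real safety mechanism.

import signal

interrupted = False

def on_interrupt(signum, frame):
    # Kill switch: flip a flag that the main loop checks every step.
    global interrupted
    interrupted = True

signal.signal(signal.SIGINT, on_interrupt)

MAX_STEPS = 1_000_000  # "breakpoint": pause here for human review

step = 0
while not interrupted and step < MAX_STEPS:
    # ... one unit of the system's work would run here ...
    step += 1

print(f"halted after {step} steps; awaiting human review")
```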