Our Fear of Artificial Intelligence


Are We Smart Enough to Control Artificial Intelligence? 

A true AI might ruin the world—but that assumes it’s possible at all

Paul Ford writes: Years ago I had coffee with a friend who ran a startup. He had just turned 40. His father was ill, his back was sore, and he found himself overwhelmed by life. “Don’t laugh at me,” he said, “but I was counting on the singularity.”

“The question ‘Can a machine think?’ has shadowed computer science from its beginnings.”

My friend worked in technology; he’d seen the changes that faster microprocessors and networks had wrought. It wasn’t that much of a step for him to believe that before he was beset by middle age, the intelligence of machines would exceed that of humans—a moment that futurists call the singularity. A benevolent superintelligence might analyze the human genetic code at great speed and unlock the secret to eternal youth. At the very least, it might know how to fix your back.


But what if it wasn’t so benevolent? Nick Bostrom, a philosopher who directs the Future of Humanity Institute at the University of Oxford, describes the following scenario in his book Superintelligence, which has prompted a great deal of debate about the future of artificial intelligence. Imagine a machine that we might call a “paper-clip maximizer”—that is, a machine programmed to make as many paper clips as possible. Now imagine that this machine somehow became incredibly intelligent. Given its goals, it might then decide to create new, more efficient paper-clip-manufacturing machines—until, King Midas style, it had converted essentially everything to paper clips.

Agility: rapid advances in technology, including machine vision, tactile sensors and autonomous navigation, make today’s robots, such as this model from DLR, increasingly useful

No worries, you might say: you could just program it to make exactly a million paper clips and halt. But what if it makes the paper clips and then decides to check its work? Has it counted correctly? It needs to become smarter to be sure. The superintelligent machine manufactures some as-yet-uninvented raw-computing material (call it “computronium”) and uses that to check each doubt. But each new doubt yields further digital doubts, and so on, until the entire earth is converted to computronium. Except for the million paper clips.

Superintelligence: Paths, Dangers, Strategies, by Nick Bostrom (Oxford University Press, 2014)

Bostrom does not believe that the paper-clip maximizer will come to be, exactly; it’s a thought experiment, one designed to show how even careful system design can fail to restrain extreme machine intelligence. But he does believe that superintelligence could emerge, and while it could be great, he thinks it could also decide it doesn’t need humans around. Or do any number of other things that destroy the world. The title of chapter 8 is: “Is the default outcome doom?”

“Alan Turing proposed in 1950 that a machine could be taught like a child; John McCarthy, inventor of the programming language LISP, coined the term ‘artificial intelligence’ in 1955.”

If this sounds absurd to you, you’re not alone. Critics such as the robotics pioneer Rodney Brooks say that people who fear a runaway AI misunderstand what computers are doing when we say they’re thinking or getting smart. From this perspective, the putative superintelligence Bostrom describes is far in the future and perhaps impossible.


Yet a lot of smart, thoughtful people agree with Bostrom and are worried now. Why?

The question “Can a machine think?” has shadowed computer science from its beginnings. Alan Turing proposed in 1950 that a machine could be taught like a child; John McCarthy, inventor of the programming language LISP, coined the term “artificial intelligence” in 1955. As AI researchers in the 1960s and 1970s began to use computers to recognize images, translate between languages, and understand instructions in normal language and not just code, the idea that computers would eventually develop the ability to speak and think—and thus to do evil—bubbled into mainstream culture. Even beyond the oft-referenced HAL from 2001: A Space Odyssey, the 1970 movie Colossus: The Forbin Project featured a large blinking mainframe computer that brings the world to the brink of nuclear destruction; a similar theme was explored 13 years later in WarGames. The androids of 1973’s Westworld went crazy and started killing.

“Extreme AI predictions are ‘comparable to seeing more efficient internal combustion engines… and jumping to the conclusion that warp drives are just around the corner,’ Rodney Brooks writes.”

When AI research fell far short of its lofty goals, funding dried up to a trickle, beginning long “AI winters.” Even so, the torch of the intelligent machine was carried forth in the 1980s and ’90s by sci-fi authors like Vernor Vinge, who popularized the concept of the singularity; researchers like the roboticist Hans Moravec, an expert in computer vision; and the engineer/entrepreneur Ray Kurzweil, author of the 1999 book The Age of Spiritual Machines. Whereas Turing had posited a humanlike intelligence, Vinge, Moravec, and Kurzweil were thinking bigger: when a computer became capable of independently devising ways to achieve goals, it would very likely be capable of introspection—and thus able to modify its software and make itself more intelligent. In short order, such a computer would be able to design its own hardware.

As Kurzweil described it, this would begin a beautiful new era.


David W. Buchanan: No, the Robots Are Not Going to Rise Up and Kill You


David W. Buchanan is a researcher at IBM, where he is a member of the team that made the Watson “Jeopardy!” system.

David W. Buchanan writes: We have seen astonishing progress in artificial intelligence, and technology companies are pouring money into AI research. In 2011, the IBM system Watson competed on “Jeopardy!,” beating the best human players. Siri and Cortana have taken charge of our smartphones. As I write this, a vacuum called Roomba is cleaning my house on its own, using what the box calls “robot intelligence.” It is easy to feel as if the world is on the verge of being taken over by computers, and the news media have indulged such fears with frequent coverage of the supposed dangers of AI.

But as a researcher who works on modern, industrial AI, let me offer a personal perspective to explain why I’m not afraid.


Science fiction is partly responsible for these fears. A common trope works as follows: Step 1: Humans create AI to perform some unpleasant or difficult task. Step 2: The AI becomes conscious. Step 3: The AI decides to kill us all. As science fiction, such stories can be great fun. As science fact, the narrative is suspect, especially around Step 2, which assumes that by synthesizing intelligence, we will somehow automatically, or accidentally, create consciousness. I call this the consciousness fallacy. It seems plausible at first, but the evidence doesn’t support it. And if it is false, it means we should look at AI very differently.

Intelligence is the ability to analyze the world and reason about it in a way that enables more effective action. Our scientific understanding of intelligence is relatively advanced. There is still an enormous amount of work to do before we can create comprehensive, human-caliber intelligence. But our understanding is viable in the sense that there are real businesses that make money by creating AI.

Coming online: some 95,000 new professional service robots, worth some $17.1bn, are set to be installed for professional use between 2013 and 2015

Consciousness is a much different story, perhaps because there is less money in it. Consciousness is also a harder problem: While most of us would agree that we know consciousness when we see it, scientists can’t really agree on a rigorous definition, let alone a research program that would uncover its basic mechanisms.