[VIDEO] Gordon Moore: Thoughts on the 50th Anniversary of Moore’s Law

This April marks the 50th anniversary of Moore’s Law. Three years before co-founding Intel, Gordon Moore made a simple observation that has revolutionized the computing industry. It states that the number of transistors – the fundamental building blocks of the microprocessor and the digital age – incorporated on a computer chip will double roughly every two years, resulting in increased computing power and devices that are faster, smaller, and lower cost.
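To get a feel for what that doubling rate implies, here is a minimal sketch that projects transistor counts forward under the two-year doubling assumption. The baseline year and count used below (roughly the early-1970s Intel 4004 era) are illustrative assumptions, not figures taken from the video.

```python
# Illustrative sketch of Moore's Law as exponential growth.
# The baseline count and year are assumptions chosen only to
# show the arithmetic of doubling every two years.

def projected_transistors(baseline_count, baseline_year, target_year,
                          doubling_period_years=2):
    """Project a transistor count forward assuming a fixed doubling period."""
    elapsed = target_year - baseline_year
    return baseline_count * 2 ** (elapsed / doubling_period_years)

if __name__ == "__main__":
    # Hypothetical baseline: ~2,300 transistors in 1971.
    for year in (1971, 1981, 1991, 2001, 2011):
        count = projected_transistors(2_300, 1971, year)
        print(f"{year}: ~{count:,.0f} transistors")
```

Run over a 40-year span, the same doubling rule turns a few thousand transistors into a few billion, which is the point Moore’s observation keeps making.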

Our Fear of Artificial Intelligence


Are We Smart Enough to Control Artificial Intelligence? 

A true AI might ruin the world—but that assumes it’s possible at all

Paul Ford writes: Years ago I had coffee with a friend who ran a startup. He had just turned 40. His father was ill, his back was sore, and he found himself overwhelmed by life. “Don’t laugh at me,” he said, “but I was counting on the singularity.”

“The question ‘Can a machine think?’ has shadowed computer science from its beginnings.”

My friend worked in technology; he’d seen the changes that faster microprocessors and networks had wrought. It wasn’t that much of a step for him to believe that before he was beset by middle age, the intelligence of machines would exceed that of humans—a moment that futurists call the singularity. A benevolent superintelligence might analyze the human genetic code at great speed and unlock the secret to eternal youth. At the very least, it might know how to fix your back.


But what if it wasn’t so benevolent? Nick Bostrom, a philosopher who directs the Future of Humanity Institute at the University of Oxford, describes the following scenario in his book Superintelligence, which has prompted a great deal of debate about the future of artificial intelligence. Imagine a machine that we might call a “paper-clip maximizer”—that is, a machine programmed to make as many paper clips as possible. Now imagine that this machine somehow became incredibly intelligent. Given its goals, it might then decide to create new, more efficient paper-clip-manufacturing machines—until, King Midas style, it had converted essentially everything to paper clips.

Agility: rapid advances in technology, including machine vision, tactile sensors and autonomous navigation, make today’s robots, such as this model from DLR, increasingly useful

No worries, you might say: you could just program it to make exactly a million paper clips and halt. But what if it makes the paper clips and then decides to check its work? Has it counted correctly? It needs to become smarter to be sure. The superintelligent machine manufactures some as-yet-uninvented raw-computing material (call it “computronium”) and uses that to check each doubt. But each new doubt yields further digital doubts, and so on, until the entire earth is converted to computronium. Except for the million paper clips.

Superintelligence: Paths, Dangers, Strategies, by Nick Bostrom (Oxford University Press, 2014)

Bostrom does not believe that the paper-clip maximizer will come to be, exactly; it’s a thought experiment, one designed to show how even careful system design can fail to restrain extreme machine intelligence. But he does believe that superintelligence could emerge, and while it could be great, he thinks it could also decide it doesn’t need humans around. Or do any number of other things that destroy the world. The title of chapter 8 is: “Is the default outcome doom?”

“Alan Turing proposed in 1950 that a machine could be taught like a child; John McCarthy, inventor of the programming language LISP, coined the term ‘artificial intelligence’ in 1955.”

If this sounds absurd to you, you’re not alone. Critics such as the robotics pioneer Rodney Brooks say that people who fear a runaway AI misunderstand what computers are doing when we say they’re thinking or getting smart. From this perspective, the putative superintelligence Bostrom describes is far in the future and perhaps impossible.


Yet a lot of smart, thoughtful people agree with Bostrom and are worried now. Why?

The question “Can a machine think?” has shadowed computer science from its beginnings. Alan Turing proposed in 1950 that a machine could be taught like a child; John McCarthy, inventor of the programming language LISP, coined the term “artificial intelligence” in 1955. As AI researchers in the 1960s and 1970s began to use computers to recognize images, translate between languages, and understand instructions in normal language and not just code, the idea that computers would eventually develop the ability to speak and think—and thus to do evil—bubbled into mainstream culture. Even beyond the oft-referenced HAL from 2001: A Space Odyssey, the 1970 movie Colossus: The Forbin Project featured a large blinking mainframe computer that brings the world to the brink of nuclear destruction; a similar theme was explored 13 years later in War Games. The androids of 1973’s Westworld went crazy and started killing.

“Extreme AI predictions are ‘comparable to seeing more efficient internal combustion engines… and jumping to the conclusion that the warp drives are just around the corner,’ Rodney Brooks writes.”

When AI research fell far short of its lofty goals, funding dried up to a trickle, beginning long “AI winters.” Even so, the torch of the intelligent machine was carried forth in the 1980s and ’90s by sci-fi authors like Vernor Vinge, who popularized the concept of the singularity; researchers like the roboticist Hans Moravec, an expert in computer vision; and the engineer/entrepreneur Ray Kurzweil, author of the 1999 book The Age of Spiritual Machines. Whereas Turing had posited a humanlike intelligence, Vinge, Moravec, and Kurzweil were thinking bigger: when a computer became capable of independently devising ways to achieve goals, it would very likely be capable of introspection—and thus able to modify its software and make itself more intelligent. In short order, such a computer would be able to design its own hardware.

As Kurzweil described it, this would begin a beautiful new era.