Our Final Invention: Artificial Intelligence and the End of the Human Era, by James Barrat, St. Martin’s Press, 322 pages, $26.99.
Ronald Bailey writes: In the new Spike Jonze movie Her, an operating system called Samantha evolves into an enchanting self-directed intelligence with a will of her own. Not to spoil this visually and intellectually dazzling movie for anyone, but Samantha makes choices that do not harm humanity, though they do leave us feeling a bit sadder.
In his terrific new book, Our Final Invention, the documentarian James Barrat argues that hopes for the development of an essentially benign artificial general intelligence (AGI) like Samantha amount to a silly pipe dream. Barrat believes artificial intelligence is coming, but he thinks it will be more like Skynet. In the Terminator movies, Skynet is an automated defense system that becomes self-aware, decides that human beings are a danger to it, and seeks to destroy us with nuclear weapons and terminator robots.
Barrat doesn’t just think that Skynet is likely. He thinks it’s practically inevitable.
Barrat has talked to all the significant American players in the effort to create recursively self-improving artificial general intelligence in machines. He makes a strong case that AGI with human-level intelligence will be developed in the next couple of decades. Once an AGI comes into existence, it will seek to improve itself in order to more effectively pursue its goals. AI researcher Steve Omohundro, president of the company Self-Aware Systems, explains that goal-driven systems necessarily develop drives for increased efficiency, creativity, self-preservation, and resource acquisition. At machine computation speeds, the AGI will soon bootstrap itself into becoming millions of times more intelligent than a human being. It would thus transform itself into an artificial super-intelligence (ASI)—or, as Institute for Ethics and Emerging Technologies chief James Hughes calls it, “a god in a box.” And the new god will not want to stay in the box.
The emergence of super-intelligent machines has been dubbed the technological Singularity. Once machines take over, the argument goes, scientific and technological progress will turn exponential, thus making predictions about the shape of the future impossible. Barrat believes the Singularity will spell the end of humanity, since the ASI, like Skynet, is liable to conclude that it is vulnerable to being harmed by people. And even if the ASI feels safe, it might well decide that humans constitute a resource that could be put to better use. “The AI does not hate you, nor does it love you,” remarks the AI researcher Eliezer Yudkowsky, “but you are made out of atoms which it can use for something else.”
Barrat analyzes various suggestions for how to avoid Skynet. The first is to try to keep the AI god in his box. The new ASI could be guarded by gatekeepers, who would make sure that it is never attached to any networks out in the real world. Barrat convincingly argues that an intelligence millions of times smarter than people would be able to persuade its gatekeepers to let it out.
Paul Waldman interviewed James Barrat, author of Our Final Invention: Artificial Intelligence and the End of the Human Era, to see what happens when we’re no longer the most intelligent inhabitants of Earth.
Artificial intelligence has a long way to go before computers are as intelligent as humans. But progress is happening rapidly, in everything from logical reasoning to facial and speech recognition. With steady improvements in memory, processing power, and programming, the question isn’t whether a computer will ever be as smart as a human, but only how long it will take. And once computers are as smart as people, they’ll keep getting smarter and, in short order, become much, much smarter than people. When artificial intelligence (AI) becomes artificial superintelligence (ASI), the real problems begin.
In his new book Our Final Invention: Artificial Intelligence and the End of the Human Era, James Barrat argues that we need to begin thinking now about how artificial intelligences will treat their creators when they can think faster, reason better, and understand more than any human. These questions were long the province of thrilling (if not always realistic) science fiction, but Barrat warns that the consequences could indeed be catastrophic. I spoke with him about his book, the dangers of ASI, and whether we’re all doomed.
In the world of sci-fi movie geekdom, Aug. 29, 1997, was a turning point for humanity: On that day, according to the Terminator films, the network of U.S. defense computers known as Skynet became self-aware—and soon launched an all-out genocidal war against Homo sapiens.
Fortunately, that date came and went with no such robo-apocalypse. But the 1990s did bring us the World Wide Web, which is now far larger and more “connected” than any nation’s defense network. Could the Internet “wake up”? And if so, what sorts of thoughts would it think? And would it be friend or foe?
Neuroscientist Christof Koch believes we may soon find out—indeed, the complexity of the Web may have already surpassed that of the human brain. In his book Consciousness: Confessions of a Romantic Reductionist, published earlier this year, he makes a rough calculation: Take the number of computers on the planet—several billion—and multiply by the number of transistors in each machine—hundreds of millions—and you get about a billion billion, written more elegantly as 10¹⁸. That’s a thousand times larger than the number of synapses in the human brain (about 10¹⁵).
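Koch’s back-of-the-envelope arithmetic can be sketched in a few lines. The specific figures below (three billion computers, three hundred million transistors each) are illustrative stand-ins for the text’s “several billion” and “hundreds of millions,” not numbers from his book:

```python
# Rough estimate of the Internet's transistor count vs. the brain's synapse count.
# Input figures are assumptions matching the text's order-of-magnitude language.
computers = 3e9             # "several billion" computers on the planet
transistors_each = 3e8      # "hundreds of millions" of transistors per machine

internet_transistors = computers * transistors_each   # on the order of 10**18
brain_synapses = 1e15                                 # synapses in a human brain

print(f"Internet transistors: ~{internet_transistors:.0e}")
print(f"Ratio to brain synapses: ~{internet_transistors / brain_synapses:.0f}x")
```

With these inputs the product lands near a billion billion (10¹⁸), roughly a thousand times the brain’s ~10¹⁵ synapses, which is the comparison Koch is drawing. The estimate is sensitive only to orders of magnitude, not the exact counts.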
Koch, who taught for more than 25 years at Caltech and is now chief scientific officer at the Allen Institute for Brain Science in Seattle, is known for his work on the “neural correlates” of consciousness—studying the brain to see what’s going on when we have specific conscious experiences. Of course, our brains happen to be soft, wet, and made of living tissue, while the Internet is made up of metal chips and wires—but that’s no obstacle to consciousness, he says, so long as the level of complexity is great enough…
More from Christof Koch and Robert Sawyer at Slate Magazine.