This is the New York Times’ idea of a ‘misconception’.
Most artificial intelligence researchers still discount the idea of an “intelligence explosion” that will outstrip human capabilities.
John Markoff writes: In March, when AlphaGo, the Go-playing program designed by Google’s DeepMind subsidiary, defeated Lee Se-dol, the human Go champion, some in Silicon Valley proclaimed the event a precursor of the imminent arrival of genuine thinking machines.
The achievement was rooted in recent advances in pattern recognition technologies that have also yielded impressive results in speech recognition, computer vision and machine learning. The progress in artificial intelligence has become a flash point for converging fears that we feel about the smart machines that are increasingly surrounding us.
However, most artificial intelligence researchers still discount the idea of an “intelligence explosion.”
The idea was formally described as the “Singularity” in 1993 by Vernor Vinge, a computer scientist and science fiction writer, who posited that accelerating technological change would inevitably lead to machine intelligence that would match and then surpass human intelligence. In his original essay, Dr. Vinge suggested that the point in time at which machines attained superhuman intelligence would happen sometime between 2005 and 2030.
Ray Kurzweil, an artificial intelligence researcher, extended the idea in his 2005 book “The Singularity Is Near: When Humans Transcend Biology,” in which he argues that machines will outstrip human capabilities in 2045. The idea was popularized in movies such as “Transcendence” and “Her.”
Recently several well-known technologists and scientists, including Stephen Hawking, Elon Musk and Bill Gates, have issued warnings about runaway technological progress leading to superintelligent machines that might not be favorably disposed to humanity.
“We’re going to gradually merge and enhance ourselves. In my view, that’s the nature of being human — we transcend our limitations.”
Kurzweil predicts that humans will become hybrids in the 2030s. That means our brains will be able to connect directly to the cloud, where there will be thousands of computers, and those computers will augment our existing intelligence. He said the brain will connect via nanobots — tiny robots made from DNA strands.
“As I wrote starting 20 years ago, technology is a double-edged sword. Fire kept us warm and cooked our food but also burnt down our houses. Every technology has had its promise and peril.”
“Our thinking then will be a hybrid of biological and non-biological thinking,” he said.
The bigger and more complex the cloud, the more advanced our thinking. By the time we get to the late 2030s or the early 2040s, Kurzweil believes our thinking will be predominantly non-biological.
We’ll also be able to fully back up our brains.
Christopher Mims writes: A pair of advocates—they do legitimate research too, but their ardor is so intense, it’s hard to call them scientists—believe that they will, within their lifetimes, make ours the first generation of humans to live forever.
“Once we are really truly repairing things as fast as they go wrong, game over. We will have the ability to live indefinitely.”
— Aubrey de Grey
Their quest is elegantly laid out in The Immortalists, a new documentary making its way around the film festival circuit. The Immortalists follows the triumphs and tragedies of three years in the lives of William H. Andrews and Aubrey de Grey, two men who prove just as interesting as the work they’re doing. The Immortalists is really a film about death, not life, which is what makes it so fascinating.
Here’s the trailer:
The goal of Andrews and de Grey is not merely to extend life, but to actually reverse the aging process. “Once we are really truly repairing things as fast as they go wrong, game over,” de Grey says in the film. “We will have the ability to live indefinitely.”
Scientists in Japan are trying to create a computer program smart enough to pass the University of Tokyo’s entrance exam, it appears.
The project, led by Noriko Arai at Japan’s National Institute of Informatics, is trying to see how fast artificial intelligence might replace the human brain so that people can start training in completely new areas. “If society as a whole can see a possible change coming in the future, we can get prepared now,” she tells the Kyodo news agency.
But there’s also another purpose behind the Can A Robot Get Into The University of Tokyo? project, which began in 2011. If machines cannot replace human beings, then “we need to clarify what is missing and move to develop the technology,” says Noriko Arai.
The Singularity is Coming and it’s Going To Be Awesome: ‘Robots Will Be Smarter Than Us All by 2029’
Posted: February 23, 2014
Adam Withnall writes: By 2029, computers will be able to understand our language, learn from experience and outsmart even the most intelligent humans, according to Google’s director of engineering Ray Kurzweil.
“Today, I’m pretty much at the median of what AI experts think and the public is kind of with them…”
One of the world’s leading futurologists and artificial intelligence (AI) developers, 66-year-old Kurzweil has previous form in making accurate predictions about the way technology is heading.
[Ray Kurzweil’s pioneering book The Singularity Is Near: When Humans Transcend Biology is available at Amazon]
When the internet was still a tiny network used by a small collection of academics, Kurzweil anticipated it would soon make it possible to link up the whole world.
This is a dense, maddening, challenging essay, and I don’t agree with all of it. But the questions it raises are hard to ignore. Relevant stuff that merits further examination…
David Gelernter writes: The huge cultural authority science has acquired over the past century imposes large duties on every scientist. Scientists have acquired the power to impress and intimidate every time they open their mouths, and it is their responsibility to keep this power in mind no matter what they say or do. Too many have forgotten their obligation to approach with due respect the scholarly, artistic, religious, humanistic work that has always been mankind’s main spiritual support. Scientists are (on average) no more likely to understand this work than the man in the street is to understand quantum physics. But science used to know enough to approach cautiously and admire from outside, and to build its own work on a deep belief in human dignity. No longer.
Today science and the “philosophy of mind”—its thoughtful assistant, which is sometimes smarter than the boss—are threatening Western culture with the exact opposite of humanism. Call it roboticism. Man is the measure of all things, Protagoras said. Today we add, and computers are the measure of all men.
Many scientists are proud of having booted man off his throne at the center of the universe and reduced him to just one more creature—an especially annoying one—in the great intergalactic zoo. That is their right. But when scientists use this locker-room braggadocio to belittle the human viewpoint, to belittle human life and values and virtues and civilization and moral, spiritual, and religious discoveries, which is all we human beings possess or ever will, they have outrun their own empiricism. They are abusing their cultural standing. Science has become an international bully.
Nowhere is its bullying more outrageous than in its assault on the phenomenon known as subjectivity.
Your subjective, conscious experience is just as real as the tree outside your window or the photons striking your retina—even though you alone feel it. Many philosophers and scientists today tend to dismiss the subjective and focus wholly on an objective, third-person reality—a reality that would be just the same if men had no minds. They treat subjective reality as a footnote, or they ignore it, or they announce that, actually, it doesn’t even exist.
If scientists were rat-catchers, it wouldn’t matter. But right now, their views are threatening all sorts of intellectual and spiritual fields. The present problem originated at the intersection of artificial intelligence and philosophy of mind—in the question of what consciousness and mental states are all about, how they work, and what it would mean for a robot to have them. It has roots that stretch back to the behaviorism of the early 20th century, but the advent of computing lit the fuse of an intellectual crisis that blasted off in the 1960s and has been gaining altitude ever since.
The modern “mind fields” encompass artificial intelligence, cognitive psychology, and philosophy of mind. Researchers in these fields are profoundly split, and the chaos was on display in the ugliness occasioned by the publication of Thomas Nagel’s Mind & Cosmos in 2012. Nagel is an eminent philosopher and professor at NYU. In Mind & Cosmos, he shows with terse, meticulous thoroughness why mainstream thought on the workings of the mind is intellectually bankrupt. He explains why Darwinian evolution is insufficient to explain the emergence of consciousness—the capacity to feel or experience the world. He then offers his own ideas on consciousness, which are speculative, incomplete, tentative, and provocative—in the tradition of science and philosophy.
Paul Waldman interviewed James Barrat, author of Our Final Invention: Artificial Intelligence and the End of the Human Era, to see what happens when we’re no longer the most intelligent inhabitants of Earth.
Artificial intelligence has a long way to go before computers are as intelligent as humans. But progress is happening rapidly, in everything from logical reasoning to facial and speech recognition. With steady improvements in memory, processing power, and programming, the question isn’t if a computer will ever be as smart as a human, but only how long it will take. And once computers are as smart as people, they’ll keep getting smarter, and in short order they will become much, much smarter than people. When artificial intelligence (AI) becomes artificial superintelligence (ASI), the real problems begin.
In his new book Our Final Invention: Artificial Intelligence and the End of the Human Era, James Barrat argues that we need to begin thinking now about how artificial intelligences will treat their creators when they can think faster, reason better, and understand more than any human. These questions were long the province of thrilling (if not always realistic) science fiction, but Barrat warns that the consequences could indeed be catastrophic. I spoke with him about his book, the dangers of ASI, and whether we’re all doomed.
‘In addition to radical life extension we’re going to have radical life expansion.
‘We’re going to have millions of virtual environments to explore, and we’re going to literally expand our brains – right now we only have 300 million patterns organised in a grand hierarchy that we create ourselves.
‘But we could make that 300 billion or 300 trillion. The last time we expanded it with the frontal cortex we created language and art and science. Just think of the qualitative leaps we can’t even imagine today when we expand our neocortex again.’
Read more: Daily Mail