Scientists in Japan are trying to create a computer program smart enough to pass the University of Tokyo’s entrance exam, it appears.
The project, led by Noriko Arai at Japan’s National Institute of Informatics, is trying to see how fast artificial intelligence might replace the human brain so that people can start training in completely new areas. “If society as a whole can see a possible change coming in the future, we can get prepared now,” she tells the Kyodo news agency.
But there’s also another purpose behind the Can A Robot Get Into The University of Tokyo? project, which began in 2011. If machines cannot replace human beings, then “we need to clarify what is missing and move to develop the technology,” says Noriko Arai.
The Singularity is Coming and it’s Going To Be Awesome: ‘Robots Will Be Smarter Than Us All by 2029’
Posted: February 23, 2014
Adam Withnall writes: By 2029, computers will be able to understand our language, learn from experience and outsmart even the most intelligent humans, according to Google’s director of engineering Ray Kurzweil.
“Today, I’m pretty much at the median of what AI experts think and the public is kind of with them…”
One of the world’s leading futurologists and artificial intelligence (AI) developers, 66-year-old Kurzweil has previous form in making accurate predictions about the way technology is heading.
[Ray Kurzweil's pioneering book The Singularity Is Near: When Humans Transcend Biology is available at Amazon]
When the internet was still a tiny network used by a small collection of academics, Kurzweil anticipated it would soon make it possible to link up the whole world.
Frances Martel reports: 2013 was a banner year for uncalled-for expansion of China’s borders, from the Air Defense Identification Zone over the Senkaku Islands to a state TV show claiming the entirety of the Philippines for China. But on the economic front, China plans an expansion of a completely different kind: the use of robots to make manufacturing even cheaper.
Canada’s Globe and Mail has a feature out this week on China’s increased push to replace human labor with automated work. While China boasts some of the cheapest labor in the world–hence its domination of the manufacture of many simple-to-make items–salaries are, by necessity, increasing. This, argues author Scott Barlow, is pressuring the Chinese government to suppress wage growth in order to stay economically competitive with other nations. And to do that, he continues, businesses need to hire fewer people.
Klint Finley writes: Nothing beats a movie recommendation from a friend who knows your tastes. At least not yet. Netflix wants to change that, aiming to build an online recommendation engine that outperforms even your closest friends.
The online movie and TV outfit once sponsored what it called the Netflix Prize, asking the world’s data scientists to build new algorithms that could better predict what movies and shows you want to see. And though this certainly advanced the state of the art, Netflix is now exploring yet another leap forward. In an effort to further hone its recommendation engine, the company is delving into “deep learning,” a branch of artificial intelligence that seeks to solve particularly hard problems using computer systems that mimic the structure and behavior of the human brain. The company details these efforts in a recent blog post.
With the project, Netflix is following in the footsteps of web giants like Google and Facebook, which have hired top deep-learning researchers in an effort to improve everything from voice recognition to image tagging. But Netflix is taking a slightly different tack. The company plans to run its deep-learning algorithms on Amazon’s cloud service, rather than building its own hardware infrastructure a la Google and Facebook. This shows that, thanks to the rise of the cloud, smaller web companies can now compete with the big boys — at least in some ways.
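For a concrete flavor of what a learned recommender does, here is a toy latent-factor model trained by gradient descent. It is a deliberately simplified sketch, not Netflix’s actual system: the ratings, dimensions, and hyperparameters below are invented, and real deep-learning recommenders stack neural layers on top of ideas like this.

```python
import random

# Toy sketch of a latent-factor recommender trained by gradient descent.
# Not Netflix's system; the ratings and hyperparameters are invented.

ratings = {(0, 0): 5.0, (0, 1): 3.0, (1, 0): 4.0, (2, 1): 1.0}  # (user, movie) -> stars
n_users, n_movies, k = 3, 2, 4  # k = length of each latent "taste" vector

random.seed(0)
user_vecs = [[random.gauss(0, 0.1) for _ in range(k)] for _ in range(n_users)]
movie_vecs = [[random.gauss(0, 0.1) for _ in range(k)] for _ in range(n_movies)]

def predict(u, m):
    """Predicted rating = dot product of user and movie taste vectors."""
    return sum(a * b for a, b in zip(user_vecs[u], movie_vecs[m]))

lr = 0.05
for epoch in range(200):
    for (u, m), r in ratings.items():
        err = r - predict(u, m)      # how far off the current model is
        for i in range(k):           # nudge both vectors to shrink the error
            uv, mv = user_vecs[u][i], movie_vecs[m][i]
            user_vecs[u][i] += lr * err * mv
            movie_vecs[m][i] += lr * err * uv

# Recommend: score a movie this user has never rated.
print(round(predict(2, 0), 2))
```

The appeal of this family of models is that taste is learned from the rating matrix alone, with no hand-written rules; a deep-learning version replaces the simple dot product with a neural network.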
I came across this last night:
In mid-2015, the asteroid probe Dawn is scheduled to establish orbit around Ceres, the only dwarf planet in the inner Solar System, as well as the largest asteroid, to begin roughly six months of close-up observation. Interest in the mission has increased significantly with the detection, by the ESA’s Herschel space observatory, of plumes of water vapor venting from a pair of localized sources on Ceres’ surface.
It turns out that Ceres may hold more water than all the fresh water on Earth. If that’s true, it may well be the best place to actually create a robust human presence off Earth (after a real foothold is established on Earth’s moon). Some people might think that water would be useful on Mars, but why put it at the bottom of a gravity well one-third as deep as Earth’s?
Now the only question is: Who’s going to grab this uniquely valuable spot?
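For a rough sense of the gravity-well arithmetic behind that argument, here is a quick back-of-the-envelope sketch. The GM and radius figures are standard textbook values, not numbers from the original post:

```python
import math

# Back-of-the-envelope escape-energy comparison; GM (m^3/s^2) and
# mean radius (m) are standard textbook values.
bodies = {
    "Earth": (3.986e14, 6.371e6),
    "Mars":  (4.283e13, 3.390e6),
    "Ceres": (6.263e10, 4.73e5),
}

for name, (gm, radius) in bodies.items():
    v_esc = math.sqrt(2 * gm / radius)   # surface escape velocity, m/s
    depth = gm / radius / 1e6            # well "depth" as specific energy, MJ/kg
    print(f"{name}: escape {v_esc / 1000:.1f} km/s, well depth {depth:.1f} MJ/kg")

# The output shows Ceres' well is hundreds of times shallower than Earth's,
# while Mars' is still deep enough to make exporting water expensive.
```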
David Rotman writes: Given his calm and reasoned academic demeanor, it is easy to miss just how provocative Erik Brynjolfsson’s contention really is. Brynjolfsson, a professor at the MIT Sloan School of Management, and his collaborator and coauthor Andrew McAfee have been arguing for the last year and a half that impressive advances in computer technology—from improved industrial robotics to automated translation services—are largely behind the sluggish employment growth of the last 10 to 15 years. Even more ominous for workers, the MIT academics foresee dismal prospects for many types of jobs as these powerful new technologies are increasingly adopted not only in manufacturing, clerical, and retail work but in professions such as law, financial services, education, and medicine.
Economic theory and government policy will have to be rethought if technology is indeed destroying jobs faster than it is creating new ones.
That robots, automation, and software can replace people might seem obvious to anyone who’s worked in automotive manufacturing or as a travel agent. But Brynjolfsson and McAfee’s claim is more troubling and controversial. They believe that rapid technological change has been destroying jobs faster than it is creating them, contributing to the stagnation of median income and the growth of inequality in the United States. And, they suspect, something similar is happening in other technologically advanced countries.
If self-replicating machines are the next stage of human evolution, should we start worrying?
George Zarkadakis writes: When René Descartes went to work as tutor to the young Queen Christina of Sweden, his formidable student allegedly asked him what could be said of the human body. Descartes answered that it could be regarded as a machine; whereupon the queen pointed to a clock on the wall and ordered him to “see to it that it produces offspring”. A joke, perhaps, in the 17th century, but now many computer scientists think the age of the self-replicating, evolving machine may be upon us.
It is an idea that has been around for a while – in fiction. Stanislaw Lem in his 1964 novel The Invincible told the story of a spaceship landing on a distant planet to find a mechanical life form, the product of millions of years of mechanical evolution. It was an idea that would resurface many decades later in the Matrix trilogy of movies, as well as in software labs.
In fact, self-replicating machines have a much longer, and more nuanced, past. They were indirectly proposed as early as 1802, when William Paley, formulating his teleological watchmaker argument, imagined a machine capable of producing other machines.
Giuseppe Macri writes: The Drone User Group Network unveiled the latest — and smallest — in drone technology at the 2014 Consumer Electronics Show Wednesday night, the Pocket Drone, which surpassed its Kickstarter funding goal by more than $20,000 overnight.
Pocket Drone is a small multi-copter designed to carry high-quality cameras and shoot aerial footage, and it collapses to a size smaller than a seven-inch tablet for transport.
After debuting at CES Wednesday night, the project achieved its Kickstarter funding goal of $30,000 and was sitting at almost $60,000 as of Thursday afternoon, with 58 days of fundraising left to go.
Gary Marcus writes: According to the Times, true artificial intelligence is just around the corner. A year ago, the paper ran a front-page story about the wonders of new technologies, including deep learning, a neurally-inspired A.I. technique for statistical analysis. Then, among others, came an article about how I.B.M.’s Watson had been repurposed into a chef, followed by an upbeat post about quantum computation. On Sunday, the paper ran a front-page story about “biologically inspired processors,” “brainlike computers” that learn from experience.
This past Sunday’s story, by John Markoff, announced that “computers have entered the age when they are able to learn from their own mistakes, a development that is about to turn the digital world on its head.” The deep-learning story, from a year ago, also by Markoff, told us of “advances in an artificial intelligence technology that can recognize patterns offer the possibility of machines that perform human activities like seeing, listening and thinking.” For fans of “Battlestar Galactica,” it sounds like exciting stuff.
But, examined carefully, the articles seem more enthusiastic than substantive. As I wrote before, the story about Watson was off the mark factually. The deep-learning piece had problems, too. Sunday’s story is confused at best; there is nothing new in teaching computers to learn from their mistakes. Instead, the article seems to be about building computer chips that use “brainlike” algorithms, but the algorithms themselves aren’t new, either. As the author notes in passing, “the new computing approach” is “already in use by some large technology companies.” Mostly, the article seems to be about neuromorphic processors—computer processors that are organized to be somewhat brainlike—though, as the piece points out, they have been around since the nineteen-eighties. In fact, the core idea of Sunday’s article—nets based “on large groups of neuron-like elements … that learn from experience”—goes back over fifty years, to the well-known Perceptron, built by Frank Rosenblatt in 1957. (If you check the archives, the Times billed it as a revolution, with the headline “NEW NAVY DEVICE LEARNS BY DOING.” The New Yorker similarly gushed about the advancement.) The only new thing mentioned is a computer chip, as yet unproven but scheduled to be released this year, along with the claim that it can “potentially [make] the term ‘computer crash’ obsolete.” Steven Pinker wrote me an e-mail after reading the Times story, saying “We’re back in 1985!”—the last time there was huge hype in the mainstream media about neural networks.
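For anyone curious just how old and how simple that core idea is, here is a minimal sketch of Rosenblatt’s perceptron learning rule. The toy AND-gate data and the hyperparameters are illustrative choices, not drawn from any of the articles discussed:

```python
# Minimal sketch of Rosenblatt's 1957 perceptron learning rule.
# The AND-gate data, learning rate, and epoch count are illustrative.

def train_perceptron(samples, labels, epochs=10, lr=0.1):
    """Learn weights w and bias b so that step(w . x + b) matches the labels."""
    n = len(samples[0])
    w = [0.0] * n
    b = 0.0
    for _ in range(epochs):
        for x, target in zip(samples, labels):
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            prediction = 1 if activation > 0 else 0
            error = target - prediction           # the "learning from mistakes" step
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error
    return w, b

# Toy example: learn the logical AND of two inputs.
samples = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 0, 1]
w, b = train_perceptron(samples, labels)
print(w, b)  # a separating line: only input (1, 1) drives the output above zero
```

Modern deep learning stacks many such neuron-like units into multiple layers, but the error-driven weight update above is the fifty-year-old seed of the idea.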
Our Final Invention: Artificial Intelligence and the End of the Human Era, by James Barrat, St. Martin’s Press, 322 pages, $26.99.
Ronald Bailey writes: In the new Spike Jonze movie Her, an operating system called Samantha evolves into an enchanting self-directed intelligence with a will of her own. Not to spoil this visually and intellectually dazzling movie for anyone, but Samantha makes choices that do not harm humanity, though they do leave us feeling a bit sadder.
In his terrific new book, Our Final Invention, the documentarian James Barrat argues that hopes for the development of an essentially benign artificial general intelligence (AGI) like Samantha amount to a silly pipe dream. Barrat believes artificial intelligence is coming, but he thinks it will be more like Skynet. In the Terminator movies, Skynet is an automated defense system that becomes self-aware, decides that human beings are a danger to it, and seeks to destroy us with nuclear weapons and terminator robots.
Barrat doesn’t just think that Skynet is likely. He thinks it’s practically inevitable.
Barrat has talked to all the significant American players in the effort to create recursively self-improving artificial general intelligence in machines. He makes a strong case that AGI with human-level intelligence will be developed in the next couple of decades. Once an AGI comes into existence, it will seek to improve itself in order to more effectively pursue its goals. AI researcher Steve Omohundro, president of the company Self-Aware Systems, explains that goal-driven systems necessarily develop drives for increased efficiency, creativity, self-preservation, and resource acquisition. At machine computation speeds, the AGI will soon bootstrap itself into becoming millions of times more intelligent than a human being. It would thus transform itself into an artificial super-intelligence (ASI)—or, as Institute for Ethics and Emerging Technologies chief James Hughes calls it, “a god in a box.” And the new god will not want to stay in the box.
The emergence of super-intelligent machines has been dubbed the technological Singularity. Once machines take over, the argument goes, scientific and technological progress will turn exponential, making predictions about the shape of the future impossible. Barrat believes the Singularity will spell the end of humanity, since the ASI, like Skynet, is liable to conclude that it is vulnerable to being harmed by people. And even if the ASI feels safe, it might well decide that humans constitute a resource that could be put to better use. “The AI does not hate you, nor does it love you,” remarks the AI researcher Eliezer Yudkowsky, “but you are made out of atoms which it can use for something else.”
Barrat analyzes various suggestions for how to avoid Skynet. The first is to try to keep the AI god in his box. The new ASI could be guarded by gatekeepers, who would make sure that it is never attached to any networks out in the real world. Barrat convincingly argues that an intelligence millions of times smarter than people would be able to persuade its gatekeepers to let it out.
This is a dense, maddening, challenging essay, and I don’t agree with all of it. But the questions it raises are hard to ignore. Relevant stuff; it merits further examination…
David Gelernter writes: The huge cultural authority science has acquired over the past century imposes large duties on every scientist. Scientists have acquired the power to impress and intimidate every time they open their mouths, and it is their responsibility to keep this power in mind no matter what they say or do. Too many have forgotten their obligation to approach with due respect the scholarly, artistic, religious, humanistic work that has always been mankind’s main spiritual support. Scientists are (on average) no more likely to understand this work than the man in the street is to understand quantum physics. But science used to know enough to approach cautiously and admire from outside, and to build its own work on a deep belief in human dignity. No longer.
Today science and the “philosophy of mind”—its thoughtful assistant, which is sometimes smarter than the boss—are threatening Western culture with the exact opposite of humanism. Call it roboticism. Man is the measure of all things, Protagoras said. Today we add, and computers are the measure of all men.
Many scientists are proud of having booted man off his throne at the center of the universe and reduced him to just one more creature—an especially annoying one—in the great intergalactic zoo. That is their right. But when scientists use this locker-room braggadocio to belittle the human viewpoint, to belittle human life and values and virtues and civilization and moral, spiritual, and religious discoveries, which is all we human beings possess or ever will, they have outrun their own empiricism. They are abusing their cultural standing. Science has become an international bully.
Nowhere is its bullying more outrageous than in its assault on the phenomenon known as subjectivity.
Your subjective, conscious experience is just as real as the tree outside your window or the photons striking your retina—even though you alone feel it. Many philosophers and scientists today tend to dismiss the subjective and focus wholly on an objective, third-person reality—a reality that would be just the same if men had no minds. They treat subjective reality as a footnote, or they ignore it, or they announce that, actually, it doesn’t even exist.
If scientists were rat-catchers, it wouldn’t matter. But right now, their views are threatening all sorts of intellectual and spiritual fields. The present problem originated at the intersection of artificial intelligence and philosophy of mind—in the question of what consciousness and mental states are all about, how they work, and what it would mean for a robot to have them. It has roots that stretch back to the behaviorism of the early 20th century, but the advent of computing lit the fuse of an intellectual crisis that blasted off in the 1960s and has been gaining altitude ever since.
The modern “mind fields” encompass artificial intelligence, cognitive psychology, and philosophy of mind. Researchers in these fields are profoundly split, and the chaos was on display in the ugliness occasioned by the publication of Thomas Nagel’s Mind & Cosmos in 2012. Nagel is an eminent philosopher and professor at NYU. In Mind & Cosmos, he shows with terse, meticulous thoroughness why mainstream thought on the workings of the mind is intellectually bankrupt. He explains why Darwinian evolution is insufficient to explain the emergence of consciousness—the capacity to feel or experience the world. He then offers his own ideas on consciousness, which are speculative, incomplete, tentative, and provocative—in the tradition of science and philosophy.
China’s first lunar rover separates from Chang’e-3 moon lander early Dec. 15, 2013. Screenshot taken from the screen of the Beijing Aerospace Control Center in Beijing. Credit: Xinhua/post processing by Marco Di Lorenzo/Ken Kremer
China’s first-ever lunar rover rolled majestically onto the Moon’s soil on Sunday, Dec. 15, barely seven hours after the Chang’e-3 mothership touched down atop the lava-filled plains of the Bay of Rainbows.
Check out the gallery of stunning photos and videos herein from China’s newest space spectacular atop stark lunar terrain.
The six-wheeled ‘Yutu’, or Jade Rabbit, rover drove straight off a pair of ramps at 4:35 a.m. Beijing local time and sped right into the history books, leaving a pair of noticeably deep tire tracks in the loose lunar dirt.
The stunning feat was broadcast on China’s state-run CCTV using images transmitted to Earth from cameras mounted on the Chang’e-3 lander and aimed directly at the rear of the departing moon buggy.
Watch this YouTube video from CCTV showing the separation of ‘Yutu’ from the lander: