Geoffrey Hinton: The Man Google Hired to Make AI a Reality

Geoff Hinton, the AI guru who now works for Google. Photo: Josh Valcarcel/WIRED

Daniela Hernandez writes: Geoffrey Hinton was in high school when a friend convinced him that the brain worked like a hologram.

To create one of those 3-D holographic images, you record how countless beams of light bounce off an object, and that information ends up spread across the entire recording surface, so each little piece of the hologram carries a trace of the whole image. While still in high school, back in 1960s Britain, Hinton was fascinated by the idea that the brain stores memories in much the same way. Rather than keeping each memory in a single location, it spreads them across its enormous network of neurons.
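The idea Hinton latched onto is what researchers now call a distributed representation: a memory lives in the pattern of strengths across many connections rather than in any single cell. As a rough illustration (a minimal sketch, not something from the article), the Python below builds a tiny Hopfield-style associative memory, a classic model from the same research tradition. Each stored pattern is smeared across the whole weight matrix, and the network can rebuild the complete pattern from a damaged cue, much as a fragment of a hologram can reconstruct the whole image.

    import numpy as np

    # Two roughly orthogonal 8-unit patterns (+1/-1) to memorize.
    patterns = np.array([
        [ 1,  1,  1,  1, -1, -1, -1, -1],
        [ 1, -1,  1, -1,  1, -1,  1, -1],
    ])

    # Hebbian storage: every pattern is written into all pairwise weights,
    # so no single connection holds a memory by itself; each memory is
    # spread across the entire matrix.
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)

    # Recall: corrupt a stored pattern, then let the network settle.
    cue = patterns[0].copy()
    cue[:2] *= -1                      # flip two units of the stored memory
    state = cue
    for _ in range(5):
        state = np.sign(W @ state)
        state[state == 0] = 1          # break ties consistently

    print("recovered stored pattern:", bool(np.array_equal(state, patterns[0])))

The point of the toy example is only the storage scheme: remove any single weight and no one memory vanishes, which is exactly the property of holograms that excited Hinton.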

‘I get very excited when we discover a way of making neural networks better — and when that’s closely related to how the brain works.’

This may seem like a small revelation, but it was a key moment for Hinton — “I got very excited about that idea,” he remembers. “That was the first time I got really into how the brain might work” — and it would have enormous consequences. Inspired by that high school conversation, Hinton went on to explore neural networks at Cambridge and the University of Edinburgh in Scotland, and by the early ’80s, he helped launch a wildly ambitious crusade to mimic the brain using computer hardware and software, to create a purer form of artificial intelligence we now call “deep learning.”



Hyping Artificial Intelligence, Yet Again

Photograph: Chris Ratcliffe/Bloomberg/Getty

Gary Marcus writes: According to the Times, true artificial intelligence is just around the corner. A year ago, the paper ran a front-page story about the wonders of new technologies, including deep learning, a neurally-inspired A.I. technique for statistical analysis. Then, among others, came an article about how I.B.M.’s Watson had been repurposed into a chef, followed by an upbeat post about quantum computation. On Sunday, the paper ran a front-page story about “biologically inspired processors,” “brainlike computers” that learn from experience.

This past Sunday’s story, by John Markoff, announced that “computers have entered the age when they are able to learn from their own mistakes, a development that is about to turn the digital world on its head.” The deep-learning story, from a year ago, also by Markoff, told us of “advances in an artificial intelligence technology that can recognize patterns offer the possibility of machines that perform human activities like seeing, listening and thinking.” For fans of “Battlestar Galactica,” it sounds like exciting stuff.

But, examined carefully, the articles seem more enthusiastic than substantive. As I wrote before, the story about Watson was off the mark factually. The deep-learning piece had problems, too. Sunday’s story is confused at best; there is nothing new in teaching computers to learn from their mistakes. Instead, the article seems to be about building computer chips that use “brainlike” algorithms, but the algorithms themselves aren’t new, either. As the author notes in passing, “the new computing approach” is “already in use by some large technology companies.” Mostly, the article seems to be about neuromorphic processors—computer processors that are organized to be somewhat brainlike—though, as the piece points out, they have been around since the nineteen-eighties. In fact, the core idea of Sunday’s article—nets based “on large groups of neuron-like elements … that learn from experience”—goes back over fifty years, to the well-known Perceptron, built by Frank Rosenblatt in 1957. (If you check the archives, the Times billed it as a revolution, with the headline “NEW NAVY DEVICE LEARNS BY DOING.” The New Yorker similarly gushed about the advancement.) The only new thing mentioned is a computer chip, as yet unproven but scheduled to be released this year, along with the claim that it can “potentially [make] the term ‘computer crash’ obsolete.” Steven Pinker wrote me an e-mail after reading the Times story, saying “We’re back in 1985!”—the last time there was huge hype in the mainstream media about neural networks.
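For readers who have never seen one, the perceptron idea really is that simple, which is part of the point. The sketch below (a minimal Python illustration, not Rosenblatt's original system or anything described in the Times piece) shows a single neuron-like unit learning the logical AND function from experience: it adjusts its weights only when its prediction is wrong.

    import numpy as np

    # A Rosenblatt-style perceptron: one neuron-like element that learns a
    # linear decision rule by correcting its own mistakes. Toy task: logical AND.
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0, 0, 0, 1])             # desired outputs

    w = np.zeros(2)                         # one weight per input
    b = 0.0                                 # bias term
    lr = 0.1                                # learning rate

    for epoch in range(20):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            error = target - pred           # nonzero only when the unit is wrong
            w += lr * error * xi            # nudge weights toward the right answer
            b += lr * error

    print("weights:", w, "bias:", b)
    print("predictions:", [1 if xi @ w + b > 0 else 0 for xi in X])

Nothing in those dozen lines is new, and that is precisely the complaint about the headlines.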
