Netflix Is Building an Artificial Brain Using Amazon’s Cloud

Illustration: Hong Li/Getty

Klint Finley writes:  Nothing beats a movie recommendation from a friend who knows your tastes. At least not yet. Netflix wants to change that, aiming to build an online recommendation engine that outperforms even your closest friends.

The online movie and TV outfit once sponsored what it called the Netflix Prize, asking the world’s data scientists to build new algorithms that could better predict what movies and shows you want to see. And though this certainly advanced the state of the art, Netflix is now exploring yet another leap forward. In an effort to further hone its recommendation engine, the company is delving into “deep learning,” a branch of artificial intelligence that seeks to solve particularly hard problems using computer systems that mimic the structure and behavior of the human brain. The company details these efforts in a recent blog post.
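For readers unfamiliar with the term, the basic idea is easy to sketch: a deep-learning system stacks layers of simple neuron-like units, with each layer transforming the output of the layer below it. The toy Python snippet below illustrates only that general structure; the layer sizes and the made-up "viewer features" are purely hypothetical and say nothing about Netflix's actual models.

```python
# A minimal, illustrative sketch of the general idea behind "deep learning":
# layers of neuron-like units, each layer feeding the next. This is a generic
# toy network, NOT Netflix's recommendation model; sizes and inputs are made up.
import numpy as np

rng = np.random.default_rng(0)

def layer(inputs, n_units):
    """One layer of neuron-like units: weighted sums passed through a nonlinearity."""
    weights = rng.normal(size=(inputs.shape[-1], n_units))
    return np.maximum(0.0, inputs @ weights)  # ReLU activation

# Hypothetical feature vector describing a viewer's history.
viewer_features = rng.normal(size=(1, 50))

hidden1 = layer(viewer_features, 32)          # first layer of "neurons"
hidden2 = layer(hidden1, 16)                  # a deeper layer built on the first
score = hidden2 @ rng.normal(size=(16, 1))    # untrained relevance score for one title
print(score.shape)  # (1, 1)
```

In a real system the weights would be learned from viewing data rather than drawn at random, which is exactly the computationally heavy step Netflix wants to run in the cloud.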

With the project, Netflix is following in the footsteps of web giants like Google and Facebook, which have hired top deep-learning researchers in an effort to improve everything from voice recognition to image tagging. But Netflix is taking a slightly different tack. The company plans to run its deep-learning algorithms on Amazon's cloud service rather than building its own hardware infrastructure a la Google and Facebook. This shows that, thanks to the rise of the cloud, smaller web companies can now compete with the big boys, at least in some ways.

Read the rest of this entry »


Hyping Artificial Intelligence, Yet Again

Photograph: Chris Ratcliffe/Bloomberg/Getty

Gary Marcus writes:  According to the Times, true artificial intelligence is just around the corner. A year ago, the paper ran a front-page story about the wonders of new technologies, including deep learning, a neurally-inspired A.I. technique for statistical analysis. Then, among others, came an article about how I.B.M.’s Watson had been repurposed into a chef, followed by an upbeat post about quantum computation. On Sunday, the paper ran a front-page story about “biologically inspired processors,” “brainlike computers” that learn from experience.

This past Sunday’s story, by John Markoff, announced that “computers have entered the age when they are able to learn from their own mistakes, a development that is about to turn the digital world on its head.” The deep-learning story, from a year ago, also by Markoff, told us of “advances in an artificial intelligence technology that can recognize patterns offer the possibility of machines that perform human activities like seeing, listening and thinking.” For fans of “Battlestar Galactica,” it sounds like exciting stuff.

But, examined carefully, the articles seem more enthusiastic than substantive. As I wrote before, the story about Watson was off the mark factually. The deep-learning piece had problems, too. Sunday’s story is confused at best; there is nothing new in teaching computers to learn from their mistakes. Instead, the article seems to be about building computer chips that use “brainlike” algorithms, but the algorithms themselves aren’t new, either. As the author notes in passing, “the new computing approach” is “already in use by some large technology companies.” Mostly, the article seems to be about neuromorphic processors—computer processors that are organized to be somewhat brainlike—though, as the piece points out, they have been around since the nineteen-eighties. In fact, the core idea of Sunday’s article—nets based “on large groups of neuron-like elements … that learn from experience”—goes back over fifty years, to the well-known Perceptron, built by Frank Rosenblatt in 1957. (If you check the archives, the Times billed it as a revolution, with the headline “NEW NAVY DEVICE LEARNS BY DOING.” The New Yorker similarly gushed about the advancement.) The only new thing mentioned is a computer chip, as yet unproven but scheduled to be released this year, along with the claim that it can “potentially [make] the term ‘computer crash’ obsolete.” Steven Pinker wrote me an e-mail after reading the Times story, saying “We’re back in 1985!”—the last time there was huge hype in the mainstream media about neural networks.
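For what it's worth, Rosenblatt's Perceptron is simple enough to sketch in a few lines of modern Python. The snippet below is only a toy illustration of that 1957 learning rule (the AND-gate data and learning rate are arbitrary, illustrative choices), but it shows what "learning from experience" already meant back then: a single neuron-like unit nudging its weights whenever it makes a mistake.

```python
# A toy sketch of Rosenblatt's perceptron learning rule: a single neuron-like
# unit that adjusts its weights only when it misclassifies an example.
# The AND-gate data below is purely illustrative.
import numpy as np

def train_perceptron(X, y, epochs=20, lr=0.1):
    """Learn weights for one neuron-like unit by correcting its mistakes."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            prediction = 1 if xi @ w + b > 0 else 0
            error = target - prediction      # nonzero only on a mistake
            w += lr * error * xi             # nudge weights toward the target
            b += lr * error
    return w, b

# Toy example: learn the logical AND function.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y = np.array([0, 0, 0, 1])
w, b = train_perceptron(X, y)
print([1 if xi @ w + b > 0 else 0 for xi in X])  # expected: [0, 0, 0, 1]
```

Today's deep networks stack many such units in many layers and train them on vastly more data, but the underlying "learn from your mistakes" principle is the same one the Times hailed in 1958.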

Read the rest of this entry »