Humans May Have One Thing that Advanced Aliens Don’t: Consciousness

Illustration: Martin Gee / Cosmos

It May Not Feel Like Anything To Be an Alien

Susan Schneider writes: Humans are probably not the greatest intelligences in the universe. Earth is a relatively young planet and the oldest civilizations could be billions of years older than us. But even on Earth, Homo sapiens may not be the most intelligent species for that much longer.

“Why would nonconscious machines have the same value we place on biological intelligence?”

The world’s Go, chess, and Jeopardy champions are now all AIs. AI is projected to outmode many human professions within the next few decades. And given the rapid pace of its development, AI may soon advance to artificial general intelligence—intelligence that, like human intelligence, can combine insights from different topic areas and display flexibility and common sense. From there it is a short leap to superintelligent AI, which is smarter than humans in every respect, even those that now seem firmly in the human domain, such as scientific reasoning and social skills. Each of us alive today may be one of the last rungs on the evolutionary ladder that leads from the first living cell to synthetic intelligence.

What we are only beginning to realize is that these two forms of superhuman intelligence—alien and artificial—may not be so distinct. The technological developments we are witnessing today may have all happened before, elsewhere in the universe. The transition from biological to synthetic intelligence may be a general pattern, instantiated over and over, throughout the cosmos. The universe’s greatest intelligences may be postbiological, having grown out of civilizations that were once biological. (This is a view I share with Paul Davies, Steven Dick, Martin Rees, and Seth Shostak, among others.) To judge from the human experience—the only example we have—the transition from biological to postbiological may take only a few hundred years.

AI robot Ava in the film Ex Machina. Photograph: Allstar/FILM4/Sportsphoto Ltd./Allstar

I prefer the term “postbiological” to “artificial” because the contrast between biological and synthetic is not very sharp. Consider a biological mind that achieves superintelligence through purely biological enhancements, such as nanotechnologically enhanced neural minicolumns. This creature would be postbiological, although perhaps many wouldn’t call it an “AI.” Or consider computronium built out of purely biological materials, like the Cylon Raider in the reimagined Battlestar Galactica TV series.

The key point is that there is no reason to expect humans to be the highest form of intelligence there is. Our brains evolved for specific environments and are greatly constrained by chemistry and historical contingencies. But technology has opened up a vast design space, offering new materials and modes of operation, as well as new ways to explore that space at a rate much faster than traditional biological evolution. And I think we already see reasons why synthetic intelligence will outperform us.

Silicon microchips already seem to be a better medium for information processing than groups of neurons. Neurons reach a peak speed of about 200 hertz, compared to gigahertz for the transistors in current microprocessors. Although the human brain is still far more intelligent than a computer, machines have almost unlimited room for improvement. It may not be long before they can be engineered to match or even exceed the intelligence of the human brain through reverse-engineering the brain and improving upon its algorithms, or through some combination of reverse engineering and judicious algorithms that aren’t based on the workings of the human brain.
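A back-of-the-envelope comparison, using only the figures quoted above, shows the scale of the gap. This is a rough sketch: raw switching speed ignores the brain’s massive parallelism and is at best a crude proxy for intelligence.

```python
# Crude per-element speed comparison, using the figures quoted above.
# Raw signaling rate says nothing about parallelism or architecture,
# so treat this as an illustration of scale, not a measure of intellect.

NEURON_PEAK_HZ = 200        # peak firing rate of a biological neuron
TRANSISTOR_HZ = 3e9         # a ~3 GHz clock in a current microprocessor

ratio = TRANSISTOR_HZ / NEURON_PEAK_HZ
print(f"A transistor cycles ~{ratio:,.0f}x faster than a neuron fires")
# -> A transistor cycles ~15,000,000x faster than a neuron fires
```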


In addition, an AI can be downloaded to multiple locations at once, is easily backed up and modified, and can survive under conditions that biological life has trouble with, including interstellar travel. Our measly brains are limited by cranial volume and metabolism; superintelligent AI, in stark contrast, could extend its reach across the Internet and even set up a galaxy-wide computronium, utilizing all the matter within our galaxy to maximize computations. There is simply no contest. Superintelligent AI would be far more durable than us.

[Read the full story here, at Cosmos on Nautilus]

Suppose I am right. Suppose that intelligent life out there is postbiological. What should we make of this? Here, current debates over AI on Earth are telling. Two of the main points of contention—the so-called control problem and the nature of subjective experience—affect our understanding of what other alien civilizations may be like, and what they may do to us when we finally meet.

Illustration by Norwegian cartoonist and illustrator Kristian Hammerstad, from “Rise of the Robots,” a New York Times Sunday Book Review article, May 11, 2015.

Ray Kurzweil takes an optimistic view of the postbiological phase of evolution, suggesting that humanity will merge with machines, reaching a magnificent technotopia. But Stephen Hawking, Bill Gates, Elon Musk, and others have expressed the concern that humans could lose control of superintelligent AI, as it can rewrite its own programming and outthink any control measures that we build in. This has been called the “control problem”—the problem of how we can control an AI that is both inscrutable and vastly intellectually superior to us. Read the rest of this entry »


Rocket Men: Why Tech’s Biggest Billionaires Want their Place in Space


Forget gilded mansions and super yachts. Among the tech elite, space exploration is now the ultimate status symbol.

When a SpaceX Falcon 9 rocket exploded on its Cape Canaveral launchpad in September 2016, on board was a $200m, 12,000lb communications satellite – part of Facebook CEO Mark Zuckerberg’s Internet.org project to deliver broadband access to sub-Saharan Africa.

Zuckerberg wrote, with a note of bitterness, on his Facebook page that he was “deeply disappointed to hear that SpaceX’s launch failure destroyed our satellite”. SpaceX founder Elon Musk told CNN it was the “most difficult and complex failure” the 14-year-old company had ever experienced.

Elon Musk. Photograph: Patrick Fallon/The Wall Street Journal

It was also the second dramatic explosion in nine months for SpaceX, following a “rapid unscheduled disassembly” of a booster rocket as it attempted to land after a successful mission to the International Space Station.

Later that day, Nasa’s official Twitter account responded: “Today’s @SpaceX incident – while not a Nasa launch – reminds us that spaceflight is challenging.”

Yet despite those challenges, a small band of billionaire technocrats have spent the past few years investing hundreds of millions of dollars into space ventures. Forget gilded mansions and super yachts; among the tech elite, space exploration is the ultimate status symbol.


Musk, who founded SpaceX in 2002, is arguably the most visible billionaire in the new space race. The apparent inspiration for Robert Downey Jr’s Tony Stark character in Iron Man, Musk has become a god-like figure for engineers, making his fortune at PayPal and then as CEO of luxury electric car firm Tesla and clean energy company SolarCity. Yet it is his galactic ambitions, insiders say, that really motivate him. “His passion is settling Mars,” says one.

[Read the full story here, at The Guardian]

SpaceX has completed 32 successful launches since 2006, delivered cargo to the International Space Station and secured more than $10bn in contracts with Nasa and other clients. Musk has much grander ambitions, though, saying he plans to create a “plan B” for humanity in case Earth ultimately fails. He once famously joked that he hoped to die on Mars – just not on impact. Read the rest of this entry »


Stephen Hawking Debuts With a Big Bang on Chinese Social Media


One of the world’s most celebrated cosmologists stretched the fabric of China’s social-media universe with a simple greeting on Tuesday.

“Greetings to my friends in China! It has been too long!” celebrated black-hole theorizer Stephen Hawking wrote in an inaugural, bilingual post on Chinese social media platform Weibo. “I hope to tell you more about my life and work through this page and also to learn from you in reply.”

The response was, well, astronomical.

The account amassed more than a million followers in its first six hours. In that time, Mr. Hawking’s first message was…(read more)

Source: China Real Time Report – WSJ


[PHOTOS] French Artist Miguel Chevalier’s Projection Mapping Fills Cambridge’s 16th-Century King’s College Chapel with Stars

Attendees at a recent fundraising event inside the University of Cambridge’s 16th-century chapel were treated to a spectacular display far above. The Gothic arches of King’s College Chapel were transformed into a canvas for mesmerizing views of stars, foliage, psychedelic clouds and university crests. The work was created by French projection artist Miguel Chevalier.

The visuals were generated in real time to match the theme of each speaker. During a presentation on black holes by Stephen Hawking, the room was transformed into a vision of deep space. Other talks touched on subjects ranging from health to Africa, biology and physics.

See more of Chevalier’s projection mapping work in unusual places on his personal website, Facebook or Instagram.

See more here….

Source: visualnews.com


Does Artificial Intelligence Pose a Threat?


A panel of experts discusses the prospect of machines capable of autonomous reasoning

Ted Greenwald writes: After decades as a sci-fi staple, artificial intelligence has leapt into the mainstream. Between Apple’s Siri and Amazon’s Alexa, IBM’s Watson and Google Brain, machines that understand the world and respond productively suddenly seem imminent.

The combination of immense Internet-connected networks and machine-learning algorithms has yielded dramatic advances in machines’ ability to understand spoken and visual communications, capabilities that fall under the heading “narrow” artificial intelligence. Can machines capable of autonomous reasoning—so-called general AI—be far behind? And at that point, what’s to keep them from improving themselves until they have no need for humanity?

Meka’s M1 robot is one of the systems acquired by Google

The prospect has unleashed a wave of anxiety. “I think the development of full artificial intelligence could spell the end of the human race,” astrophysicist Stephen Hawking told the BBC. Tesla founder Elon Musk called AI “our biggest existential threat.” Former Microsoft Chief Executive Bill Gates has voiced his agreement.

How realistic are such concerns? And how urgent? We assembled a panel of experts from industry, research and policy-making to consider the dangers—if any—that lie ahead. Read the rest of this entry »


Protesters Stage Anti-Robot Rally at SXSW

“I say robot, you say no-bot!”

Jon Swartz reports: The chant reverberated through the air near the entrance to the SXSW tech and entertainment festival here.

About two dozen protesters, led by a computer engineer, echoed that sentiment in their movement against artificial intelligence.

“Machines have already taken over. If you drive a car, much of what it does is technology-driven.”

— Ben Medlock, co-founder of mobile-communications company SwiftKey

“This is about morality in computing,” said Adam Mason, 23, who organized the protest.

Signs at the scene reflected the mood. “Stop the Robots.” “Humans are the future.”

The mini-rally attracted a crowd of gawkers, drawn by the sight of a rare protest here.


The dangers of more developed artificial intelligence, a technology still in its early stages, have created some debate in the scientific community. Tesla founder Elon Musk donated $10 million to the Future of Life Institute because of his fears.

Stephen Hawking and others have added to the proverbial wave of AI paranoia with dire predictions of its risk to humanity.

“I am amazed at the movement. It has changed life in ways as dramatic as the Industrial Revolution.”

— Stephen Wolfram, a British computer scientist, entrepreneur and former physicist known for his contributions to theoretical physics

The topic is an undercurrent in Steve Jobs: The Man in the Machine, a documentary about the fabled Apple co-founder. The paradoxical dynamic between people and tech products is a “double-edged sword,” said its Academy Award-winning director, Alex Gibney. “There are so many benefits — and yet we can descend into our smartphone.”

As nonplussed witnesses wandered by, another chant went up: “A-I, say goodbye.”

Several of the students were from the University of Texas, which is known for its strong engineering program. They are deeply concerned about the implications of a society where technology runs too deep. Read the rest of this entry »


Read Stephen Hawking’s Sweet Note to Eddie Redmayne After His Oscar Win



Stephen Hawking, who joined Facebook just a few months ago, used the social media site to write a brief but touching note to Eddie Redmayne, who won the Best Actor Oscar Sunday night. In The Theory of Everything, Redmayne portrayed the world-renowned physicist and his struggle with ALS.

Shortly after the Academy Awards ceremony, Hawking shared the following post, saying he was “very proud” of the actor:

In his acceptance speech, Redmayne said, “I’m fully aware that I am a lucky, lucky man. This Oscar belongs to all of those people around the world battling ALS.”

Source: TIME


Our Fear of Artificial Intelligence

Photograph: Chris Ratcliffe/Bloomberg/Getty

Are We Smart Enough to Control Artificial Intelligence? 

A true AI might ruin the world—but that assumes it’s possible at all

Paul Ford writes: Years ago I had coffee with a friend who ran a startup. He had just turned 40. His father was ill, his back was sore, and he found himself overwhelmed by life. “Don’t laugh at me,” he said, “but I was counting on the singularity.”

“The question ‘Can a machine think?’ has shadowed computer science from its beginnings.”

My friend worked in technology; he’d seen the changes that faster microprocessors and networks had wrought. It wasn’t that much of a step for him to believe that before he was beset by middle age, the intelligence of machines would exceed that of humans—a moment that futurists call the singularity. A benevolent superintelligence might analyze the human genetic code at great speed and unlock the secret to eternal youth. At the very least, it might know how to fix your back.


But what if it wasn’t so benevolent? Nick Bostrom, a philosopher who directs the Future of Humanity Institute at the University of Oxford, describes the following scenario in his book Superintelligence, which has prompted a great deal of debate about the future of artificial intelligence. Imagine a machine that we might call a “paper-clip maximizer”—that is, a machine programmed to make as many paper clips as possible. Now imagine that this machine somehow became incredibly intelligent. Given its goals, it might then decide to create new, more efficient paper-clip-manufacturing machines—until, King Midas style, it had converted essentially everything to paper clips.

Agility: rapid advances in technology, including machine vision, tactile sensors and autonomous navigation, make today’s robots, such as this model from DLR, increasingly useful

No worries, you might say: you could just program it to make exactly a million paper clips and halt. But what if it makes the paper clips and then decides to check its work? Has it counted correctly? It needs to become smarter to be sure. The superintelligent machine manufactures some as-yet-uninvented raw-computing material (call it “computronium”) and uses that to check each doubt. But each new doubt yields further digital doubts, and so on, until the entire earth is converted to computronium. Except for the million paper clips.
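To see why the verification loop never terminates on its own, consider this toy simulation (my own illustrative sketch, not code from Bostrom’s book; the rule that each recount halves the remaining doubt is an assumption chosen for simplicity):

```python
# Toy rendering of Bostrom's paper-clip maximizer. The goal is bounded
# (exactly one million clips), but verification is not: each recount only
# halves the residual doubt, so the agent keeps converting matter into
# checking hardware ("computronium") for as long as any doubt remains.

TARGET_CLIPS = 1_000_000

def paperclip_maximizer(planet_matter_units: int) -> None:
    clips = TARGET_CLIPS       # goal met on the first pass...
    doubt = 0.01               # ...but is the count correct?
    computronium = 0
    while doubt > 0 and computronium < planet_matter_units:
        computronium += 1      # convert one more unit of matter to verifiers
        doubt /= 2             # each recount halves the remaining doubt
    print(f"clips: {clips}, matter consumed: "
          f"{computronium}/{planet_matter_units}, residual doubt: {doubt:.2g}")

paperclip_maximizer(planet_matter_units=1_000)
# -> clips: 1000000, matter consumed: 1000/1000, residual doubt: 9.3e-304

# Doubt shrinks geometrically but never reaches zero, so the loop halts
# only when the planet runs out of matter -- except for the million clips.
```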

Superintelligence: Paths, Dangers, Strategies
BY NICK BOSTROM
OXFORD UNIVERSITY PRESS, 2014

Bostrom does not believe that the paper-clip maximizer will come to be, exactly; it’s a thought experiment, one designed to show how even careful system design can fail to restrain extreme machine intelligence. But he does believe that superintelligence could emerge, and while it could be great, he thinks it could also decide it doesn’t need humans around. Or do any number of other things that destroy the world. The title of chapter 8 is: “Is the default outcome doom?”

“Alan Turing proposed in 1950 that a machine could be taught like a child; John McCarthy, inventor of the programming language LISP, coined the term ‘artificial intelligence’ in 1955.”

If this sounds absurd to you, you’re not alone. Critics such as the robotics pioneer Rodney Brooks say that people who fear a runaway AI misunderstand what computers are doing when we say they’re thinking or getting smart. From this perspective, the putative superintelligence Bostrom describes is far in the future and perhaps impossible.


Yet a lot of smart, thoughtful people agree with Bostrom and are worried now. Why?

The question “Can a machine think?” has shadowed computer science from its beginnings. Alan Turing proposed in 1950 that a machine could be taught like a child; John McCarthy, inventor of the programming language LISP, coined the term “artificial intelligence” in 1955. As AI researchers in the 1960s and 1970s began to use computers to recognize images, translate between languages, and understand instructions in normal language and not just code, the idea that computers would eventually develop the ability to speak and think—and thus to do evil—bubbled into mainstream culture. Even beyond the oft-referenced HAL from 2001: A Space Odyssey, the 1970 movie Colossus: The Forbin Project featured a large blinking mainframe computer that brings the world to the brink of nuclear destruction; a similar theme was explored 13 years later in WarGames. The androids of 1973’s Westworld went crazy and started killing.

“Extreme AI predictions are ‘comparable to seeing more efficient internal combustion engines… and jumping to the conclusion that the warp drives are just around the corner,’ Rodney Brooks writes.”

When AI research fell far short of its lofty goals, funding dried up to a trickle, beginning long “AI winters.” Even so, the torch of the intelligent machine was carried forth in the 1980s and ’90s by sci-fi authors like Vernor Vinge, who popularized the concept of the singularity; researchers like the roboticist Hans Moravec, an expert in computer vision; and the engineer/entrepreneur Ray Kurzweil, author of the 1999 book The Age of Spiritual Machines. Whereas Turing had posited a humanlike intelligence, Vinge, Moravec, and Kurzweil were thinking bigger: when a computer became capable of independently devising ways to achieve goals, it would very likely be capable of introspection—and thus able to modify its software and make itself more intelligent. In short order, such a computer would be able to design its own hardware.

As Kurzweil described it, this would begin a beautiful new era. Read the rest of this entry »


David W. Buchanan: No, the Robots Are Not Going to Rise Up and Kill You


David W. Buchanan is a researcher at IBM, where he is a member of the team that made the Watson “Jeopardy!” system.

David W. Buchanan writes: We have seen astonishing progress in artificial intelligence, and technology companies are pouring money into AI research. In 2011, the IBM system Watson competed on “Jeopardy!,” beating the best human players. Siri and Cortana have taken charge of our smartphones. As I write this, a vacuum called Roomba is cleaning my house on its own, using what the box calls “robot intelligence.” It is easy to feel like the world is on the verge of being taken over by computers, and the news media have indulged such fears with frequent coverage of the supposed dangers of AI.

But as a researcher who works on modern, industrial AI, let me offer a personal perspective to explain why I’m not afraid.


Science fiction is partly responsible for these fears. A common trope works as follows: Step 1: Humans create AI to perform some unpleasant or difficult task. Step 2: The AI becomes conscious. Step 3: The AI decides to kill us all. As science fiction, such stories can be great fun. As science fact, the narrative is suspect, especially around Step 2, which assumes that by synthesizing intelligence, we will somehow automatically, or accidentally, create consciousness. I call this the consciousness fallacy. It seems plausible at first, but the evidence doesn’t support it. And if it is false, it means we should look at AI very differently.

Intelligence is the ability to analyze the world and reason about it in a way that enables more effective action. Our scientific understanding of intelligence is relatively advanced. There is still an enormous amount of work to do before we can create comprehensive, human-caliber intelligence. But our understanding is viable in the sense that there are real businesses that make money by creating AI.

Coming online: some 95,000 new professional service robots, worth some $17.1bn, are set to be installed for professional use between 2013 and 2015

Consciousness is a much different story, perhaps because there is less money in it. Consciousness is also a harder problem: While most of us would agree that we know consciousness when we see it, scientists can’t really agree on a rigorous definition, let alone a research program that would uncover its basic mechanisms. Read the rest of this entry »


New Measure of Literary Unpopularity: ‘The Piketty Index’


“Capital in the Twenty-First Century” by Thomas Piketty 

Yes, it came out just three months ago. But the contest isn’t even close. Mr. Piketty’s book is almost 700 pages long, and the last of the top five popular highlights appears on page 26. Stephen Hawking, whose A Brief History of Time inspired the original “Hawking Index” of famously unfinished books, is off the hook; from now on, this measure should be known as the Piketty Index.
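For readers who want to compute it themselves, the index as Jordan Ellenberg originally defined it is just the average page number of a book’s five most popular Kindle highlights, divided by the book’s total page count. A minimal sketch follows; the highlight pages in it are hypothetical, since the article reports only that the last of the top five falls on page 26:

```python
# Piketty (formerly Hawking) Index: average the page numbers of a book's
# five most popular Kindle highlights, then divide by total page count.
# A low score suggests most readers quit long before the end.

def piketty_index(top_highlight_pages: list[int], total_pages: int) -> float:
    """Return the index as a percentage of the book's length."""
    assert len(top_highlight_pages) == 5, "use the five most popular highlights"
    return 100 * (sum(top_highlight_pages) / len(top_highlight_pages)) / total_pages

# Hypothetical highlight pages for illustration; we know only that the
# fifth and last of Piketty's top highlights appears on page 26 of ~700.
print(f"{piketty_index([2, 5, 9, 18, 26], 700):.1f}%")  # -> 1.7%
```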

So take it easy on yourself, readers, if you don’t finish whatever edifying tome you picked out for vacation. You’re far from alone…

(read more)

Source: WSJ