Zip your zipper
It can go around curves and move forward and backward. Get ready for tiny bots to zip around your pants, jackets, and dresses.
Slay Motörhead covers
Just listen to that drumming. Lemmy may have found the band’s seventh drummer.
Swim like an octopus
These robot octopuses use webbed arms to quickly whip through the water.
Writing computer programs could become as easy as searching the Internet. A Rice University-led team of software experts has launched an $11 million effort to create a sophisticated tool called PLINY that will both “autocomplete” and “autocorrect” code for programmers, much like the software to complete search queries and correct spelling on today’s Web browsers and smartphones.
“The engine will formulate answers using Bayesian statistics. Much like today’s spell-correction algorithms, it will deliver the most probable solution first, but programmers will be able to cycle through possible solutions if the first answer is incorrect.”
– Chris Jermaine, associate professor of computer science at Rice
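The ranking idea Jermaine describes — score candidates, present the most probable first, let the programmer cycle through the rest — can be sketched in a few lines of Python. This is purely our illustration: the function name is invented, and raw corpus frequency stands in for the far richer Bayesian posterior a real engine like PLINY would compute.

```python
from collections import Counter

def rank_completions(corpus_snippets, prefix):
    """Score candidate completions of `prefix` by corpus frequency,
    a crude stand-in for the posterior a Bayesian engine would compute."""
    candidates = Counter(s for s in corpus_snippets if s.startswith(prefix))
    total = sum(candidates.values())
    # Most probable first; a programmer could cycle through the rest.
    return [(s, n / total) for s, n in candidates.most_common()]

corpus = [
    "for i in range(n):",
    "for i in range(n):",
    "for i in range(len(a)):",
    "for k, v in d.items():",
]
# "for i in range(n):" ranks first, with probability 2/3
print(rank_completions(corpus, "for i"))
```

A production system would condition on far more context than a string prefix — surrounding code, types, project history — but the interface is the same: a ranked list, best guess first.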
“Imagine the power of having all the code that has ever been written in the past available to programmers at their fingertips as they write new code or fix old code,” said Vivek Sarkar, Rice’s E.D. Butcher Chair in Engineering, chair of the Department of Computer Science and the principal investigator (PI) on the PLINY project. “You can think of this as autocomplete for code, but in a far more sophisticated way.”
Sarkar said the four-year effort is funded by the Defense Advanced Research Projects Agency (DARPA). PLINY, which draws its name from the Roman naturalist who authored the first encyclopedia, will involve more than two dozen computer scientists from Rice, the University of Texas-Austin, the University of Wisconsin-Madison and the company GrammaTech.
“Imagine the power of having all the code that has ever been written in the past available to programmers at their fingertips as they write new code or fix old code. You can think of this as autocomplete for code, but in a far more sophisticated way.”
– Vivek Sarkar, Rice’s E.D. Butcher Chair in Engineering
PLINY is part of DARPA’s Mining and Understanding Software Enclaves (MUSE) program, an initiative that seeks to gather hundreds of billions of lines of publicly available open-source computer code and to mine that code to create a searchable database of properties, behaviors and vulnerabilities.
Rice team members say the effort will represent a significant advance in the way software is created, verified and debugged.
“Software today is far more complex than it was 20 years ago, yet it is still largely created by hand, one line of code at a time. We envision a system where the programmer writes a few lines of code, hits a button and the rest of the code appears. And not only that, the rest of the code should work seamlessly with the code that’s already been written.”
– Swarat Chaudhuri, assistant professor of computer science at Rice
He said PLINY will need to be sophisticated enough to recognize and match similar patterns regardless of differences in programming languages and code specifications.
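The pattern matching Chaudhuri describes can be illustrated at toy scale, within a single language, by normalizing code to a “skeleton” that ignores naming choices; true cross-language matching would need a parser per language and much more besides. The function below is our sketch, not PLINY’s actual method:

```python
import io
import keyword
import token
import tokenize

def skeleton(code):
    """Collapse identifiers and literals so that structurally similar
    snippets compare equal regardless of naming choices."""
    out = []
    for tok in tokenize.generate_tokens(io.StringIO(code).readline):
        if tok.type == token.NAME and not keyword.iskeyword(tok.string):
            out.append("NAME")          # any identifier
        elif tok.type == token.NUMBER:
            out.append("NUM")           # any numeric literal
        elif tok.type == token.STRING:
            out.append("STR")           # any string literal
        elif tok.type in (token.NEWLINE, token.NL, token.INDENT,
                          token.DEDENT, token.ENDMARKER):
            continue                    # ignore layout tokens
        else:
            out.append(tok.string)      # keywords and operators verbatim
    return tuple(out)

# Differently named but structurally identical loops match:
print(skeleton("for i in range(n): total += i") ==
      skeleton("for j in range(m): acc += j"))   # True
```

Mining billions of lines of code for such shared skeletons — across languages, and at a far deeper semantic level than token shapes — is the hard part of the MUSE program’s goal.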
Scientists are hunting one of the biggest prizes in physics: tiny particles called wimps that could unlock some of the universe’s oldest secrets
A wimp—a weakly interacting massive particle—is thought to be the stuff of dark matter, an invisible substance that makes up about a quarter of the universe but has never been seen by humans.
Gravity is the force that holds things together, and the vast majority of it emanates from dark matter. Ever since the big bang, this mystery material has been the universe’s prime architect, giving it shape and structure. Without dark matter, there would be no galaxies, no stars, no planets. Solving its mystery is crucial to understanding what the universe is made of.
“If we don’t assume that 85% of the matter in the universe is this unknown material, the laws of relativity and gravity would have to be modified. That would be significant,” says physicist Giuliana Fiorillo, a member of the 150-strong team searching for the particles at the Gran Sasso National Laboratory, 80 miles east of Rome.
The quest for dark matter has intensified since the discovery of the Higgs boson two years ago, which helped to narrow the field in which wimps might be hiding. Today, more than 20 different teams of researchers are hunting for the elusive stuff, using some of the most elaborate and delicate experiments ever devised.
Dark-matter detectors have been installed on the sea bed nearly 8,200 feet beneath the surface. Others operate deep inside mines. There is one on the International Space Station. China’s new dark-matter experiment sits 1.5 miles beneath a marble mountain. When it restarts later this year, the Large Hadron Collider will look for wimps, too, by smashing together subatomic particles.
Scientists estimate that visible matter makes up just 4% of the universe, while dark matter makes up 23%. The remaining 73% is an even bigger puzzle, a repulsive force known as “dark energy.”
Dark matter neither emits nor absorbs light. We know it is out there, because scientists can measure the immense gravitational force it exerts on stars, galaxies and other cosmic bodies. The best candidate for what dark matter consists of is the wimp: an ethereal being that barely interacts with normal matter. Every second, billions of wimps flow through the Earth without hitting anything.
Are We Smart Enough to Control Artificial Intelligence?
A true AI might ruin the world—but that assumes it’s possible at all
Paul Ford writes: Years ago I had coffee with a friend who ran a startup. He had just turned 40. His father was ill, his back was sore, and he found himself overwhelmed by life. “Don’t laugh at me,” he said, “but I was counting on the singularity.”
“The question ‘Can a machine think?’ has shadowed computer science from its beginnings.”
My friend worked in technology; he’d seen the changes that faster microprocessors and networks had wrought. It wasn’t that much of a step for him to believe that before he was beset by middle age, the intelligence of machines would exceed that of humans—a moment that futurists call the singularity. A benevolent superintelligence might analyze the human genetic code at great speed and unlock the secret to eternal youth. At the very least, it might know how to fix your back.
But what if it wasn’t so benevolent? Nick Bostrom, a philosopher who directs the Future of Humanity Institute at the University of Oxford, describes the following scenario in his book Superintelligence, which has prompted a great deal of debate about the future of artificial intelligence. Imagine a machine that we might call a “paper-clip maximizer”—that is, a machine programmed to make as many paper clips as possible. Now imagine that this machine somehow became incredibly intelligent. Given its goals, it might then decide to create new, more efficient paper-clip-manufacturing machines—until, King Midas style, it had converted essentially everything to paper clips.
No worries, you might say: you could just program it to make exactly a million paper clips and halt. But what if it makes the paper clips and then decides to check its work? Has it counted correctly? It needs to become smarter to be sure. The superintelligent machine manufactures some as-yet-uninvented raw-computing material (call it “computronium”) and uses that to check each doubt. But each new doubt yields further digital doubts, and so on, until the entire earth is converted to computronium. Except for the million paper clips.
Bostrom does not believe that the paper-clip maximizer will come to be, exactly; it’s a thought experiment, one designed to show how even careful system design can fail to restrain extreme machine intelligence. But he does believe that superintelligence could emerge, and while it could be great, he thinks it could also decide it doesn’t need humans around. Or do any number of other things that destroy the world. The title of chapter 8 is: “Is the default outcome doom?”
“Alan Turing proposed in 1950 that a machine could be taught like a child; John McCarthy, inventor of the programming language LISP, coined the term ‘artificial intelligence’ in 1955.”
If this sounds absurd to you, you’re not alone. Critics such as the robotics pioneer Rodney Brooks say that people who fear a runaway AI misunderstand what computers are doing when we say they’re thinking or getting smart. From this perspective, the putative superintelligence Bostrom describes is far in the future and perhaps impossible.
Yet a lot of smart, thoughtful people agree with Bostrom and are worried now. Why?
The question “Can a machine think?” has shadowed computer science from its beginnings. Alan Turing proposed in 1950 that a machine could be taught like a child; John McCarthy, inventor of the programming language LISP, coined the term “artificial intelligence” in 1955. As AI researchers in the 1960s and 1970s began to use computers to recognize images, translate between languages, and understand instructions in normal language and not just code, the idea that computers would eventually develop the ability to speak and think—and thus to do evil—bubbled into mainstream culture. Even beyond the oft-referenced HAL from 2001: A Space Odyssey, the 1970 movie Colossus: The Forbin Project featured a large blinking mainframe computer that brings the world to the brink of nuclear destruction; a similar theme was explored 13 years later in WarGames. The androids of 1973’s Westworld went crazy and started killing.
“Extreme AI predictions are ‘comparable to seeing more efficient internal combustion engines… and jumping to the conclusion that the warp drives are just around the corner,’ Rodney Brooks writes.”
When AI research fell far short of its lofty goals, funding slowed to a trickle, beginning long “AI winters.” Even so, the torch of the intelligent machine was carried forth in the 1980s and ’90s by sci-fi authors like Vernor Vinge, who popularized the concept of the singularity; researchers like the roboticist Hans Moravec, an expert in computer vision; and the engineer/entrepreneur Ray Kurzweil, author of the 1999 book The Age of Spiritual Machines. Whereas Turing had posited a humanlike intelligence, Vinge, Moravec, and Kurzweil were thinking bigger: when a computer became capable of independently devising ways to achieve goals, it would very likely be capable of introspection—and thus able to modify its software and make itself more intelligent. In short order, such a computer would be able to design its own hardware.
As Kurzweil described it, this would begin a beautiful new era.
— Wall Street Journal (@WSJ) February 11, 2015
Designed to compete in the DARPA Robotics Challenge, this “female” robot could be the precursor to robo-astronauts that will help colonize Mars.
What if NASA’s Robonaut grew legs and indulged in steroids? The result might be close to what NASA has unveiled: Valkyrie is a humanoid machine billed as a “superhero robot.” Developed at the Johnson Space Center, Valkyrie is a 6.2-foot, 275-pound hulk designed to compete in the DARPA Robotics Challenge (DRC). It will go toe to toe with the Terminator-like Atlas robot from Boston Dynamics in what’s shaping up to be an amazing modern-day duel. In an interesting twist, Valkyrie seems to be a girl.
“I’m not saying let’s live forever,” says Zoltan Istvan, transhumanist author, philosopher, and political candidate. “I think what we want is the choice to be able to live indefinitely. That might be 10,000 years; that might only be 170 years.”
“I’d say the number one goal of transhumanism is trying to conquer death.”
Istvan devoted his life to transhumanism after nearly stepping on an old landmine while reporting for National Geographic channel in Vietnam’s demilitarized zone.
“I’d say the number one goal of transhumanism is trying to conquer death,” says Istvan.
Reason TV‘s Zach Weissmueller interviewed Istvan about real-world life-extension technology ranging from robotic hearts to cryogenic stasis, Istvan’s plan to run for president under the banner of the Transhumanist party, the overlap between the LGBT movement and transhumanism, and the role that governments play in both aiding and impeding transhumanist goals.
Approximately 10 minutes. Produced by Zach Weissmueller. Camera by Justin Monticello and Paul Detrick. Music by Anix Gleo and nthnl.
David W. Buchanan is a researcher at IBM, where he is a member of the team that made the Watson “Jeopardy!” system.
David W. Buchanan writes: We have seen astonishing progress in artificial intelligence, and technology companies are pouring money into AI research. In 2011, the IBM system Watson competed on “Jeopardy!,” beating the best human players. Siri and Cortana have taken charge of our smartphones. As I write this, a vacuum called Roomba is cleaning my house on its own, using what the box calls “robot intelligence.” It is easy to feel like the world is on the verge of being taken over by computers, and the news media have indulged such fears with frequent coverage of the supposed dangers of AI.
But as a researcher who works on modern, industrial AI, let me offer a personal perspective to explain why I’m not afraid.
Science fiction is partly responsible for these fears. A common trope works as follows: Step 1: Humans create AI to perform some unpleasant or difficult task. Step 2: The AI becomes conscious. Step 3: The AI decides to kill us all. As science fiction, such stories can be great fun. As science fact, the narrative is suspect, especially around Step 2, which assumes that by synthesizing intelligence, we will somehow automatically, or accidentally, create consciousness. I call this the consciousness fallacy. It seems plausible at first, but the evidence doesn’t support it. And if it is false, it means we should look at AI very differently.
Intelligence is the ability to analyze the world and reason about it in a way that enables more effective action. Our scientific understanding of intelligence is relatively advanced. There is still an enormous amount of work to do before we can create comprehensive, human-caliber intelligence. But our understanding is viable in the sense that there are real businesses that make money by creating AI.
Consciousness is a much different story, perhaps because there is less money in it. Consciousness is also a harder problem: While most of us would agree that we know consciousness when we see it, scientists can’t really agree on a rigorous definition, let alone a research program that would uncover its basic mechanisms.
China’s mysterious “Dark Sword” combat drone could become the world’s first supersonic unmanned aerial vehicle, reports the website of the country’s national broadcaster CCTV.
The Dark Sword — known in Chinese as “Anjian” — made quite a stir in 2006 when a conceptual model of the unusually shaped triangular aircraft made its debut at the Zhuhai Airshow in southern China’s Guangdong province.
The model was subsequently exhibited at the Paris Air Show but has since disappeared from airshows, with no official word on the development of the UAV. Some claim the project has already been scrapped due to insufficient funding or other reasons, while others believe development is now being kept secret as the drone undergoes further research and testing.
Chinese aviation expert Fu Qianshao told CCTV that while he does not know the status of the Dark Sword project, the drone could become the world’s first supersonic UAV if it proves a success. He said he would not be surprised if the project is still ongoing in secret as a lack of transparency is nothing new for the aviation industry and is an approach commonly taken by the Americans.
Fu believes even conceptual models of aircraft can reveal something about a country’s technology and the quality of its research and development, adding that analyzing models at Zhuhai can allow experts to gauge the pulse of China’s aviation industry and pick up data that may be more valuable than what the developers are leaking out to the public.
Sharon Weinberger writes: For almost two years, an unmanned space plane bearing a remarkable resemblance to NASA’s space shuttle has circled the Earth, performing a top-secret mission. It’s called the X-37B Orbital Test Vehicle — but that’s pretty much all we know for certain.
“Despite the secrecy surrounding its mission, the space plane’s travels are closely watched. The Air Force announces its launches, and satellite watchers monitor its flight and orbit. What is not revealed is what’s inside the cargo bay and what it’s being used for.”
Officially, the Pentagon acknowledges only that the space plane is used to conduct experiments on new technologies. Theories about its mission have ranged from an orbiting space bomber to an anti-satellite weapon.
The truth, however, is likely much more obvious: According to intelligence experts and satellite watchers who have closely monitored its orbit, the X-37B is being used to carry secret satellites and classified sensors into space — a little-known role once played by NASA’s now-retired space shuttles.
For a decade, from the 1980s into the early 1990s, NASA’s space shuttles flew classified military missions, ferrying military payloads into space.
“Now, with the X-37B, the Pentagon no longer has to rely on NASA — or humans.”
But the shuttles’ military role rested on an uneasy alliance between NASA and the Pentagon. Even before the 1986 Challenger disaster, which killed all seven crewmembers, the Pentagon had grown frustrated with NASA’s delays.
Now, with the X-37B, the Pentagon no longer has to rely on NASA — or humans.
The X-37B resembles a shuttle, or at least a shrunken-down version of one. Like the space shuttles, the X-37B is boosted into orbit by an external rocket but lands like an aircraft on a conventional runway. Unlike them, it is just shy of 10 feet tall and slightly less than 30 feet long.
Its cargo bay, often compared to the size of a pickup truck bed, is just big enough to carry a small satellite. Once in orbit, the X-37B deploys a foldable solar array, which is believed to power the sensors in its cargo bay.
“It’s just an updated version of the space shuttle type of activities in space,” insisted one senior Air Force official in 2010, the year of the first launch, when rampant speculation about the secret project prompted some to question whether it was possibly a space bomber.
Walter Isaacson writes: We live in the age of computers, but few of us know who invented them. Because most of the pioneers were part of collaborative teams working in wartime secrecy, they aren’t as famous as an Edison, Bell or Morse. But one genius, the English mathematician Alan Turing, stands out as a heroic-tragic figure, and he’s about to get his due in a new movie, “The Imitation Game,” starring Benedict Cumberbatch, which won the top award at the Toronto Film Festival earlier this month and will open in theaters in November.
“He also wrestled with the issue of free will: Are our personal preferences and impulses all predetermined and programmed, like those of a machine?”
The title of the movie refers to a test that Turing thought would someday show that machines could think in ways indistinguishable from humans. His belief in the potential of artificial intelligence stands in contrast to the school of thought that argues that the combined talents of humans and computers, working together as partners, will always be more creative than computers working alone.
Despite occasional breathless headlines, the quest for pure artificial intelligence has so far proven disappointing. But the alternative approach of connecting humans and machines more intimately continues to produce astonishing innovations. As the movie about him shows, Alan Turing’s own deeply human personal life serves as a powerful counter to the idea that there is no fundamental distinction between the human mind and artificial intelligence.
[Check out Walter Isaacson’s book “The Innovators: How a Group of Hackers, Geniuses, and Geeks Created the Digital Revolution” at Amazon.com]
Turing, who had the cold upbringing of a child born on the fraying fringe of the British gentry, displayed a trait that is common among innovators. In the words of his biographer Andrew Hodges, he was “slow to learn that indistinct line that separated initiative from disobedience.”
He taught himself early on to keep secrets. At boarding school, he realized he was homosexual, and he became infatuated with a classmate who died of tuberculosis before they graduated. During World War II, he became a leader of the teams at Bletchley Park, England, that built machines to break the German military codes.
Feeling the need to hide both his sexuality and his code-breaking work, Turing often found himself playing an imitation game by pretending to be things he wasn’t. He also wrestled with the issue of free will: Are our personal preferences and impulses all predetermined and programmed, like those of a machine?
These questions came together in a paper, “Computing Machinery and Intelligence,” that Turing published in 1950. With a schoolboy’s sense of fun, he invented a game—one that is still being played and debated—to give meaning to the question, “Can machines think?” He proposed a purely empirical definition of artificial intelligence: If the output of a machine is indistinguishable from that of a human brain, then we have no meaningful reason to insist that the machine isn’t “thinking.”
His test, now usually called the Turing Test, was a simple imitation game. An interrogator sends written questions to a human and a machine in another room and tries to determine which is which. If the interrogator cannot reliably tell them apart, he argued, then it makes no sense to deny that the machine is “thinking.”