Marcus Woo writes: On February 28, 1998, the eminent medical journal The Lancet published an observational study of 12 children: Ileal-lymphoid-nodular hyperplasia, non-specific colitis, and pervasive developmental disorder in children. It might not sound sexy, but once the media read beyond the title, into the study’s descriptions of how those nasty-sounding symptoms appeared just after the kids got vaccinated, the impact was clear: The measles-mumps-rubella vaccine can cause autism.
“All of the incentives in science are aligned against publishing negative results or failures to replicate.”
This was the famous study by Andrew Wakefield, the one that many credit with launching the current hyper-virulent form of anti-vaccination sentiment. Wakefield is maybe the most prominent modern scientist who got it wrong—majorly wrong, dangerously wrong, barred-from-medical-practice wrong.
“People are forced to claim significance, or something new, extravagant, unusual, and positive.”
But scientists are wrong all the time, in far more innocuous ways. And that’s OK. In fact, it’s great.
When a researcher gets proved wrong, that means the scientific method is working. Scientists make progress by re-doing each other’s experiments—replicating them to see if they can get the same result. More often than not, they can’t. “Failure to reproduce is a good thing,” says Ivan Oransky, co-founder of Retraction Watch. “It happens a lot more than we know about.” That could be because the research was outright fraudulent, like Wakefield’s. But there are plenty of other ways to get a bum result—as the Public Library of Science’s new collection of negative results, launched this week, will highlight in excruciating detail.
You might have a particularly loosey-goosey postdoc doing your pipetting. You might have picked a weird patient population that shows a one-time spike in drug efficacy. Or you might have just gotten a weird statistical fluke. No matter how an experiment got screwed up, “negative results can be extremely exciting and useful—sometimes even more useful than positive results,” says John Ioannidis, a biologist at Stanford who published a now-famous paper suggesting that most scientific studies are wrong.
The problem with science isn’t that scientists can be wrong: It’s that when they’re proven wrong, it’s way too hard for people to find out. Read the rest of this entry »
This isn’t a stained-glass sculpture or piece of delicate jewelry – it’s a real live spider. These spiders, called mirror or sequined spiders, belong to several species of the genus Thwaitesia, whose members bear reflective silvery patches on their abdomens.
The scales look like solid pieces of mirror glued to the spider’s back, but they can actually change size depending on how threatened the spider feels. The scales are composed of reflective guanine, which these and other spiders also use to give themselves color.
Not much information is available about these wonderful spiders, but the dazzling specimens in these photos were photographed primarily in Australia and Singapore…(read more)
Surgeon Sergio Canavero will be embarking on a project to perform the world’s first human head transplant
Sarah Zhang reports: An Italian neuroscientist who has been advocating for head transplants now wants to make one actually happen. He’ll be announcing a project at a surgical conference later this year. Here’s how the proposed human head transplant will work—supposedly.
“Canavero’s plan sounds pretty absurd, but the science of head transplants—at least in animals—is not as sparse as you might first think. The first head transplant in monkeys was done back in the 1970s, and the monkey lived for nine days…”
In 2013, Sergio Canavero of the Turin Advanced Neuromodulation Group proposed that human head transplants could soon be possible. Since then, he’s heard from several transplant volunteers, and New Scientist reports that Canavero will make a call to arms to find other interested surgeons at the American Academy of Neurological and Orthopaedic Surgeons annual meeting this June.
Canavero’s plan sounds pretty absurd, but the science of head transplants—at least in animals—is not as sparse as you might first think. The first head transplant in monkeys was done back in the 1970s, and the monkey lived for nine days. Its immune system eventually rejected the transplanted head, which is a major problem with such large transplants.
An even bigger problem, though, is how to fuse the spinal cords so brain and body are actually connected. (The monkey with a transplanted body—or was it a transplanted head?—couldn’t move.) Canavero recently published more details of what he calls the GEMINI spinal cord fusion protocol. Here’s how New Scientist summarizes it:
The tissue around the neck is dissected and the major blood vessels are linked using tiny tubes, before the spinal cords of each person are cut. Cleanly severing the cords is key, says Canavero.
The recipient’s head is then moved onto the donor body and the two ends of the spinal cord – which resemble two densely packed bundles of spaghetti – are fused together. To achieve this, Canavero intends to flush the area with a chemical called polyethylene glycol, and follow up with several hours of injections of the same stuff. Just like hot water makes dry spaghetti stick together, polyethylene glycol encourages the fat in cell membranes to mesh…(read more)
Of course, this protocol is mostly theoretical. (A Chinese neuroscientist will be attempting it for the first time in mice and monkeys.) Even Canavero’s paper states that only 10 to 15 percent of the neurons will likely fuse. Yet he tells New Scientist that people could walk again a year after the procedure.
Attention to detail is something all model builders strive for when working on their new builds, which is obviously apparent in each of Headquake’s RC vehicles. The hobbyist recreates every minute detail found on real vehicles and transfers them over to his hand-built RC models…
Headquake’s builds usually start using 3mm PVC Komacel board that he uses for the exterior body, which he then contorts and sands into the desired shape. Read the rest of this entry »
Joseph Flaherty reports: As portions of the US are battered by snowstorms and shrouded beneath gray skies, a European startup is developing a light fixture that mimics the sun.
Each CoeLux fixture models the sunlight of a specific locale, be it the cool color and strong shadows of equatorial countries, the even glow of Mediterranean sunlight, or the slightly dimmer and warmer, but more striking patterns found along the Arctic Circle.
CoeLux fixtures use traditional LEDs, calibrated to the same wavelengths as the sun. However, accurately recreating sunlight also requires mimicking subtle variations caused by the atmosphere, which varies in thickness and composition depending upon where you are on Earth. CoeLux uses a millimeters-thick layer of plastic, peppered with nanoparticles, that does essentially the same thing in your living room. CoeLux’s inventor, Professor Paolo Di Trapani, hasn’t made any disclosures about how the nanotechnology works in practice, but an impressive list of peer-reviewed publications, industry awards, and testimonials from customers provides comfort that these devices actually work as advertised.
Despite the dynamic nature of the light, the fixtures feature no moving parts. Different qualities of light are created by manipulating the size and placement of the LED “hot spot”—the portion of the fixture meant to represent the sun—within the fixture’s two-foot-wide, five-foot-long frame. The tropical unit has the largest hot spot, the Nordic unit the smallest. The thickness of the plastic sheet varies as well, thicker for the Nordic light than for the equatorial light, to mirror the atmosphere. The light doesn’t emit any ultraviolet rays, so it won’t give you a tan or ease your seasonal affective disorder, but it will make the darkest basement, warehouse, or subterranean dwelling feel like a solarium.
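Di Trapani hasn’t said exactly what the nanoparticles do, but the effect he’s presumably after is the one that makes the real sky blue: very small particles scatter short wavelengths far more strongly than long ones (Rayleigh scattering). A quick back-of-envelope calculation, assuming the panel’s particles behave as ideal Rayleigh scatterers, shows why the diffuse background would skew sky-blue while the direct LED “sun” stays warm:

```python
# Rayleigh scattering intensity scales as 1/wavelength^4, so small
# particles scatter blue light much more strongly than red light.
def rayleigh_ratio(lambda_blue_nm, lambda_red_nm):
    """Relative scattering strength of blue vs. red light."""
    return (lambda_red_nm / lambda_blue_nm) ** 4

# Blue (~450 nm) vs. red (~650 nm): blue scatters roughly 4x more
# strongly, tinting the panel's diffuse glow the color of the sky.
print(round(rayleigh_ratio(450, 650), 2))
```

The same ratio explains both the blue of a midday sky and the red of a sunset, when direct light traverses far more scattering material.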
Shining a New Light on an Old Problem
For thousands of years, man has tried to bring sunlight into dark spaces. The Egyptians used complex arrays of mirrors to bring natural light deep within the pyramids, but that approach was labor intensive and difficult to achieve without a huge slave-labor force.
Northern European palaces from the 18th century feature bright Trompe l’oeil frescos of sunny skies, designed to bring cheer during long winters. Las Vegas casinos use similar techniques, augmented with LEDs and other technologies, to make you think you’re outdoors, not frittering away your money in the soulless confines of a casino. Read the rest of this entry »
Zip your zipper
It can go around curves and move forward and backward. Get ready for tiny bots to zip around your pants, jackets, and dresses.
Slay Motörhead covers
Just listen to that drumming. Lemmy may have found the band’s seventh drummer.
Swim like an octopus
These robot octopuses use webbed arms to quickly whip through the water.
Originally posted on TIME:
Apple may be experimenting in the virtual reality space. The company has been granted a patent for a head-mounted virtual reality device that would use the iPhone screen as the display. Apple first applied for the patent back in 2008, meaning VR has been on the company’s mind for a while.
The functionality of the product in the Apple patent seems to have the most in common with the Samsung Gear VR, a headset Samsung developed in conjunction with Oculus that uses the Samsung Galaxy Note 4 phablet as a display. These devices are a bit more complex than Google’s current solution, which slots a smartphone into a headset made out of cardboard.
Apple’s patent also calls for a separate remote control that would be able to manipulate the headset in some way.
Virtual reality isn’t the only new mode of interaction Apple is exploring. The tech giant had a…
Our computers have become too easy to use.
Joanna Stern writes: Right out of the box, they’re ready to go. No installing operating systems, no typing into a command-line prompt like in the old days. We don’t even have to hit save anymore.
Most weeks, I’m the first to celebrate this and to say I miss nothing about the way it used to be. But not this week.
This week I’ve been using the $35 Raspberry Pi 2, a bare-bones Linux computer no bigger than a juice box. And I’ve rediscovered something I had forgotten: the thrill of tinkering with a machine and its software. Of course, that thrill is accompanied, from time to time, with the urge to take a baseball bat to an inanimate object.
The Raspberry Pi is the antithesis of our polished, hermetically sealed Apple and Windows PCs. Open the cardboard box and all you’ll find inside is a green board covered with chips, circuits and ports. There’s no keyboard, monitor, or power cord. There isn’t even an operating system. And that’s all by design.
It was made by a U.K.-based nonprofit called the Raspberry Pi Foundation to encourage today’s children, around age 10 and up, to learn more about how computers really work. Children today “have wonderful technology in their lives, but they are deprived of learning how it works,” Eben Upton, co-founder of the foundation, says. So while every other electronics maker has been slaving away on ease-of-use features, Mr. Upton decided to deliberately create a computer that dials back the user friendliness.
After using the Pi 2, there’s no doubt in my mind that it’s a great way for children and teenagers to learn about computer hardware and software. It’s also great for us curious adults who are interested in knowing more about the worlds of open-source and software coding, and don’t mind typing arcane commands into a DOS-looking interface to get there.
But don’t let that scare you. I challenged myself to see what I could do with the little thing and it put my problem-solving skills and patience to the test. Even if you’re someone like me, with little to no computer coding knowledge, you’ll be amazed by the number of things you can do with a $35 computer.
A $35 Linux Computer
My journey started with gathering the right pieces to make the Pi my main computer for the past few days.
Not only doesn’t the Pi come with an operating system, there isn’t even a hard drive inside. There is, however, a MicroSD card slot. So I did what the very helpful Raspberry Pi websites and community of experts tell beginners to do: I bought a $10 card preloaded with Raspbian, a basic Linux OS optimized for the Pi. (You can download the free software and put it on a card you already own, too.) Later this year, a new version of Windows will be released for the Pi.
OK, so it costs a little more than $35. I also bought a $5 plastic box to house the board, a $13 USB Wi-Fi dongle and an $8 Pi-compatible MicroUSB power cord from Adafruit.com, a website that sells the Pi and a selection of hardware add-ons for it, and provides tutorials.
With those things, plus a USB mouse and keyboard and an HDMI monitor I already had (TVs work fine, too), I was up and running. To get started, I did have to type some text into the command line and go through some installation processes, but believe it or not, it took less time to set up the computer than to bake a real raspberry pie. (Even with a pre-made crust!)
Raspbian, which launched a Windows-style graphic interface once I installed it, provides a basic desktop and menu with access to programs and settings. Using the preloaded Web browser, I’ve been able to do most of what I do on my laptop—check email, Twitter, Facebook. I also downloaded the free LibreOffice suite from the preloaded Pi Store. Read the rest of this entry »
Writing computer programs could become as easy as searching the Internet. A Rice University-led team of software experts has launched an $11 million effort to create a sophisticated tool called PLINY that will both “autocomplete” and “autocorrect” code for programmers, much like the software to complete search queries and correct spelling on today’s Web browsers and smartphones.
“The engine will formulate answers using Bayesian statistics. Much like today’s spell-correction algorithms, it will deliver the most probable solution first, but programmers will be able to cycle through possible solutions if the first answer is incorrect.”
– Chris Jermaine, associate professor of computer science at Rice
“Imagine the power of having all the code that has ever been written in the past available to programmers at their fingertips as they write new code or fix old code,” said Vivek Sarkar, Rice’s E.D. Butcher Chair in Engineering, chair of the Department of Computer Science and the principal investigator (PI) on the PLINY project. “You can think of this as autocomplete for code, but in a far more sophisticated way.”
Sarkar said the four-year effort is funded by the Defense Advanced Research Projects Agency (DARPA). PLINY, which draws its name from the Roman naturalist who authored the first encyclopedia, will involve more than two dozen computer scientists from Rice, the University of Texas-Austin, the University of Wisconsin-Madison and the company GrammaTech.
“Imagine the power of having all the code that has ever been written in the past available to programmers at their fingertips as they write new code or fix old code. You can think of this as autocomplete for code, but in a far more sophisticated way.”
– Vivek Sarkar, Rice’s E.D. Butcher Chair in Engineering
PLINY is part of DARPA’s Mining and Understanding Software Enclaves (MUSE) program, an initiative that seeks to gather hundreds of billions of lines of publicly available open-source computer code and to mine that code to create a searchable database of properties, behaviors and vulnerabilities.
Rice team members say the effort will represent a significant advance in the way software is created, verified and debugged.
“Software today is far more complex than it was 20 years ago, yet it is still largely created by hand, one line of code at a time. We envision a system where the programmer writes a few lines of code, hits a button and the rest of the code appears. And not only that, the rest of the code should work seamlessly with the code that’s already been written.”
– Swarat Chaudhuri, assistant professor of computer science at Rice
He said PLINY will need to be sophisticated enough to recognize and match similar patterns regardless of differences in programming languages and code specifications. Read the rest of this entry »
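The article doesn’t describe PLINY’s internals beyond “Bayesian statistics,” but the spell-check analogy can be sketched in a few lines: mine a corpus for lines of code, score candidate completions of a prefix by how often they occur, and return them most-probable-first so the programmer can cycle through alternatives. Everything below, toy corpus included, is an illustration of that general idea, not PLINY’s actual method:

```python
from collections import Counter, defaultdict

# Toy corpus of previously "mined" code lines (stand-ins for the
# billions of lines the MUSE program would index).
corpus = [
    "for i in range(n):",
    "for i in range(len(items)):",
    "for i in range(n):",
    "for key in mapping:",
]

# Count how often each full line follows every possible prefix.
completions = defaultdict(Counter)
for line in corpus:
    for cut in range(1, len(line)):
        completions[line[:cut]][line] += 1

def suggest(prefix, k=3):
    """Return up to k candidate completions with their estimated
    probabilities, most probable first, so the programmer can cycle
    through alternatives like spell-check suggestions."""
    counts = completions[prefix]
    total = sum(counts.values())
    return [(line, count / total) for line, count in counts.most_common(k)]

print(suggest("for i in range("))
```

A real system would also have to match semantically similar code across languages and styles, as Chaudhuri notes; frequency counting over exact prefixes is only the crudest possible starting point.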
Scientists are hunting one of the biggest prizes in physics: tiny particles called wimps that could unlock some of the universe’s oldest secrets
A wimp—a weakly interacting massive particle—is thought to be the stuff of dark matter, an invisible substance that makes up about a quarter of the universe but has never been seen by humans.
Gravity is the force that holds things together, and the vast majority of it emanates from dark matter. Ever since the big bang, this mystery material has been the universe’s prime architect, giving it shape and structure. Without dark matter, there would be no galaxies, no stars, no planets. Solving its mystery is crucial to understanding what the universe is made of.
“If we don’t assume that 85% of the matter in the universe is this unknown material, the laws of relativity and gravity would have to be modified. That would be significant,” says physicist Giuliana Fiorillo, a member of the 150-strong team searching for the particles at the Gran Sasso National Laboratory, 80 miles east of Rome.
The quest for dark matter has intensified since the discovery of the Higgs boson particle two years ago, which helped to narrow the field in which wimps might be hiding. Today, more than 20 different teams of researchers are hunting for the elusive stuff, using some of the most elaborate and delicate experiments ever devised.
Dark-matter detectors have been installed on the sea bed nearly 8,200 feet beneath the surface. Others operate deep inside mines. There is one on the International Space Station. China’s new dark-matter experiment sits 1.5 miles beneath a marble mountain. When it restarts later this year, the Large Hadron Collider will look for wimps, too, by smashing together subatomic particles.
Scientists estimate that visible matter makes up just 4% of the universe, while dark matter makes up 23%. The remaining 73% is an even bigger puzzle, a repulsive force known as “dark energy.”
Dark matter neither emits nor absorbs light. We know it is out there, because scientists can measure the immense gravitational force it exerts on stars, galaxies and other cosmic bodies. The best candidate for what dark matter consists of is the wimp: an ethereal being that barely interacts with normal matter. Every second, billions of wimps flow through the Earth without hitting anything. Read the rest of this entry »
Curiosity’s handlers sent no commands to the rover for most of April, because Mars was on the opposite side of the sun from Earth at the time. But this planetary alignment, known as a Mars solar conjunction, is now over, and the mission team is planning to drill into a Red Planet rock soon and then send Curiosity off on an epic, miles-long trek to the base of a huge and mysterious mountain.
“A couple of weeks to move to the site and drill, and then the experiments themselves can take also a couple of weeks — that’s about the time scale we’re looking at,” said Curiosity deputy project scientist Ashwin Vasavada, of NASA’s Jet Propulsion Laboratory in Pasadena, Calif. “And then we’d hopefully get going.”
He stressed, however, that this timeframe could shift depending on how the drilling operation goes, and what Curiosity discovers.
Curiosity healthy after ‘spring break’
The Curiosity rover wasn’t idle during conjunction. It continued monitoring Martian weather and radiation and performed some relatively simple science work using commands sent up in advance, Vasavada said.
“That all went fine — it kind of executed flawlessly a long set of preplanned activities,” he told SPACE.com. “We had never planned 30 days at once [before], so that was a relief.”
But things have picked up since mission controllers got back in touch with Curiosity late last week. They’ve already uploaded a minor software update to the rover, which emerged from conjunction in fine health, Vasavada said.
Curiosity continues to operate on its backup, or B-side, computer, which it switched to after a glitch knocked out its primary computer (or A-side) in late February.
The rover team has still not fully figured out what happened to the A-side, but engineers have made significant troubleshooting progress. For example, Curiosity would have been OK if an issue during conjunction had forced the rover to swap back over to the A-side computer, Vasavada said.
Drilling another hole
The rover team has already checked off the mission’s primary goal of determining whether Mars could ever have supported microbial life, announcing in March that a spot dubbed Yellowknife Bay was indeed habitable billions of years ago. Scientists reached this conclusion after studying Curiosity’s analyses of material pulled from a 2.5-inch-deep (6.4 centimeters) hole the rover drilled into a Red Planet outcrop.
Now that conjunction’s over, the mission team wants to drill another hole in a nearby rock, to confirm and perhaps extend the exciting results gleaned from the first drilling activity.
“Probably in the next week or two, we will slightly move the rover to a new location, which the science team is actively choosing right now,” Vasavada said. “Primarily, it will be to duplicate the results from the first hole, because they were so exciting and, in some cases, unexpected that the people who run the experiments just want to make sure it’s really correct before writing all the papers up.” Read the rest of this entry »
Based on the testing it actually does work well…(read more)
Are We Smart Enough to Control Artificial Intelligence?
A true AI might ruin the world—but that assumes it’s possible at all
Paul Ford writes: Years ago I had coffee with a friend who ran a startup. He had just turned 40. His father was ill, his back was sore, and he found himself overwhelmed by life. “Don’t laugh at me,” he said, “but I was counting on the singularity.”
“The question ‘Can a machine think?’ has shadowed computer science from its beginnings.”
My friend worked in technology; he’d seen the changes that faster microprocessors and networks had wrought. It wasn’t that much of a step for him to believe that before he was beset by middle age, the intelligence of machines would exceed that of humans—a moment that futurists call the singularity. A benevolent superintelligence might analyze the human genetic code at great speed and unlock the secret to eternal youth. At the very least, it might know how to fix your back.
But what if it wasn’t so benevolent? Nick Bostrom, a philosopher who directs the Future of Humanity Institute at the University of Oxford, describes the following scenario in his book Superintelligence, which has prompted a great deal of debate about the future of artificial intelligence. Imagine a machine that we might call a “paper-clip maximizer”—that is, a machine programmed to make as many paper clips as possible. Now imagine that this machine somehow became incredibly intelligent. Given its goals, it might then decide to create new, more efficient paper-clip-manufacturing machines—until, King Midas style, it had converted essentially everything to paper clips.
No worries, you might say: you could just program it to make exactly a million paper clips and halt. But what if it makes the paper clips and then decides to check its work? Has it counted correctly? It needs to become smarter to be sure. The superintelligent machine manufactures some as-yet-uninvented raw-computing material (call it “computronium”) and uses that to check each doubt. But each new doubt yields further digital doubts, and so on, until the entire earth is converted to computronium. Except for the million paper clips.
Bostrom does not believe that the paper-clip maximizer will come to be, exactly; it’s a thought experiment, one designed to show how even careful system design can fail to restrain extreme machine intelligence. But he does believe that superintelligence could emerge, and while it could be great, he thinks it could also decide it doesn’t need humans around. Or do any number of other things that destroy the world. The title of chapter 8 is: “Is the default outcome doom?”
“Alan Turing proposed in 1950 that a machine could be taught like a child; John McCarthy, inventor of the programming language LISP, coined the term ‘artificial intelligence’ in 1955.”
If this sounds absurd to you, you’re not alone. Critics such as the robotics pioneer Rodney Brooks say that people who fear a runaway AI misunderstand what computers are doing when we say they’re thinking or getting smart. From this perspective, the putative superintelligence Bostrom describes is far in the future and perhaps impossible.
Yet a lot of smart, thoughtful people agree with Bostrom and are worried now. Why?
The question “Can a machine think?” has shadowed computer science from its beginnings. Alan Turing proposed in 1950 that a machine could be taught like a child; John McCarthy, inventor of the programming language LISP, coined the term “artificial intelligence” in 1955. As AI researchers in the 1960s and 1970s began to use computers to recognize images, translate between languages, and understand instructions in normal language and not just code, the idea that computers would eventually develop the ability to speak and think—and thus to do evil—bubbled into mainstream culture. Even beyond the oft-referenced HAL from 2001: A Space Odyssey, the 1970 movie Colossus: The Forbin Project featured a large blinking mainframe computer that brings the world to the brink of nuclear destruction; a similar theme was explored 13 years later in WarGames. The androids of 1973’s Westworld went crazy and started killing.
“Extreme AI predictions are ‘comparable to seeing more efficient internal combustion engines… and jumping to the conclusion that the warp drives are just around the corner,’ Rodney Brooks writes.”
When AI research fell far short of its lofty goals, funding dried up to a trickle, beginning long “AI winters.” Even so, the torch of the intelligent machine was carried forth in the 1980s and ’90s by sci-fi authors like Vernor Vinge, who popularized the concept of the singularity; researchers like the roboticist Hans Moravec, an expert in computer vision; and the engineer/entrepreneur Ray Kurzweil, author of the 1999 book The Age of Spiritual Machines. Whereas Turing had posited a humanlike intelligence, Vinge, Moravec, and Kurzweil were thinking bigger: when a computer became capable of independently devising ways to achieve goals, it would very likely be capable of introspection—and thus able to modify its software and make itself more intelligent. In short order, such a computer would be able to design its own hardware.
As Kurzweil described it, this would begin a beautiful new era. Read the rest of this entry »
Originally posted on TIME:
Scientists have finally answered one of humanity’s most pressing questions: how many licks does it take to get to the center of a Tootsie Pop?
Turns out, it takes an estimated 1,000 licks to get to the center of a lollipop. In a study recently published in the Journal of Fluid Mechanics, researchers from New York University and Florida State University developed a theory for how flowing liquid dissolves and shrinks material, which they then used to determine how long it would take to dissolve a lollipop. For Tootsie Rolls specifically, researchers told the New York Post, it’ll take about 2,500 licks. Go home, rest of the science world, there are no more questions left.
Though the lollipop finding is clearly the most pressing, the theory can also be used for important research in geology and pharmaceutical science, according to a report by Science Daily.
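The headline figure can be restated as a rule of thumb: flowing liquid erodes roughly one centimeter of candy per 1,000 licks. A toy calculation, assuming each lick strips a constant layer of candy (the 10-micron layer below is back-solved from that rule of thumb, not a measured value), reproduces the estimate:

```python
# Toy model: if each lick removes a roughly constant layer of candy,
# the number of licks is just radius / layer-removed-per-lick.
def licks_to_center(radius_cm, layer_per_lick_cm=1e-3):
    """Licks needed to erode through radius_cm of candy, assuming a
    fixed layer (default 1e-3 cm, i.e. 10 microns) removed per lick."""
    return round(radius_cm / layer_per_lick_cm)

# A candy shell about 1 cm thick:
print(licks_to_center(1.0))
```

The real theory is more subtle, since the shrinking shape changes the flow around it, but the linear estimate captures the order of magnitude the researchers reported.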
Originally posted on TIME:
Apple’s closing price Tuesday gave it a market value of $710.7 billion, making it the first-ever U.S. company to close at over $700 billion. That’s nearly double the value of the next-largest company on the list, Exxon Mobil.
The milestone is a significant one for the company, which previously breached the $700 billion mark in intraday trading but hasn’t closed above that point until now. Apple shares ended the day trading at $122.02.
Investors have shown nothing but love for Apple following its stellar first quarter earnings report, which revealed the company made $18 billion on $74.6 billion in revenue. Those results mark the most profitable three months ever recorded by any company.
Market value, or market capitalization, is determined by multiplying a company’s share price against the number of shares it has outstanding.
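As a sanity check on the story’s numbers, the formula works out as follows; the share count is approximate, back-solved from the reported figures rather than taken from an Apple filing:

```python
# Market cap = share price x shares outstanding.
price = 122.02                  # Tuesday's closing price, from the story
shares_outstanding = 5.8245e9   # approximate, back-solved from $710.7B

market_cap = price * shares_outstanding
print(f"${market_cap / 1e9:.1f} billion")   # prints: $710.7 billion
```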
Here’s a quick look at how Apple has performed since 1998, the first full year after late…