China is Buying up American Companies Fast, and it’s Freaking People Out
Posted: February 22, 2016 Filed under: Asia, China, Economics | Tags: Beijing, Chinese New Year, Committee on Foreign Investment in the United States, Hainan Airlines, IBM, Information technology, Ingram Micro, Irvine, Korean Peninsula, North Korea, Park Geun-hye, President of the People's Republic of China, Tianjin, Xi Jinping
Given the recent volume of deals, it would appear that the Chinese government is supportive of the foreign-buying spree.
Portia Crowe reports: Here’s a story you’ll be hearing about a lot this year.
Chinese companies have been buying up foreign businesses, including American ones, at a record rate, and it’s freaking lawmakers out.
There is General Electric’s sale of its appliance business to Qingdao-based Haier, Zoomlion’s bid for the heavy-lifting-equipment maker Terex Corp., and ChemChina’s record-breaking deal for the Swiss seeds and pesticides group Syngenta, valued at $48 billion.
Most recently, a unit of the Chinese conglomerate HNA Group said it would buy the technology distributor Ingram Micro for $6 billion.
And the most contentious deal so far might be the Chinese-led investor group Chongqing Casin Enterprise’s bid for the Chicago Stock Exchange.
A deal spree
To date, there have been 102 Chinese outbound mergers-and-acquisitions deals announced this year, amounting to $81.6 billion in value, according to Dealogic. That’s up from 72 deals worth $11 billion in the same period last year.
And they’re not expected to let up anytime soon. Slow economic growth in China and cheap prices abroad due to the stock market’s recent sell-off suggest the opposite.
[Read the full story here, at Business Insider]
“With the slowdown of the economy, Chinese corporates are increasingly looking to inorganic avenues to supplement their growth,” Vikas Seth, head of emerging markets in the investment-banking and capital-markets department at Credit Suisse, told Business Insider earlier this month.

President Obama and President Xi Jinping. Photo: Kim Kyung-Hoon/Reuters
China’s economic growth in 2015 was its slowest in 25 years.
The law firm O’Melveny & Myers recently surveyed its mainly China-based clients and found that the economic growth potential in the US was the main factor making it an attractive investment destination.
Nearly half of respondents agreed that the US was the most attractive market for investment, but 47% felt that US laws and regulations were a major barrier. They’d be right about that.
A major barrier
Forty-five members of Congress this week signed a letter to the Treasury Department’s Committee on Foreign Investment in the US, or CFIUS, urging it to conduct a “full and rigorous investigation” of the Chicago Stock Exchange acquisition.
“This proposed acquisition would be the first time a Chinese-owned, possibly state-influenced, firm maintained direct access into the $22 trillion US equity marketplace,” the letter reads. Read the rest of this entry »
Alex Gibney’s New Documentary Paints An Ambivalent Portrait Of Steve Jobs
Posted: September 6, 2015 Filed under: Entertainment, Mediasphere, Science & Technology | Tags: Academy Award for Best Documentary Feature, Alex Gibney, Apple II series, Apple Inc, Documentary film, Eddy Cue, IBM, iPhone, IPhone 4, IPod, iTunes, Steve Jobs, Steve Wozniak, Wheels of Zeus
Anthony Ha writes: Earlier this week, I joined a group of journalists to meet with director Alex Gibney and discuss his new film, Steve Jobs: The Man in the Machine — and the first thing he did was put away his iPhone.
It was no big deal, but the action took on a little extra humor and weight since the documentary is all about our relationship with Jobs and the products he created. It opens with footage of the mass outpouring of grief after Jobs’ death in 2011, and the rest of the movie asks: Why did people feel so much attachment to the CEO of an enormous tech company? And is Jobs really worthy of such admiration?
“The way the Jobs film ends, there’s no prescription there. To me, the best films are the ones that force you, not force you but encourage you to take something out of the theater and the questions roll around you in your head.”
Gibney isn’t trying to convince people that they should stop buying Apple products — after all, he’s still got that iPhone. Instead the aim is to raise questions about Jobs’ values and the influence those values had on the rest of Silicon Valley (where Gibney often sees a similar “rough-and-tumble libertarian vibe”).
“The film is not a slam. The film is a meditation on this guy’s life and what it meant to us. It’s not so simple.”
“The way the Jobs film ends, there’s no prescription there,” he said. “To me, the best films are the ones that force you, not force you but encourage you to take something out of the theater and the questions roll around you in your head.”
On the other hand, the film’s ability to address those questions may have been hampered by the fact that many people declined to be interviewed — there’s no Steve Wozniak, no one currently at Apple, and the closest you get to Jobs’ family is Chrisann Brennan, the mother of his first child Lisa. In fact, Gibney recounted how Apple employees walked out of a screening of the movie at South by Southwest — at the time, Apple’s Eddy Cue tweeted that it was “an inaccurate and mean-spirited view of my friend.” Read the rest of this entry »
[VIDEO] MIT: 7 Finger Robot
Posted: May 25, 2015 Filed under: Global, Mediasphere, Science & Technology | Tags: Cambridge, Graduate school, Human gastrointestinal tract, IBM, Internet, Jerome H. Lemelson, Lemelson Foundation, Massachusetts, Massachusetts Institute of Technology
Researchers at MIT have developed a robot that enhances the grasping motion of the human hand. Learn more…
Does Artificial Intelligence Pose a Threat?
Posted: May 16, 2015 Filed under: Robotics, Science & Technology, Think Tank | Tags: Artificial Intelligence, Ashlee Vance, Boston Dynamics, Elon Musk, Google, IBM, Larry Page, SpaceX, Stephen Hawking, Tesla Motors
A panel of experts discusses the prospect of machines capable of autonomous reasoning
Ted Greenwald writes: After decades as a sci-fi staple, artificial intelligence has leapt into the mainstream. Between Apple’s Siri and Amazon’s Alexa, IBM’s Watson and Google Brain, machines that understand the world and respond productively suddenly seem imminent.
The combination of immense Internet-connected networks and machine-learning algorithms has yielded dramatic advances in machines’ ability to understand spoken and visual communications, capabilities that fall under the heading “narrow” artificial intelligence. Can machines capable of autonomous reasoning—so-called general AI—be far behind? And at that point, what’s to keep them from improving themselves until they have no need for humanity?
The prospect has unleashed a wave of anxiety. “I think the development of full artificial intelligence could spell the end of the human race,” astrophysicist Stephen Hawking told the BBC. Tesla founder Elon Musk called AI “our biggest existential threat.” Former Microsoft Chief Executive Bill Gates has voiced his agreement.
How realistic are such concerns? And how urgent? We assembled a panel of experts from industry, research and policy-making to consider the dangers—if any—that lie ahead. Read the rest of this entry »
Our Fear of Artificial Intelligence
Posted: February 11, 2015 Filed under: Robotics, Science & Technology, Think Tank | Tags: Artificial Intelligence, Association for the Advancement of Artificial Intelligence, Bill Gates, Elon Musk, Eric Horvitz, Exponential growth, Human, IBM, Microsoft Research, Paul Ford, Stephen Hawking
Are We Smart Enough to Control Artificial Intelligence?
A true AI might ruin the world—but that assumes it’s possible at all
Paul Ford writes: Years ago I had coffee with a friend who ran a startup. He had just turned 40. His father was ill, his back was sore, and he found himself overwhelmed by life. “Don’t laugh at me,” he said, “but I was counting on the singularity.”
“The question ‘Can a machine think?’ has shadowed computer science from its beginnings.”
My friend worked in technology; he’d seen the changes that faster microprocessors and networks had wrought. It wasn’t that much of a step for him to believe that before he was beset by middle age, the intelligence of machines would exceed that of humans—a moment that futurists call the singularity. A benevolent superintelligence might analyze the human genetic code at great speed and unlock the secret to eternal youth. At the very least, it might know how to fix your back.
But what if it wasn’t so benevolent? Nick Bostrom, a philosopher who directs the Future of Humanity Institute at the University of Oxford, describes the following scenario in his book Superintelligence, which has prompted a great deal of debate about the future of artificial intelligence. Imagine a machine that we might call a “paper-clip maximizer”—that is, a machine programmed to make as many paper clips as possible. Now imagine that this machine somehow became incredibly intelligent. Given its goals, it might then decide to create new, more efficient paper-clip-manufacturing machines—until, King Midas style, it had converted essentially everything to paper clips.

Agility: rapid advances in technology, including machine vision, tactile sensors and autonomous navigation, make today’s robots, such as this model from DLR, increasingly useful
No worries, you might say: you could just program it to make exactly a million paper clips and halt. But what if it makes the paper clips and then decides to check its work? Has it counted correctly? It needs to become smarter to be sure. The superintelligent machine manufactures some as-yet-uninvented raw-computing material (call it “computronium”) and uses that to check each doubt. But each new doubt yields further digital doubts, and so on, until the entire earth is converted to computronium. Except for the million paper clips.
“Superintelligence: Paths, Dangers, Strategies”
BY NICK BOSTROM
OXFORD UNIVERSITY PRESS, 2014
Bostrom does not believe that the paper-clip maximizer will come to be, exactly; it’s a thought experiment, one designed to show how even careful system design can fail to restrain extreme machine intelligence. But he does believe that superintelligence could emerge, and while it could be great, he thinks it could also decide it doesn’t need humans around. Or do any number of other things that destroy the world. The title of chapter 8 is: “Is the default outcome doom?”
“Alan Turing proposed in 1950 that a machine could be taught like a child; John McCarthy, inventor of the programming language LISP, coined the term ‘artificial intelligence’ in 1955.”
If this sounds absurd to you, you’re not alone. Critics such as the robotics pioneer Rodney Brooks say that people who fear a runaway AI misunderstand what computers are doing when we say they’re thinking or getting smart. From this perspective, the putative superintelligence Bostrom describes is far in the future and perhaps impossible.
Yet a lot of smart, thoughtful people agree with Bostrom and are worried now. Why?
The question “Can a machine think?” has shadowed computer science from its beginnings. Alan Turing proposed in 1950 that a machine could be taught like a child; John McCarthy, inventor of the programming language LISP, coined the term “artificial intelligence” in 1955. As AI researchers in the 1960s and 1970s began to use computers to recognize images, translate between languages, and understand instructions in normal language and not just code, the idea that computers would eventually develop the ability to speak and think—and thus to do evil—bubbled into mainstream culture. Even beyond the oft-referenced HAL from 2001: A Space Odyssey, the 1970 movie Colossus: The Forbin Project featured a large blinking mainframe computer that brings the world to the brink of nuclear destruction; a similar theme was explored 13 years later in War Games. The androids of 1973’s Westworld went crazy and started killing.
“Extreme AI predictions are ‘comparable to seeing more efficient internal combustion engines… and jumping to the conclusion that the warp drives are just around the corner,’ Rodney Brooks writes.”
When AI research fell far short of its lofty goals, funding dried up to a trickle, beginning long “AI winters.” Even so, the torch of the intelligent machine was carried forth in the 1980s and ’90s by sci-fi authors like Vernor Vinge, who popularized the concept of the singularity; researchers like the roboticist Hans Moravec, an expert in computer vision; and the engineer/entrepreneur Ray Kurzweil, author of the 1999 book The Age of Spiritual Machines. Whereas Turing had posited a humanlike intelligence, Vinge, Moravec, and Kurzweil were thinking bigger: when a computer became capable of independently devising ways to achieve goals, it would very likely be capable of introspection—and thus able to modify its software and make itself more intelligent. In short order, such a computer would be able to design its own hardware.
As Kurzweil described it, this would begin a beautiful new era. Read the rest of this entry »
David W. Buchanan: No, the Robots Are Not Going to Rise Up and Kill You
Posted: February 7, 2015 Filed under: Mediasphere, Robotics, Science & Technology, Think Tank | Tags: Artificial Intelligence, Association for the Advancement of Artificial Intelligence, Bill Gates, Clive Sinclair, Cortana, Desktop computer, Elon Musk, Eric Horvitz, IBM, Microsoft, Microsoft Research, SpaceX, Stephen Hawking, Watson (computer)
David W. Buchanan is a researcher at IBM, where he is a member of the team that made the Watson “Jeopardy!” system.
David W. Buchanan writes: We have seen astonishing progress in artificial intelligence, and technology companies are pouring money into AI research. In 2011, the IBM system Watson competed on “Jeopardy!,” beating the best human players. Siri and Cortana have taken charge of our smartphones. As I write this, a vacuum called Roomba is cleaning my house on its own, using what the box calls “robot intelligence.” It is easy to feel like the world is on the verge of being taken over by computers, and the news media have indulged such fears with frequent coverage of the supposed dangers of AI.
But as a researcher who works on modern, industrial AI, let me offer a personal perspective to explain why I’m not afraid.
Science fiction is partly responsible for these fears. A common trope works as follows: Step 1: Humans create AI to perform some unpleasant or difficult task. Step 2: The AI becomes conscious. Step 3: The AI decides to kill us all. As science fiction, such stories can be great fun. As science fact, the narrative is suspect, especially around Step 2, which assumes that by synthesizing intelligence, we will somehow automatically, or accidentally, create consciousness. I call this the consciousness fallacy. It seems plausible at first, but the evidence doesn’t support it. And if it is false, it means we should look at AI very differently.
Intelligence is the ability to analyze the world and reason about it in a way that enables more effective action. Our scientific understanding of intelligence is relatively advanced. There is still an enormous amount of work to do before we can create comprehensive, human-caliber intelligence. But our understanding is viable in the sense that there are real businesses that make money by creating AI.

Coming online: some 95,000 new professional service robots, worth some $17.1bn, are set to be installed for professional use between 2013 and 2015
Consciousness is a much different story, perhaps because there is less money in it. Consciousness is also a harder problem: While most of us would agree that we know consciousness when we see it, scientists can’t really agree on a rigorous definition, let alone a research program that would uncover its basic mechanisms. Read the rest of this entry »
Mad Men’s Don Draper: Woe to Those Who Deviate from ‘Pessimist Chic’
Posted: May 12, 2014 Filed under: Art & Culture, Entertainment | Tags: 2001: A Space Odyssey, Don Draper, DonDraper, HAL 9000, IBM, Mad Men, Megan Draper, Monolith, Peggy Olsen, Stanley Kubrick
For PopWatch, Jeff Jensen writes: The Monolith of 2001: A Space Odyssey is one of the most cryptic icons in all of pop culture. Back in the heat of the cultural conversation about the film, moviegoers wanting to crack the secrets of those sleek alien obelisks concerned themselves with many questions about their motive and influence. Do they mean to harm humanity or improve us? Do those who dare engage them flourish and prosper? Or do they digress and regress? To rephrase in the lexicon of Mad Men: Are these catalysts for evolutionary change subversive manipulators like Lou, advancing Peggy with responsibility and money just to trigger Don’s implosion, or are they benevolent fixers like Freddy, rescuing Don from self-destruction and nudging him forward with helpful life coaching?
“…optimism is a tough sell these days.”
Of course, Don Draper is something of a Monolith himself. The questions people once asked of those mercurial monuments are similar to the questions that the partners and employees of Sterling Cooper & Partners (and the audience) are currently asking of their former fearless leader during the final season of Mad Men, which last week fielded an episode entitled “The Monolith” rich with allusions to Stanley Kubrick’s future-fretting sci-fi stunner. Don, that one-time font of creative genius, is now a mystery of motives and meaning to his peeps following last season’s apocalyptic meltdown during the Hershey pitch. (For Don, Hershey Bars are Monoliths, dark rectangular totems with magical character-changing properties.)

From Stanley Kubrick’s “2001: A Space Odyssey”
Watching them wring their hands over Don evokes the way the ape-men of 2001 frantically tizzied over The Monolith when it suddenly appeared outside their cavern homes. Can Don be trusted? Do they dare let him work? What does he really want? “Why are you here?” quizzed Bert during a shoeless interrogation in his man-cave. Gloomy Lou made like Chicken Little: “He’s gonna implode!” (His pessimism wasn’t without bias: He did take Don’s old job.)
There are knowing, deeper ironies here. Read the rest of this entry »
The Singularity is Coming and it’s Going To Be Awesome: ‘Robots Will Be Smarter Than Us All by 2029’
Posted: February 23, 2014 Filed under: Robotics, Science & Technology | Tags: Artificial Intelligence, Garry Kasparov, Google, IBM, Joaquin Phoenix, Kurzweil, Ray Kurzweil, Turing test
World’s leading futurologist predicts computers will soon be able to flirt, learn from experience and even make jokes
Adam Withnall writes: By 2029, computers will be able to understand our language, learn from experience and outsmart even the most intelligent humans, according to Google’s director of engineering Ray Kurzweil.
“Today, I’m pretty much at the median of what AI experts think and the public is kind of with them…”
One of the world’s leading futurologists and artificial intelligence (AI) developers, 66-year-old Kurzweil has previous form in making accurate predictions about the way technology is heading.
[Ray Kurzweil’s pioneering book The Singularity Is Near: When Humans Transcend Biology is available at Amazon]
In 1990 he said a computer would be capable of beating a chess champion by 1998 – a feat managed by IBM’s Deep Blue, against Garry Kasparov, in 1997.
When the internet was still a tiny network used by a small collection of academics, Kurzweil anticipated it would soon make it possible to link up the whole world.
Geoffrey Hinton: The Man Google Hired to Make AI a Reality
Posted: January 19, 2014 Filed under: Science & Technology, Think Tank | Tags: Artificial Intelligence, Facebook, Geoffrey Hinton, Google, IBM, University of Edinburgh, University of Toronto, Yann LeCun
Geoff Hinton, the AI guru who now works for Google. Photo: Josh Valcarcel/WIRED
Daniela Hernandez writes: Geoffrey Hinton was in high school when a friend convinced him that the brain worked like a hologram.
To create one of those 3-D holographic images, you record how countless beams of light bounce off an object and then you store these little bits of information across a vast database. While still in high school, back in 1960s Britain, Hinton was fascinated by the idea that the brain stores memories in much the same way. Rather than keeping them in a single location, it spreads them across its enormous network of neurons.
‘I get very excited when we discover a way of making neural networks better — and when that’s closely related to how the brain works.’
This may seem like a small revelation, but it was a key moment for Hinton — “I got very excited about that idea,” he remembers. “That was the first time I got really into how the brain might work” — and it would have enormous consequences. Inspired by that high school conversation, Hinton went on to explore neural networks at Cambridge and the University of Edinburgh in Scotland, and by the early ’80s, he helped launch a wildly ambitious crusade to mimic the brain using computer hardware and software, to create a purer form of artificial intelligence we now call “deep learning.”
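As a rough illustration of that distributed-storage idea, consider a toy Hopfield-style associative memory: a single pattern is written into every pairwise connection weight rather than into any one location, so recall still works after a chunk of the connections is destroyed. The sketch below is purely my own illustration of the principle, not anything Hinton or Google built.

```python
# Illustrative only (not Hinton's work): a toy Hopfield-style associative memory.
# One pattern is stored across the entire weight matrix, hologram-fashion, so
# recall survives damage to a sizable fraction of the "connections".
import numpy as np

rng = np.random.default_rng(0)

pattern = rng.choice([-1, 1], size=32)               # the memory to store
weights = np.outer(pattern, pattern).astype(float)   # Hebbian rule: spread over all pairs
np.fill_diagonal(weights, 0)

# Knock out 30% of the connections, as if part of the hologram were lost.
damage = rng.random(weights.shape) < 0.3
weights[damage] = 0

# Recall from a corrupted cue: flip a quarter of the bits, then let the network settle.
state = pattern.copy()
state[:8] *= -1
for _ in range(10):
    state = np.where(weights @ state >= 0, 1, -1)

print("bits recovered:", int((state == pattern).sum()), "out of", pattern.size)
```

Even with damaged weights and a noisy cue, the stored pattern comes back, because no single connection "holds" the memory.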
Tech Time Warp of the Week: The World’s First Hard Drive, 1956
Posted: January 6, 2014 Filed under: History, Science & Technology | Tags: Apple, Computer History Museum, Disk storage, Hard disk drive, Hardware, IBM, IBM 305 RAMAC
The RAMAC hard drive at the Computer History Museum in Mountain View, California. Photo: Jon Snyder/WIRED.
Daniela Hernandez writes: IBM unleashed the world’s first computer hard disk drive in 1956. It was bigger than a refrigerator. It weighed more than a ton. And it looked kinda like one of those massive cylindrical air conditioning units that used to sit outside your grade school cafeteria.
This seminal hard drive was part of a larger machine known as the IBM 305 RAMAC, short for “Random Access Method of Accounting and Control,” and in the classic promotional video below, you can see the thing in action during its glory days. Better yet, if you ever happen to be in Mountain View, California, you can see one in the flesh. In 2002, two engineers named Dave Bennet and Joe Feng helped restore an old RAMAC at the Computer History Museum in Mountain View, where it’s now on display. And yes, the thing still works.
As we’re told in IBM’s 1956 film — which chronicles the five years of research and development that culminated in the RAMAC — Big Blue built the system “to keep business accounts up to date and make them available, not monthly or even daily, but immediately.” It was meant to rescue companies from a veritable blizzard of paper records, so adorably demonstrated in the film by a toy penguin trapped in a faux snow storm.
Before RAMAC, as the film explains, most businesses kept track of inventory, payroll, budgets, and other bits of business info on good old fashioned paper stuffed into filing cabinets. Or if they were lucky, they had a massive computer that could store data on spools of magnetic tape. But tape wasn’t the easiest to deal with. You couldn’t get to one piece of data on a tape without spooling past all the data that came before it.
Then RAMAC gave the world what’s called magnetic disk storage, which let you retrieve any piece of data without delay. The system’s hard drive included 50 vertically stacked disks covered in magnetic paint, and as they spun around — at speeds of 1,200 rpm — a mechanical arm could store and retrieve data from the disks. The device stored data by changing the magnetic orientation of a particular spot on a disk, and could then retrieve it at any time by reading that orientation back.
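The practical difference the film describes is sequential versus random access. A minimal sketch of that contrast (my illustration, not IBM’s actual design) might look like this: the “tape” lookup has to pass over every record ahead of the target, while the “disk” lookup addresses it directly.

```python
# A toy contrast between the two access patterns described above (illustrative
# only). The "tape" read must spool past every earlier record; the "disk" read
# jumps straight to the record it wants, RAMAC-style.
records = {i: f"account-{i}" for i in range(100_000)}

def read_from_tape(target: int) -> str:
    """Sequential access: pass over every earlier record to reach the target."""
    reads = 0
    for key in sorted(records):
        reads += 1
        if key == target:
            return f"{records[key]} (after {reads:,} reads)"
    raise KeyError(target)

def read_from_disk(target: int) -> str:
    """Random access: address the record directly, no spooling."""
    return f"{records[target]} (after 1 seek)"

print(read_from_tape(99_999))   # scans 100,000 records to find the last one
print(read_from_disk(99_999))   # one direct lookup
```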
Who Dreams of Being Average?
Posted: October 28, 2013 Filed under: Economics, History, Reading Room, Think Tank | Tags: American Dream, Carnegie Mellon University, Cowen, Deep Blue, Garry Kasparov, IBM, James Truslow Adams, Tyler Cowen
The American Dream has long evoked the idea that the next generation will have a better life than the previous one. Today, many Americans feel that dream is in jeopardy.
Average Is Over—But the American Dream Lives On
Who dreams of being average? Americans define personal success in different ways, but certainly no one strives for mediocrity. The children of Lake Wobegon, after all, were “all above average.”
Perhaps this explains why some reviewers have understood the glum predictions of Tyler Cowen’s Average Is Over—that shifts in the labor market will cause the middle class to dwindle—as heralds of the death of the American Dream. This understanding misses the real thrust of Cowen’s book.
Everyone has their own notions of what constitutes the American Dream, but when writer and historian James Truslow Adams coined the phrase in the 1930s, he wrote that in America “life should be better and richer and fuller for everyone, with opportunity for each according to ability or achievement.” Cowen’s vision of our future actually reinforces this idea. Read the rest of this entry »
100 Unintended Consequences of Obamacare
Posted: October 1, 2013 Filed under: Politics, Reading Room, Think Tank, White House | Tags: Andrew Johnson, Caterpillar, Delta Air Lines, Heritage Foundation, IBM, Obamacare, Patient Protection and Affordable Care Act, SeaWorld, United States
Companies, workers, retirees, students, and spouses all suffer from the law’s inflexible mandates.
Yet from major corporations to local mom-and-pop shops, from entire states to tiny school districts, a wide range of companies and institutions have seen Obamacare’s negative impact on their workers, budgets, and production. Here are 100 examples of how Obamacare is falling short of what was promised.
(Note: Some items on this list came via Investor’s Business Daily and the Heritage Foundation.)
Corporations
1. IBM
Earlier this month, the computer giant, once famed for its paternalism, announced it would remove 110,000 of its Medicare-eligible retirees from the company’s health insurance and give them subsidies to purchase coverage through the Obamacare exchanges. Retirees fear that they will not get the level of coverage they are used to, and that the options will be bewildering.
2. Delta Air Lines
In a letter to employees, Delta Air Lines revealed that the company’s health-care costs will rise about $100 million next year alone, in large part because of Obamacare. The airline said that in addition to several other changes, it would have to drop its specially crafted insurance plans for pilots because the “Cadillac tax” on luxurious health plans has made them too expensive.
3. UPS
Fifteen thousand employees’ spouses will no longer be able to use UPS’s health-care plan because they have access to coverage elsewhere. The “costs associated with the Affordable Care Act have made it increasingly difficult to continue providing the same level of health care benefits to our employees at an affordable cost,” the delivery giant said in a company memo. The move is expected to save the company $60 million next year. Read the rest of this entry »
Will the NSA Revelations Kickstart the Cybersecurity Industry in China?
Posted: September 30, 2013 Filed under: China, War Room | Tags: China, IBM, Industry of the People's Republic of China, Internet Society of China, National Security Agency, NSA, United Nations General Assembly, United States
Image credit: REUTERS/Pawel Kopczynski
Adam Segal writes: One of the common arguments in the wake of the Snowden revelations about NSA surveillance is that other countries are going to double down on developing their own technology industries to reduce their dependence on U.S. companies. The Chinese press has made this argument numerous times–highlighting how IBM, Cisco, Intel and others have penetrated Chinese society–and this was one of the themes in Brazilian President Dilma Rousseff’s address to the United Nations General Assembly: “Brazil will redouble its efforts to adopt legislation, technologies and mechanisms to protect us from the illegal interception of communications and data.” Read the rest of this entry »
The First Carbon Nanotube Computer
Posted: September 26, 2013 Filed under: Science & Technology | Tags: Carbon nanotube, IBM, Intel, Intel 4004, Microsystems Technology Office, Philip Wong, Semiconductor Research Corporation, Stanford, Stanford University
A carbon nanotube computer processor is comparable to a chip from the early 1970s, and may be the first step beyond silicon electronics.

Tube chip: This scanning electron microscopy image shows a section of the first-ever carbon nanotube computer. The image was colored to identify different parts of the chip.
Katherine Bourzac reports: For the first time, researchers have built a computer whose central processor is based entirely on carbon nanotubes, a form of carbon with remarkable material and electronic properties. The computer is slow and simple, but its creators, a group of Stanford University engineers, say it shows that carbon nanotube electronics are a viable potential replacement for silicon when it reaches its limits in ever-smaller electronic circuits.
The carbon nanotube processor is comparable in capabilities to the Intel 4004, that company’s first microprocessor, which was released in 1971, says Subhasish Mitra, an electrical engineer at Stanford and one of the project’s co-leaders. The computer, described today in the journal Nature, runs a simple software instruction set called MIPS. It can switch between multiple tasks (counting and sorting numbers) and keep track of them, and it can fetch data from and send it back to an external memory.
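To give a rough feel for the demo workload described above — switching between a counting task and a sorting task while keeping track of each — here is a minimal cooperative round-robin sketch. It is an illustration under my own assumptions only; it reflects neither the Stanford group’s code nor the MIPS subset the chip actually runs.

```python
# Illustrative only: a tiny round-robin scheduler alternating between a counting
# task and a sorting task, the two demo workloads the article mentions.
from typing import Generator, List

def count_up(limit: int) -> Generator[str, None, None]:
    n = 0
    while n < limit:
        n += 1
        yield f"count = {n}"

def bubble_sort(values: List[int]) -> Generator[str, None, None]:
    data = list(values)
    for i in range(len(data)):
        for j in range(len(data) - 1 - i):
            if data[j] > data[j + 1]:
                data[j], data[j + 1] = data[j + 1], data[j]
            yield f"sorting: {data}"

# Round-robin between the two tasks, keeping track of where each one left off.
tasks = [count_up(3), bubble_sort([4, 1, 3, 2])]
while tasks:
    task = tasks.pop(0)
    try:
        print(next(task))
        tasks.append(task)   # task not finished: put it back in the queue
    except StopIteration:
        pass                 # task finished: drop it
```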
The nanotube processor is made up of 142 transistors, each of which contains carbon nanotubes that are about 10 to 200 nanometers long. The Stanford group says it has made six versions of carbon nanotube computers, including one that can be connected to external hardware—a numerical keypad that can be used to input numbers for addition. Read the rest of this entry »