China is Buying up American Companies Fast, and it’s Freaking People Out


Given the recent volume of deals, it would appear that the Chinese government is supportive of the foreign-buying spree.

Business Insider reports: Here’s a story you’ll be hearing about a lot this year.

Chinese companies have been buying up foreign businesses, including American ones, at a record rate, and it’s freaking lawmakers out.

There’s General Electric’s sale of its appliance business to Qingdao-based Haier, Zoomlion’s bid for the heavy-lifting-equipment maker Terex Corp., and ChemChina’s record-breaking deal for the Swiss seeds and pesticides group Syngenta, valued at $48 billion.

Most recently, a unit of the Chinese conglomerate HNA Group said it would buy the technology distributor Ingram Micro for $6 billion.

And the most contentious deal so far might be the Chinese-led investor group Chongqing Casin Enterprise’s bid for the Chicago Stock Exchange.


A deal spree

To date, there have been 102 Chinese outbound mergers-and-acquisitions deals announced this year, amounting to $81.6 billion in value, according to Dealogic. That’s up from 72 deals worth $11 billion in the same period last year.

And the deals aren’t expected to let up anytime soon: slow economic growth in China and cheap prices abroad, driven by the recent stock-market sell-off, both point to more buying.

[Read the full story here, at Business Insider]

“With the slowdown of the economy, Chinese corporates are increasingly looking to inorganic avenues to supplement their growth,” Vikas Seth, head of emerging markets in the investment-banking and capital-markets department at Credit Suisse, told Business Insider earlier this month.


President Obama and President Xi Jinping. Photo: Kim Kyung-Hoon/Reuters

China’s economic growth in 2015 was its slowest in 25 years.

The law firm O’Melveny & Myers recently surveyed its mainly China-based clients and found that the economic growth potential in the US was the main factor making it an attractive investment destination.

Nearly half of respondents agreed that the US was the most attractive market for investment, but 47% felt that US laws and regulations were a major barrier. They’d be right about that.

A major barrier

Forty-five members of Congress this week signed a letter to the Treasury Department’s Committee on Foreign Investment in the US, or CFIUS, urging it to conduct a “full and rigorous investigation” of the Chicago Stock Exchange acquisition.

“This proposed acquisition would be the first time a Chinese-owned, possibly state-influenced, firm maintained direct access into the $22 trillion US equity marketplace,” the letter reads. Read the rest of this entry »


Alex Gibney’s New Documentary Paints An Ambivalent Portrait Of Steve Jobs

Earlier this week, I joined a group of journalists to meet with director Alex Gibney and discuss his new film, Steve Jobs: The Man in the Machine — and the first thing he did was put away his iPhone.

It was no big deal, but the action took on a little extra humor and weight, since the documentary is all about our relationship with Jobs and the products he created. It opens with footage of the mass outpouring of grief after Jobs’ death in 2011, and the rest of the movie asks: Why did people feel so much attachment to the CEO of an enormous tech company? And is Jobs really worthy of such admiration?

“The way the Jobs film ends, there’s no prescription there. To me, the best films are the ones that force you, not force you but encourage you to take something out of the theater, and the questions roll around in your head.”

Gibney isn’t trying to convince people that they should stop buying Apple products — after all, he’s still got that iPhone. Instead the aim is to raise questions about Jobs’ values and the influence those values had on the rest of Silicon Valley (where Gibney often sees a similar “rough-and-tumble libertarian vibe”).


“The film is not a slam. The film is a meditation on this guy’s life and what it meant to us. It’s not so simple.”

“The way the Jobs film ends, there’s no prescription there,” he said. “To me, the best films are the ones that force you, not force you but encourage you to take something out of the theater, and the questions roll around in your head.”

On the other hand, the film’s ability to address those questions may have been hampered by the fact that many people declined to be interviewed — there’s no Steve Wozniak, no one currently at Apple, and the closest you get to Jobs’ family is Chrisann Brennan, the mother of his first child Lisa. In fact, Gibney recounted how Apple employees walked out of a screening of the movie at South by Southwest — at the time, Apple’s Eddy Cue tweeted that it was “an inaccurate and mean-spirited view of my friend.” Read the rest of this entry »


[VIDEO] MIT: 7 Finger Robot

Researchers at MIT have developed a robot that enhances the grasping motion of the human hand. Learn more…


Does Artificial Intelligence Pose a Threat?


A panel of experts discusses the prospect of machines capable of autonomous reasoning

Ted Greenwald writes: After decades as a sci-fi staple, artificial intelligence has leapt into the mainstream. Between Apple’s Siri and Amazon’s Alexa, IBM’s Watson and Google Brain, machines that understand the world and respond productively suddenly seem imminent.

The combination of immense Internet-connected networks and machine-learning algorithms has yielded dramatic advances in machines’ ability to understand spoken and visual communications, capabilities that fall under the heading “narrow” artificial intelligence. Can machines capable of autonomous reasoning—so-called general AI—be far behind? And at that point, what’s to keep them from improving themselves until they have no need for humanity?

Meka’s M1 robot is one of the systems acquired by Google

The prospect has unleashed a wave of anxiety. “I think the development of full artificial intelligence could spell the end of the human race,” astrophysicist Stephen Hawking told the BBC. Tesla founder Elon Musk called AI “our biggest existential threat.” Former Microsoft Chief Executive Bill Gates has voiced his agreement.

How realistic are such concerns? And how urgent? We assembled a panel of experts from industry, research and policy-making to consider the dangers—if any—that lie ahead. Read the rest of this entry »


‘Hillary Email Story About to Metastasize’: Clinton Ran Own Computer System


The highly unusual practice of a Cabinet-level official physically running her own email would have given Clinton, the presumptive Democratic presidential candidate, impressive control over limiting access to her message archives

WASHINGTON (AP) – Jack Gillum and Ted Bridis report: The computer server that transmitted and received Hillary Clinton’s emails – on a private account she used exclusively for official business when she was secretary of state – traced back to an Internet service registered to her family’s home in Chappaqua, New York, according to Internet records reviewed by The Associated Press.
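Tracing a domain this way starts from public plumbing: DNS records that map the domain to an IP address, and registration (WHOIS) records that tie the address to a customer. As a minimal, hypothetical sketch of the first step only (not the AP’s actual methodology, which also drew on historical registration records), resolving the mail domain named in the report might look like this in Python:

```python
# Minimal sketch: resolve a domain to an IP address. The result can
# then be checked against public registration (WHOIS) records, which
# is the kind of tracing described above. Illustrative only.
import socket

domain = "clintonemail.com"  # the domain named in the AP report

try:
    ip = socket.gethostbyname(domain)
    print(f"{domain} resolves to {ip}")
except socket.gaierror as err:
    print(f"Lookup failed for {domain}: {err}")
```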

“In November 2012, without explanation, Clinton’s private email account was reconfigured to use Google’s servers as a backup in case her own personal email server failed, according to Internet records. That is significant because Clinton publicly supported Google’s accusations in June 2011 that China’s government had tried to break into the Google mail accounts of senior U.S. government officials.”

The highly unusual practice of a Cabinet-level official physically running her own email would have given Clinton, the presumptive Democratic presidential candidate, impressive control over limiting access to her message archives. It also would distinguish Clinton’s secretive email practices as far more sophisticated than those of other politicians, including Mitt Romney and Sarah Palin, who were caught conducting official business using free email services operated by Microsoft Corp. and Yahoo Inc.


Most Internet users rely on professional outside companies, such as Google Inc. or their own employers, for the behind-the-scenes complexities of managing their email communications. Government employees generally use servers run by federal agencies where they work.

“The AP has waited more than a year under the open records law for the State Department to turn over some emails covering Clinton’s tenure as the nation’s top diplomat, although the agency has never suggested that it didn’t possess all her emails.”

In most cases, individuals who operate their own email servers are technical experts or users so concerned about issues of privacy and surveillance they take matters into their own hands. It was not immediately clear exactly where Clinton ran that computer system.


“Operating her own server would have afforded Clinton additional legal opportunities to block government or private subpoenas in criminal, administrative or civil cases because her lawyers could object in court before being forced to turn over any emails.”

Clinton has not described her motivation for using a private email account – hdr22@clintonemail.com, which traced back to her own private email server registered under an apparent pseudonym – for official State Department business.


Operating her own server would have afforded Clinton additional legal opportunities to block government or private subpoenas in criminal, administrative or civil cases because her lawyers could object in court before being forced to turn over any emails. And since the Secret Service was guarding Clinton’s home, an email server there would have been well protected from theft or a physical hacking.

“It was unclear whom Clinton hired to set up or maintain her private email server, which the AP traced to a mysterious identity, Eric Hoteham. That name does not appear in public records databases, campaign contribution records or Internet background searches.” 

But homemade email servers are generally not as reliable, secure from hackers or protected from fires or floods as those in commercial data centers. Those professional facilities provide monitoring for viruses or hacking attempts, regulated temperatures, off-site backups, generators in case of power outages, fire-suppression systems and redundant communications lines.

A spokesman for Clinton did not respond to requests seeking comment from the AP on Tuesday. Clinton ignored the issue during a speech Tuesday night at the 30th anniversary gala of EMILY’s List, which works to elect Democratic women who support abortion rights.

It was unclear whom Clinton hired to set up or maintain her private email server, which the AP traced to a mysterious identity, Eric Hoteham. That name does not appear in public records databases, campaign contribution records or Internet background searches. Hoteham was listed as the customer at Clinton’s $1.7 million home on Old House Lane in Chappaqua in records registering the Internet address for her email server since August 2010. Read the rest of this entry »


Our Fear of Artificial Intelligence

Photograph: Chris Ratcliffe/Bloomberg/Getty

Are We Smart Enough to Control Artificial Intelligence? 

A true AI might ruin the world—but that assumes it’s possible at all

Paul Ford writes: Years ago I had coffee with a friend who ran a startup. He had just turned 40. His father was ill, his back was sore, and he found himself overwhelmed by life. “Don’t laugh at me,” he said, “but I was counting on the singularity.”

“The question ‘Can a machine think?’ has shadowed computer science from its beginnings.”

My friend worked in technology; he’d seen the changes that faster microprocessors and networks had wrought. It wasn’t that much of a step for him to believe that before he was beset by middle age, the intelligence of machines would exceed that of humans—a moment that futurists call the singularity. A benevolent superintelligence might analyze the human genetic code at great speed and unlock the secret to eternal youth. At the very least, it might know how to fix your back.


But what if it wasn’t so benevolent? Nick Bostrom, a philosopher who directs the Future of Humanity Institute at the University of Oxford, describes the following scenario in his book Superintelligence, which has prompted a great deal of debate about the future of artificial intelligence. Imagine a machine that we might call a “paper-clip maximizer”—that is, a machine programmed to make as many paper clips as possible. Now imagine that this machine somehow became incredibly intelligent. Given its goals, it might then decide to create new, more efficient paper-clip-manufacturing machines—until, King Midas style, it had converted essentially everything to paper clips.

Agility: rapid advances in technology, including machine vision, tactile sensors and autonomous navigation, make today’s robots, such as this model from DLR, increasingly useful

No worries, you might say: you could just program it to make exactly a million paper clips and halt. But what if it makes the paper clips and then decides to check its work? Has it counted correctly? It needs to become smarter to be sure. The superintelligent machine manufactures some as-yet-uninvented raw-computing material (call it “computronium”) and uses that to check each doubt. But each new doubt yields further digital doubts, and so on, until the entire earth is converted to computronium. Except for the million paper clips.

Superintelligence: Paths, Dangers, Strategies
BY NICK BOSTROM
OXFORD UNIVERSITY PRESS, 2014

Bostrom does not believe that the paper-clip maximizer will come to be, exactly; it’s a thought experiment, one designed to show how even careful system design can fail to restrain extreme machine intelligence. But he does believe that superintelligence could emerge, and while it could be great, he thinks it could also decide it doesn’t need humans around. Or do any number of other things that destroy the world. The title of chapter 8 is: “Is the default outcome doom?”

“Alan Turing proposed in 1950 that a machine could be taught like a child; John McCarthy, inventor of the programming language LISP, coined the term ‘artificial intelligence’ in 1955.”

If this sounds absurd to you, you’re not alone. Critics such as the robotics pioneer Rodney Brooks say that people who fear a runaway AI misunderstand what computers are doing when we say they’re thinking or getting smart. From this perspective, the putative superintelligence Bostrom describes is far in the future and perhaps impossible.


Yet a lot of smart, thoughtful people agree with Bostrom and are worried now. Why?

The question “Can a machine think?” has shadowed computer science from its beginnings. Alan Turing proposed in 1950 that a machine could be taught like a child; John McCarthy, inventor of the programming language LISP, coined the term “artificial intelligence” in 1955. As AI researchers in the 1960s and 1970s began to use computers to recognize images, translate between languages, and understand instructions in normal language and not just code, the idea that computers would eventually develop the ability to speak and think—and thus to do evil—bubbled into mainstream culture. Even beyond the oft-referenced HAL from 2001: A Space Odyssey, the 1970 movie Colossus: The Forbin Project featured a large blinking mainframe computer that brings the world to the brink of nuclear destruction; a similar theme was explored 13 years later in War Games. The androids of 1973’s Westworld went crazy and started killing.

“Extreme AI predictions are ‘comparable to seeing more efficient internal combustion engines… and jumping to the conclusion that the warp drives are just around the corner,’ Rodney Brooks writes.”

When AI research fell far short of its lofty goals, funding dried up to a trickle, beginning long “AI winters.” Even so, the torch of the intelligent machine was carried forth in the 1980s and ’90s by sci-fi authors like Vernor Vinge, who popularized the concept of the singularity; researchers like the roboticist Hans Moravec, an expert in computer vision; and the engineer/entrepreneur Ray Kurzweil, author of the 1999 book The Age of Spiritual Machines. Whereas Turing had posited a humanlike intelligence, Vinge, Moravec, and Kurzweil were thinking bigger: when a computer became capable of independently devising ways to achieve goals, it would very likely be capable of introspection—and thus able to modify its software and make itself more intelligent. In short order, such a computer would be able to design its own hardware.

As Kurzweil described it, this would begin a beautiful new era. Read the rest of this entry »


David W. Buchanan: No, the Robots Are Not Going to Rise Up and Kill You


David W. Buchanan is a researcher at IBM, where he is a member of the team that made the Watson “Jeopardy!” system.

David W. Buchanan writes: We have seen astonishing progress in artificial intelligence, and technology companies are pouring money into AI research. In 2011, the IBM system Watson competed on “Jeopardy!,” beating the best human players. Siri and Cortana have taken charge of our smartphones. As I write this, a vacuum called Roomba is cleaning my house on its own, using what the box calls “robot intelligence.” It is easy to feel like the world is on the verge of being taken over by computers, and the news media have indulged such fears with frequent coverage of the supposed dangers of AI.

But as a researcher who works on modern, industrial AI, let me offer a personal perspective to explain why I’m not afraid.


Science fiction is partly responsible for these fears. A common trope works as follows: Step 1: Humans create AI to perform some unpleasant or difficult task. Step 2: The AI becomes conscious. Step 3: The AI decides to kill us all. As science fiction, such stories can be great fun. As science fact, the narrative is suspect, especially around Step 2, which assumes that by synthesizing intelligence, we will somehow automatically, or accidentally, create consciousness. I call this the consciousness fallacy. It seems plausible at first, but the evidence doesn’t support it. And if it is false, it means we should look at AI very differently.

Intelligence is the ability to analyze the world and reason about it in a way that enables more effective action. Our scientific understanding of intelligence is relatively advanced. There is still an enormous amount of work to do before we can create comprehensive, human-caliber intelligence. But our understanding is viable in the sense that there are real businesses that make money by creating AI.

Coming online: some 95,000 new professional service robots, worth some $17.1bn, are set to be installed for professional use between 2013 and 2015

Consciousness is a much different story, perhaps because there is less money in it. Consciousness is also a harder problem: While most of us would agree that we know consciousness when we see it, scientists can’t really agree on a rigorous definition, let alone a research program that would uncover its basic mechanisms. Read the rest of this entry »


Mad Men’s Don Draper: Woe to Those Who Deviate from ‘Pessimist Chic’

Image Credit: Justina Mintz/AMC

PopWatch writes: The Monolith of 2001: A Space Odyssey is one of the most cryptic icons in all of pop culture. Back in the heat of the cultural conversation about the film, moviegoers wanting to crack the secrets of those sleek alien obelisks concerned themselves with many questions about their motive and influence. Do they mean to harm humanity or improve us? Do those who dare engage them flourish and prosper? Or do they digress and regress? To rephrase in the lexicon of Mad Men: Are these catalysts for evolutionary change subversive manipulators like Lou, advancing Peggy with responsibility and money just to trigger Don’s implosion, or are they benevolent fixers like Freddy, rescuing Don from self-destruction and nudging him forward with helpful life coaching?

“…optimism is a tough sell these days.”

Of course, Don Draper is something of a Monolith himself. The questions people once asked of those mercurial monuments are similar to the questions that the partners and employees of Sterling Cooper & Partners (and the audience) are currently asking of their former fearless leader during the final season of Mad Men, which last week fielded an episode entitled “The Monolith” rich with allusions to Stanley Kubrick’s future-fretting sci-fi stunner. Don, that one-time font of creative genius, is now a mystery of motives and meaning to his peeps following last season’s apocalyptic meltdown during the Hershey pitch. (For Don, Hershey Bars are Monoliths, dark rectangular totems with magical character-changing properties.)

From Stanley Kubrick’s “2001: A Space Odyssey”

Watching them wring their hands over Don evokes the way the ape-men of 2001 frantically tizzied over The Monolith when it suddenly appeared outside their cavern homes. Can Don be trusted? Do they dare let him work? What does he really want? “Why are you here?” quizzed Bert during a shoeless interrogation in his man-cave. Gloomy Lou made like Chicken Little: “He’s gonna implode!” (His pessimism wasn’t without bias: He did take Don’s old job.)

There are knowing, deeper ironies here. Read the rest of this entry »


The Singularity is Coming and it’s Going To Be Awesome: ‘Robots Will Be Smarter Than Us All by 2029’

World’s leading futurologist predicts computers will soon be able to flirt, learn from experience and even make jokes

Adam Withnall writes: By 2029, computers will be able to understand our language, learn from experience and outsmart even the most intelligent humans, according to Google’s director of engineering Ray Kurzweil.

“Today, I’m pretty much at the median of what AI experts think and the public is kind of with them…”

One of the world’s leading futurologists and artificial intelligence (AI) developers, 66-year-old Kurzweil has previous form in making accurate predictions about the way technology is heading.

[Ray Kurzweil’s pioneering book The Singularity Is Near: When Humans Transcend Biology is available at Amazon]

In 1990 he said a computer would be capable of beating a chess champion by 1998 – a feat managed by IBM’s Deep Blue, against Garry Kasparov, in 1997.

When the internet was still a tiny network used by a small collection of academics, Kurzweil anticipated it would soon make it possible to link up the whole world.

Read the rest of this entry »


Geoffrey Hinton: The Man Google Hired to Make AI a Reality

Geoff Hinton, the AI guru who now works for Google. Photo: Josh Valcarcel/WIRED

Daniela Hernandez writes: Geoffrey Hinton was in high school when a friend convinced him that the brain worked like a hologram.

To create one of those 3-D holographic images, you record how countless beams of light bounce off an object and then you store these little bits of information across a vast database. While still in high school, back in 1960s Britain, Hinton was fascinated by the idea that the brain stores memories in much the same way. Rather than keeping them in a single location, it spreads them across its enormous network of neurons.

‘I get very excited when we discover a way of making neural networks better — and when that’s closely related to how the brain works.’

This may seem like a small revelation, but it was a key moment for Hinton — “I got very excited about that idea,” he remembers. “That was the first time I got really into how the brain might work” — and it would have enormous consequences. Inspired by that high school conversation, Hinton went on to explore neural networks at Cambridge and the University of Edinburgh in Scotland, and by the early ’80s, he helped launch a wildly ambitious crusade to mimic the brain using computer hardware and software, to create a purer form of artificial intelligence we now call “deep learning.”
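To make the distributed-storage idea concrete, here is a minimal Hopfield-style associative memory in Python, with NumPy assumed. It is a toy illustration of the principle, not one of Hinton’s actual models: each stored pattern is smeared across the entire weight matrix, and a corrupted cue still settles back to the original.

```python
# Toy associative memory: patterns are stored holographically, spread
# across every weight rather than filed in any single location.
import numpy as np

patterns = np.array([
    [1, -1, 1, -1, 1, -1],
    [1, 1, 1, -1, -1, -1],
])

# Hebbian storage: every pattern contributes a little to every weight.
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0)  # no self-connections

def recall(probe, steps=5):
    """Settle the network state toward the nearest stored pattern."""
    state = probe.astype(float)
    for _ in range(steps):
        state = np.sign(W @ state)
    return state

# A cue corrupted in one position still recovers the first pattern.
noisy = np.array([1, -1, 1, -1, 1, 1])
print(recall(noisy))  # -> [ 1. -1.  1. -1.  1. -1.]
```

No single weight “contains” either memory; damage a few weights and recall degrades gracefully rather than deleting one memory outright, which is the holographic property that hooked Hinton.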

Read the rest of this entry »


Tech Time Warp of the Week: The World’s First Hard Drive, 1956

The RAMAC hard drive at the Computer History Museum in Mountain View, California. Photo: Jon Snyder/WIRED.

Daniela Hernandez writes: IBM unleashed the world’s first computer hard disk drive in 1956. It was bigger than a refrigerator. It weighed more than a ton. And it looked kinda like one of those massive cylindrical air conditioning units that used to sit outside your grade school cafeteria.

This seminal hard drive was part of a larger machine known as the IBM 305 RAMAC, short for “Random Access Method of Accounting and Control,” and in the classic promotional video below, you can see the thing in action during its glory days. Better yet, if you ever happen to be in Mountain View, California, you can see one in the flesh. In 2002, two engineers named Dave Bennet and Joe Feng helped restore an old RAMAC at the Computer History Museum in Mountain View, where it’s now on display. And yes, the thing still works.

As we’re told in IBM’s 1956 film — which chronicles the five years of research and development that culminated in the RAMAC — Big Blue built the system “to keep business accounts up to date and make them available, not monthly or even daily, but immediately.” It was meant to rescue companies from a veritable blizzard of paper records, so adorably demonstrated in the film by a toy penguin trapped in a faux snow storm.


Before RAMAC, as the film explains, most businesses kept track of inventory, payroll, budgets, and other bits of business info on good old-fashioned paper stuffed into filing cabinets. Or if they were lucky, they had a massive computer that could store data on spools of magnetic tape. But tape wasn’t the easiest to deal with. You couldn’t get to one piece of data on a tape without spooling past all the data that came before it.

Then RAMAC gave the world what’s called magnetic disk storage, which let you retrieve any piece of data without delay. The system’s hard drive included 50 vertically stacked disks covered in magnetic paint, and as they spun around — at speeds of 1,200 rpm — a mechanical arm could store and retrieve data from the disks. The device stored data by changing the magnetic orientation of a particular spot on a disk, and could then retrieve it at any time by reading that orientation.
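That spin rate is enough for a back-of-the-envelope sense of the machine’s speed. At 1,200 rpm a platter takes 50 milliseconds per revolution, so the arm waits on average half a turn, about 25 milliseconds, for a given spot to come around (the arm’s own seek time adds more):

```python
# Rough latency figures for a RAMAC-style drive, using only the spin
# rate quoted above (1,200 rpm). Arm seek time is ignored.
rpm = 1_200
ms_per_rev = 60_000 / rpm             # 50 ms per full revolution
avg_rotational_wait = ms_per_rev / 2  # on average, wait half a turn

print(f"One revolution: {ms_per_rev:.0f} ms")
print(f"Average rotational latency: {avg_rotational_wait:.0f} ms")
```

Instant by the standards of tape, which had to spool past everything in between, though glacial next to a modern drive.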

Read the rest of this entry »


Who Dreams of Being Average?

The American Dream has long evoked the idea that the next generation will have a better life than the previous one. Today, many Americans feel that dream is in jeopardy.

Average Is Over—But the American Dream Lives On

Who dreams of being average? Americans define personal success in different ways, but certainly no one strives for mediocrity. The children of Lake Wobegon, after all, were “all above average.”

Perhaps this explains why some reviewers have understood the glum predictions of Tyler Cowen’s Average Is Over—that shifts in the labor market will cause the middle class to dwindle—as heralds of the death of the American Dream. This understanding misses the real thrust of Cowen’s book.

Everyone has their own notions of what constitutes the American Dream, but when writer and historian James Truslow Adams coined the phrase in the 1930s, he wrote that in America “life should be better and richer and fuller for everyone, with opportunity for each according to ability or achievement.” Cowen’s vision of our future actually reinforces this idea. Read the rest of this entry »


100 Unintended Consequences of Obamacare


Companies, workers, retirees, students, and spouses all suffer from the law’s inflexible mandates.

Andrew Johnson writes: Obamacare’s October 1 launch date has finally arrived. Ever since its passage, supporters of the law have made countless attempts to convince the American people of its viability, dismissing predictions of lost jobs, decreased hours, and rising costs, among others.

Will the NSA Revelations Kickstart the Cybersecurity Industry in China?

Image credit: REUTERS/Pawel Kopczynski

Adam Segal writes: One of the common arguments in the wake of the Snowden revelations about NSA surveillance is that other countries are going to double down on developing their own technology industries to reduce their dependence on U.S. companies. The Chinese press has made this argument numerous times–highlighting how IBM, Cisco, Intel and others have penetrated Chinese society–and this was one of the themes in Brazilian President Dilma Rousseff’s address to the United Nations General Assembly: “Brazil will redouble its efforts to adopt legislation, technologies and mechanisms to protect us from the illegal interception of communications and data.” Read the rest of this entry »


The First Carbon Nanotube Computer

A carbon nanotube computer processor is comparable to a chip from the early 1970s, and may be the first step beyond silicon electronics.


Tube chip: This scanning electron microscopy image shows a section of the first-ever carbon nanotube computer. The image was colored to identify different parts of the chip.

Katherine Bourzac reports: For the first time, researchers have built a computer whose central processor is based entirely on carbon nanotubes, a form of carbon with remarkable material and electronic properties. The computer is slow and simple, but its creators, a group of Stanford University engineers, say it shows that carbon nanotube electronics are a viable potential replacement for silicon when it reaches its limits in ever-smaller electronic circuits.

The carbon nanotube processor is comparable in capabilities to the Intel 4004, that company’s first microprocessor, which was released in 1971, says Subhasish Mitra, an electrical engineer at Stanford and one of the project’s co-leaders. The computer, described today in the journal Nature, runs a simple software instruction set called MIPS. It can switch between multiple tasks (counting and sorting numbers) and keep track of them, and it can fetch data from and send it back to an external memory.
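To give a rough sense of what “running an instruction set” means, here is a toy register-machine interpreter in Python. The mnemonics echo MIPS, but this is purely illustrative, not the Stanford chip’s actual MIPS subset, whose details the article doesn’t give; it shows how a handful of operations suffices for the counting task mentioned:

```python
# Toy fetch-decode-execute loop. Each tuple stands in for one
# machine instruction; this is an illustration, not real MIPS.
def run(program):
    regs, pc = {}, 0
    while pc < len(program):
        op, *args = program[pc]
        if op == "li":        # load immediate: li rd, value
            regs[args[0]] = args[1]
        elif op == "add":     # add rd, rs, rt
            regs[args[0]] = regs[args[1]] + regs[args[2]]
        elif op == "bne":     # branch if not equal: bne rs, rt, target
            if regs[args[0]] != regs[args[1]]:
                pc = args[2]
                continue
        pc += 1
    return regs

# Count from 0 up to 5 in register r0.
counting = [
    ("li", "r0", 0),            # 0: r0 = 0
    ("li", "r1", 5),            # 1: r1 = 5 (loop limit)
    ("li", "r2", 1),            # 2: r2 = 1 (increment)
    ("add", "r0", "r0", "r2"),  # 3: r0 += 1
    ("bne", "r0", "r1", 3),     # 4: loop back to 3 until r0 == 5
]
print(run(counting))  # {'r0': 5, 'r1': 5, 'r2': 1}
```

The real processor performs the same fetch-decode-execute cycle, but in hardware, with its 142 carbon nanotube transistors doing the decoding and arithmetic.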

The nanotube processor is made up of 142 transistors, each of which contains carbon nanotubes that are about 10 to 200 nanometers long. The Stanford group says it has made six versions of carbon nanotube computers, including one that can be connected to external hardware—a numerical keypad that can be used to input numbers for addition. Read the rest of this entry »


Federal workers want everyone on Obamacare except… federal workers

Jazz Shaw writes: It’s really turning out to be an Obamacare kind of weekend, even in the midst of all the Syria news. First, Erika reported that labor unions were out of luck when it comes to getting an exemption from Obamacare, and Ed reminded us that the system is fraught with security problems. Well, the administration will need to keep an eye on how many horses are getting out of the pen, because virtually all federal employees would prefer not to be enrolled, thank you very much.

Read the rest of this entry »