
Are You a DFIO? The Four Ways Ex-Internet Idealists Explain Where It All Went Wrong 


21st-century digital evangelists had a lot in common with early Christians and Russian revolutionaries.

A long time ago, in the bad old days of the 2000s, debates about the internet were dominated by two great tribes: the Optimists and the Pessimists.

“The internet is inherently democratizing,” argued the Optimists. “It empowers individuals and self-organizing communities against a moribund establishment.”

“Wrong!” shouted the Pessimists. “The internet facilitates surveillance and control. It serves to empower only governments, giant corporations, and on occasion an unruly, destructive mob.”

These battles went on at length and were invariably inconclusive.

Nevertheless, the events of 2016 seem to have finally shattered the Optimist consensus. Long-standing concerns about the internet, from its ineffectual protections against harassment to the anonymity in which teenage trolls and Russian spies alike can cloak themselves, came into stark relief against the backdrop of the US presidential election. Even boosters now seem to implicitly accept the assumption (accurate or not) that the internet is the root of multiple woes, from increasing political polarization to the mass diffusion of misinformation.

All this has given rise to a new breed: the Depressed Former Internet Optimist (DFIO). Everything from public apologies by figures in the technology industry to informal chatter in conference hallways suggests it’s become very hard to find an internet Optimist in the old, classic vein. There are now only Optimists-in-retreat, Optimists-in-doubt, or Optimists-hedging-their-bets.

As Yuri Slezkine argues wonderfully in The House of Government, there is a process that happens among believers everywhere, from Christian sects to the elites of the Russian Revolution, when a vision is unexpectedly deferred. Ideologues are forced to advance a theory to explain why the events they prophesied have failed to come to pass, and to justify a continued belief in the possibility of something better.

[Read the full story here, at MIT Technology Review]

Among the DFIOs, this process is giving rise to a boomlet of cliques, each with its own view of how the internet went wrong and what to do about it. As an anxiety-ridden DFIO myself, I’ve been morbidly cataloguing these strains of thinking and have identified four main groups: the Purists, the Disillusioned, the Hopeful, and the Revisionists.

These are not mutually exclusive positions, and most DFIOs I know combine elements from them all. I, for instance, would call myself a Hopeful-Revisionist. Read the rest of this entry »


The US May Have Just Pulled Even With China in the Race to Build Supercomputing’s Next Big Thing

The two countries are vying to create an exascale computer, a machine capable of a billion billion calculations per second, that could lead to significant advances in many scientific fields.

Martin Giles writes:

… The race to hit the exascale milestone is part of a burgeoning competition for technological leadership between China and the US. (Japan and Europe are also working on their own computers; the Japanese hope to have a machine running in 2021 and the Europeans in 2023.)

In 2015, China unveiled a plan to produce an exascale machine by the end of 2020, and multiple reports over the past year or so have suggested it’s on track to achieve its ambitious goal. But in an interview with MIT Technology Review, Depei Qian, a professor at Beihang University in Beijing who helps manage the country’s exascale effort, explained it could fall behind schedule. “I don’t know if we can still make it by the end of 2020,” he said. “There may be a year or half a year’s delay.”

Teams in China have been working on three prototype exascale machines, two of which use homegrown chips derived from work on existing supercomputers the country has developed. The third uses licensed processor technology. Qian says that the pros and cons of each approach are still being evaluated, and that a call for proposals to build a fully functioning exascale computer has been pushed back.

Given the huge challenges involved in creating such a powerful computer, timetables can easily slip, which could create an opening for the US. China’s initial goal forced the American government to accelerate its own road map and commit to delivering its first exascale computer in 2021, two years ahead of its original target. The American machine, called Aurora, is being developed for the Department of Energy’s Argonne National Laboratory in Illinois. Supercomputing company Cray is building the system for Argonne, and Intel is making chips for the machine. Read the rest of this entry »


[VIDEO] REWIND: Breaking Vegas Documentary: The True Story of The MIT Blackjack Team


Space History: The Brilliant, Funny Computer Code Behind the Apollo 11 Mission

From their key positions in this control center at Goddard, the Manned Space Flight Network operations director and staff controlled Apollo mission communications activities throughout a far-flung worldwide complex of stations. Image Credit: NASA’s Goddard Space Flight Center.

The code was written in the late ’60s by Margaret Hamilton and her team at the Massachusetts Institute of Technology Instrumentation Laboratory for the Apollo Guidance Computer.

Paul Smith writes: NASA’s Apollo 11 mission—the mission that put human beings on the moon for the first time—was launched in 1969, the year after I was born. My early Christmas presents were giant kids’ books full of pictures of that giant Saturn V rocket launching into space, the command and lunar modules, and of guys in bulky space suits walking on the moon. The first intelligible answer I gave to the question, “What do you want to be when you grow up?” was, “Astronaut.”


I did not end up becoming an astronaut.

Computers also captured my attention at an early age, and now I work as a developer for Slate. But my fascination with space endures—so needless to say, I was pretty excited when I heard that the source code for Apollo 11’s computer guidance systems was uploaded on July 8 to GitHub, a popular site used by programmers to share code and collaboratively build software. Anyone can now read the actual lines of programming code used to land men on the moon.

[Read the full story here, at slate.com]


“I have no idea what a DVTOTAL is, but I’m pretty sure that by BURNBABY, they mean ‘launch a 300-foot rocket ship into space.’ And how totally and completely freaking awesome is that?”

The code is pretty inscrutable to casual inspection: It’s not written in a programming language recognizable to modern coders. But Hamilton and her team wrote comments in their code (just like I do when I write code for Slate’s website) to help remind them what’s going on in a given spot in the program. Those parts are surprisingly readable. Here’s a block of code from a file called BURN_BABY_BURN–MASTER_IGNITION_ROUTINE.s (really, that’s what it’s called):

[Image: the commented code block from BURN_BABY_BURN–MASTER_IGNITION_ROUTINE.s discussed below]

So, clearly, “don’t forget to clean out leftover DVTOTAL data when GROUP 4 RESTARTS and then BURN, BABY!” I have no idea what a DVTOTAL is, but I’m pretty sure that by BURNBABY, they mean “launch a 300-foot rocket ship into space.” And how totally and completely freaking awesome is that?

Altogether, with comments and some added copyright headers, the AGC code adds up to about 2 megabytes—a teeny tiny fraction of the amount of code packed into an Apple Watch. Read the rest of this entry »
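
For readers who want to poke at those numbers themselves, here is a minimal sketch, assuming a local clone of the GitHub repository mentioned above (the directory name below is a placeholder) and assuming, as in the transcribed source files, that comment text begins with a “#”; the extension filter is likewise an assumption, not something stated in the article.

```python
# Sketch: tally the size of the Apollo Guidance Computer source and the share
# of comment lines. The local path, the file-extension filter, and the "#"
# comment convention are assumptions, not details from the article.
from pathlib import Path

repo = Path("Apollo-11")  # hypothetical location of a local clone

total_bytes = 0
total_lines = 0
comment_lines = 0

for src in repo.rglob("*"):
    if not src.is_file() or src.suffix.lower() not in {".s", ".agc"}:
        continue
    text = src.read_text(errors="replace")
    total_bytes += len(text.encode())
    for line in text.splitlines():
        total_lines += 1
        if line.lstrip().startswith("#"):
            comment_lines += 1

print(f"{total_bytes / 1_000_000:.2f} MB of source")
print(f"{comment_lines}/{total_lines} lines are comments")
```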


A Chinese University Beat MIT to Become the World’s Top School for Engineering Research


Read more…


Finally: Networked Monkey Brains


Neurobiologists have shown that brain signals from multiple animals can be combined to perform certain tasks better than a single brain can

Mike Orcutt reports: New research proves that two heads are indeed better than one, at least at performing certain simple computational tasks.

The work demonstrates for the first time that multiple animal brains can be networked and harnessed to perform a specific behavior, says Miguel Nicolelis, a professor of neurobiology and biomedical engineering at Duke University and an expert in brain-machine interfaces.


“Even though the monkeys didn’t know they were collaborating, their brains became synchronized very quickly, and over time they got better and better at moving the arm.”

He says this type of “shared brain-machine interface” could potentially be useful for patients with brain damage, in addition to shedding light on how animal brains work together to perform collective behaviors.


Networked Monkey Brains Could Help Disabled Humans

Nicolelis and his colleagues published two separate studies today, one involving rats and the other involving monkeys, that describe experiments on networks of brains and illustrate how such “brainets” could be used to combine electrical outputs from the neurons of multiple animals to perform tasks. The rat brain networks often performed better than a single brain can, they report, and in the monkey experiment the brains of three individuals “collaborated” to complete a virtual reality-based task too complicated for a single one to perform.

“In the monkey experiment, the researchers combined two or three brains to perform a virtual motor task in three dimensions. After implanting electrodes, they used rewards to train individual monkeys to move a virtual arm to a target on a screen.”

To build a brain network, the researchers first implant microwire electrode arrays that can record signals as well as deliver pulses of electrical stimulation to neurons in the same region in multiple rat brains.

“An individual monkey brain does not have the capacity to move the arm in three dimensions, says Nicolelis, so each monkey learned to manipulate the arm within a certain ‘subspace’ of the virtual 3-D space.”
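
To make the “subspace” idea concrete, here is a toy sketch, entirely hypothetical and not the decoding pipeline used in the study: each animal’s decoder is assumed to contribute only two of the three axes, and the shared arm command simply averages whoever contributes to each axis.

```python
# Toy illustration of a "brainet" subspace scheme: each animal contributes
# only two of the three axes, and overlapping contributions are averaged into
# one shared arm command. Numbers and axis assignments are invented for
# illustration; this is not the decoding used in the actual experiment.
import numpy as np

# Hypothetical decoded velocities from three animals (arbitrary units).
decoded = {
    "monkey_1": {"x": 0.40, "y": 0.10},   # controls the X-Y subspace
    "monkey_2": {"y": 0.20, "z": -0.30},  # controls the Y-Z subspace
    "monkey_3": {"x": 0.35, "z": -0.25},  # controls the X-Z subspace
}

def shared_command(contributions):
    """Average each axis over the animals that contribute to it."""
    axes = {"x": [], "y": [], "z": []}
    for per_animal in contributions.values():
        for axis, value in per_animal.items():
            axes[axis].append(value)
    return {axis: round(float(np.mean(values)), 3) for axis, values in axes.items()}

print(shared_command(decoded))
# {'x': 0.375, 'y': 0.15, 'z': -0.275}
```

The point is only that no single contributor spans all three dimensions, yet the pooled output does.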

In the case of the rat experiment, they then physically linked pairs of rat brains via a “brain-to-brain interface” (see “Rats Communicate Through Brain Chips”). Once groups of three or four rats were interconnected, the researchers delivered prescribed electrical pulses to individual rats, portions of the group, or the whole group, and recorded the outputs.

[Read the full text here, at MIT Technology Review]

The researchers tested the ability of rat brain networks to perform basic computing tasks. For example, by delivering electrical pulse patterns derived from a digital image, they recorded the electrical outputs and measured how well the network of neurons processed that image. Read the rest of this entry »


MIT Technology Review: Some Companies See Virtual and Augmented Reality as a Way to Make Money from a New Type of Ad


(Read the full story / Illustration by Bendik Kaltenborn)

MIT Technology Review


Jonathan Gruber + MIT + Individual Mandate + Congressional Budget Office + National Review = Adolf Hitler?

[Screen cap: the tag suggestions offered in the Content Recommendations panel for this post]

If you’re a WordPress blogger and post something critical of the Obama Administration and health care legislation, do you ever find unrelated suggestions for creepy or offensive content leaking into the Content Recommendations panel? Either this is innocently absurd (benefit of the doubt), or legitimate op-ed criticism is being suspiciously herded into a category that triggers “kooky conspiracy theory” suggestions.

Read this post and see if there’s anything that could possibly fit with “Adolf Hitler”, or “Barack Obama Citizenship conspiracy theories”.  Didn’t find anything related to Obama’s citizenship? Didn’t find anything related to Adolf Hitler? Exactly. Me neither.

The above screen cap records the list of tag suggestions offered in the Content Recommendations panel when I prepared this post, a short post of only 88 words. If any one of those 88 words is related in any way to these obnoxious suggestions (Hitler? Really?), I fail to see the connection.

One episode does not make a pattern, so there are no conclusions to draw here. But if any other WordPress bloggers find similar nonsense appearing in content suggestions, please, make a screen cap, and post it.

 


[VIDEO] Self-Folding Robots

A team of engineers at Harvard and MIT has designed and built a flat-packed robot that assembles itself and walks away. Learn more at http://hvrd.me/A2mM9

YouTube


Creator of Passwords Says They’ve Gotten Out of Hand. ‘It’s Become Kind of a Nightmare’


The Digital Fabrication Revolution

A new digital revolution is coming, this time in fabrication. It draws on the same insights that led to the earlier digitizations of communication and computation, but now what is being programmed is the physical world rather than the virtual one.

Digital fabrication will allow individuals to design and produce tangible objects on demand, wherever and whenever they need them. Widespread access to these technologies will challenge traditional models of business, aid, and education.

Maker Faire 2008, San Mateo. A couple of do-it...

The roots of the revolution date back to 1952, when researchers at the Massachusetts Institute of Technology (MIT) wired an early digital computer to a milling machine, creating the first numerically controlled machine tool. By using a computer program instead of a machinist to turn the screws that moved the metal stock, the researchers were able to produce aircraft components with shapes that were more complex than could be made by hand. From that first revolving end mill, all sorts of cutting tools have been mounted on computer-controlled platforms, including jets of water carrying abrasives that can cut through hard materials, lasers that can quickly carve fine features, and slender electrically charged wires that can make long thin cuts.
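
To give a flavor of what “using a computer program instead of a machinist” means in practice, here is a minimal, hypothetical sketch that emits standard G-code, the common command language of numerically controlled tools, for a simple square toolpath; the dimensions, depth, and feed rate are illustrative assumptions, not figures from the article.

```python
# Minimal sketch: generate G-code for a square toolpath.
# All dimensions, depths, and feed rates are illustrative assumptions.

def square_toolpath(side_mm: float = 40.0, depth_mm: float = 2.0,
                    feed_mm_per_min: float = 300.0) -> str:
    """Return G-code that traces a square of the given side length."""
    corners = [(0, 0), (side_mm, 0), (side_mm, side_mm), (0, side_mm), (0, 0)]
    lines = [
        "G21 ; units in millimetres",
        "G90 ; absolute positioning",
        "G0 Z5.000 ; lift the tool clear of the stock",
        f"G0 X{corners[0][0]:.3f} Y{corners[0][1]:.3f} ; rapid move to the start corner",
        f"G1 Z-{depth_mm:.3f} F{feed_mm_per_min:.0f} ; plunge to cutting depth",
    ]
    for x, y in corners[1:]:
        lines.append(f"G1 X{x:.3f} Y{y:.3f} F{feed_mm_per_min:.0f} ; cutting move")
    lines.append("G0 Z5.000 ; retract")
    return "\n".join(lines)

if __name__ == "__main__":
    print(square_toolpath())
```

Change the numbers and the same machine cuts a different part; the part has become data.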

Today, numerically controlled machines touch almost every commercial product, whether directly (producing everything from laptop cases to jet engines) or indirectly (producing the tools that mold and stamp mass-produced goods). And yet all these modern descendants of the first numerically controlled machine tool share its original limitation: they can cut, but they cannot reach internal structures. This means, for example, that the axle of a wheel must be manufactured separately from the bearing it passes through.

The aim is not only to produce the parts for a drone, for example, but to build a complete vehicle that can fly right out of the printer.

In the 1980s, however, computer-controlled fabrication processes that added rather than removed material (called additive manufacturing) came on the market. Thanks to 3-D printing, a bearing and an axle could be built by the same machine at the same time. A range of 3-D printing processes are now available, including thermally fusing plastic filaments, using ultraviolet light to cross-link polymer resins, depositing adhesive droplets to bind a powder, cutting and laminating sheets of paper, and shining a laser beam to fuse metal particles. Businesses already use 3-D printers to model products before producing them, a process referred to as rapid prototyping. Companies also rely on the technology to make objects with complex shapes, such as jewelry and medical implants. Research groups have even used 3-D printers to build structures out of cells with the goal of printing living organs…

More >> via How to Make Almost Anything | Foreign Affairs