Michael Lind writes: What is politics? The answer is not obvious. Most Americans on the left and the right either do not know or have forgotten what politics is. Conventional American progressives have pretty much abandoned any distinction between the political realm and society and culture in general, while conventional American conservatives treat politics as an exercise in doctrinal purity. Both sides, in different ways, undermine the idea of a limited public square in which different groups in society can agree on a few big things while agreeing to disagree about the rest — progressives, by including too much of society in the public square, and conservatives, by blocking compromise with too many ideological tests.
February 23, 2014: People paint on the KGB officers monument in Kiev, Ukraine. (AP Photo/Andrew Lubimov)
Politics is only possible in a society in which much, if not most, of social life is not politicized. In premodern communities in which every aspect of life was regulated by custom or religious law, there was no politics, in the modern sense. There was no public sphere because there was no private sphere. Tribal custom or divine law, as interpreted by tribal elders or religious authorities, governed every action, leaving no room for individual choice. There were power struggles, to be sure. But there was no political realm separate from the tribe or the religious congregation. And disagreement was heresy.
A February protest against a liquefied natural gas export facility in Maryland. Susan Yin/Chesapeake Climate Action Network
The separation of church and state — strictly speaking, the privatization of religious belief, beginning in early modern Europe and America — was the precondition for modern politics. The secularization of the population was not necessary, but the secularization of the public sphere was. You could no longer win political debates by appealing to a particular interpretation of divine Scripture.
Under the rules of Enlightenment liberalism, you had to make a case for the policy you preferred that was capable of persuading citizens who did not share your religious beliefs. A mere numerical majority was not enough. If the politicians express the will of a majority of voters, and the majority are told how to vote by clerics, then the democracy is really an indirect theocracy.
Unfortunately, as Horace observed, “You can drive out Nature with a pitchfork, but she keeps on coming back.” The same might be said of religion. While some forms of religion have been expelled from politics, new forms keep trying to creep in, to recreate something like the pre-Enlightenment world in which a single moral code governs all of society and disagreement is intolerable heresy.
Marxism can only be understood as a Christian, or Judeo-Christian, or Abrahamic spin-off — a faith militant, with its prophets, its holy scriptures, its providential theory of history, its evangelical universalism, its message of brotherhood and sisterhood transcending particular communities. Marxism was the fourth major Abrahamic religion. Nothing like Marxism could have evolved independently in traditional Confucian China or Hindu India, with their cyclical rather than progressive views of history.
“Other elements of religion, expelled from the public sphere, have crept back in via the left, thanks to environmentalism. As the great environmental scientist James Lovelock has pointed out, anthropogenic global warming is affected by the sources of energy for large-scale power generation and transportation. But refusing to fly on airplanes or reducing your personal ‘carbon footprint’ is a meaningless exercise, explicable only in the context of religion, with its traditions of ritual fasts and sacrifices in the service of personal moral purity.”
As the Marxist substitute for Abrahamic religion has faded away, its place on the political left is being taken by the new secular political religions of environmentalism and identity politics. Each of these is strongest in post-Protestant Northern Europe and North America, and weakest in historically Catholic and Orthodox Christian societies. A case can be made that militant environmentalism and militant identity politics are both by-products of the decomposition of Protestantism in the Anglophone nations and Germanic Europe.
On Friday, China’s University of Science and Technology unveiled a human-like robot comparable to Japanese models seen in the past. Not only does it have the face of a beautiful woman, it is also capable of interacting with people next to “her.”
Named “Jia Jia,” the life-sized robot has a face drawn from five attractive female students at the university. Equipped with basic functions such as making conversation, facial expressions, and gestures, it’s apparently more than Siri with a pretty face.
The university also added that the robot is “the first of its kind in China.”
(WASHINGTON) — The president of Planned Parenthood said her organization’s clinics never adjust the abortion procedure to better preserve fetal organs for medical research and that the organization’s charges cover only the cost of transmission to researchers.
Planned Parenthood has come under congressional scrutiny after the release of two stealthily recorded videos that showed officials discussing how they provide aborted fetal organs for research.
…Takei falsely claimed that Thomas wrote that slavery was somehow “dignified.” He most certainly did not.
Rather, Thomas argued that human dignity is intrinsic and equal among all human beings, and moreover, that our inherent worth can’t be taken away by government or anyone else.
That’s the prediction of Ray Kurzweil, director of engineering at Google, who spoke Wednesday at the Exponential Finance conference in New York.
“We’re going to gradually merge and enhance ourselves. In my view, that’s the nature of being human — we transcend our limitations.”
Kurzweil predicts that humans will become hybrids in the 2030s. That means our brains will be able to connect directly to the cloud, where there will be thousands of computers, and those computers will augment our existing intelligence. He said the brain will connect via nanobots — tiny robots made from DNA strands.
“As I wrote starting 20 years ago, technology is a double-edged sword. Fire kept us warm and cooked our food but also burnt down our houses. Every technology has had its promise and peril.”
“Our thinking then will be a hybrid of biological and non-biological thinking,” he said.
The bigger and more complex the cloud, the more advanced our thinking. By the time we get to the late 2030s or the early 2040s, Kurzweil believes our thinking will be predominantly non-biological.
Sarah Knapton writes: Wealthy humans are likely to become cyborgs within 200 years as they gradually merge with technology like computers and smartphones, a historian has claimed.
“I think it is likely in the next 200 years or so homo sapiens will upgrade themselves into some idea of a divine being, either through biological manipulation or genetic engineering, or by the creation of cyborgs, part organic, part non-organic.”
Yuval Noah Harari, a professor at the Hebrew University of Jerusalem, said the amalgamation of man and machine will be the ‘biggest evolution in biology’ since the emergence of life four billion years ago.
“It will be the greatest evolution in biology since the appearance of life. Nothing really has changed in four billion years biologically speaking. But we will be as different from today’s humans as chimps are now from us.”
Prof Harari, who has written a landmark book charting the history of humanity, said mankind would evolve to become like gods with power over death, and be as different from humans of today as we are from chimpanzees.
Yuval Noah Harari holds a homo sapiens skull
“What enables humans to cooperate flexibly, and exist in large societies is our imagination. With religion it’s easy to understand. You can’t convince a chimpanzee to give you a banana with the promise it will get 20 more bananas in chimpanzee heaven. It won’t do it. But humans will.”
He argued that humans as a race were driven by dissatisfaction and that we would not be able to resist the temptation to ‘upgrade’ ourselves, whether by genetic engineering or technology.
“We are programmed to be dissatisfied,” said Prof Harari. “Even when humans gain pleasure and achievements it is not enough. They want more and more.”
“Most legal systems are based on human rights but it is all in our imagination. Money is the most successful story ever. You have the master storytellers, the bankers, the finance ministers telling you that money is worth something. It isn’t. Try giving money to a chimp. It’s worthless.”
“God is extremely important because without religious myth you can’t create society. Religion is the most important invention of humans.”
— Yuval Noah Harari
However, he warned that the ‘cyborg’ technology would be restricted to the wealthiest in society, widening the gap between rich and poor. In the future the rich may be able to live forever while the poor would die out.
PARIS – A French company has come up with a novel way to keep people close to their departed loved ones: bottling their unique scent as a perfume.
The idea came to Katia Apalategui seven years ago as she struggled to come to terms with her father’s death, missing everything down to the way he smelled.
She mentioned this in passing to her mother, who admitted that, like many who have lost a loved one, she was loath to wash the pillowcase her husband slept on in a bid to keep a remnant of the precious scent of the man she loved.
This inspired the 52-year-old insurance saleswoman to think up ways to capture and preserve a person’s individual scent so people in her position would never have to long for a whiff of their loved one again.
Scientists have long known that smells are linked to the part of the brain that regulates emotion and memory and have the ability to propel you back to a specific time, place or person.
The retail industry often takes advantage of this psychological pull, using various odors in stores, cars and restaurants to lure customers.
After years of knocking on doors trying to develop her idea, Apalategui was put in touch with the university in the northwestern city of Le Havre, which has developed a technique to reproduce the human smell.
“We take the person’s clothing and extract the odor — which represents about a hundred molecules — and we reconstruct it in the form of a perfume in four days,” explained the university’s Geraldine Savary, without giving away the secrets of the process.
The powerful link between smell and memory means the product offers “olfactory comfort,” Apalategui claims, on a par with photos, videos and other memories of the deceased.
Her son, who is currently in business school, plans to launch their start-up by September with the help of a chemist.
The human self has five components. Machines now have three of them. How far away is artificial consciousness – and what does it tell us about ourselves?
Jon Swartz reports: The chant reverberated through the air near the entrance to the SXSW tech and entertainment festival here.
About two dozen protesters, led by a computer engineer, echoed that sentiment in their movement against artificial intelligence.
“Machines have already taken over. If you drive a car, much of what it does is technology-driven.”
— Ben Medlock, co-founder of mobile-communications company SwiftKey
“This is about morality in computing,” said Adam Mason, 23, who organized the protest.
Signs at the scene reflected the mood. “Stop the Robots.” “Humans are the future.”
The mini-rally drew a crowd of gawkers, drawn by the sight of a rare protest here.
The dangers of more developed artificial intelligence, which is still in its early stages, have created some debate in the scientific community. Tesla founder Elon Musk donated $10 million to the Future of Life Institute because of his fears.
Stephen Hawking and others have added to the proverbial wave of AI paranoia with dire predictions of its risk to humanity.
“I am amazed at the movement. It has changed life in ways as dramatic as the Industrial Revolution.”
— Stephen Wolfram, a British computer scientist, entrepreneur and former physicist known for his contributions to theoretical physics
The topic is an undercurrent in Steve Jobs: The Man in the Machine, a documentary about the fabled Apple co-founder. The paradoxical dynamic between people and tech products is a “double-edged sword,” said its Academy Award-winning director, Alex Gibney. “There are so many benefits — and yet we can descend into our smartphone.”
As nonplussed witnesses wandered by, another chant went up. “A-I, say goodbye.”
Several of the students were from the University of Texas, which is known for a strong engineering program. But they are deeply concerned about the implications of a society where technology runs too deep.
Are We Smart Enough to Control Artificial Intelligence?
A true AI might ruin the world—but that assumes it’s possible at all
Paul Ford writes: Years ago I had coffee with a friend who ran a startup. He had just turned 40. His father was ill, his back was sore, and he found himself overwhelmed by life. “Don’t laugh at me,” he said, “but I was counting on the singularity.”
My friend worked in technology; he’d seen the changes that faster microprocessors and networks had wrought. It wasn’t that much of a step for him to believe that before he was beset by middle age, the intelligence of machines would exceed that of humans—a moment that futurists call the singularity. A benevolent superintelligence might analyze the human genetic code at great speed and unlock the secret to eternal youth. At the very least, it might know how to fix your back.
But what if it wasn’t so benevolent? Nick Bostrom, a philosopher who directs the Future of Humanity Institute at the University of Oxford, describes the following scenario in his book Superintelligence, which has prompted a great deal of debate about the future of artificial intelligence. Imagine a machine that we might call a “paper-clip maximizer”—that is, a machine programmed to make as many paper clips as possible. Now imagine that this machine somehow became incredibly intelligent. Given its goals, it might then decide to create new, more efficient paper-clip-manufacturing machines—until, King Midas style, it had converted essentially everything to paper clips.
Agility: rapid advances in technology, including machine vision, tactile sensors and autonomous navigation, make today’s robots, such as this model from DLR, increasingly useful
No worries, you might say: you could just program it to make exactly a million paper clips and halt. But what if it makes the paper clips and then decides to check its work? Has it counted correctly? It needs to become smarter to be sure. The superintelligent machine manufactures some as-yet-uninvented raw-computing material (call it “computronium”) and uses that to check each doubt. But each new doubt yields further digital doubts, and so on, until the entire earth is converted to computronium. Except for the million paper clips.
Bostrom does not believe that the paper-clip maximizer will come to be, exactly; it’s a thought experiment, one designed to show how even careful system design can fail to restrain extreme machine intelligence. But he does believe that superintelligence could emerge, and while it could be great, he thinks it could also decide it doesn’t need humans around. Or do any number of other things that destroy the world. The title of chapter 8 is: “Is the default outcome doom?”
If this sounds absurd to you, you’re not alone. Critics such as the robotics pioneer Rodney Brooks say that people who fear a runaway AI misunderstand what computers are doing when we say they’re thinking or getting smart. From this perspective, the putative superintelligence Bostrom describes is far in the future and perhaps impossible.
Yet a lot of smart, thoughtful people agree with Bostrom and are worried now. Why?
The question “Can a machine think?” has shadowed computer science from its beginnings. Alan Turing proposed in 1950 that a machine could be taught like a child; John McCarthy, inventor of the programming language LISP, coined the term “artificial intelligence” in 1955. As AI researchers in the 1960s and 1970s began to use computers to recognize images, translate between languages, and understand instructions in normal language and not just code, the idea that computers would eventually develop the ability to speak and think—and thus to do evil—bubbled into mainstream culture. Even beyond the oft-referenced HAL from 2001: A Space Odyssey, the 1970 movie Colossus: The Forbin Project featured a large blinking mainframe computer that brings the world to the brink of nuclear destruction; a similar theme was explored 13 years later in War Games. The androids of 1973’s Westworld went crazy and started killing.
“Extreme AI predictions are ‘comparable to seeing more efficient internal combustion engines… and jumping to the conclusion that the warp drives are just around the corner,’ Rodney Brooks writes.”
When AI research fell far short of its lofty goals, funding dried up to a trickle, beginning long “AI winters.” Even so, the torch of the intelligent machine was carried forth in the 1980s and ’90s by sci-fi authors like Vernor Vinge, who popularized the concept of the singularity; researchers like the roboticist Hans Moravec, an expert in computer vision; and the engineer/entrepreneur Ray Kurzweil, author of the 1999 book The Age of Spiritual Machines. Whereas Turing had posited a humanlike intelligence, Vinge, Moravec, and Kurzweil were thinking bigger: when a computer became capable of independently devising ways to achieve goals, it would very likely be capable of introspection—and thus able to modify its software and make itself more intelligent. In short order, such a computer would be able to design its own hardware.
Frances Martel writes: The past few years have seen a surging interest in the international scientific movement to “help end human death.” It fears no mechanics and abhors the imperfections of the human body. Transhumanism is snowballing into an international movement aggressively defying human nature and embracing machines.
The current wave of debate surrounding the concept began with The Transhumanist Wager, a novel about the possibilities of transhumanism, by Zoltan Istvan, an author who has openly admitted to believing in the possibilities of transcending thousands of concepts about the sanctity of the human body.
In a piece for the Huffington Post preceding the release of his novel this month, Istvan writes that transhumanism springs from “discontent about the humdrum status quo of human life and our frail, terminal human bodies,” and strives for immortality through the use of science at its most ambitious. At its least ambitious, transhumanists “want to be better, smarter, stronger” by replacing imperfect human parts with perfect machines.
Of course, the idea of using the power of the human mind to piece together better functioning human beings raises a number of metaphysical questions about human nature and the essence of what it means to be a person. Where is the line at which a person has been so thoroughly altered that they no longer wield the same identity?
John Biggs writes: Welcome to our continuing series featuring videos of robots that will, when they become autonomous, hunt us down and force us to work in the graphene factories of Mars. Below we see Wild Cat, a fully untethered remote-control quadrupedal robot made by Boston Dynamics, creators of the famous Big Dog. This quadruped can run up to 16 miles an hour and features a scary-sounding internal gas engine that can power it across rough terrain. Wild Cat was funded by DARPA’s M3 program, aimed at introducing flexible, usable robots into natural environments, AKA introducing robotic pack animals for ground troops and building flocking, heavily armed robots that can wipe out a battlefield without putting humans in jeopardy.
Next up we have ATLAS, another Boston Dynamics bot that can walk upright on rocks. Sadly, ATLAS is tethered to a power source, but he has perfect balance and can survive side and front hits from heavy weights – a plus if you’re built to be the shock troops of a new droid army. ATLAS can even balance on one foot while being smacked with wrecking balls, something the average human can’t do without suffering internal damage. I can’t wait for him to be able to throw cinder blocks!
What if everything we’ve come to think of as American is predicated on a freak coincidence of economic history? And what if that coincidence has run its course?
For all of measurable human history up until the year 1750, nothing happened that mattered. This isn’t to say history was stagnant, or that life was only grim and blank, but the well-being of average people did not perceptibly improve. All of the wars, literature, love affairs, and religious schisms, the schemes for empire-making and ocean-crossing and simple profit and freedom, the entire human theater of ambition and deceit and redemption took place on a scale too small to register, too minor to much improve the lot of ordinary human beings. In England before the middle of the eighteenth century, where industrialization first began, the pace of progress was so slow that it took 350 years for a family to double its standard of living. In Sweden, during a similar 200-year period, there was essentially no improvement at all. By the middle of the eighteenth century, the state of technology and the luxury and quality of life afforded the average individual were little better than they had been two millennia earlier, in ancient Rome.
Then two things happened that did matter, and they were so grand that they dwarfed everything that had come before and encompassed most everything that has come since: the first industrial revolution, beginning in 1750 or so in the north of England, and the second industrial revolution, beginning around 1870 and created mostly in this country. That the second industrial revolution happened just as the first had begun to dissipate was an incredible stroke of good luck. It meant that during the whole modern era from 1750 onward—which contains, not coincidentally, the full life span of the United States—human well-being accelerated at a rate that could barely have been contemplated before. Instead of permanent stagnation, growth became so rapid and so seemingly automatic that by the fifties and sixties the average American would roughly double his or her parents’ standard of living. In the space of a single generation, for most everybody, life was getting twice as good.
At some point in the late sixties or early seventies, this great acceleration began to taper off. The shift was modest at first, and it was concealed in the hectic up-and-down of yearly data. But if you examine the growth data since the early seventies, and if you are mathematically astute enough to fit a curve to it, you can see a clear trend: The rate at which life is improving here, on the frontier of human well-being, has slowed.
If you are like most economists—until a couple of years ago, it was virtually all economists—you are not greatly troubled by this story, which is, with some variation, the consensus long-arc view of economic history. The machinery of innovation, after all, is now more organized and sophisticated than it has ever been, human intelligence is more efficiently marshaled by spreading education and expanding global connectedness, and the examples of the Internet, and perhaps artificial intelligence, suggest that progress continues to be rapid.
But if you are prone to a more radical sense of what is possible, you might begin to follow a different line of thought. If nothing like the first and second industrial revolutions had ever happened before, what is to say that anything similar will happen again? Then, perhaps, the global economic slump that we have endured since 2008 might not merely be the consequence of the burst housing bubble, or financial entanglement and overreach, or the coming generational trauma of the retiring baby boomers, but instead a glimpse at a far broader change, the slow expiration of a historically singular event. Perhaps our fitful post-crisis recovery is no aberration. This line of thinking would make you an acolyte of a 72-year-old economist at Northwestern named Robert Gordon, and you would probably share his view that it would be crazy to expect something on the scale of the second industrial revolution to ever take place again.
“Some things,” Gordon says, and he says it often enough that it has become both a battle cry and a mantra, “can happen only once.”