For the Times Literary Supplement, Robert Irwin writes: The title Reading Darwin in Arabic notwithstanding, most of the men discussed in this book did not read Charles Darwin in Arabic. Instead they read Jean-Baptiste Lamarck, Ernst Haeckel, Herbert Spencer, Thomas Huxley, Gustave Le Bon, Henri Bergson and George Bernard Shaw in European or Arabic versions. They also read popularizing accounts of various aspects of Darwinism in the scientific and literary journal al-Muqtataf (“The Digest”, 1876–1952). The notion of evolution that Arab readers took away from their reading was often heavily inflected by Lamarckism and by the social Darwinism of Spencer. Darwin’s The Origin of Species by Means of Natural Selection was published in 1859, but Isma‘il Mazhar’s translation of the first five chapters of Darwin’s book into Arabic only appeared in 1918.
For a long time, the reception of Darwinism was bedevilled by the need to find either neologisms or new twists to old words. As Marwa Elshakry points out, there was at first no specific word in Arabic for “species”, distinct from “variety” or “kind”. “Natural selection” might appear in Arabic with the sense “nature’s elect”. When Hasan Husayn published a translation of Haeckel, he found no word for evolution and so he invented one. Tawra means to advance or develop further. Extrapolating from this verbal root, he created al-tatawwur, to mean “evolution”. Darwiniya entered the Arabic language. Even ‘ilm, the word for “knowledge”, acquired the new meaning “science”. With the rise of scientific materialism came agnosticism, al-la’adriya, a compound word, literally “the-not-knowing”.
[Robert Irwin is the author of “Visions of the Jinn: Illustrators of the Arabian Nights” (Studies in the Arcadian Library) and “Memoirs of a Dervish: Sufis, Mystics and the Sixties”, both available at Amazon]
Theories about evolution had circulated widely in Britain and France from the late eighteenth century onwards. In the nineteenth century the work of Georges Cuvier on the reconstruction of creatures from fossil remains and of Charles Lyell on the geological evidence for the great age of the world and its slow transformation had prepared the ground for consideration of Darwinism.
If self-replicating machines are the next stage of human evolution, should we start worrying?
George Zarkadakis writes: When René Descartes went to work as tutor to the young Queen Christina of Sweden, his formidable student allegedly asked him what could be said of the human body. Descartes answered that it could be regarded as a machine; whereupon the queen pointed to a clock on the wall, ordering him to “see to it that it produces offspring”. A joke, perhaps, in the 17th century, but now many computer scientists think the age of the self-replicating, evolving machine may be upon us.
It is an idea that has been around for a while – in fiction. Stanislaw Lem in his 1964 novel The Invincible told the story of a spaceship landing on a distant planet to find a mechanical life form, the product of millions of years of mechanical evolution. It was an idea that would resurface many decades later in the Matrix trilogy of movies, as well as in software labs.
In fact, self-replicating machines have a much longer, and more nuanced, past. They were proposed indirectly as early as 1802, when William Paley formulated his teleological argument and imagined a machine capable of producing other machines.
This is a dense, maddening, challenging essay, and I don’t agree with all of it. But the questions it raises are hard to ignore. Relevant stuff that merits further examination…
David Gelernter writes: The huge cultural authority science has acquired over the past century imposes large duties on every scientist. Scientists have acquired the power to impress and intimidate every time they open their mouths, and it is their responsibility to keep this power in mind no matter what they say or do. Too many have forgotten their obligation to approach with due respect the scholarly, artistic, religious, and humanistic work that has always been mankind’s main spiritual support. Scientists are (on average) no more likely to understand this work than the man in the street is to understand quantum physics. But science used to know enough to approach cautiously and admire from outside, and to build its own work on a deep belief in human dignity. No longer.
Today science and the “philosophy of mind”—its thoughtful assistant, which is sometimes smarter than the boss—are threatening Western culture with the exact opposite of humanism. Call it roboticism. Man is the measure of all things, Protagoras said. Today we add, and computers are the measure of all men.
Many scientists are proud of having booted man off his throne at the center of the universe and reduced him to just one more creature—an especially annoying one—in the great intergalactic zoo. That is their right. But when scientists use this locker-room braggadocio to belittle the human viewpoint, to belittle human life and values and virtues and civilization and moral, spiritual, and religious discoveries, which is all we human beings possess or ever will, they have outrun their own empiricism. They are abusing their cultural standing. Science has become an international bully.
Nowhere is its bullying more outrageous than in its assault on the phenomenon known as subjectivity.
Your subjective, conscious experience is just as real as the tree outside your window or the photons striking your retina—even though you alone feel it. Many philosophers and scientists today tend to dismiss the subjective and focus wholly on an objective, third-person reality—a reality that would be just the same if men had no minds. They treat subjective reality as a footnote, or they ignore it, or they announce that, actually, it doesn’t even exist.
If scientists were rat-catchers, it wouldn’t matter. But right now, their views are threatening all sorts of intellectual and spiritual fields. The present problem originated at the intersection of artificial intelligence and philosophy of mind—in the question of what consciousness and mental states are all about, how they work, and what it would mean for a robot to have them. It has roots that stretch back to the behaviorism of the early 20th century, but the advent of computing lit the fuse of an intellectual crisis that blasted off in the 1960s and has been gaining altitude ever since.
The modern “mind fields” encompass artificial intelligence, cognitive psychology, and philosophy of mind. Researchers in these fields are profoundly split, and the chaos was on display in the ugliness occasioned by the publication of Thomas Nagel’s Mind & Cosmos in 2012. Nagel is an eminent philosopher and professor at NYU. In Mind & Cosmos, he shows with terse, meticulous thoroughness why mainstream thought on the workings of the mind is intellectually bankrupt. He explains why Darwinian evolution is insufficient to explain the emergence of consciousness—the capacity to feel or experience the world. He then offers his own ideas on consciousness, which are speculative, incomplete, tentative, and provocative—in the tradition of science and philosophy.