The Closing of the Scientific Mind

Posted: January 4, 2014
This is a dense, maddening, challenging essay, and I don’t agree with all of it. But the questions it raises are hard to ignore. Relevant stuff; it merits further examination…
David Gelernter writes: The huge cultural authority science has acquired over the past century imposes large duties on every scientist. Scientists have acquired the power to impress and intimidate every time they open their mouths, and it is their responsibility to keep this power in mind no matter what they say or do. Too many have forgotten their obligation to approach with due respect the scholarly, artistic, religious, humanistic work that has always been mankind’s main spiritual support. Scientists are (on average) no more likely to understand this work than the man in the street is to understand quantum physics. But science used to know enough to approach cautiously and admire from outside, and to build its own work on a deep belief in human dignity. No longer.
Today science and the “philosophy of mind”—its thoughtful assistant, which is sometimes smarter than the boss—are threatening Western culture with the exact opposite of humanism. Call it roboticism. Man is the measure of all things, Protagoras said. Today we add, and computers are the measure of all men.
Many scientists are proud of having booted man off his throne at the center of the universe and reduced him to just one more creature—an especially annoying one—in the great intergalactic zoo. That is their right. But when scientists use this locker-room braggadocio to belittle the human viewpoint, to belittle human life and values and virtues and civilization and moral, spiritual, and religious discoveries, which is all we human beings possess or ever will, they have outrun their own empiricism. They are abusing their cultural standing. Science has become an international bully.
Nowhere is its bullying more outrageous than in its assault on the phenomenon known as subjectivity.
Your subjective, conscious experience is just as real as the tree outside your window or the photons striking your retina—even though you alone feel it. Many philosophers and scientists today tend to dismiss the subjective and focus wholly on an objective, third-person reality—a reality that would be just the same if men had no minds. They treat subjective reality as a footnote, or they ignore it, or they announce that, actually, it doesn’t even exist.
If scientists were rat-catchers, it wouldn’t matter. But right now, their views are threatening all sorts of intellectual and spiritual fields. The present problem originated at the intersection of artificial intelligence and philosophy of mind—in the question of what consciousness and mental states are all about, how they work, and what it would mean for a robot to have them. It has roots that stretch back to the behaviorism of the early 20th century, but the advent of computing lit the fuse of an intellectual crisis that blasted off in the 1960s and has been gaining altitude ever since.
The modern “mind fields” encompass artificial intelligence, cognitive psychology, and philosophy of mind. Researchers in these fields are profoundly split, and the chaos was on display in the ugliness occasioned by the publication of Thomas Nagel’s Mind & Cosmos in 2012. Nagel is an eminent philosopher and professor at NYU. In Mind & Cosmos, he shows with terse, meticulous thoroughness why mainstream thought on the workings of the mind is intellectually bankrupt. He explains why Darwinian evolution is insufficient to explain the emergence of consciousness—the capacity to feel or experience the world. He then offers his own ideas on consciousness, which are speculative, incomplete, tentative, and provocative—in the tradition of science and philosophy.
Nagel was immediately set on and (symbolically) beaten to death by all the leading punks, bullies, and hangers-on of the philosophical underworld. Attacking Darwin is the sin against the Holy Ghost that pious scientists are taught never to forgive. Even worse, Nagel is an atheist unwilling to express sufficient hatred of religion to satisfy other atheists. There is nothing religious about Nagel’s speculations; he believes that science has not come far enough to explain consciousness and that it must press on. He believes that Darwin is not sufficient.
The intelligentsia was so furious that it formed a lynch mob. In May 2013, the Chronicle of Higher Education ran a piece called “Where Thomas Nagel Went Wrong.” One paragraph was notable:
Whatever the validity of [Nagel’s] stance, its timing was certainly bad. The war between New Atheists and believers has become savage, with Richard Dawkins writing sentences like, “I have described atonement, the central doctrine of Christianity, as vicious, sadomasochistic, and repellent. We should also dismiss it as barking mad….” In that climate, saying anything nice at all about religion is a tactical error.
It’s the cowardice of the Chronicle’s statement that is alarming—as if the only conceivable response to a mass attack by killer hyenas were to run away. Nagel was assailed; almost everyone else ran.
The Kurzweil Cult.
The voice most strongly associated with what I’ve termed roboticism is that of Ray Kurzweil, a leading technologist and inventor. The Kurzweil Cult teaches that, given the strong and ever-increasing pace of technological progress and change, a fateful crossover point is approaching. He calls this point the “singularity.” After the year 2045 (mark your calendars!), machine intelligence will dominate human intelligence to the extent that men will no longer understand machines any more than potato chips understand mathematical topology. Men will already have begun an orgy of machinification—implanting chips in their bodies and brains, and fine-tuning their own and their children’s genetic material. Kurzweil believes in “transhumanism,” the merging of men and machines. He believes human immortality is just around the corner. He works for Google.
Whether he knows it or not, Kurzweil believes in and longs for the death of mankind. Because if things work out as he predicts, there will still be life on Earth, but no human life. To predict that a man who lives forever and is built mainly of semiconductors is still a man is like predicting that a man with stainless steel skin, a small nuclear reactor for a stomach, and an IQ of 10,000 would still be a man. In fact we have no idea what he would be.
Each change in him might be defended as an improvement, but man as we know him is the top growth on a tall tree in a large forest: His kinship with his parents and ancestors and mankind at large, the experience of seeing his own reflection in human history and his fellow man—those things are the crucial part of who he is. If you make him grossly different, he is lost, with no reflection anywhere he looks. If you make lots of people grossly different, they are all lost together—cut adrift from their forebears, from human history and human experience. Of course we do know that whatever these creatures are, untransformed men will be unable to keep up with them. Their superhuman intelligence and strength will extinguish mankind as we know it, or reduce men to slaves or dogs. To wish for such a development is to play dice with the universe.
Luckily for mankind, there is (of course) no reason to believe that brilliant progress in any field will continue, much less accelerate; imagine predicting the state of space exploration today based on the events of 1960–1972. But the real flaw in the Kurzweil Cult’s sickening predictions is that machines do just what we tell them to. They act as they are built to act. We might in principle, in the future, build an armor-plated robot with a stratospheric IQ that refuses on principle to pay attention to human beings. Or an average dog lover might buy a German shepherd and patiently train it to rip him to shreds. Both deeds are conceivable, but in each case, sane persons are apt to intervene before the plan reaches completion.
Subjectivity is your private experience of the world: your sensations; your mental life and inner landscape; your experiences of sweet and bitter, blue and gold, soft and hard; your beliefs, plans, pains, hopes, fears, theories, imagined vacation trips and gardens and girlfriends and Ferraris, your sense of right and wrong, good and evil. This is your subjective world. It is just as real as the objective physical world.
In fact, the idea of objective reality is a masterpiece of Western thought—an idea we associate with Galileo and Descartes and other scientific revolutionaries of the 17th century. The only view of the world we can ever have is subjective, from inside our own heads. That we can agree nonetheless on the observable, exactly measurable, and predictable characteristics of objective reality is a remarkable fact. I can’t know that the color I call blue looks to me the same way it looks to you. And yet we both use the word blue to describe this color, and common sense suggests that your experience of blue is probably a lot like mine. Our ability to transcend the subjective and accept the existence of objective reality is the cornerstone of everything modern science has accomplished.
But that is not enough for the philosophers of mind. Many wish to banish subjectivity altogether. “The history of philosophy of mind over the past one hundred years,” the eminent philosopher John Searle has written, “has been in large part an attempt to get rid of the mental”—i.e., the subjective—“by showing that no mental phenomena exist over and above physical phenomena.”
Why bother? Because to present-day philosophers, Searle writes, “the subjectivist ontology of the mental seems intolerable.” That is, your states of mind (your desire for adventure, your fear of icebergs, the ship you imagine, the girl you recall) exist only subjectively, within your mind, and they can be examined and evaluated by you alone. They do not exist objectively. They are strictly internal to your own mind. And yet they do exist. This is intolerable! How in this modern, scientific world can we be forced to accept the existence of things that can’t be weighed or measured, tracked or photographed—that are strictly private, that can be observed by exactly one person each? Ridiculous! Or at least, damned annoying.
And yet your mind is, was, and will always be a room with a view. Your mental states exist inside this room you can never leave and no one else can ever enter. The world you perceive through the window of mind (where you can never go—where no one can ever go) is the objective world. Both worlds, inside and outside, are real.
The ever astonishing Rainer Maria Rilke captured this truth vividly in the opening lines of his eighth Duino Elegy, as translated by Stephen Mitchell: “With all its eyes the natural world looks out/into the Open. Only our eyes are turned backward…. We know what is really out there only from/the animal’s gaze.” We can never forget or disregard the room we are locked into forever.
The Brain as Computer.
The dominant, mainstream view of mind nowadays among philosophers and many scientists is computationalism, also known as cognitivism. This view is inspired by the idea that minds are to brains as software is to computers. “Think of the brain,” writes Daniel Dennett of Tufts University in his influential 1991 book Consciousness Explained, “as a computer.” In some ways this is an apt analogy. In others, it is crazy. At any rate, it is one of the intellectual milestones of modern times.
How did this “master analogy” become so influential?
Consider the mind. The mind has its own structure and laws: It has desires, emotions, imagination; it is conscious. But no mind can exist apart from the brain that “embodies” it. Yet the brain’s structure is different from the mind’s. The brain is a dense tangle of neurons and other cells in which neurons send electrical signals to other neurons downstream via a wash of neurotransmitter chemicals, like beach bums splashing each other with bucketfuls of water.
Two wholly different structures, one embodied by the other—this is also a precise description of computer software as it relates to computer hardware. Software has its own structure and laws (software being what any “program” or “application” is made of—any email program, web search engine, photo album, iPhone app, video game, anything at all). Software consists of lists of instructions that are given to the hardware—to a digital computer. Each instruction specifies one picayune operation on the numbers stored inside the computer. For example: Add two numbers. Move a number from one place to another. Look at some number and do this if it’s 0.
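To make the essay's point concrete, here is a minimal sketch of such a machine in Python. The opcode names (MOV, ADD, JZ, JMP, HALT) and the memory layout are illustrative inventions, not any real instruction set; the point is only that a handful of picayune operations, once composed, yield a larger operation. Here, multiplication emerges from nothing but repeated addition and a branch-on-zero.

```python
# A toy machine with the instruction kinds the essay names:
# add two numbers, move a number, and look at a number and act if it's 0.
# All opcode names are hypothetical, invented for this sketch.

def run(program, memory):
    """Execute a list of (opcode, *operands) tuples against `memory`,
    a dict of numbered cells. Returns the final memory state."""
    pc = 0  # program counter: which instruction runs next
    while True:
        op, *args = program[pc]
        if op == "HALT":
            return memory
        elif op == "MOV":            # move a number from one place to another
            src, dst = args
            memory[dst] = memory[src]
        elif op == "ADD":            # add two numbers, store in the first
            dst, src = args
            memory[dst] = memory[dst] + memory[src]
        elif op == "JZ":             # look at a number; jump if it's 0
            cell, target = args
            if memory[cell] == 0:
                pc = target
                continue
        elif op == "JMP":            # unconditional jump
            (pc,) = args
            continue
        pc += 1

# Multiplication built from these tiny steps: repeatedly add cell 0
# into cell 2, counting down the counter in cell 1 (cell 3 holds -1).
multiply = [
    ("JZ", 1, 4),    # 0: if the counter is 0, we're done
    ("ADD", 2, 0),   # 1: accumulator += multiplicand
    ("ADD", 1, 3),   # 2: counter += -1
    ("JMP", 0),      # 3: loop
    ("HALT",),       # 4: stop
]

mem = run(multiply, {0: 6, 1: 7, 2: 0, 3: -1})
print(mem[2])  # prints 42
```

Nothing in the program “knows” it is multiplying; the larger operation emerges from the composition of tiny instructions, which is exactly the layering described above.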
Large lists of tiny instructions become complex mathematical operations, and large bunches of those become even more sophisticated operations. And pretty soon your application is sending spacemen hurtling across your screen firing lasers at your avatar as you pelt the aliens with tennis balls and chat with your friends in Idaho or Algiers while sending notes to your girlfriend and keeping an eye on the comic-book news. You are swimming happily within the rich coral reef of your software “environment,” and the tiny instructions out of which the whole thing is built don’t matter to you at all. You don’t know them, can’t see them, are wholly unaware of them.
The gorgeously varied reefs called software are a topic of their own—just as the mind is. Software and computers are two different topics, just as the psychological or phenomenal study of mind is different from brain physiology. Even so, software cannot exist without digital computers, just as minds cannot exist without brains.
That is why today’s mainstream view of mind is based on exactly this analogy: Mind is to brain as software is to computer. The mind is the brain’s software—this is the core idea of computationalism.
Of course computationalists don’t all think alike. But they all believe in some version of this guiding analogy. Drew McDermott, my colleague in the computer science department at Yale University, is one of the most brilliant (and in some ways, the most heterodox) of computationalists. “The biological variety of computers differs in many ways from the kinds of computers engineers build,” he writes, “but the differences are superficial.” Note here that by biological computer, McDermott means brain.
McDermott believes that “computers can have minds”—minds built out of software, if the software is correctly conceived. In fact, McDermott writes, “as far as science is concerned, people are just a strange kind of animal that arrived fairly late on the scene….[My] purpose…is to increase the plausibility of the hypothesis that we are machines and to elaborate some of its consequences.”
John Heil of Washington University describes cognitivism this way: “Think about states of mind as something like strings of symbols, sentences.” In other words: a state of mind is like a list of numbers in a computer. And so, he writes, “mental operations are taken to be ‘computations over symbols.’” Thus, in the cognitivist view, when you decide, plan, or believe, you are computing, in the sense that software computes.
But what about consciousness? If the brain is merely a mechanism for thinking or problem-solving, how does it create consciousness?
Most computationalists default to the Origins of Gravy theory set forth by Walter Matthau in the film of Neil Simon’s The Odd Couple. Challenged to account for the emergence of gravy, Matthau explains that, when you cook a roast, “it comes.” That is basically how consciousness arises too, according to computationalists. It just comes.
In Consciousness Explained, Dennett lays out the essence of consciousness as follows: “The concepts of computer science provide the crutches of imagination we need to stumble across the terra incognita between our phenomenology as we know it by ‘introspection’ and our brains as science reveals them to us.” (Note the chuckle-quotes around introspection; for Dennett, introspection is an illusion.) Specifically: “Human consciousness can best be understood as the operation of a ‘von Neumannesque’ virtual machine.” Meaning, it is a software application (a virtual machine) designed to run on any ordinary computer. (Hence von Neumannesque: the great mathematician John von Neumann is associated with the invention of the digital computer as we know it.)
Thus consciousness is the result of running the right sort of program on an organic computer also called the human brain. If you were able to download the right app on your phone or laptop, it would be conscious, too. It wouldn’t merely talk or behave as if it were conscious. It would be conscious, with the same sort of rich mental landscape inside its head (or its processor or maybe hard drive) as you have inside yours: the anxious plans, the fragile fragrant memories, the ability to imagine a baseball game or the crunch of dry leaves underfoot. All that just by virtue of running the right program. That program would be complex and sophisticated, far more clever than anything we have today. But no different fundamentally, say the computationalists, from the latest video game.
But the master analogy—between mind and software, brain and computer—is fatally flawed. It falls apart once you mull these simple facts:
1. You can transfer a program easily from one computer to another, but you can’t transfer a mind, ever, from one brain to another.
2. You can run an endless series of different programs on any one computer, but only one “program” runs, or ever can run, on any one human brain.
3. Software is transparent. I can read off the precise state of the entire program at any time. Minds are opaque—there is no way I can know what you are thinking unless you tell me.
4. Computers can be erased; minds cannot.
5. Computers can be made to operate precisely as we choose; minds cannot.
There are more. Come up with them yourself. It’s easy.
There is a still deeper problem with computationalism. Mainstream computationalists treat the mind as if its purpose were merely to act and not to be. But the mind is for doing and being. Computers are machines, and idle machines are wasted. That is not true of your mind. Your mind might be wholly quiet, doing (“computing”) nothing; yet you might be feeling miserable or exalted, or awestruck by the beauty of the object in front of you, or inspired or resolute—and such moments might be the center of your mental life. Or you might merely be conscious. “I cannot see what flowers are at my feet,/Nor what soft incense hangs upon the boughs….Darkling I listen….” That was drafted by the computer known as John Keats.
Emotions in particular are not actions but states of being. And emotions are central to your mental life and can shape your behavior by allowing you to compare alternatives to determine which feels best. Jane Austen, Persuasion: “He walked to the window to recollect himself, and feel how he ought to behave.” Henry James, The Ambassadors: The heroine tells the hero, “no one feels so much as you. No—not any one.” She means that no one is so precise, penetrating, and sympathetic an observer.
Computationalists cannot account for emotion. It fits as badly as consciousness into the mind-as-software scheme.
The Body and the Mind.
And there is (at least) one more area of special vulnerability in the computationalist worldview. Computationalists believe that the mind is embodied by the brain, and the brain is simply an organic computer. But in fact, the mind is embodied not by the brain but by the brain and the body, intimately interleaved. Emotions are mental states one feels physically; thus they are states of mind and body simultaneously. (Angry, happy, awestruck, relieved—these are physical as well as mental states.) Sensations are simultaneously mental and physical phenomena. Wordsworth writes about his memories of the River Wye: “I have owed to them/In hours of weariness, sensations sweet,/Felt in the blood, and felt along the heart/And passing even into my purer mind…”
About the Author
David Gelernter is a professor of computer science at Yale. His book Subjectivism: The Mind from Inside will be published by Norton later this year.