Will Machines Ever Become Human?

Posted: January 22, 2017
David Gelernter writes: No. Digital computers won’t; and in the world as we know it, they are the only candidate machines.
What does “human” mean? Humans are conscious and intelligent — although it’s curiously easy to imagine one attribute without the other. An intelligent but unconscious being is a “zombie” in science fiction — and to philosophers and technologists too. We can also imagine a conscious non-intelligence. It would experience its environment as a flow of unidentified, meaningless sensations engendering no mental activity beyond mere passive awareness.
Some day, digital computers will almost certainly be intelligent. But they will never be conscious. One day we are likely to face a world full of real zombies and the moral and philosophical problems they pose. I’ll return to these hard questions.
The possibility of intelligent computers has obsessed mankind since Alan Turing first raised it formally in 1950. Turing was vague about consciousness, which he thought unnecessary to machine intelligence. Many others have been vague since. But artificial consciousness is surely as fascinating as artificial intelligence.
Digital computers won’t ever be conscious; they are made of the wrong stuff (as the philosopher John Searle first argued in 1980). A scientist, Searle noted, naturally assumes that consciousness results from the chemical and physical structure of humans and animals — as photosynthesis results from the chemistry of plants. (We assume that animals have a sort of intelligence, a sort of consciousness, to the extent they seem human-like.) You can’t program your laptop to transform carbon dioxide into sugar; computers are made of the wrong stuff for photosynthesis — and for consciousness too.
No serious thinker argues that computers today are conscious. Suppose you tell one computer and one man to imagine a rose and then describe it. You might get two similar descriptions, and be unable to tell which is which. But behind these similar statements lies a crucial difference. The man can see and sense an imaginary rose in his mind. The computer can put on a good performance, can describe an imaginary rose in detail — but can’t actually see or sense anything. It has no internal mental world; no consciousness; only a blank.
But some thinkers reject the wrong-stuff argument and believe that, once computers and software grow powerful and sophisticated enough, they will be conscious as well as intelligent.
They point to a similarity between neurons, the brain’s basic component, and transistors, the basic component of computers. Both neurons and transistors transform incoming electrical signals to outgoing signals. Now a single neuron by itself is not conscious, not intelligent. But gather lots together in just the right way and you get the brain of a conscious and intelligent human. A single transistor seems likewise unpromising. But gather lots together, hook them up right and you will get consciousness, just as you do with neurons.
But this argument makes no sense. One type of unconscious thing (neurons) can create consciousness in the right kind of ensemble. Why should the same hold for other unconscious things? In every other known case, it does not hold. No ensemble of soda cans or grapefruit rinds is likely to yield consciousness. Yes, the argument goes, but transistors resemble neurons in just the right way; therefore they will act like neurons in creating consciousness. But this “exactly right resemblance” is just an assertion, to be taken on trust. Neurons resemble heart cells more closely than they do transistors, yet hearts are not conscious.
In fact, an ensemble of transistors is not even the case we’re discussing; we’re discussing digital computers and software. “Computationalist” philosophers and psychologists and some artificial intelligence researchers believe that digital computers will one day be conscious and intelligent. In fact they go farther and assert that mental processes are in essence computational; they build a philosophical worldview on the idea that mind relates to brain as software relates to computer.
So let’s turn to the digital computer. It is an ensemble of (1) the processor, which executes (2) the software, which (when it is executed) has the effect of changing the data stored in (3) the memory. The memory stores data in numerical form, as binary integers or “bits.” Software can be understood many ways, but in basic terms it is a series of commands to be executed by the processor, each carrying out a simple arithmetic (or related) operation, each intended to accomplish one part of a (potentially complex) transformation of data in the memory.
In other words: by executing software, the processor gradually transforms the memory from an input state to an output or result state — as old-fashioned film was transformed (or developed) from its input state — the exposed film, seemingly blank — to a result state, bearing the image caught by the lens. A digital computer is a memory-transforming machine, where the process of transformation is dictated by the software. We can picture a digital computer as a gigantic blackboard (the memory) ruled into squares, each large enough to hold the symbol 0 or 1, and a robot (the processor) moving blazingly fast over the blackboard, erasing old bits and writing new ones. Such a machine is in essence the “Turing machine” of 1936, which played a fundamental role in the development of theoretical computer science.
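The blackboard-and-robot picture can be made concrete in a few lines of code. Below is a minimal sketch of a one-tape Turing machine: the tape is the blackboard of 0/1 squares, the head is the robot, and the transition table plays the role of the software dictating each erase-and-write step. The machine defined here (an illustrative example, not one from the essay) simply inverts every bit of its input and halts when it runs past the end of the tape.

```python
def run_turing_machine(tape, transitions, state="start", blank=" "):
    """Run a one-tape Turing machine until it reaches the 'halt' state.

    transitions maps (state, symbol) -> (new_state, new_symbol, move),
    where move is +1 (step right) or -1 (step left).
    """
    tape = list(tape)
    head = 0
    while state != "halt":
        # Read the symbol under the head; squares off the tape read as blank.
        symbol = tape[head] if 0 <= head < len(tape) else blank
        # Look up the "software": what to write, where to move, what state next.
        state, new_symbol, move = transitions[(state, symbol)]
        if 0 <= head < len(tape):
            tape[head] = new_symbol  # erase the old bit, write the new one
        head += move
    return "".join(tape)

# The "software": flip each bit and move right; halt on the blank past the end.
invert = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", " "): ("halt", " ", +1),
}

print(run_turing_machine("10110", invert))  # -> 01001
```

The point of the sketch is Gelernter’s: everything the machine does reduces to erasing and writing symbols on the memory under the direction of a fixed rule table — a memory-transforming process, nothing more.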
So: everyone agrees that today’s computers are not conscious, but some believe that….(read more)