Most proponents of strong artificial intelligence consider John Searle a naysayer. Searle, contrary to every AI geek's dream, asserts that no matter how much code we write, a computer will never gain sentient understanding. He illustrates his claim with his famous Chinese Room thought experiment:
Suppose a computer could be programmed to speak Chinese well enough that a Chinese speaker could ask it a question and it could respond correctly and idiomatically. The Chinese speaker would not know whether he or she was conversing with a human or a computer (in effect, the computer would pass the Turing test), and thus the computer can be said to have human-like understanding, right?
Wrong, according to Searle. He imagined himself sitting inside the computer, performing the very computer-like task of accepting Chinese characters as input and then using a system of rules (millions of cross-indexed file cabinets, in his example) to look up those symbols and choose an appropriate response. He could do this for years, eventually becoming proficient enough to hold a correct, idiomatic conversation with a native Chinese speaker. Still, he would never actually understand Chinese; he would only ever be shuffling symbols he cannot read.
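To make the setup concrete, here is a minimal sketch in Python of the room as a pure lookup process. The rule book, its entries, and the function name are all invented for this illustration and stand in for Searle's file cabinets:

```python
# A toy "Chinese Room": the operator matches input symbols against
# stored rules and emits the prescribed output, with no understanding
# of what any symbol means. (Illustrative only; the rule book and its
# entries are made up for this sketch.)

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
    "今天天气怎么样？": "今天天气很好。",  # "How's the weather?" -> "It's nice today."
}

def chinese_room(symbols: str) -> str:
    """Return the response the rule book prescribes for the input.

    Nothing here parses, translates, or interprets the symbols; they
    are only compared, character for character, against stored
    patterns -- exactly the blind manipulation Searle describes.
    """
    return RULE_BOOK.get(symbols, "请再说一遍。")  # "Please say that again."

if __name__ == "__main__":
    print(chinese_room("你好吗？"))  # prints: 我很好，谢谢。
```

The point of the sketch is that the mapping could be between arbitrary, meaningless byte strings and the program would behave identically, which is exactly Searle's complaint: correct output does not imply comprehension.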
There are many counterarguments, the best known being the "systems reply": while Searle himself, sitting inside the computer, doesn't understand Chinese, the system as a whole (Searle, the input system, the filing cabinets, and the output system) does.
What d'you think?