This post assumes you already know about the Chinese Room Argument developed by John Searle.
Before I refute it, let's begin with a simple explanation of what the experiment entails:
- A person who understands Chinese writes down a message using Chinese characters and places it into the mail slot of a room. The person cannot observe anything about the room's contents or what is going on inside it.
- Inside the room is John Searle. He has buckets of Chinese symbols and a book that, for each combination of Chinese symbols, gives English instructions on how to arrange the Chinese symbols from the buckets onto a piece of paper, which he sends back out through the same mail slot. Searle himself has no understanding of Chinese; he blindly follows the instructions in the book.
- The person outside the room reads the message from Searle and, because of its content and the fact that it is written in Chinese, is convinced that there must be a person in the room who understands Chinese.
- Because Searle never understands Chinese, he is only performing syntactic symbol manipulation. There are no semantics involved. Without semantics there can be no strong AI.
Let me now write down the steps of the thought experiment so they're clear before I make changes to them:
- Chinese speaker writes down message, places it in mail slot
- Searle looks at message, thumbs through book to find matching page
- Searle follows the instructions on the page to form a response
- Searle places the response back through the mail slot
- Chinese speaker reads Searle's message and concludes Searle understands Chinese
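The steps above can be sketched as a toy Python model. The messages and the rule book below are invented placeholders, not anything from Searle's paper; the point is only that Searle's role reduces to a blind lookup:

```python
# Toy model of the original Chinese Room. The messages and replies are
# placeholder strings; the rule book is a plain lookup table.
RULE_BOOK = {
    "你好吗": "reply with 我很好",       # Searle never interprets these symbols
    "你是谁": "reply with 我是一个房间",
}

def searle(message: str) -> str:
    """Match the incoming message against the book and follow the
    instruction. This is purely syntactic symbol manipulation."""
    instruction = RULE_BOOK[message]
    return instruction.removeprefix("reply with ")

print(searle("你好吗"))  # prints 我很好
```

Nothing in `searle` depends on what the symbols mean, which is exactly Searle's point about the person in the room.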
- Suppose that the room has a 2nd mail slot on the back wall that leads to a 2nd room. Just as the Chinese speaker cannot see into Searle's room, Searle cannot see into the 2nd room. Additionally, to the Chinese speaker, the modified room and the original are identical (conceptually, the original room is simply divided into 2 rooms internally).
- The book that Searle uses is also different. It still contains all the same Chinese symbols, except that the places where the English instructions were are now blank.
- Searle, however, can now tear out a page from his book and place it into the 2nd mail slot, and within a negligible amount of time the page will be returned through the same slot with the previously missing English instructions. Assume these instructions match the instructions in the original experiment.
So now let's write these new steps down:
- Chinese speaker writes down message, places it in mail slot
- Searle looks at message, thumbs through book to find matching page
- Searle tears out the page and places it into 2nd mail slot
- The page is returned with the instructions on it
- Searle puts the page back into his book
- Searle follows the instructions on the page to form a response
- Searle places the response back through the mail slot
- Chinese speaker reads Searle's message and concludes Searle understands Chinese
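The modified steps can be sketched the same way, reusing the same toy placeholders. Searle's lookup is now delegated to a `second_room` function that is opaque to him, and the outward behavior is unchanged:

```python
# Toy model of the modified room (placeholder symbols as before).
# Searle's book pages are now blank; the instructions come back from
# the 2nd room when he passes a page through the 2nd mail slot.

def second_room(page: str) -> str:
    """Opaque to Searle: a page goes in, an instruction comes back.
    In the story this is produced by a bilingual person's understanding;
    to Searle it is indistinguishable from any other lookup."""
    instructions = {
        "你好吗": "reply with 我很好",
        "你是谁": "reply with 我是一个房间",
    }
    return instructions[page]

def searle_modified(message: str) -> str:
    instruction = second_room(message)   # the extra, purely syntactic step
    return instruction.removeprefix("reply with ")

print(searle_modified("你好吗"))  # prints 我很好, same as the original room
```

From the Chinese speaker's side, the input/output behavior is identical to the original room, which is why the extra step changes nothing observable.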
As you can see, the additional steps we added are inconsequential to the experiment. Whether the page already has the instructions on it or Searle performs an extra, purely syntactic step to obtain them, the result is the same. Searle still doesn't understand Chinese, so we must conclude there is no Chinese understanding going on and thus strong AI is false.
But wait! I haven't told you what was in the 2nd room.
Inside the 2nd room was a person who understands both Chinese and English. They used their understanding of Chinese to formulate a response in Chinese, then used their understanding of English to write the instructions Searle needs to construct that response.
So what's the problem?
The Chinese room is an analog for a Turing machine (i.e. a digital computer). By having Searle in essence "be the computer", the idea is that if Searle never understands Chinese, then none of the computers for which he is an analog would ever understand Chinese either. The test requires that if Chinese understanding were to exist in the Chinese room, then it must be within Searle. Is that true, though? Didn't we just show that understanding can definitively be in the Chinese room yet entirely separate from Searle?
A test that fails to detect what it's meant to detect shouldn't be used to infer anything.
Systems Reply?
Some may point to Searle's response to the systems reply as a rebuttal: namely, that Searle could internalize the contents of the book (including the instructions). However, doing so results in the same conclusion he originally drew, that Searle doesn't understand Chinese, and that's precisely the point. In this case, where we know Chinese understanding exists, we'd want Searle to then understand Chinese (and not just any understanding, but the same understanding of Chinese as the person in the 2nd room).
I don't believe the 2 experiments are truly identical; surely their difference is consequential
Okay, let's have the same person from the 2nd room go through the entire book filling in the instructions; then, and only after completing all of them, he places the book in Searle's original room and leaves. The Chinese room now starts off indistinguishable from the original. The room, the book, and the people are all the same. The only difference is the author of the book: in the original it's a programmer, in the 2nd it's the person who understands Chinese and English.