Friday, August 24, 2007

Why Prof. John Searle is Wrong!
In Philosophy we talk about a lot of things like "Do we exist?" or "How do we know that we're alive?" At one point, Philosophy and Computer Science met and opened up a whole can of worms. People started asking questions like "What is thinking?", "Could we tell if our brains really are stuck in a Matrix like in the movie?", and "Are we biological machines?" The topic that most interested me was: Can Machines Ever Think?

The Turing Test
We can't talk about thinking machines without first talking about Alan Turing. Alan Turing was a British mathematician often cited as the Father of Computer Science. In fact, the theoretical model behind the computers we use today is called a Turing Machine in computer science lingo. Turing also thought about whether machines would ever think, but before he could do that, he had to answer the question of how we could even tell that a machine is thinking. To answer this question, Turing devised a test. Suppose that you had two rooms that you could not look into. In one room there was a person, and in the other room there was a machine that would try to answer questions like a human would. Then, without knowing who/what was in each room, another person (the tester) would write some questions on a piece of paper and slide it underneath each door, and whoever/whatever was in the room would try to answer the questions in such a way as to seem as human as possible, writing the response on another piece of paper and sliding it back under the door. If the tester was unable to determine which room contained the machine after several rounds of questions, then the machine passes the Turing Test and can be deemed intelligent by Turing's standards.
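
To make the setup concrete, here's a minimal sketch of that question-and-answer game in Python. The function names and the canned answers are my own placeholders for illustration, not anything Turing actually specified; it's just the protocol, nothing more.

    import random

    def ask_human(question):
        # In the real test, the person in the other room writes an answer.
        return input("Person in the room, answer this: " + question + "\n> ")

    def ask_machine(question):
        # A stand-in for whatever program is being tested.
        canned = {
            "What is 2 + 2?": "4",
            "Do you like poetry?": "I prefer limericks, to be honest.",
        }
        return canned.get(question, "I'm not sure what you mean.")

    def run_test(questions):
        # Randomly assign the human and the machine to door A and door B
        # so the tester can't know which is which.
        respondents = [ask_human, ask_machine]
        random.shuffle(respondents)
        doors = {"A": respondents[0], "B": respondents[1]}

        for q in questions:
            print("Question:", q)
            for door in ("A", "B"):
                print("  Door", door, "answers:", doors[door](q))

        guess = input("Which door hides the machine, A or B? ").strip().upper()
        machine_door = "A" if doors["A"] is ask_machine else "B"
        print("Correct!" if guess == machine_door else "Fooled you!")

    run_test(["What is 2 + 2?", "Do you like poetry?"])

If the tester's guesses are no better than a coin flip over many rounds, the machine passes.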



The Chinese Room Argument
John Searle is a Philosophy professor at the Univ. of California, Berkeley who came up with a thought experiment to show that even if you could write a program that would pass the Turing Test, it still couldn't be intelligent, because the program is mindlessly following instructions. Searle says that the program would lack intentionality and cognitive states, which he believes are requirements for demonstrating intelligence. Here is the gist of the original text: the argument asks you to pretend that there is a non-Chinese speaker in a closed room. Someone can write a Chinese question, slip it under the door, and the guy inside will look up a bunch of rule books on how to process these symbols into Chinese answers and push them back out under the door.
You can read it yourself, but the main point of the argument is this: suppose the guy in the room becomes so efficient at processing those rules and producing Chinese answers that, to people outside the room, he would appear to be a Chinese-speaking person. The Chinese room would then pass the Turing Test, but since the person doesn't understand Chinese and rules by themselves cannot think, we must conclude that Strong AI (machines that can think at the level that we do) cannot exist.
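
Here's a toy version of that room, just to make the setup concrete. The "rule book" below is a tiny made-up lookup table; Searle obviously imagines something vastly more elaborate, but the point is the same: whoever runs this is only matching symbols.

    # A toy Chinese Room: the rule book is a lookup table, and the person
    # inside follows it blindly without knowing what any symbol means.
    RULE_BOOK = {
        "你好吗？": "我很好，谢谢。",            # "How are you?" -> "I'm fine, thanks."
        "今天天气怎么样？": "今天天气很好。",    # "How's the weather?" -> "The weather is nice today."
    }

    def person_in_the_room(note_slipped_under_door):
        # Match the symbols on the note against the rule book and copy out
        # the corresponding answer, understanding none of it.
        return RULE_BOOK.get(note_slipped_under_door, "对不起，我不明白。")  # "Sorry, I don't understand."

    print(person_in_the_room("你好吗？"))   # prints 我很好，谢谢。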


This, however, is a false conclusion. Whether or not the man inside the room understands Chinese has no bearing on whether the program executed by the man can understand Chinese.


Let me elaborate with a simpler example. Hopefully you still know how to do long multiplication. If you remember, skip to the next paragraph. If you don't remember: you stack the two numbers that you want to multiply on top of one another, then you take the rightmost digit of the bottom number and multiply it with each digit of the top number while doing the appropriate carrying. This partial product is written below the numbers to be multiplied. Once you're done with the rightmost digit of the bottom number, you repeat the process with the digit next to it, but write down that partial product offset one digit to the left of the previous partial product. Then, when all that's done, you add up all the partial products, treating the empty offset positions as zeros. This sum is the product of the two original numbers... whew!!!
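
If you'd rather read that recipe as code, here's a small sketch of it (my own naming, nothing official): one partial product per digit of the bottom number, each shifted one more place to the left, then all summed up.

    def long_multiply(top, bottom):
        top_digits = [int(d) for d in str(top)][::-1]       # right to left
        bottom_digits = [int(d) for d in str(bottom)][::-1]

        partial_products = []
        for offset, b in enumerate(bottom_digits):
            carry = 0
            partial = 0
            for place, t in enumerate(top_digits):
                carry, digit = divmod(b * t + carry, 10)
                partial += digit * 10 ** place
            partial += carry * 10 ** len(top_digits)
            # The offset is the "shift one digit to the left" step.
            partial_products.append(partial * 10 ** offset)

        # Adding the shifted partial products gives the final answer.
        return sum(partial_products)

    assert long_multiply(123, 456) == 123 * 456   # 56088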


Okay, now you know how to do long multiplication, but do you understand long multiplication? Do you know what the partial products are for, or why you have to offset them? Do you know why you have to carry when multiplying by a single digit? If you can't answer these questions, then you don't understand long multiplication; you only know how to follow the instructions. What if you got really good at following those instructions and were able to do long multiplication in your head? Would you be any better at understanding long multiplication? No.


Now here's the pseudo-paradox that Searle wants to convince you of. Let me apply Searle's argument to long multiplication: suppose you could exhibit the same behavior as someone who understands long multiplication by simply showing that you can compute the answer. Although you appear to understand long multiplication, you don't actually understand it; therefore the long multiplication process can never understand long multiplication.


Unfortunately, we know that people are able to do long multiplication without understanding it, so this cannot be a paradox. And if it is not a paradox, then nothing stops us from concluding that the process itself understands long multiplication, even though the person carrying it out does not. Thus, Searle's argument proves nothing.


You're probably saying "Huh? That doesn't make sense" right about now, right? Let me explain how I can compare long multiplication to real human thought.





