Most of us are familiar with, or at least aware of, the Turing Test. Simply stated, a computer program is said to have passed the test if a person interacting with it cannot tell whether they are conversing with a human or a machine.
Recently, I came across a more intriguing thought experiment: John Searle's Chinese Room. Consider this: there is a closed room housing a computer that runs a software program. We humans interact with the program only through an input/output terminal. We type in questions or sentences in English; the program analyzes them, composes its answers, converts them to Chinese, and sends them back through the output terminal. If the program is good enough, we will not be able to tell whether we are conversing with a computer or with a Chinese speaker in the room. The machine passes the Turing Test.
Now, suppose there is a person in the room instead, equipped with a printed version of the same program. He does not understand Chinese. Again, we pass questions to him through the input terminal. Following the program's instructions step by step, he produces the answers as Chinese symbols and sends them out through the output terminal. Again, this setup would pass the Turing Test... And yet the fact remains that the person does not know Chinese at all; all he is doing is manipulating symbols, the very same task the software program performed before.
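To make the "manipulating symbols" point concrete, here is a minimal sketch of what rule-following without understanding might look like. This is purely my illustration, not part of Searle's original thought experiment: the rule book, its entries, and the function name are hypothetical stand-ins for the printed instructions the person in the room would follow.

```python
# A purely illustrative sketch of rule-following symbol manipulation.
# The rule book is hypothetical: it maps recognized input patterns to
# pre-written Chinese responses. Neither the program nor a person
# executing it by hand needs to understand either language; both just
# match patterns and copy out symbols.

RULE_BOOK = {
    "how are you": "我很好，谢谢。",       # "I am fine, thank you."
    "what is your name": "我叫小明。",     # "My name is Xiao Ming."
}

DEFAULT_REPLY = "请再说一遍。"             # "Please say that again."

def chinese_room(question: str) -> str:
    """Look up the question and return the matching Chinese symbols.

    Pure symbol shuffling: there is no model of meaning here, only a
    table of patterns and canned responses.
    """
    key = question.strip().lower().rstrip("?")
    return RULE_BOOK.get(key, DEFAULT_REPLY)

if __name__ == "__main__":
    print(chinese_room("How are you?"))        # 我很好，谢谢。
    print(chinese_room("What is your name?"))  # 我叫小明。
```

However large such a rule book grows, the operation stays the same: match a pattern, copy out symbols. That is exactly what gives the questions below their bite.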
This thought experiment raises several interesting questions. Can the program be said to have a mind in the same sense as the person in the room? Does the program, or the non-Chinese-speaking person, 'understand' the conversation the way we humans do? This is similar to the question Garry Kasparov has repeatedly asked about Deep Blue: the supercomputer may have defeated the Grandmaster, but does it 'understand' chess?