
Philosophy Of Mind Essay, Research Paper

In this paper I plan to show that Searle is correct in claiming that his Chinese

Room Analogy shows that any Turing machine simulation of human understanding of

a linguistic phenomenon fails to possess any real understanding. First I will

explain the Chinese Room Analogy and how it is compared to a Turing machine. I

will then show that the machine cannot literally be said to understand. A Turing machine has a finite number of internal states, but always begins a computation in the initial state q0. Turing machines can be generalized in various ways. For example, many machines can be connected, or a single machine may have more than one reader-printer under command of the control. The machines

are set to accept input and give output based on the type of input given. When

comparing the Turing machine simulation of understanding to actual human

understanding, you can see the story given as input, and the answers to questions

about the story as output. In the Chinese Room Analogy Searle supposed that he

was locked in a room with a large batch of Chinese writing referred to as

"scripts". The term "script" here means that this first batch of Chinese writing is the original or principal instrument

or document. Furthermore, in this case he is said not to know any Chinese,

either written or spoken. The Chinese writing is described by Searle as "

meaningless squiggles". Next he is presented with a second batch of Chinese

writing referred to as a "story". The term story here is meant to

describe the second batch as an account of incidents or events that will be

used to make a statement regarding the facts pertinent to the incidents or

events that will follow. Accompanying the second batch of writing is a set of rules written in English that is meant to be used for correlating the

two batches called a "program". The "program" given to

Searle is meant to be used as a printed outline of a particular order to be

followed to correlate the Chinese symbols. The rules, or the

"program", will allow Searle to correlate the symbols entirely by

their shape. Finally a third batch of Chinese symbols is presented along with

further instructions in English, referred to as "questions". The

"questions" are implemented as a way to interrogate Searle in such a

manner that his competence in the situation can be assessed. These

"questions" allow the third batch to be correlated with the first two

batches. It is supposed in this analogy that after a while he becomes so good at

following the instructions to manipulate the symbols, while giving the correct

answers, that it becomes impossible for an observer outside the room to distinguish his answers from those of a native Chinese speaker. The

Chinese Room Analogy goes a step further when he is given large batches of

English, called "stories", which he of course understands as a native

English speaker. The story in this case is to be used just as it was in the

previous case, to describe the batch as an account of incidents or events that

will be used to make a statement regarding the facts pertinent to the incidents

or events that will follow. Much like the case with the Chinese writing,

questions are asked in English and he is able to answer them, also in English.

These answers are indistinguishable from that of other native English speakers,

if for no other reason than that he is a native speaker himself. The difference here

is that in the Chinese case, Searle is only producing answers based on

manipulation of the symbols, which have no meaning to him, and in the English

case answers are given based on understanding. It is supposed that in the

Chinese case, Searle behaves as nothing more than a computer, performing

operations on formally specified elements. An advocate of strong AI (Artificial Intelligence) claims that in a question-and-answer sequence much like the case with the Chinese symbols, a machine is not only simulating human ability but can also literally be said to understand the story and provide answers to questions about it. Searle declares that in regard to

the first claim, where a machine can literally be said to understand a story and

provide answers, that this is untrue. Obviously in the Chinese Room Analogy even

though the inputs and outputs are indistinguishable from those of a native Chinese speaker, Searle did not understand the input he was given or the output that he

gave, even if he was giving the correct output for the situation. A computer

would have no more of a true understanding in this analogy than he did. In regard to the second claim, where a machine and its program explain human

ability to understand stories and answer questions about them, Searle also

claims this to be false. He maintains that sufficient conditions of

understanding are not provided by the computer, and therefore its programs understand nothing more than he did in the Chinese Room analogy. A strong AI supporter

would contradict this belief by alleging that when Searle read and understood

the story in English, he was doing the exact same thing as when he manipulated the

Chinese symbols. In both cases he was given an input and gave the correct output

for the situation. On the other hand, Searle believes that both a Turing machine and the Chinese Room Analogy are missing something that is essential to

true understanding. When he gave the correct string of symbols in the Chinese

Room analogy, he was working like a Turing machine, using instructions without

full understanding. There is syntax through manipulations, but not semantics.
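The point about syntax without semantics can be made concrete with a short sketch. The rule book and symbols below are invented for illustration; a real "program" would be far larger, but the principle is the same: shapes are matched against shapes, and nothing in the procedure ever touches meaning.

```python
# A hypothetical rule book: each rule pairs an input string of shapes
# with an output string of shapes. Nothing here encodes meaning.
RULES = {
    "这个故事发生在哪里": "在一个小村庄",
    "故事里有几个人": "三个人",
}

def chinese_room(question: str) -> str:
    """Answer by pure shape-matching, the way Searle follows his rules.

    The lookup is purely syntactic: the function compares character
    shapes against the rule book and copies out the paired shapes.
    """
    return RULES.get(question, "")
```

Whether the strings are Chinese questions or arbitrary squiggles makes no difference to the procedure, which is exactly Searle's point.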

Searle possibly could be oversimplifying the case by focusing only on the part of the Turing machine set to receive input and give output. Some supporters of strong AI argued that Searle could be seen as analogous to the instructions and tape in the Turing machine, just as he was the controller in the Chinese Room analogy.
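The picture of a control blindly following instructions over a tape can also be made concrete with a minimal Turing machine simulator. The machine below, a unary incrementer with states q0 and halt, is an illustrative assumption, not an example from Searle's paper; it shows that the control does nothing but look up (state, symbol) pairs in a table.

```python
# Transition table: (state, symbol) -> (symbol to write, head move, next state)
DELTA = {
    ("q0", "1"): ("1", 1, "q0"),    # scan right across the unary marks
    ("q0", "_"): ("1", 1, "halt"),  # replace the blank with a mark, then halt
}

def run(tape: list[str], state: str = "q0") -> list[str]:
    """Run the machine from the initial state q0 until it halts."""
    head = 0
    while state != "halt":
        symbol = tape[head] if head < len(tape) else "_"
        write, move, state = DELTA[(state, symbol)]
        if head < len(tape):
            tape[head] = write
        else:
            tape.append(write)
        head += move
    return tape
```

Running `run(["1", "1", "_"])` yields `["1", "1", "1"]`: each step is a blind table lookup, with no step at which the control could be said to understand what the marks mean.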

Strong AI supporters contend that the controller and reading head in a Turing

machine, as well as Searle as the controller of the Chinese Room analogy, cannot

be said to understand the meaning behind the stories. Their point is that these pieces cannot understand, but the whole could. This means that the Turing machine as a whole and the Chinese Room as a whole understood the stories, yet what appeared to "control" them did not. Searle never gave a direct definition of understanding, yet he did declare that components that merely categorize input to give output, whether correct or incorrect, cannot have understanding as single, lone instruments. In the second scenario, where Searle was given "stories"

in English to answer questions, he is obviously able to understand each single

component in the scenario. With this comparison, Searle claimed that his Chinese

Room analogy showed that any Turing machine simulation of human understanding

was incomplete. A complete understanding, much like the one he possessed in the scenario containing only English, can only occur if the "piece" in control understands. Searle is correct in claiming that his Chinese Room Analogy shows that any Turing machine or computational simulation of human understanding of a linguistic phenomenon fails to possess the real understanding that a human possesses.