An Analysis of the Chinese Room Argument

Overview

Work in Artificial Intelligence (AI) has produced computer programs that can beat the world chess champion and defeat the best human players on the television quiz show Jeopardy. AI has also produced programs with which one can converse in natural language, including Apple's Siri.



The Chinese Room Thought Experiment

Against "strong AI," Searle (1980a) asks you to imagine yourself a monolingual English speaker "locked in a room, and given a large batch of Chinese writing" plus "a second batch of Chinese script" and "a set of rules" in English "for correlating the second batch with the first batch."

Nevertheless, you "get so good at following the instructions" that "from the point of view of someone outside the room" your responses are "absolutely indistinguishable from those of Chinese speakers." Still, Searle insists, it is obvious in the example that "I do not understand a word of the Chinese stories. I have inputs and outputs that are indistinguishable from those of the native Chinese speaker, and I can have any formal program you like, but I still understand nothing" (1980a).
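To make vivid what "following a formal program" amounts to here, the Python sketch below implements a toy, purely syntactic responder: it maps input symbol strings to output symbol strings by table lookup, with no access to what any symbol means. The rule table and phrases are invented placeholders, not anything from Searle's text; the point is only that such a system can emit sensible-looking replies while, in Searle's sense, understanding nothing.

# Toy illustration of purely formal symbol manipulation (not Searle's own example).
# The "rule book" pairs input symbol strings with canned output symbol strings;
# nothing in the program represents what any symbol means.

RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",          # placeholder entries; any opaque tokens would do
    "今天天气怎么样？": "今天天气很好。",
}

DEFAULT_REPLY = "请再说一遍。"  # fallback symbol string for unmatched input


def chinese_room(symbols: str) -> str:
    """Return an output string by rule lookup alone; no meanings are attached."""
    return RULE_BOOK.get(symbols, DEFAULT_REPLY)


if __name__ == "__main__":
    # From outside, the replies can look like those of a speaker;
    # inside, only shape-matching on uninterpreted strings takes place.
    print(chinese_room("你好吗？"))
    print(chinese_room("今天天气怎么样？"))
    print(chinese_room("你叫什么名字？"))

A larger rule book or a cleverer matching procedure would change only the amount of symbol shuffling, not its character, which is the feature of formal programs that Searle's conclusion turns on.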

Furthermore, since "nothing" in the thought experiment depends on the details of the particular program, Searle takes the conclusion to generalize: no formal program, however sophisticated, would suffice for understanding.

Replies and Rejoinders

Having laid out the example and drawn the aforesaid conclusion, Searle considers several replies offered when he "had the occasion to present this example to a number of workers in artificial intelligence" (1980a).

Searle offers rejoinders to these various replies.


The Systems Reply

The Systems Reply grants that "the individual who is locked in the room does not understand the story" but maintains that "he is merely part of a whole system, and the system does understand the story" (1980a). Searle's rejoinder is that the person could in principle memorize the rule book and do all the symbol manipulation in their head; there would then be no system over and above the person, yet still no understanding. Searle also insists the systems reply would have the absurd consequence that "mind is everywhere."

The Robot Reply

The Robot Reply, along lines favored by contemporary causal theories of reference, suggests that what prevents the person in the Chinese room from attaching meanings to the Chinese ciphers, and thus prevents them from understanding those ciphers, is the sensory-motor disconnection of the ciphers from the realities they are supposed to represent; a robot that perceived and acted in the world would supply the missing connection. Against the Robot Reply, Searle maintains "the same experiment applies" with only slight modification: put the room and its rule-following occupant inside the robot.

Even inside the robot, Searle contends, the occupant's situation is unchanged: "All I do is follow formal instructions about manipulating formal symbols."

The Brain Simulator Reply

The Brain Simulator Reply imagines a program that simulates the actual sequence of neuron firings in the brain of a native Chinese speaker as that speaker understands stories and gives answers. Against this, Searle insists, "even getting this close to the operation of the brain is still not sufficient to produce understanding," as may be seen from the following variation on the Chinese room scenario.

Instead of shuffling symbols, we "have the man operate an elaborate set of water pipes with valves connecting them." Each water connection corresponds to a synapse in the Chinese brain, and the whole system is rigged so that, after the right sequence of valve operations, the Chinese answers emerge at the output end of the pipes. Where, Searle asks, is the understanding in this system? The man does not understand Chinese, and neither do the water pipes.

The Combination Reply

The Combination Reply imagines all three preceding scenarios at once: a robot, controlled by a brain-simulating computer lodged in its skull, whose behavior is indistinguishable from a human's, the whole considered as a single system. Surely, now, "we would have to ascribe intentionality to the system" (1980a).


Searle responds, in effect, that since none of these replies, taken alone, has any tendency to overthrow his thought-experimental result, neither do all of them taken together. Though it would be "rational and indeed irresistible," he concedes, "to accept the hypothesis that the robot had intentionality, as long as we knew nothing more about it," the acceptance would be based simply on the assumption that "if the robot looks and behaves sufficiently like us then we would suppose, until proven otherwise, that it must have mental states like ours that cause and are expressed by its behavior." Once we learned that the behavior was in fact produced by a formal program, Searle holds, we would withdraw that attribution.

The Other Minds Reply

The Other Minds Reply observes that we credit other people with understanding Chinese, or anything else, only on the basis of their behavior, so a computer that behaved the same way would deserve the same credit. Searle responds that this misses the point: the question is not how we know that others have cognitive states, but what it is that we are attributing to them when we attribute such states.

The Many Mansions Reply

The Many Mansions Reply suggests that even if Searle is right that programming cannot suffice to cause computers to have intentionality and cognitive states, other means besides programming might be devised by which computers could be imbued with whatever does suffice for intentionality.

This too, Searle says, misses the point: it trivializes the project of strong AI by redefining it as whatever artificially produces and explains cognition, abandoning the original claim that mental processes are computational processes over formally defined elements.

In later work, Searle distilled the Chinese room argument into a derivation from axioms:

A1. Programs are formal (syntactic).

A2. Minds have mental contents (semantics).

A3. Syntax by itself is neither constitutive of nor sufficient for semantics.

From these three axioms Searle draws his first conclusion:

C1. Programs are neither constitutive of nor sufficient for minds.

Searle then adds a fourth axiom:

A4. Brains cause minds.

Searle then draws three further conclusions:

C2. Any other system capable of causing minds would have to have causal powers at least equivalent to those of brains.

C3. Any artifact that produced mental phenomena, any artificial brain, would have to be able to duplicate the specific causal powers of brains, and it could not do that just by running a formal program.

C4. The way that human brains actually produce mental phenomena cannot be solely by virtue of running a computer program.
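Read as a derivation, A1-A3 are meant to yield C1, with A4 then underwriting C2-C4. One rough way to display the logical shape of the first step is sketched below; the predicates are glosses of my own, not Searle's notation: ProgOnly(s) says system s has nothing going for it beyond running a program, SynOnly(s) that its processing is purely syntactic, HasSem(s) and HasMind(s) that it has semantic content and a mind respectively.

\[
\begin{aligned}
\text{A1: } & \forall s\,(\mathrm{ProgOnly}(s) \rightarrow \mathrm{SynOnly}(s))\\
\text{A2: } & \forall s\,(\mathrm{HasMind}(s) \rightarrow \mathrm{HasSem}(s))\\
\text{A3: } & \forall s\,(\mathrm{SynOnly}(s) \rightarrow \neg\,\mathrm{HasSem}(s))\\
\text{C1: } & \forall s\,(\mathrm{ProgOnly}(s) \rightarrow \neg\,\mathrm{HasMind}(s))
\end{aligned}
\]

So read, C1 follows by chaining A1 with A3 and taking the contrapositive of A2. The philosophical weight falls on A3, which is precisely the premise the Chinese room thought experiment is meant to support and the premise critics most often attack.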

Continuing Dispute

To call the Chinese room controversial would be an understatement. The room Searle describes is meant to be identical in principle to any computer running a program, so the argument, if sound, applies to every such machine. At the same time, the argument leaves open the possibility that a digital machine could be built that acts more intelligently than a person while still lacking a mind or intentionality of the sort brains produce. Searle argues that "the study of the mind starts with such facts as that humans have beliefs, while thermostats, telephones, and adding machines don't"; what we want to know, he adds, is precisely what distinguishes minds from such devices.



The argument and thought experiment now generally known as the Chinese Room Argument was first published in a 1980 paper by the American philosopher John Searle.

It has become one of the best-known arguments in recent philosophy: a thought experiment that attempts to refute the claim of "strong AI" that an appropriately programmed computer literally thinks and understands.

The argument has also been analyzed from a cognitive point of view, for instance through the many-space model of conceptual integration.
