
Minds, brains, and programs

John R. Searle
Department of Philosophy, University of California, Berkeley, Calif.

Abstract

This article can be viewed as an attempt to explore the consequences of two propositions. (1) Intentionality in human beings (and animals) is a product of causal features of the brain. I assume this is an empirical fact about the actual causal relations between mental processes and brains. It says simply that certain brain processes are sufficient for intentionality. (2) Instantiating a computer program is never by itself a sufficient condition of intentionality. The main argument of this paper is directed at establishing this claim. The form of the argument is to show how a human agent could instantiate the program and still not have the relevant intentionality. These two propositions have the following consequences: (3) The explanation of how the brain produces intentionality cannot be that it does it by instantiating a computer program. This is a strict logical consequence of 1 and 2. (4) Any mechanism capable of producing intentionality must have causal powers equal to those of the brain. This is meant to be a trivial consequence of 1. (5) Any attempt literally to create intentionality artificially (strong AI) could not succeed just by designing programs but would have to duplicate the causal powers of the human brain. This follows from 2 and 4.

"Could a machine think?" On the argument advanced here only a machine could think, and only very special kinds of machines, namely brains and machines with internal causal powers equivalent to those of brains. And that is why strong AI has little to tell us about thinking, since it is not about machines but about programs, and no program by itself is sufficient for thinking.

Keywords: artificial intelligence; brain; intentionality; mind
What psychological and philosophical significance should we attach to recent efforts at computer simulations of human cognitive capacities? In answering this question, I find it useful to distinguish what I will call "strong" AI from "weak" or "cautious" AI (Artificial Intelligence). According to weak AI, the principal value of the computer in the study of the mind is that it gives us a very powerful tool. For example, it enables us to formulate and test hypotheses in a more rigorous and precise fashion. But according to strong AI, the computer is not merely a tool in the study of the mind; rather, the appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states. In strong AI, because the programmed computer has cognitive states, the programs are not mere tools that enable us to test psychological explanations; rather, the programs are themselves the explanations.
I have no objection to the claims of weak AI, at least as far as this article is concerned. My discussion here will be directed at the claims I have defined as those of strong AI, specifically the claim that the appropriately programmed computer literally has cognitive states and that the programs thereby explain human cognition. When I hereafter refer to AI, I have in mind the strong version, as expressed by these two claims.
I will consider the work of Roger Schank and his colleagues at Yale (Schank & Abelson 1977), because I am more familiar with it than I am with any other similar claims, and because it provides a very clear example of the sort of work I wish to examine. But nothing that follows depends upon the details of Schank's programs. The same arguments would apply to Winograd's SHRDLU (Winograd 1973), Weizenbaum's ELIZA (Weizenbaum 1965), and indeed any Turing machine simulation of human mental phenomena.
Very briefly, and leaving out the various details, one can describe Schank's program as follows: the aim of the program is to simulate the human ability to understand stories. It is characteristic of human beings' story-understanding capacity that they can answer questions about the story even though the information that they give was never explicitly stated in the story. Thus, for example, suppose you are given the following story: "A man went into a restaurant and ordered a hamburger. When the hamburger arrived it was burned to a crisp, and the man stormed out of the restaurant angrily, without paying for the hamburger or leaving a tip." Now, if you are asked "Did the man eat the hamburger?" you will presumably answer, "No, he did not." Similarly, if you are given the following story: "A man went into a restaurant and ordered a hamburger; when the hamburger came he was very pleased with it; and as he left the restaurant he gave the waitress a large tip before paying his bill," and you are asked the question, "Did the man eat the hamburger?," you will presumably answer, "Yes, he ate the hamburger." Now Schank's machines can similarly answer questions about restaurants in this fashion. To do this, they have a "representation" of the sort of information that human beings have about restaurants, which enables them to answer such questions as those above, given these sorts of stories. When the machine is given the story and then asked the question, the machine will print out answers of the sort that we would expect human beings to give if told similar stories. Partisans of strong AI claim that in this question and answer sequence the machine is not only simulating a human ability but also
  1. that the machine can literally be said to understand the story and provide the answers to questions, and
  2. that what the machine and its program do explains the human ability to understand the story and answer questions about it.
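The default-filling behavior just described can be sketched in a few lines. This is a purely illustrative toy, not Schank's actual program: the script name, the two boolean slots, and the question-matching are all hypothetical stand-ins for his far richer "representation" of restaurant knowledge. The point it captures is only that a script supplies defaults that the story's explicit events confirm or override, which is how the program "answers" questions the story never states.

```python
# Hypothetical sketch of script-based story understanding (NOT Schank's code).
# A "restaurant script" holds default events; the story's stated events
# override them, letting the program answer unstated questions.

RESTAURANT_SCRIPT = {
    "eats_food": True,   # default: customers eat what they order
    "pays_bill": True,   # default: customers pay before leaving
}

def understand(story_events):
    """Merge the script's defaults with what the story explicitly states."""
    state = dict(RESTAURANT_SCRIPT)
    state.update(story_events)
    return state

def answer(state, question):
    """Answer from the merged state; the question string is a stand-in."""
    if question == "Did the man eat the hamburger?":
        return "Yes, he ate the hamburger." if state["eats_food"] else "No, he did not."
    return "I don't know."

# Story 1: burned hamburger, stormed out without paying (overrides defaults).
angry = understand({"eats_food": False, "pays_bill": False})
# Story 2: pleased with the hamburger, tipped and paid (defaults stand).
pleased = understand({"pays_bill": True})

print(answer(angry, "Did the man eat the hamburger?"))    # No, he did not.
print(answer(pleased, "Did the man eat the hamburger?"))  # Yes, he ate the hamburger.
```

Note that nothing in the sketch refers to meanings: the keys and defaults are shapes to be matched, which is precisely the feature Searle's argument will exploit.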
Both claims seem to me to be totally unsupported by Schank's work, as I will attempt to show in what follows.
One way to test any theory of the mind is to ask oneself what it would be like if my mind actually worked on the principles that the theory says all minds work on. Let us apply this test to the Schank program with the following Gedankenexperiment. Suppose that I'm locked in a room and given a large batch of Chinese writing. Suppose furthermore

(as is indeed the case) that I know no Chinese, either written or spoken, and that I'm not even confident that I could recognize Chinese writing as Chinese writing distinct from, say, Japanese writing or meaningless squiggles. To me, Chinese writing is just so many meaningless squiggles. Now suppose further that after this first batch of Chinese writing I am given a second batch of Chinese script together with a set of rules for correlating the second batch with the first batch. The rules are in English, and I understand these rules as well as any other native speaker of English. They enable me to correlate one set of formal symbols with another set of formal symbols, and all that "formal" means here is that I can identify the symbols entirely by their shapes. Now suppose also that I am given a third batch of Chinese symbols together with some instructions, again in English, that enable me to correlate elements of this third batch with the first two batches, and these rules instruct me how to give back certain Chinese symbols with certain sorts of shapes in response to certain sorts of shapes given me in the third batch. Unknown to me, the people who are giving me all of these symbols call the first batch "a script," they call the second batch a "story," and they call the third batch "questions." Furthermore, they call the symbols I give them back in response to the third batch "answers to the questions," and the set of rules in English that they gave me, they call "the program." Now just to complicate the story a little, imagine that these people also give me stories in English, which I understand, and they then ask me questions in English about these stories, and I give them back answers in English. 
Suppose also that after a while I get so good at following the instructions for manipulating the Chinese symbols and the programmers get so good at writing the programs that from the external point of view - that is, from the point of view of somebody outside the room in which I am locked - my answers to the questions are absolutely indistinguishable from those of native Chinese speakers. Nobody just looking at my answers can tell that I don't speak a word of Chinese. Let us also suppose that my answers to the English questions are, as they no doubt would be, indistinguishable from those of other native English speakers, for the simple reason that I am a native English speaker. From the external point of view - from the point of view of someone reading my "answers" - the answers to the Chinese questions and the English questions are equally good. But in the Chinese case, unlike the English case, I produce the answers by manipulating uninterpreted formal symbols. As far as the Chinese is concerned, I simply behave like a computer; I perform computational operations on formally specified elements. For the purposes of the Chinese, I am simply an instantiation of the computer program.
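The man in the room's procedure can be rendered as a short sketch, under the stated assumption that the rules key entirely on symbol shapes. The rule book's entries below are hypothetical placeholder tokens, not real Chinese sentences; the one thing the sketch is faithful to is that the lookup never consults a meaning.

```python
# A minimal sketch of the room's "program": rules keyed entirely on the
# shapes of the symbols (here, opaque strings). The specific tokens are
# hypothetical placeholders, not actual Chinese. No meanings are stored
# or consulted anywhere - only shape-for-shape substitution.

RULE_BOOK = {
    # "When you see these squiggles, hand back those squoggles."
    "符号甲 符号乙": "符号丙",
    "符号丁": "符号戊 符号己",
}

def operator_in_room(incoming: str) -> str:
    # The operator matches the incoming shapes against the rule book and
    # copies out the paired shapes; understanding plays no role here.
    return RULE_BOOK.get(incoming, "符号庚")  # a default shape when no rule matches

print(operator_in_room("符号丁"))  # 符号戊 符号己
```

From outside, a reader of the outputs cannot tell this lookup from comprehension; from inside, there is nothing but the lookup - which is the asymmetry the thought experiment turns on.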
Now the claims made by strong AI are that the programmed computer understands the stories and that the program in some sense explains human understanding. But we are now in a position to examine these claims in light of our thought experiment.
  1. As regards the first claim, it seems to me quite obvious in the example that I do not understand a word of the Chinese stories. I have inputs and outputs that are indistinguishable from those of the native Chinese speaker, and I can have any formal program you like, but I still understand nothing. For the same reasons, Schank's computer understands nothing of any stories, whether in Chinese, English, or whatever, since in the Chinese case the computer is me, and in cases where the computer is not me, the computer has nothing more than I have in the case where I understand nothing.
  2. As regards the second claim, that the program explains human understanding, we can see that the computer and its program do not provide sufficient conditions of understanding since the computer and the program are functioning, and there is no understanding. But does it even provide a necessary condition or a significant contribution to understanding? One of the claims made by the supporters of strong AI is that when I understand a story in English, what I am doing is exactly the same - or perhaps more of the same - as what I was doing in manipulating the Chinese symbols. It is simply more formal symbol manipulation that distinguishes the case in English, where I do understand, from the case in Chinese, where I don't. I have not demonstrated that this claim is false, but it would certainly appear an incredible claim in the example. Such plausibility as the claim has derives from the supposition that we can construct a program that will have the same inputs and outputs as native speakers, and in addition we assume that speakers have some level of description where they are also instantiations of a program. On the basis of these two assumptions we assume that even if Schank's program isn't the whole story about understanding, it may be part of the story. Well, I suppose that is an empirical possibility, but not the slightest reason has so far been given to believe that it is true, since what is suggested - though certainly not demonstrated - by the example is that the computer program is simply irrelevant to my understanding of the story. In the Chinese case I have everything that artificial intelligence can put into me by way of a program, and I understand nothing; in the English case I understand everything, and there is so far no reason at all to suppose that my understanding has anything to do with computer programs, that is, with computational operations on purely formally specified elements.
As long as the program is defined in terms of computational operations on purely formally defined elements, what the example suggests is that these by themselves have no interesting connection with understanding. They are certainly not sufficient conditions, and not the slightest reason has been given to suppose that they are necessary conditions or even that they make a significant contribution to understanding. Notice that the force of the argument is not simply that different machines can have the same input and output while operating on different formal principles - that is not the point at all. Rather, whatever purely formal principles you put into the computer, they will not be sufficient for understanding, since a human will be able to follow the formal principles without understanding anything. No reason whatever has been offered to suppose that such principles are necessary or even contributory, since no reason has been given to suppose that when I understand English I am operating with any formal program at all.
Well, then, what is it that I have in the case of the English sentences that I do not have in the case of the Chinese sentences? The obvious answer is that I know what the former mean, while I haven't the faintest idea what the latter mean. But in what does this consist and why couldn't we give it to a machine, whatever it is? I will return to this question later, but first I want to continue with the example.
I have had the occasions to present this example to several workers in artificial intelligence, and, interestingly, they do not seem to agree on what the proper reply to it is. I get a surprising variety of replies, and in what follows I will consider the most common of these (specified along with their geographic origins).
But first I want to block some common misunderstandings about "understanding": in many of these discussions one finds a lot of fancy footwork about the word "understanding." My critics point out that there are many different degrees of understanding; that "understanding" is not a simple two-place predicate; that there are even different kinds and levels of understanding, and often the law of excluded middle doesn't even apply in a straightforward way to statements of the form "x understands y"; that in many cases it is a matter for decision and not a simple matter of fact whether x understands y; and so on. To all of these points I want to say: of course, of course. But they have nothing to do with the
points at issue. There are clear cases in which "understanding" literally applies and clear cases in which it does not apply; and these two sorts of cases are all I need for this argument. I understand stories in English; to a lesser degree I can understand stories in French; to a still lesser degree, stories in German; and in Chinese, not at all. My car and my adding machine, on the other hand, understand nothing: they are not in that line of business. We often attribute "understanding" and other cognitive predicates by metaphor and analogy to cars, adding machines, and other artifacts, but nothing is proved by such attributions. We say, "The door knows when to open because of its photoelectric cell," "The adding machine knows how (understands how, is able) to do addition and subtraction but not division," and "The thermostat perceives changes in the temperature." The reason we make these attributions is quite interesting, and it has to do with the fact that in artifacts we extend our own intentionality; our tools are extensions of our purposes, and so we find it natural to make metaphorical attributions of intentionality to them; but I take it no philosophical ice is cut by such examples. The sense in which an automatic door "understands instructions" from its photoelectric cell is not at all the sense in which I understand English. If the sense in which Schank's programmed computers understand stories is supposed to be the metaphorical sense in which the door understands, and not the sense in which I understand English, the issue would not be worth discussing. But Newell and Simon (1963) write that the kind of cognition they claim for computers is exactly the same as for human beings. I like the straightforwardness of this claim, and it is the sort of claim I will be considering. I will argue that in the literal sense the programmed computer understands what the car and the adding machine understand, namely, exactly nothing.
The computer understanding is not just (like my understanding of German) partial or incomplete; it is zero.
Now to the replies:
I. The systems reply (Berkeley). "While it is true that the individual person who is locked in the room does not understand the story, the fact is that he is merely part of a whole system, and the system does understand the story. The person has a large ledger in front of him in which are written the rules, he has a lot of scratch paper and pencils for doing calculations, he has 'data banks' of sets of Chinese symbols. Now, understanding is not being ascribed to the mere individual; rather it is being ascribed to this whole system of which he is a part."
My response to the systems theory is quite simple: let the individual internalize all of these elements of the system. He memorizes the rules in the ledger and the data banks of Chinese symbols, and he does all the calculations in his head. The individual then incorporates the entire system. There isn't anything at all to the system that he does not encompass. We can even get rid of the room and suppose he works outdoors. All the same, he understands nothing of the Chinese, and a fortiori neither does the system, because there isn't anything in the system that isn't in him. If he doesn't understand, then there is no way the system could understand because the system is just a part of him.
Actually I feel somewhat embarrassed to give even this answer to the systems theory because the theory seems to me so implausible to start with. The idea is that while a person doesn't understand Chinese, somehow the conjunction of that person and bits of paper might understand Chinese. It is not easy for me to imagine how someone who was not in the grip of an ideology would find the idea at all plausible. Still, I think many people who are committed to the ideology of strong AI will in the end be inclined to say something very much like this; so let us pursue it a bit further. According to one version of this view, while the man in the internalized systems example doesn't understand Chinese in the sense that a native Chinese speaker does (because, for example, he doesn't know that the story refers to restaurants and hamburgers, etc.), still "the man as a formal symbol manipulation system" really does understand Chinese. The subsystem of the man that is the formal symbol manipulation system for Chinese should not be confused with the subsystem for English.
So there are really two subsystems in the man; one understands English, the other Chinese, and "it's just that the two systems have little to do with each other." But, I want to reply, not only do they have little to do with each other, they are not even remotely alike. The subsystem that understands English (assuming we allow ourselves to talk in this jargon of "subsystems" for a moment) knows that the stories are about restaurants and eating hamburgers, he knows that he is being asked questions about restaurants and that he is answering questions as best he can by making various inferences from the content of the story, and so on. But the Chinese system knows none of this. Whereas the English subsystem knows that "hamburgers" refers to hamburgers, the Chinese subsystem knows only that "squiggle squiggle" is followed by "squoggle squoggle." All he knows is that various formal symbols are being introduced at one end and manipulated according to rules written in English, and other symbols are going out at the other end. The whole point of the original example was to argue that such symbol manipulation by itself couldn't be sufficient for understanding Chinese in any literal sense because the man could write "squoggle squoggle" after "squiggle squiggle" without understanding anything in Chinese. And it doesn't meet that argument to postulate subsystems within the man, because the subsystems are no better off than the man was in the first place; they still don't have anything even remotely like what the English-speaking man (or subsystem) has. Indeed, in the case as described, the Chinese subsystem is simply a part of the English subsystem, a part that engages in meaningless symbol manipulation according to rules in English.
Let us ask ourselves what is supposed to motivate the systems reply in the first place; that is, what independent grounds are there supposed to be for saying that the agent must have a subsystem within him that literally understands stories in Chinese? As far as I can tell the only grounds are that in the example I have the same input and output as native Chinese speakers and a program that goes from one to the other. But the whole point of the examples has been to try to show that that couldn't be sufficient for understanding, in the sense in which I understand stories in English, because a person, and hence the set of systems that go to make up a person, could have the right combination of input, output, and program and still not understand anything in the relevant literal sense in which I understand English. The only motivation for saying there must be a subsystem in me that understands Chinese is that I have a program and I can pass the Turing test; I can fool native Chinese speakers. But precisely one of the points at issue is the adequacy of the Turing test. The example shows that there could be two "systems," both of which pass the Turing test, but only one of which understands; and it is no argument against this point to say that since they both pass the Turing test they must both understand, since this claim fails to meet the argument that the system in me that understands English has a great deal more than the system that merely processes Chinese. In short, the systems reply simply begs the question by insisting without argument that the system must understand Chinese.
Furthermore, the systems reply would appear to lead to consequences that are independently absurd. If we are to conclude that there must be cognition in me on the grounds that I have a certain sort of input and output and a program
此外,"系统 "的回答似乎会导致一些独立存在的荒谬后果。如果我们以我有某种输入和输出以及一个程序为由,就得出结论说我一定有认知能力

in between, then it looks like all sorts of noncognitive subsystems are going to turn out to be cognitive. For example, there is a level of description at which my stomach does information processing, and it instantiates any number of computer programs, but I take it we do not want to say that it has any understanding [cf. Pylyshyn: "Computation and Cognition" BBS 3(1) 1980]. But if we accept the systems reply, then it is hard to see how we avoid saying that stomach, heart, liver, and so on, are all understanding subsystems, since there is no principled way to distinguish the motivation for saying the Chinese subsystem understands from saying that the stomach understands. It is, by the way, not an answer to this point to say that the Chinese system has information as input and output and the stomach has food and food products as input and output, since from the point of view of the agent, from my point of view, there is no information in either the food or the Chinese - the Chinese is just so many meaningless squiggles. The information in the Chinese case is solely in the eyes of the programmers and the interpreters, and there is nothing to prevent them from treating the input and output of my digestive organs as information if they so desire.
This last point bears on some independent problems in strong AI, and it is worth digressing for a moment to explain it. If strong AI is to be a branch of psychology, then it must be able to distinguish those systems that are genuinely mental from those that are not. It must be able to distinguish the principles on which the mind works from those on which nonmental systems work; otherwise it will offer us no explanations of what is specifically mental about the mental. And the mental-nonmental distinction cannot be just in the eye of the beholder but it must be intrinsic to the systems; otherwise it would be up to any beholder to treat people as nonmental and, for example, hurricanes as mental if he likes. But quite often in the AI literature the distinction is blurred in ways that would in the long run prove disastrous to the claim that AI is a cognitive inquiry. McCarthy, for example, writes, "Machines as simple as thermostats can be said to have beliefs, and having beliefs seems to be a characteristic of most machines capable of problem solving performance" (McCarthy 1979). Anyone who thinks strong AI has a chance as a theory of the mind ought to ponder the implications of that remark. We are asked to accept it as a discovery of strong AI that the hunk of metal on the wall that we use to regulate the temperature has beliefs in exactly the same sense that we, our spouses, and our children have beliefs, and furthermore that "most" of the other machines in the room - telephone, tape recorder, adding machine, electric light switch - also have beliefs in this literal sense. It is not the aim of this article to argue against McCarthy's point, so I will simply assert the following without argument. The study of the mind starts with such facts as that humans have beliefs, while thermostats, telephones, and adding machines don't. If you get a theory that denies this point you have produced a counterexample to the theory and the theory is false.
One gets the impression that people in AI who write this sort of thing think they can get away with it because they don't really take it seriously, and they don't think anyone else will either. I propose for a moment at least, to take it seriously. Think hard for one minute about what would be necessary to establish that that hunk of metal on the wall over there had real beliefs, beliefs with direction of fit, propositional content, and conditions of satisfaction; beliefs that had the possibility of being strong beliefs or weak beliefs; nervous, anxious, or secure beliefs; dogmatic, rational, or superstitious beliefs; blind faiths or hesitant cogitations; any kind of beliefs. The thermostat is not a candidate. Neither is stomach, liver, adding machine, or telephone. However, since we are taking the idea seriously, notice that its truth would be fatal to strong AI's claim to be a science of the mind. For now the mind is everywhere. What we wanted to know is what distinguishes the mind from thermostats and livers. And if McCarthy were right, strong AI wouldn't have a hope of telling us that.
II. The Robot Reply (Yale). "Suppose we wrote a different kind of program from Schank's program. Suppose we put a computer inside a robot, and this computer would not just take in formal symbols as input and give out formal symbols as output, but rather would actually operate the robot in such a way that the robot does something very much like perceiving, walking, moving about, hammering nails, eating, drinking - anything you like. The robot would, for example, have a television camera attached to it that enabled it to 'see,' it would have arms and legs that enabled it to 'act,' and all of this would be controlled by its computer 'brain.' Such a robot would, unlike Schank's computer, have genuine understanding and other mental states."
The first thing to notice about the robot reply is that it tacitly concedes that cognition is not solely a matter of formal symbol manipulation, since this reply adds a set of causal relations with the outside world [cf. Fodor: "Methodological Solipsism" BBS 3(1) 1980]. But the answer to the robot reply is that the addition of such "perceptual" and "motor" capacities adds nothing by way of understanding, in particular, or intentionality, in general, to Schank's original program. To see this, notice that the same thought experiment applies to the robot case. Suppose that instead of the computer inside the robot, you put me inside the room and, as in the original Chinese case, you give me more Chinese symbols with more instructions in English for matching Chinese symbols to Chinese symbols and feeding back Chinese symbols to the outside. Suppose, unknown to me, some of the Chinese symbols that come to me come from a television camera attached to the robot and other Chinese symbols that I am giving out serve to make the motors inside the robot move the robot's legs or arms. It is important to emphasize that all I am doing is manipulating formal symbols: I know none of these other facts. I am receiving "information" from the robot's "perceptual" apparatus, and I am giving out "instructions" to its motor apparatus without knowing either of these facts. I am the robot's homunculus, but unlike the traditional homunculus, I don't know what's going on. I don't understand anything except the rules for symbol manipulation. Now in this case I want to say that the robot has no intentional states at all; it is simply moving about as a result of its electrical wiring and its program. And furthermore, by instantiating the program I have no intentional states of the relevant type. All I do is follow formal instructions about manipulating formal symbols.
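The purely formal character of the manipulation just described can be sketched in a few lines of code. The rule table and token names below are invented placeholders standing in for the uninterpreted Chinese symbols; nothing in the procedure assigns them any meaning.

```python
# A minimal sketch of rule-governed symbol manipulation, as in the
# Chinese room: uninterpreted input tokens are matched against a rule
# table and uninterpreted output tokens are emitted. The tokens here
# ("SQUIGGLE", etc.) are invented placeholders, not actual Chinese.
RULES = {
    ("SQUIGGLE", "SQUOGGLE"): "SQUAGGLE",
    ("SQUOGGLE", "SQUIGGLE"): "SQUIGGLE",
}

def operate(tokens):
    """Match the shapes of the tokens against the rule table.

    The procedure compares shapes only; it has no access to what, if
    anything, the tokens symbolize.
    """
    return RULES.get(tuple(tokens))

print(operate(["SQUIGGLE", "SQUOGGLE"]))  # prints: SQUAGGLE
```

Whether the incoming tokens originate from a keyboard or, as in the robot case, from a television camera, and whether the outgoing tokens drive a teletype or a motor, makes no difference whatever to the procedure itself.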
III. The brain simulator reply (Berkeley and M.I.T.). "Suppose we design a program that doesn't represent information that we have about the world, such as the information in Schank's scripts, but simulates the actual sequence of neuron firings at the synapses of the brain of a native Chinese speaker when he understands stories in Chinese and gives answers to them. The machine takes in Chinese stories and questions about them as input, it simulates the formal structure of actual Chinese brains in processing these stories, and it gives out Chinese answers as outputs. We can even imagine that the machine operates, not with a single serial program, but with a whole set of programs operating in parallel, in the manner that actual human brains presumably operate when they process natural language. Now surely in such a case we would have to say that the machine understood the stories; and if we refuse to say that, wouldn't we also have to deny that native Chinese speakers understood the stories? At the level of the synapses, what would or could be different about the program of the computer and the program of the Chinese brain?"
Before countering this reply I want to digress to note that it is an odd reply for any partisan of artificial intelligence (or functionalism, etc.) to make: I thought the whole idea of strong AI is that we don't need to know how the brain works to know how the mind works. The basic hypothesis, or so I had supposed, was that there is a level of mental operations consisting of computational processes over formal elements that constitute the essence of the mental and can be realized in all sorts of different brain processes, in the same way that any computer program can be realized in different computer hardwares: on the assumptions of strong AI, the mind is to the brain as the program is to the hardware, and thus we can understand the mind without doing neurophysiology. If we had to know how the brain worked to do AI, we wouldn't bother with AI. However, even getting this close to the operation of the brain is still not sufficient to produce understanding. To see this, imagine that instead of a monolingual man in a room shuffling symbols we have the man operate an elaborate set of water pipes with valves connecting them. When the man receives the Chinese symbols, he looks up in the program, written in English, which valves he has to turn on and off. Each water connection corresponds to a synapse in the Chinese brain, and the whole system is rigged up so that after doing all the right firings, that is after turning on all the right faucets, the Chinese answers pop out at the output end of the series of pipes.
Now where is the understanding in this system? It takes Chinese as input, it simulates the formal structure of the synapses of the Chinese brain, and it gives Chinese as output. But the man certainly doesn't understand Chinese, and neither do the water pipes, and if we are tempted to adopt what I think is the absurd view that somehow the conjunction of man and water pipes understands, remember that in principle the man can internalize the formal structure of the water pipes and do all the "neuron firings" in his imagination. The problem with the brain simulator is that it is simulating the wrong things about the brain. As long as it simulates only the formal structure of the sequence of neuron firings at the synapses, it won't have simulated what matters about the brain, namely its causal properties, its ability to produce intentional states. And that the formal properties are not sufficient for the causal properties is shown by the water pipe example: we can have all the formal properties carved off from the relevant neurobiological causal properties.
IV. The combination reply (Berkeley and Stanford). "While each of the previous three replies might not be completely convincing by itself as a refutation of the Chinese room counterexample, if you take all three together they are collectively much more convincing and even decisive. Imagine a robot with a brain-shaped computer lodged in its cranial cavity, imagine the computer programmed with all the synapses of a human brain, imagine the whole behavior of the robot is indistinguishable from human behavior, and now think of the whole thing as a unified system and not just as a computer with inputs and outputs. Surely in such a case we would have to ascribe intentionality to the system."
I entirely agree that in such a case we would find it rational and indeed irresistible to accept the hypothesis that the robot had intentionality, as long as we knew nothing more about it. Indeed, besides appearance and behavior, the other elements of the combination are really irrelevant. If we could build a robot whose behavior was indistinguishable over a large range from human behavior, we would attribute intentionality to it, pending some reason not to. We wouldn't need to know in advance that its computer brain was a formal analogue of the human brain.
But I really don't see that this is any help to the claims of strong AI; and here's why: According to strong AI, instantiating a formal program with the right input and output is a sufficient condition of, indeed is constitutive of, intentionality. As Newell (1979) puts it, the essence of the mental is the operation of a physical symbol system. But the attributions of intentionality that we make to the robot in this example have nothing to do with formal programs. They are simply based on the assumption that if the robot looks and behaves sufficiently like us, then we would suppose, until proven otherwise, that it must have mental states like ours that cause and are expressed by its behavior and it must have an inner mechanism capable of producing such mental states. If we knew independently how to account for its behavior without such assumptions we would not attribute intentionality to it, especially if we knew it had a formal program. And this is precisely the point of my earlier reply to objection II.
Suppose we knew that the robot's behavior was entirely accounted for by the fact that a man inside it was receiving uninterpreted formal symbols from the robot's sensory receptors and sending out uninterpreted formal symbols to its motor mechanisms, and the man was doing this symbol manipulation in accordance with a bunch of rules. Furthermore, suppose the man knows none of these facts about the robot, all he knows is which operations to perform on which meaningless symbols. In such a case we would regard the robot as an ingenious mechanical dummy. The hypothesis that the dummy has a mind would now be unwarranted and unnecessary, for there is now no longer any reason to ascribe intentionality to the robot or to the system of which it is a part (except of course for the man's intentionality in manipulating the symbols). The formal symbol manipulations go on, the input and output are correctly matched, but the only real locus of intentionality is the man, and he doesn't know any of the relevant intentional states; he doesn't, for example, see what comes into the robot's eyes, he doesn't intend to move the robot's arm, and he doesn't understand any of the remarks made to or by the robot. Nor, for the reasons stated earlier, does the system of which man and robot are a part.
To see this point, contrast this case with cases in which we find it completely natural to ascribe intentionality to members of certain other primate species such as apes and monkeys and to domestic animals such as dogs. The reasons we find it natural are, roughly, two: we can't make sense of the animal's behavior without the ascription of intentionality, and we can see that the beasts are made of similar stuff to ourselves - that is an eye, that a nose, this is its skin, and so on. Given the coherence of the animal's behavior and the assumption of the same causal stuff underlying it, we assume both that the animal must have mental states underlying its behavior, and that the mental states must be produced by mechanisms made out of the stuff that is like our stuff. We would certainly make similar assumptions about the robot unless we had some reason not to, but as soon as we knew that the behavior was the result of a formal program, and that the actual causal properties of the physical substance were irrelevant we would abandon the assumption of intentionality. [See "Cognition and Consciousness in Nonhuman Species" BBS I(4) 1978.]
There are two other responses to my example that come up frequently (and so are worth discussing) but really miss the point.
V. The other minds reply (Yale). "How do you know that other people understand Chinese or anything else? Only by their behavior. Now the computer can pass the behavioral tests as well as they can (in principle), so if you are going to attribute cognition to other people you must in principle also attribute it to computers."
This objection really is only worth a short reply. The problem in this discussion is not about how I know that other people have cognitive states, but rather what it is that I am attributing to them when I attribute cognitive states to them. The thrust of the argument is that it couldn't be just computational processes and their output because the computational processes and their output can exist without the cognitive state. It is no answer to this argument to feign anesthesia. In "cognitive sciences" one presupposes the reality and knowability of the mental in the same way that in physical sciences one has to presuppose the reality and knowability of physical objects.
VI. The many mansions reply (Berkeley). "Your whole argument presupposes that AI is only about analogue and digital computers. But that just happens to be the present state of technology. Whatever these causal processes are that you say are essential for intentionality (assuming you are right), eventually we will be able to build devices that have these causal processes, and that will be artificial intelligence. So your arguments are in no way directed at the ability of artificial intelligence to produce and explain cognition."
I really have no objection to this reply save to say that it in effect trivializes the project of strong AI by redefining it as whatever artificially produces and explains cognition. The interest of the original claim made on behalf of artificial intelligence is that it was a precise, well defined thesis: mental processes are computational processes over formally defined elements. I have been concerned to challenge that thesis. If the claim is redefined so that it is no longer that thesis, my objections no longer apply because there is no longer a testable hypothesis for them to apply to.
Let us now return to the question I promised I would try to answer: granted that in my original example I understand the English and I do not understand the Chinese, and granted therefore that the machine doesn't understand either English or Chinese, still there must be something about me that makes it the case that I understand English and a corresponding something lacking in me that makes it the case that I fail to understand Chinese. Now why couldn't we give those somethings, whatever they are, to a machine?
I see no reason in principle why we couldn't give a machine the capacity to understand English or Chinese, since in an important sense our bodies with our brains are precisely such machines. But I do see very strong arguments for saying that we could not give such a thing to a machine where the operation of the machine is defined solely in terms of computational processes over formally defined elements; that is, where the operation of the machine is defined as an instantiation of a computer program. It is not because I am the instantiation of a computer program that I am able to understand English and have other forms of intentionality (I am, I suppose, the instantiation of any number of computer programs), but as far as we know it is because I am a certain sort of organism with a certain biological (i.e. chemical and physical) structure, and this structure, under certain conditions, is causally capable of producing perception, action, understanding, learning, and other intentional phenomena. And part of the point of the present argument is that only something that had those causal powers could have that intentionality. Perhaps other physical and chemical processes could produce exactly these effects; perhaps, for example, Martians also have intentionality but their brains are made of different stuff. That is an empirical question, rather like the question whether photosynthesis can be done by something with a chemistry different from that of chlorophyll.
But the main point of the present argument is that no purely formal model will ever be sufficient by itself for intentionality because the formal properties are not by themselves constitutive of intentionality, and they have by themselves no causal powers except the power, when instantiated, to produce the next stage of the formalism when the machine is running. And any other causal properties that particular realizations of the formal model have, are irrelevant to the formal model because we can always put the same formal model in a different realization where those causal properties are obviously absent. Even if, by some miracle, Chinese speakers exactly realize Schank's program, we can put the same program in English speakers, water pipes, or computers, none of which understand Chinese, the program notwithstanding.
What matters about brain operations is not the formal shadow cast by the sequence of synapses but rather the actual properties of the sequences. All the arguments for the strong version of artificial intelligence that I have seen insist on drawing an outline around the shadows cast by cognition and then claiming that the shadows are the real thing.
By way of concluding I want to try to state some of the general philosophical points implicit in the argument. For clarity I will try to do it in a question and answer fashion, and I begin with that old chestnut of a question:
"Could a machine think?"
The answer is, obviously, yes. We are precisely such machines.
"Yes, but could an artifact, a man-made machine, think?"
Assuming it is possible to produce artificially a machine with a nervous system, neurons with axons and dendrites, and all the rest of it, sufficiently like ours, again the answer to the question seems to be obviously, yes. If you can exactly duplicate the causes, you could duplicate the effects. And indeed it might be possible to produce consciousness, intentionality, and all the rest of it using some other sorts of chemical principles than those that human beings use. It is, as I said, an empirical question.
"OK, but could a digital computer think?"
If by "digital computer" we mean anything at all that has a level of description where it can correctly be described as the instantiation of a computer program, then again the answer is, of course, yes, since we are the instantiations of any number of computer programs, and we can think.
"But could something think, understand, and so on solely in virtue of being a computer with the right sort of program? Could instantiating a program, the right program of course, by itself be a sufficient condition of understanding?"
This I think is the right question to ask, though it is usually confused with one or more of the earlier questions, and the answer to it is no.
"Why not?" "为什么不呢?"
Because the formal symbol manipulations by themselves don't have any intentionality; they are quite meaningless; they aren't even symbol manipulations, since the symbols don't symbolize anything. In the linguistic jargon, they have only a syntax but no semantics. Such intentionality as computers appear to have is solely in the minds of those who program them and those who use them, those who send in the input and those who interpret the output.
The aim of the Chinese room example was to try to show this by showing that as soon as we put something into the system that really does have intentionality (a man), and we program him with the formal program, you can see that the formal program carries no additional intentionality. It adds nothing, for example, to a man's ability to understand Chinese.
Precisely that feature of AI that seemed so appealing - the distinction between the program and the realization - proves fatal to the claim that simulation could be duplication. The distinction between the program and its realization in the hardware seems to be parallel to the distinction between the level of mental operations and the level of brain operations. And if we could describe the level of mental operations as a formal program, then it seems we could describe what was essential about the mind without doing either introspective psychology or neurophysiology of the brain. But the equation, "mind is to brain as program is to hardware" breaks down at several points, among them the following three:
First, the distinction between program and realization has the consequence that the same program could have all sorts of crazy realizations that had no form of intentionality. Weizenbaum (1976, Ch. 2), for example, shows in detail how to construct a computer using a roll of toilet paper and a pile of small stones. Similarly, the Chinese story understanding program can be programmed into a sequence of water pipes, a set of wind machines, or a monolingual English speaker, none of which thereby acquires an understanding of Chinese. Stones, toilet paper, wind, and water pipes are the wrong kind of stuff to have intentionality in the first place - only something that has the same causal powers as brains can have intentionality - and though the English speaker has the right kind of stuff for intentionality you can easily see that he doesn't get any extra intentionality by memorizing the program, since memorizing it won't teach him Chinese.
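The independence of program from realization can itself be sketched in code. The transition table and state names below are invented for illustration: one and the same formal program is run on two different data structures, standing in for two physically different realizations (silicon, water pipes, stones and toilet paper).

```python
# One formal program - a transition table - given two distinct
# "realizations". The table is a hypothetical illustration; the point
# is that the formal structure is identical whatever the medium.
PROGRAM = {("s0", "a"): ("s1", "b"), ("s1", "b"): ("s0", "a")}

def step_dict(state, symbol):
    # Realization 1: the table held in a hash map.
    return PROGRAM[(state, symbol)]

# Realization 2: the very same table flattened into a list of rows.
ROWS = [(s, x, s2, y) for (s, x), (s2, y) in PROGRAM.items()]

def step_rows(state, symbol):
    for s, x, s2, y in ROWS:
        if (s, x) == (state, symbol):
            return (s2, y)

# Both realizations instantiate the same formal program:
assert step_dict("s0", "a") == step_rows("s0", "a") == ("s1", "b")
```

Nothing about either realization's physical makeup enters into the formal description; that is exactly why the formal description cannot by itself carry the causal powers of any particular realization.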
Second, the program is purely formal, but the intentional states are not in that way formal. They are defined in terms of their content, not their form. The belief that it is raining, for example, is not defined as a certain formal shape, but as a certain mental content with conditions of satisfaction, a direction of fit (see Searle 1979), and the like. Indeed the belief as such hasn't even got a formal shape in this syntactic sense, since one and the same belief can be given an indefinite number of different syntactic expressions in different linguistic systems.
Third, as I mentioned before, mental states and events are literally a product of the operation of the brain, but the program is not in that way a product of the computer.
"Well if programs are in no way constitutive of mental processes, why have so many people believed the converse? That at least needs some explanation."
I don't really know the answer to that one. The idea that computer simulations could be the real thing ought to have seemed suspicious in the first place because the computer isn't confined to simulating mental operations, by any means. No one supposes that computer simulations of a five-alarm fire will burn the neighborhood down or that a computer simulation of a rainstorm will leave us all drenched. Why on earth would anyone suppose that a computer simulation of understanding actually understood anything? It is sometimes said that it would be frightfully hard to get computers to feel pain or fall in love, but love and pain are neither harder nor easier than cognition or anything else. For simulation, all you need is the right input and output and a program in the middle that transforms the former into the latter. That is all the computer has for anything it does. To confuse simulation with duplication is the same mistake, whether it is pain, love, cognition, fires, or rainstorms.
Still, there are several reasons why AI must have seemed - and to many people perhaps still does seem - in some way to reproduce and thereby explain mental phenomena, and I believe we will not succeed in removing these illusions until we have fully exposed the reasons that give rise to them.
First, and perhaps most important, is a confusion about the notion of "information processing": many people in cognitive science believe that the human brain, with its mind, does something called "information processing," and analogously the computer with its program does information processing; but fires and rainstorms, on the other hand, don't do information processing at all. Thus, though the computer can simulate the formal features of any process whatever, it stands in a special relation to the mind and brain because when the computer is properly programmed, ideally with the same program as the brain, the information processing is identical in the two cases, and this information processing is really the essence of the mental. But the trouble with this argument is that it rests on an ambiguity in the notion of "information." In the sense in which people "process information" when they reflect, say, on problems in arithmetic or when they read and answer questions about stories, the programmed computer does not do "information processing." Rather, what it does is manipulate formal symbols. The fact that the programmer and the interpreter of the computer output use the symbols to stand for objects in the world is totally beyond the scope of the computer. The computer, to repeat, has a syntax but no semantics. Thus, if you type into the computer "2 plus 2 equals?" it will type out "4." But it has no idea that "4" means 4 or that it means anything at all. And the point is not that it lacks some second-order information about the interpretation of its first-order symbols, but rather that its first-order symbols don't have any interpretations as far as the computer is concerned. All the computer has is more symbols. The introduction of the notion of "information processing" therefore produces a dilemma: either we construe the notion of "information processing" in such a way that it implies intentionality as part of the process or we don't.
If the former, then the programmed computer does not do information processing, it only manipulates formal symbols. If the latter, then, though the computer does information processing, it is only doing so in the sense in which adding machines, typewriters, stomachs, thermostats, rainstorms, and hurricanes do information processing; namely, they have a level of description at which we can describe them as taking information in at one end, transforming it, and producing information as output. But in this case it is up to outside observers to interpret the input and output as information in the ordinary sense. And no similarity is established between the computer and the brain in terms of any similarity of information processing.
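The "2 plus 2" point above can be made concrete with a deliberately crude sketch. The lookup table below is hypothetical, and the procedure answering the query performs no arithmetic at all: it merely pairs one uninterpreted string shape with another.

```python
# Syntax without semantics: the "answer" is produced by pairing one
# string shape with another. Nothing in the procedure connects the
# shape "4" with the number four. The table is a hypothetical
# illustration, not any actual program's rule set.
TABLE = {
    "2 plus 2 equals?": "4",
    "2 plus 3 equals?": "5",
}

def answer(query: str) -> str:
    return TABLE.get(query, "?")

print(answer("2 plus 2 equals?"))  # prints: 4
```

Any interpretation of "4" as standing for the number four is supplied by the programmer who built the table and the user who reads the output, not by the procedure.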
Second, in much of AI there is a residual behaviorism or operationalism. Since appropriately programmed computers can have input-output patterns similar to those of human beings, we are tempted to postulate mental states in the computer similar to human mental states. But once we see that it is both conceptually and empirically possible for a system to have human capacities in some realm without having any intentionality at all, we should be able to overcome this impulse. My desk adding machine has calculating capacities, but no intentionality, and in this paper I have tried to show that a system could have input and output capabilities that duplicated those of a native Chinese speaker and still not understand Chinese, regardless of how it was programmed. The Turing test is typical of the tradition in being unashamedly behavioristic and operationalistic, and I believe that if AI workers totally repudiated behaviorism and operationalism much of the confusion between simulation and duplication would be eliminated.
Third, this residual operationalism is joined to a residual form of dualism; indeed strong AI only makes sense given the dualistic assumption that, where the mind is concerned, the brain doesn't matter. In strong AI (and in functionalism, as well) what matters are programs, and programs are independent of their realization in machines; indeed, as far as AI is concerned, the same program could be realized by an electronic machine, a Cartesian mental substance, or a Hegelian world spirit. The single most surprising discovery that I have made in discussing these issues is that many AI workers are quite shocked by my idea that actual human mental phenomena might be dependent on actual physical-chemical properties of actual human brains. But if you think about it a minute you can see that I should not have been surprised; for unless you accept some form of dualism, the strong AI project hasn't got a chance. The project is to reproduce and explain the mental by designing programs, but unless the mind is not only conceptually but empirically independent of the brain you couldn't carry out the project, for the program is completely independent of any realization. Unless you believe that the mind is separable from the brain both conceptually and empirically - dualism in a strong form - you cannot hope to reproduce the mental by writing and running programs since programs must be independent of brains or any other particular forms of instantiation. If mental operations consist in computational operations on formal symbols, then it follows that they have no interesting connection with the brain; the only connection would be that the brain just happens to be one of the indefinitely many types of machines capable of instantiating the program. This form of dualism is not the traditional Cartesian variety that claims there are two sorts of substances, but it is Cartesian in the sense that it insists that what is specifically mental about the mind has no intrinsic connection with the actual properties of the brain. This underlying dualism is masked from us by the fact that AI literature contains frequent fulminations against "dualism"; what the authors seem to be unaware of is that their position presupposes a strong version of dualism.
因为程序完全独立于任何实现。除非你相信心智与大脑在概念上和经验上都是可分离的--即强形式的二元论--否则你就不能指望通过编写和运行程序来重现心智,因为程序必须独立于大脑或任何其他特定的实现形式。如果思维操作是对形式化符号的计算操作,那么它们与大脑就没有什么有趣的联系;唯一的联系就是大脑恰好是能够实例化程序的无限多类型机器中的一种。这种形式的二元论并不是传统的笛卡尔式的二元论,它宣称存在两种物质,但它是笛卡尔式的二元论,因为它坚持认为,关于心智的具体精神性与大脑的实际属性没有内在联系。人工智能文献中经常出现反对 "二元论 "的言论,这掩盖了这种潜在的二元论;但作者们似乎没有意识到,他们的立场是以强烈的二元论为前提的。
"Could a machine think?" My own view is that only a machine could think, and indeed only very special kinds of machines, namely brains and machines that had the same causal powers as brains. And that is the main reason strong AI has had little to tell us about thinking, since it has nothing to tell us about machines. By its own definition, it is about programs, and programs are not machines. Whatever else intentionality is, it is a biological phenomenon, and it is as likely to be as causally dependent on the specific biochemistry of its origins as lactation, photosynthesis, or any other biological phenomena. No one would suppose that we could produce milk and sugar by running a computer simulation of the formal sequences in lactation and photosynthesis, but where the mind is concerned many people are willing to believe in such a miracle because of a deep and abiding dualism: the mind they suppose is a matter of formal processes and is independent of quite specific material causes in the way that milk and sugar are not.
In defense of this dualism the hope is often expressed that the brain is a digital computer (early computers, by the way, were often called "electronic brains"). But that is no help. Of course the brain is a digital computer. Since everything is a digital computer, brains are too. The point is that the brain's causal capacity to produce intentionality cannot consist in its instantiating a computer program, since for any program you like it is possible for something to instantiate that program and still not have any mental states. Whatever it is that the brain does to produce intentionality, it cannot consist in instantiating a program since no program, by itself, is sufficient for intentionality.


I am indebted to a rather large number of people for discussion of these matters and for their patient attempts to overcome my ignorance of artificial intelligence. I would especially like to thank Ned Block, Hubert Dreyfus, John Haugeland, Roger Schank, Robert Wilensky, and Terry Winograd.


  1. I am not, of course, saying that Schank himself is committed to these claims.
  2. Also, "understanding" implies both the possession of mental (intentional) states and the truth (validity, success) of these states. For the purposes of this discussion we are concerned only with the possession of the states.
  3. Intentionality is by definition that feature of certain mental states by which they are directed at or about objects and states of affairs in the world. Thus, beliefs, desires, and intentions are intentional states; undirected forms of anxiety and depression are not. For further discussion see Searle (1979c).

Open Peer Commentary

Commentaries submitted by the qualified professional readership of this journal will be considered for publication in a later issue as Continuing Commentary on this article.

by Robert P. Abelson

Department of Psychology, Yale University, New Haven, Conn. 06520

Searle's argument is just a set of Chinese symbols

Searle claims that the apparently commonsensical programs of the Yale AI project really don't display meaningful understanding of text. For him, the computer processing a story about a restaurant visit is just a Chinese symbol manipulator blindly applying uncomprehended rules to uncomprehended text. What is missing, Searle says, is the presence of intentional states.
Searle is misguided in this criticism in at least two ways. First of all, it is no trivial matter to write rules to transform the "Chinese symbols" of a story text into the "Chinese symbols" of appropriate answers to questions about the story. To dismiss this programming feat as mere rule mongering is like downgrading a good piece of literature as something that British Museum monkeys can eventually produce. The programmer needs a very crisp understanding of the real world to write the appropriate rules. Mediocre rules produce feeble-minded output, and have to be rewritten. As rules are sharpened, the output gets more and more convincing, so that the process of rule development is convergent. This is a characteristic of the understanding of a content area, not of blind exercise within it.
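Abelson's point about convergent rule development can be made concrete with a deliberately tiny sketch. Nothing below comes from the actual Yale programs; the script, the keyword rules, and the questions are all invented for illustration, but they show the kind of work the rules do: unstated script steps get inferred from stated ones.

```python
# Toy script-based story understander (illustrative invention, not the
# Yale AI project's code). A restaurant "script" lets the system answer
# questions about steps the story never mentions.

RESTAURANT_SCRIPT = ["enter", "sit", "read_menu", "order", "eat", "pay", "leave"]

def parse(story):
    """Map surface sentences onto script events via crude keyword rules."""
    rules = {"went to a restaurant": "enter",
             "ordered": "order",
             "ate": "eat",
             "paid": "pay",
             "left": "leave"}
    events = []
    for sentence in story:
        for cue, event in rules.items():
            if cue in sentence:
                events.append(event)
    return events

def answer(events, question):
    """Answer questions by treating later script steps as presupposing earlier ones."""
    if question == "Did John read the menu?":
        # Inference rule: ordering presupposes the earlier "read_menu" step.
        return "yes" if "order" in events else "unknown"
    if question == "Did John eat?":
        return "yes" if "eat" in events or "pay" in events else "unknown"
    return "unknown"

story = ["John went to a restaurant.", "He ordered a hamburger.", "He paid and left."]
print(answer(parse(story), "Did John read the menu?"))  # infers an unstated step
```

Sharpening the keyword rules and presupposition rules is exactly the convergent process Abelson describes: feeble rules give feeble answers until the programmer's understanding of restaurants is encoded.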
Ah, but Searle would say that such understanding is in the programmer and not in the computer. Well, yes, but what's the issue? Most precisely, the understanding is in the programmer's rule set, which the computer exercises. No one I know of (at Yale, at least) has claimed autonomy for the computer. The computer is not even necessary to the representational theory; it is just very, very convenient and very, very vivid.
But just suppose that we wanted to claim that the computer itself understood the story content. How could such a claim be defended, given that the computer is merely crunching away on statements in program code and producing other statements in program code which (following translation) are applauded by outside observers as being correct and perhaps even clever? What kind of understanding is that? It is, I would assert, very much the kind of understanding that people display in exposure to new content via language or other symbol systems. When a child learns to add, what does he do except apply rules? Where does "understanding" enter? Is it understanding that the results of addition apply independent of content, so that "2 + 3 = 5" means that if you have 2 things and you assemble them with 3 things, then you'll have 5 things? But that's a rule, too. Is it understanding that the units place can be translated into pennies, the tens place into dimes, and the hundreds place into dollars, so that additions of numbers are isomorphic with additions of money? But that's a rule connecting rule systems. In general, it seems that as more and more rules about a given content are incorporated, especially if they connect with other content domains, we have a sense that understanding is increasing. At what point does a person graduate from "merely" manipulating rules to "really" understanding?
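The place-value and money examples above can be mocked up directly. Both functions below are nothing but rules, which is exactly Abelson's point: the second is "a rule connecting rule systems," yet stacking such rules is what we ordinarily call understanding. (An illustrative sketch only; nothing here is from the commentary.)

```python
# Column-by-column addition with carrying: the child's rule set.
def add_by_rules(a, b):
    result, carry = [], 0
    da = [int(d) for d in str(a)][::-1]  # digits, least significant first
    db = [int(d) for d in str(b)][::-1]
    for i in range(max(len(da), len(db))):
        s = (da[i] if i < len(da) else 0) + (db[i] if i < len(db) else 0) + carry
        result.append(s % 10)
        carry = s // 10
    if carry:
        result.append(carry)
    return int("".join(str(d) for d in reversed(result)))

# The rule connecting rule systems: units -> pennies, tens -> dimes,
# hundreds -> dollars, making number addition isomorphic to money addition.
def to_coins(n):
    return {"dollars": n // 100, "dimes": (n % 100) // 10, "pennies": n % 10}

assert add_by_rules(123, 456) == 579
assert to_coins(579) == {"dollars": 5, "dimes": 7, "pennies": 9}
```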
Educationists would love to know, and so would I, but I would be willing to bet that by the Chinese symbol test, most of the people reading this don't really understand the transcendental number e, economic inflation, or nuclear power plant safety, or how sailboats can sail upwind. (Be honest with yourself!) Searle's argument itself, sallying forth as it does into a symbol-laden domain that is intrinsically difficult to "understand," could well be seen as mere symbol manipulation. His main rule is that if you see the Chinese symbols for "formal computational operations," then you output the Chinese symbols for "no understanding at all."
Given the very common exercise in human affairs of linguistic interchange in areas where it is not demonstrable that we know what we are talking about, we might well be humble and give the computer the benefit of the doubt when and if it performs as well as we do. If we credit people with understanding by virtue of their apparently competent verbal performances, we might extend the same courtesy to the machine. It is a conceit, not an insight, to give ourselves more credit for a comparable performance.
But Searle airily dismisses this "other minds" argument, and still insists that the computer lacks something essential. Chinese symbol rules only go so far, and for him, if you don't have everything, you don't have anything. I should think rather that if you don't have everything, you don't have everything. But in any case, the missing ingredient for Searle is his concept of intentionality. In his paper, he does not justify why this is the key factor. It seems more obvious that what the manipulator of Chinese symbols misses is extensional validity. Not to know that the symbol for "menu" refers to that thing out in the world that you can hold and fold and look at closely is to miss some real understanding of what is meant by menu. I readily acknowledge the importance of such sensorimotor knowledge. The understanding of how a sailboat sails upwind gained through the feel of sail and rudder is certainly valid, and is not the same as a verbal explanation.
Verbal-conceptual computer programs lacking sensorimotor connection with the world may well miss things. Imagine the following piece of a story: "John told Harry he couldn't find the book. Harry rolled his eyes toward the ceiling." Present common sense inference models can make various predictions about Harry's relation to the book and its unfindability. Perhaps he loaned it to John, and therefore would be upset that it seemed lost. But the unique and nondecomposable meaning of eye rolling is hard for a model to capture except by a clumsy, concrete dictionary entry. A human understander, on the other hand, can imitate Harry's eye roll overtly or in imagination and experience holistically the resigned frustration that Harry must feel. It is important to explore the domain of examples like this.
But why instead is "intentionality" so important for Searle? If we recite his litany, "hopes, fears, and desires," we don't get the point. A computer or a human certainly need not have hopes or fears about the customer in order to understand a story about a restaurant visit. And inferential use of these concepts is well within the capabilities of computer understanding models. Goal-based inferences, for example, are a standard mechanism in programs of the Yale AI project. Rather, the crucial state of "intentionality" for knowledge is the appreciation of the conditions for its falsification. In what sense does the computer realize that the assertion, "John read the menu" might or might not be true, and that there are ways in the real world to find out?
Well, Searle has a point there, although I do not see it as the trump card he thinks he is playing. The computer operates in a gullible fashion: it takes every assertion to be true. There are thus certain knowledge problems that have not been considered in artificial intelligence programs for language understanding, for example, the question of what to do when a belief about the world is contradicted by data: should the belief be modified, or the data called into question? These questions have been discussed by psychologists in the context of human knowledge-handling proclivities, but the issues are beyond present AI capability. We shall have to see what happens in this area. The naivete of computers about the validity of what we tell them is perhaps touching, but it would hardly seem to justify the total scorn exhibited by Searle. There are many areas of knowledge within which questions of falsifiability are quite secondary - the understanding of literary fiction, for example. Searle has not made convincing his case for the fundamental essentiality of intentionality in understanding. My Chinese symbol processor, at any rate, is not about to output the symbol for "surrender."

by Ned Block

Department of Linguistics and Philosophy, Massachusetts Institute of Technology, Cambridge, Mass. 02139

What intuitions about homunculi don't show

Searle's argument depends for its force on intuitions that certain entities do not think. There are two simple objections to his argument that are based on general considerations about what can be shown by intuitions that something can't think.

First, we are willing, and rightly so, to accept counterintuitive consequences of claims for which we have substantial evidence. It once seemed intuitively absurd to assert that the earth was whirling through space at breakneck speed, but in the face of the evidence for the Copernican view, such an intuition should be (and eventually was) rejected as irrelevant to the truth of the matter. More relevantly, a grapefruit-sized head-enclosed blob of gray protoplasm seems, at least at first blush, a most implausible seat of mentality. But if your intuitions still balk at brains as seats of mentality, you should ignore your intuitions as irrelevant to the truth of the matter, given the remarkable evidence for the role of the brain in our mental life. Searle presents some alleged counterintuitive consequences of the view of cognition as formal symbol manipulation. But his argument does not even have the right form, for in order to know whether we should reject the doctrine because of its alleged counterintuitive consequences, we must know what sort of evidence there is in favor of the doctrine. If the evidence for the doctrine is overwhelming, then incompatible intuitions should be ignored, just as should intuitions that the brain couldn't be the seat of mentality. So Searle's argument has a missing premise to the effect that the evidence isn't sufficient to overrule the intuitions.
Well, is such a missing premise true? I think that anyone who takes a good undergraduate cognitive psychology course would see enough evidence to justify tentatively disregarding intuitions of the sort that Searle appeals to. Many theories in the tradition of thinking as formal symbol manipulation have a moderate (though admittedly not overwhelming) degree of empirical support.
A second point against Searle has to do with another aspect of the logic of appeals to intuition. At best, intuition reveals facts about our concepts (at worst, facts about a motley of factors such as our prejudices, ignorance, and, still worse, our lack of imagination - as when people accepted the deliverance of intuition that two straight lines cannot cross twice). So even if we were to accept Searle's appeal to intuitions as showing that homunculus heads that formally manipulate symbols do not think, what this would show is that our formal symbol-manipulation theories do not provide a sufficient condition for the application of our ordinary intentional concepts. The more interesting issue, however, is whether the homunculus head's formal symbol manipulation falls in the same scientific natural kind (see Putnam 1975a) as our intentional processes. If so, then the homunculus head does think in a reasonable scientific sense of the term - and so much the worse for the ordinary concept. Moreover, if we are very concerned with ordinary intentional concepts, we can give sufficient conditions for their application by building in ad hoc conditions designed to rule out the putative counterexamples. A first stab (inadequate, but improvable - see Putnam 1975b, p. 435; Block 1978, p. 292) would be to add the condition that in order to think, realizations of the symbol-manipulating system must not have operations mediated by entities that themselves have symbol manipulation typical of intentional systems. The ad hocness of such a condition is not an objection to it, given that what we are trying to do is "reconstruct" an everyday concept out of a scientific one; we can expect the everyday concept to be scientifically characterizable only in an unnatural way. (See Fodor's commentary on Searle, this issue.) Finally, there is good reason for thinking that the Putnam-Kripke account of the semantics of "thought" and other intentional terms is correct. If so, and if the formal symbol manipulation of the homunculus head falls in the same natural kind as our cognitive processes, then the homunculus head does think, in the ordinary sense as well as in the scientific sense of the term.
The upshot of both these points is that the real crux of the debate rests on a matter that Searle does not so much as mention: what the evidence is for the formal symbol-manipulation point of view.
Recall that Searle's target is the doctrine that cognition is formal symbol manipulation, that is, manipulation of representations by mechanisms that take account only of the forms (shapes) of the representations. Formal symbol-manipulation theories of cognition postulate a variety of mechanisms that generate, transform, and compare representations. Once one sees this doctrine as Searle's real target, one can simply ignore his objections to Schank. The idea that a machine programmed à la Schank has anything akin to mentality is not worth taking seriously, and casts as much doubt on the symbol-manipulation theory of thought as Hitler casts on doctrine favoring a strong executive branch of government. Any plausibility attaching to the idea that a Schank machine thinks would seem to derive from a crude Turing test version of behaviorism that is anathema to most who view cognition as formal symbol manipulation.¹
Consider a robot akin to the one sketched in Searle's reply II (omitting features that have to do with his criticism of Schank). It simulates your input-output behavior by using a formal symbol-manipulation theory of the sort just sketched of your cognitive processes (together with a theory of your noncognitive mental processes, a qualification omitted from now on). Its body is like yours except that instead of a brain it has a computer equipped with a cognitive theory true of you. You receive an input: "Who is your favorite philosopher?" You cogitate a bit and reply "Heraclitus." If your robot doppelgänger receives the same input, a mechanism converts the input into a description of the input. The computer uses its description of your cognitive mechanisms to deduce a description of the product of your cogitation. This description is then transmitted to a device that transforms the description into the noise "Heraclitus."
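Block's doppelgänger can be caricatured in a few lines. The point of the sketch is structural: every step manipulates descriptions of cogitation rather than cogitating. The lookup-table "cognitive theory" and all function names are invented stand-ins.

```python
# A toy stand-in for "a cognitive theory true of you": it maps a
# description of the input to a description of what your cogitation
# would produce. (Purely illustrative; a real theory would be vast.)
COGNITIVE_THEORY = {
    "input: 'Who is your favorite philosopher?'":
        "output: utterance 'Heraclitus'",
}

def describe(stimulus):
    """Input transducer: converts the raw input into a description of it."""
    return f"input: {stimulus!r}"

def deduce(description):
    """Deduces a description of the product of your cogitation."""
    return COGNITIVE_THEORY[description]

def emit(description):
    """Transforms the output description into the noise itself."""
    return description.removeprefix("output: utterance ").strip("'")

print(emit(deduce(describe("Who is your favorite philosopher?"))))  # Heraclitus
```

The robot's reply matches yours, yet no step here is cogitation about philosophers; each is a transformation of a description of cogitation, which is Block's reason for doubting the robot has mental states.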
While the robot just described behaves just as you would given any input, it is not obvious that it has any mental states. You cogitate in response to the question, but what goes on in the robot is manipulation of descriptions of your cogitation so as to produce the same response. It isn't obvious that the manipulation of descriptions of cogitation in this way is itself cogitation.
My intuitions agree with Searle about this kind of case (see Block, forthcoming), but I have encountered little agreement on the matter. In the absence of widely shared intuition, I ask the reader to pretend to have Searle's and my intuition on this question. Now I ask another favor, one that should be firmly distinguished from the first: take the leap from intuition to fact (a leap that, as I argued in the first four paragraphs of this commentary, Searle gives us no reason to take). Suppose, for the sake of argument, that the robot described above does not in fact have intentional states.
What I want to point out is that even if we grant Searle all this, the doctrine that cognition is formal symbol manipulation remains utterly unscathed. For it is no part of the symbol-manipulation view of cognition that the kind of manipulation attributed to descriptions of our symbol-manipulating cognitive processes is itself a cognitive process. Those who believe formal symbol-manipulation theories of intentionality must assign intentionality to anything of which the theories are true. But the theories cannot be expected to be true of devices that use them to mimic beings of which they are true.
Thus far, I have pointed out that intuitions that Searle's sort of homunculus head does not think do not challenge the doctrine that thinking is formal symbol manipulation. But a variant of Searle's example, similar to his in its intuitive force, but that avoids the criticism I just sketched, can be described.
Recall that it is the aim of cognitive psychology to decompose mental processes into combinations of processes in which mechanisms generate representations, other mechanisms transform representations, and still other mechanisms compare representations, issuing reports to still other mechanisms, the whole network being appropriately connected to sensory input transducers and motor output devices. The goal of such theorizing is to decompose these processes to the point at which the mechanisms that carry out the operations have no internal goings on that are themselves decomposable into symbol manipulation by still further mechanisms. Such ultimate mechanisms are described as "primitive," and are often pictured in flow diagrams as "black boxes" whose realization is a matter of "hardware" and whose operation is to be explained by the physical sciences, not psychology. (See Fodor 1968; 1980; Dennett 1975.)
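The flow-diagram decomposition just described can be sketched as a chain of primitive mechanisms. Each function below stands for one "black box"; the specific boxes and the memory contents are invented for illustration. The theory specifies only the symbol manipulation each box performs, so each could equally be realized by a circuit or, in Block's variant, by a telephone-wielding homunculus.

```python
def generate(stimulus):
    """Black box 1: produce a representation from sensory input."""
    return ("REP", stimulus.lower())

def transform(rep):
    """Black box 2: rewrite the representation (strip punctuation)."""
    kind, content = rep
    return (kind, content.replace("?", ""))

def compare(rep, stored):
    """Black box 3: match against memory and issue a report."""
    return "MATCH" if rep[1] in stored else "NO-MATCH"

MEMORY = {"what time is it", "hello"}

def motor_output(report):
    """Output transducer: turn the report into behavior."""
    return "nod" if report == "MATCH" else "shrug"

print(motor_output(compare(transform(generate("Hello")), MEMORY)))  # nod
```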
Now consider an ideally completed theory along these lines, a theory of your cognitive mechanisms. Imagine a robot whose body is like yours, but whose head contains an army of homunculi, one for each black box. Each homunculus does the symbol-manipulating job of the black box he replaces, transmitting his "output" to other homunculi by telephone in accordance with the cognitive theory. This homunculi head is just a variant of one that Searle uses, and it completely avoids the criticism I sketched above, because the cognitive theory it implements is actually true of you. Call this robot the cognitive homunculi head. (The cognitive homunculi head is discussed in more detail in Block 1978, pp. 305-10.) I shall argue that even if you have the intuition that the cognitive homunculi head has no intentionality, you should not regard this intuition as casting doubt on the truth of symbol-manipulation theories of thought.
One line of argument against the cognitive homunculi head is that its persuasive power may be due to a "not seeing the forest for the trees" illusion (see Lycan's commentary, this issue, and Lycan, forthcoming). Another point is that brute untutored intuition tends to balk at assigning intentionality to any physical system, including Searle's beloved brains. Does Searle really think that it is an initially congenial idea that a hunk of gray jelly is the seat of his intentionality? (Could one imagine a less likely candidate?) What makes gray jelly so intuitively satisfying to Searle is obviously his knowledge that brains are the seat of our intentionality. But here we see the difficulty in relying on considered intuitions, namely that they depend on our beliefs, and among the beliefs most likely to play a role in the case at hand are precisely our doctrines about whether the formal symbol-manipulation theory of thinking is true or false.
Let me illustrate this and another point via another example (Block 1978, p. 291). Suppose there is a part of the universe that contains matter that is infinitely divisible. In that part of the universe, there are intelligent creatures much smaller than our elementary particles who decide to devote the next few hundred years to creating out of their matter substances with the chemical and physical characteristics (except at the subelementary particle level) of our elements. They build hordes of space ships of different varieties about the sizes of our electrons, protons, and other elementary particles, and fly the ships in such a way as to mimic the behavior of these elementary particles. The ships contain apparatus to produce and detect the type of radiation elementary particles give off. They do this to produce huge (by our standards) masses of substances with the chemical and physical characteristics of oxygen, carbon, and other elements. You go off on an expedition to that part of the universe, and discover the "oxygen" and "carbon." Unaware of its real nature, you set up a colony, using these "elements" to grow plants for food, provide "air" to breathe, and so on. Since one's molecules are constantly being exchanged with the environment, you and other colonizers come to be composed mainly of the "matter" made of the tiny people in space ships.
If any intuitions about homunculi heads are clear, it is clear that coming to be made of the homunculi-infested matter would not affect your mentality. Thus we see that intuition need not balk at assigning intentionality to a being whose intentionality owes crucially to the actions of internal homunculi. Why is it so obvious that coming to be made of homunculi-infested matter would not affect our sapience or sentience? I submit that it is because we have all absorbed enough neurophysiology to know that changes in particles in the brain that do not affect the brain's basic (electrochemical) mechanisms do not affect mentality.
Our intuitions about the mentality of homunculi heads are obviously influenced (if not determined) by what we believe. If so, then the burden of proof lies with Searle to show that the intuition that the cognitive homunculi head has no intentionality (an intuition that I and many others do not share) is not due to doctrine hostile to the symbol-manipulation account of intentionality.
In sum, an argument such as Searle's requires a careful examination of the source of the intuition that the argument depends on, an examination Searle does not begin.

Acknowledgment

I am grateful to Jerry Fodor and Georges Rey for comments on an earlier draft.

Note

  1. While the crude version of behaviorism is refuted by well-known arguments, there is a more sophisticated version that avoids them; however, it can be refuted using an example akin to the one Searle uses against Schank. Such an example is sketched in Block 1978, p. 294, and elaborated in Block, forthcoming

by Bruce Bridgeman

Psychology Board of Studies, University of California, Santa Cruz, Calif. 95064

Brains + programs = minds

There are two sides to this commentary, the first that machines can embody somewhat more than Searle imagines, and the other that humans embody somewhat less. My conclusion will be that the two systems can in principle achieve similar levels of function.
My response to Searle's Gedankenexperiment is a variant of the "robot reply": the robot simply needs more information, both environmental and a priori, than Searle is willing to give to it. The robot can internalize meaning only if it can receive information relevant to a definition of meaning, that is, information with a known relationship to the outside world. First it needs some Kantian innate ideas, such as the fact that some input lines (for instance, inputs from the two eyes or from locations in the same eye) are topographically related to one another. In biological brains this is done with labeled lines. Some of the inputs, such as visual inputs, will be connected primarily with spatial processing programs while others such as auditory ones will be more closely related to temporal processing. Further, the system will be built to avoid some input strings (those representing pain, for example) and to seek others (water when thirsty). These properties and many more are built into the structure of human brains genetically, but can be built into a program as a data base just as well. It may be that the homunculus represented in this program would not know what's going on, but it would soon learn, because it has all of the information necessary to construct a representation of events in the outside world.
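The innate endowment Bridgeman says "can be built into a program as a data base just as well" can be sketched minimally: labeled input lines routed to modality-specific processing, plus built-in avoid/seek responses. Every name and value below is an illustrative assumption of mine, not a specification Bridgeman gives.

```python
# Innate structure as a data base: labeled lines plus built-in valences.
INNATE = {
    # Labeled lines: each input line is tagged with the kind of
    # processing it is wired to, as in a biological brain.
    "lines": {"eye_left": "spatial", "eye_right": "spatial", "ear": "temporal"},
    "avoid": {"pain"},   # input strings the system is built to avoid
    "seek": {"water"},   # input strings it is built to seek when needy
}

def route(line: str, signal: str) -> tuple:
    """Dispatch a signal according to its labeled line and innate valence."""
    processor = INNATE["lines"][line]   # the "labeled line"
    if signal in INNATE["avoid"]:
        return (processor, "withdraw")
    if signal in INNATE["seek"]:
        return (processor, "approach")
    return (processor, "process")
```

Nothing here "knows what's going on"; the point is only that topographic labeling and approach/avoidance dispositions are expressible as data rather than as brain tissue.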
My super robot would learn about the number five, for instance, in the same way that a child does, by interaction with the outside world where the occurrence of the string of symbols representing "five" in its visual or auditory inputs corresponds with the more direct experience of five of something. The fact that numbers can be coded in the computer in more economical ways is no more relevant than the fact that the number five is coded in the digits of a child's hand. Both a priori knowledge and environmental knowledge could be made similar in quantity and quality to that available to a human.
Now I will try to show that human intentionality is not as qualitatively different from machine states as it might seem to an introspectionist. The brain is similar to a computer program in that it too receives only strings of input and produces only strings of output. The inputs are small 0.1-volt signals entering in great profusion along afferent nerves, and the outputs are physically identical signals leaving the central nervous system on efferent nerves. The brain is deaf, dumb, and blind, so that the electrical signals (and a few hormonal messages which need not concern us here) are the only ways that the brain has of knowing about its world or acting upon it.
The exception to this rule is the existing information stored in the brain, both that given in genetic development and that added by experience. But it too came without intentionality of the sort that Searle seems to require, the genetic information being received from long strings of DNA base sequences (clearly there is no intentionality here), and previous inputs being made up of the same streams of 0.1-volt signals that constitute the present input. Now it is clear that no neuron receiving any of these signals or similar signals generated inside the brain has any idea of what is going on. The neuron is only a humble machine which receives inputs and generates outputs as a function of the temporal and spatial relations of the inputs, and its own structural properties. To assert any further properties of brains is the worst sort of dualism.
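Bridgeman's "humble machine" can be sketched as a unit whose output is a fixed function of the timing and weighting of its inputs: a minimal leaky integrate-and-fire cell. All parameter values here are illustrative, not physiological.

```python
class Neuron:
    """A unit whose output depends only on its inputs' temporal and
    spatial relations and its own structural properties."""

    def __init__(self, weights, threshold=1.0, leak=0.5):
        self.weights = weights        # "structural" synaptic strengths
        self.threshold = threshold    # firing threshold
        self.leak = leak              # potential decays by this factor each step
        self.potential = 0.0

    def step(self, inputs):
        """inputs: 0/1 spikes, one per afferent line. Returns 1 on firing."""
        self.potential = self.leak * self.potential + sum(
            w * x for w, x in zip(self.weights, inputs))
        if self.potential >= self.threshold:
            self.potential = 0.0      # reset after firing
            return 1
        return 0
```

With weights of 0.6, two simultaneous spikes fire the cell, while the same two spikes a step apart decay below threshold and do not: the output is sensitive to exactly the temporal relations Bridgeman names, with no "idea of what is going on" anywhere in the loop.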
Searle grants that humans have intentionality, and toward the end of his article he also admits that many animals might have intentionality also. But how far down the phylogenetic scale is he willing to go [see "Cognition and Consciousness in Nonhuman Species" BBS 1(4) 1978]? Does a single-celled animal have intentionality? Clearly not, for it is only a simple machine which receives physically identifiable inputs and "automatically" generates reflex outputs. The hydra with a few dozen neurons might be explained in the same way, a simple nerve network with inputs and outputs that are restricted, relatively easy to understand, and processed according to fixed patterns. Now what about the mollusc with a few hundred neurons, the insect with a few thousand, the amphibian with a few million, or the mammal with billions? To make his argument convincing, Searle needs a criterion for a dividing line in his implicit dualism.
We are left with a human brain that has an intention-free, genetically determined structure, on which are superimposed the results of storms of tiny nerve signals. From this we somehow introspect an intentionality that cannot be assigned to machines. Searle uses the example of arithmetic manipulations to show how humans "understand" something that machines don't. I submit that neither humans nor machines understand numbers in the sense Searle intends. The understanding of numbers greater than about five is always an illusion, for humans can deal with larger numbers only by using memorized tricks rather than true understanding. If I want to add 27 and 54, I don't use some direct numerical understanding or even a spatial or electrical analogue in the brain. Instead, I apply rules that I memorized in elementary school without really knowing what they meant, and combine these rules with memorized facts of addition of one-digit numbers to arrive at an answer without understanding the numbers themselves. Though I have the feeling that I am performing operations on numbers, in terms of the algorithms I use there is nothing numerical about it. In the same way I can add numbers in the billions, although neither I nor anyone else has any concept of what these numbers mean in terms of perceptually meaningful quantities. Any further understanding of the number system that I possess is irrelevant, for it is not used in performing simple computations.
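The rule-following Bridgeman describes can be made concrete: a routine that adds numerals exactly the way the schoolchild does, by looking up memorized one-digit "facts" and carrying, with no step that operates on quantities. The table of one-digit sums is generated with `int()` here purely for brevity; it stands in for rote-memorized symbol pairs.

```python
# Memorized one-digit addition "facts", stored as strings of symbols.
FACTS = {(a, b): str(int(a) + int(b)) for a in "0123456789" for b in "0123456789"}

def add(x: str, y: str) -> str:
    """Add two numerals purely by symbol lookup and a carrying rule."""
    n = max(len(x), len(y))
    x, y = x.rjust(n, "0"), y.rjust(n, "0")
    result, carry = "", "0"
    for a, b in zip(reversed(x), reversed(y)):
        s = FACTS[(a, b)]                 # memorized fact for this column
        s2 = FACTS[(s[-1], carry)]        # fold in the carry, same table
        # a carry occurs if either lookup produced a two-symbol string
        carry = "1" if len(s) == 2 or len(s2) == 2 else "0"
        result = s2[-1] + result
    return ("1" + result) if carry == "1" else result
```

`add("27", "54")` yields `"81"` without any point at which a quantity, as opposed to a symbol, is handled; this is the sense in which the algorithm has "nothing numerical about it."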
The illusion of having a consciousness of numbers is similar to the illusion of having a full-color, well focused visual field; such a concept exists in our consciousness, but the physiological reality falls far short of the introspection. High-quality color information is available only in about the central thirty degrees of the visual field, and the best spatial information in only one or two degrees. I suggest that the feeling of intentionality is a cognitive illusion similar to the feeling of the high-quality visual image. Consciousness is a neurological system like any other, with functions such as the long-term direction of behavior (intentionality?), access to long-term memories, and several other characteristics that make it a powerful, though limited-capacity, processor of biologically useful information.
All of Searle's replies to his Gedankenexperiment are variations on the theme that I have described here, that an adequately designed machine could include intentionality as an emergent quality even though individual parts (transistors, neurons, or whatever) have none. All of the replies have an element of truth, and their shortcomings are more in their failure to communicate the similarity of brains and machines to Searle than in any internal weaknesses. Perhaps the most important difference between brains and machines lies not in their instantiation but in their history, for humans have evolved to perform a variety of poorly understood functions including reproduction and survival in a complex social and ecological context. Programs, being designed without extensive evolution, have more restricted goals and motivations.
Searle's accusation of dualism in AI falls wide of the mark because the mechanist does not insist on a particular mechanism in the organism, but only that "mental" processes be represented in a physical system when the system is functioning. A program lying on a tape spool in a corner is no more conscious than a brain preserved in a glass jar, and insisting that the program if read into an appropriate computer would function with intentionality asserts only that the adequate machine consists of an organization imposed on a physical substrate. The organization is no more mentalistic than the substrate itself. Artificial intelligence is about programs rather than machines only because the process of organizing information and inputs and outputs into an information system has been largely solved by digital computers. Therefore, the program is the only step in the process left to worry about.
Searle may well be right that present programs (as in Schank & Abelson 1977) do not instantiate intentionality according to his definition. The issue is not whether present programs do this but whether it is possible in principle to build machines that make plans and achieve goals. Searle has given us no evidence that this is not possible.

by Arthur C. Danto

Department of Philosophy, Columbia University, New York, N. Y. 10027

The use and mention of terms and the simulation of linguistic understanding

In the ballet Coppélia, a dancer mimics a clockwork dancing doll simulating a dancer. The imitating movements, dancing twice removed, are predictably "mechanical," given the discrepancies of outward resemblance between clockwork dancers and real ones. These discrepancies may diminish to zero with the technological progress of clockwork, until a dancer mimicking a clockwork dancer simulating a dancer may present a spectacle of three indiscernible dancers engaged in a pas de trois. By behavioral criteria, nothing would enable us to identify which is the doll, and the lingering question of whether the clockwork doll is really dancing or only seeming to seems merely verbal - unless we adopt a criterion of meaning much favored by behaviorism that makes the question itself nonsensical.
The question of whether machines instantiate mental predicates has been cast in much the same terms since Turing, and by tacit appeal to outward indiscernibility the question of whether machines understand is either dissolved or trivialized. It is in part a protest against assimilating the meaning of mental predicates to mere behavioral criteria - an assimilation of which Abelson and Schank are clearly guilty, making them behaviorists despite themselves - that animates Searle's effort to mimic a clockwork thinker simulating understanding; to the degree that he instantiates the same program it does and fails to understand what is understood by those whom the machine is designed to simulate even if the output of the three of them cannot be discriminated - then the machine itself fails to understand. The argumentation is picturesque, and may not be compelling for those resolved to define (such terms as) "understanding" by outward criteria. So I shall recast Searle's thesis in logical terms which must force his opponents either to concede machines do not understand or else, in order to maintain they might understand, to abandon the essentially behaviorist theory of meaning for mental predicates.
Consider, as does Searle, a language one does not understand but that one can in a limited sense be said to read. Thus I cannot read Greek with understanding, but I know the Greek letters and their associated phonetic values, and am able to pronounce Greek words. Milton's daughters were able to read aloud to their blind father from Greek, Latin, and Hebrew texts though they had no idea what they were saying. And they could, as can I, answer certain questions about Greek words, if only how many letters there are, what their names are, and how they sound when voiced. Briefly, in terms of the distinction logicians draw between the use and mention of a term, they knew, as I know, such properties of Greek words as may be identified by someone who is unable to use Greek words in Greek sentences. Let us designate these as M-properties, in contrast to U-properties, the latter being those properties one must know in order to use Greek (or any) words. The question then is whether a machine programmed to simulate understanding is restricted to M-properties, that is, whether the program is such that the machine cannot use the words it otherwise may be said to manipulate under M-rules and M-laws. If so, the machine exercises its powers over what we can recognize in the words of a language we do not understand, without, as it were, thinking in that language. There is some evidence that in fact the machine operates pretty much by pattern recognition, much in the manner of Milton's unhappy daughters.
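Danto's distinction can be illustrated with a sketch of a system confined to M-properties: it answers questions about Greek words as marks and sounds - how many letters, which letters, a rough romanization - while nothing in it can use the words in sentences. The Greek letter names and phonetic values below are standard; the function names and the tiny alphabet table are my own illustrative choices.

```python
# A minimal M-property answerer, in the manner of Milton's daughters.
GREEK = {  # letter -> (name, phonetic value); a fragment for illustration
    "λ": ("lambda", "l"),
    "ο": ("omicron", "o"),
    "γ": ("gamma", "g"),
    "ς": ("final sigma", "s"),
}

def letter_count(word: str) -> int:       # an M-property
    return len(word)

def letter_names(word: str) -> list:      # an M-property
    return [GREEK[ch][0] for ch in word]

def pronounce(word: str) -> str:          # an M-property: sound, not sense
    return "".join(GREEK[ch][1] for ch in word)
```

Given "λογος" the system reports five letters and the sound "logos", yet no question about what the word means can even be posed to it: its entire competence is exhausted by M-properties.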
Now I shall suppose it granted that we cannot define the U-properties of words exhaustively through their M-properties. If this is true, Schank's machines, restricted to M-properties, cannot think in the languages they simulate thinking in. One can ask whether it is possible for the machines to exhibit the output they do exhibit if all they have is M-competence. If not, then they must have some sort of U-competence. But the difficulty with putting the question thus is that there are two ways in which the output can be appreciated: as showing understanding or as only seeming to, and as such the structure of the problem is of a piece with the structure of the mind-body problem in the following respect. Whatever outward behavior, even of a human being, we would want to describe with a psychological (or mental) predicate - say, that the action of raising an arm was performed - has a physical description that is true whether or not the psychological description is true - for example, that the arm went up. The physical description then underdetermines the distinction between bodily movements and actions, or between actions and bodily movements that exactly resemble them. So whatever outward behavior takes a psychological predicate takes a physical predicate that underdetermines whether the former is true or false of what the latter is true of. So we cannot infer from a physical description whether or not a psychological description applies. To be sure, we can ruthlessly define psychological terms as physical terms, in which case the inference is easy but trivial, but then we cannot any longer, as Schank and Abelson wish to do, explain outward behavior with such concepts as understanding. In any case, the distinction between M-properties and U-properties is exactly parallel: anything by way of output we would be prepared to describe in U-terms has an M-description true of it, which underdetermines whether the U-description is true or not.
So no pattern of outputs entails that language is being used, nor hence that the source of the output understands, inasmuch as it may have been cleverly designed to emit a pattern exhaustively describable in M-terms. The problem is perfectly Cartesian. We may worry about whether any of our fellows is an automaton. The question is whether the Schank machine (SAM) is so programmed that only M-properties apply to its output. Then, however closely (exactly) it simulates what someone with understanding would show in his behavior, not one step has been taken toward constructing a machine that understands. And Searle is really right. For while U-competence cannot be defined in M-terms, an M-specified simulation can be given of any U-performance, however protracted and intricate. The simulator will only show, not have, the properties of the U-performance. The performances may be indiscriminable, but one constitutes a use of language only if that which emits it in fact uses language. But it cannot be said to use language if its program, as it were, is written solely in M-terms.
The principles on the basis of which a user of language structures a story or text are so different from the principles on the basis of which one could predict, from certain M-properties, what further M-properties to expect, that even if the outputs are indiscernible, the principles must be discernible. And to just the degree that they deviate does a program employing the latter sorts of principles fail to simulate the principles employed in understanding stories or texts. The degree of deviation determines the degree to which the strong claims of AI are false. This is all the more the case if the M-principles are not to be augmented with U-principles.
Any of us can predict what sounds a person may make when he answers certain questions that he understands, but that is because we understand where he is going. If we had to develop the ability to predict sounds only on the basis of other sounds, we might attain an astounding congruence with what our performance would have been if we knew what was going on. Even if no one could tell we didn't, understanding would be nil. On the other hand, the question remains as to whether the Schank machine uses words. If it does, Searle has failed as a simulator of something that does not simulate but genuinely possesses understanding. If he is right, there is a pretty consequence. M-properties yield, as it were, pictures of words: and machines, if they encode propositions, do so pictorially.

by Daniel Dennett

Center for Advanced Study in the Behavioral Sciences, Stanford, Calif. 94305

The milk of human intentionality

I want to distinguish Searle's arguments, which I consider sophistry, from his positive view, which raises a useful challenge to AI, if only because it should induce a more thoughtful formulation of AI's foundations. First, I must support the charge of sophistry by diagnosing, briefly, the tricks with mirrors that give his case a certain spurious plausibility. Then I will comment briefly on his positive view.
Searle's form of argument is a familiar one to philosophers: he has constructed what one might call an intuition pump, a device for provoking a family of intuitions by producing variations on a basic thought experiment. An intuition pump is not, typically, an engine of discovery, but a persuader or pedagogical tool - a way of getting people to see things your way once you've seen the truth, as Searle thinks he has. I would be the last to disparage the use of intuition pumps - I love to use them myself - but they can be abused. In this instance I think Searle relies almost entirely on ill-gotten gains: favorable intuitions generated by misleadingly presented thought experiments.
Searle begins with a Schank-style AI task, where both the input and output are linguistic objects, sentences of Chinese. In one regard, perhaps, this is fair play, since Schank and others have certainly allowed enthusiastic claims of understanding for such programs to pass their lips, or go uncorrected; but from another point of view it is a cheap shot, since it has long been a familiar theme within AI circles that such programs - I call them bedridden programs since their only modes of perception and action are linguistic - tackle at best a severe truncation of the interesting task of modeling real understanding. Such programs exhibit no "language-entry" and "language-exit" transitions, to use Wilfrid Sellars's terms, and have no capacity for nonlinguistic perception or bodily action. The shortcomings of such models have been widely recognized for years in AI; for instance, the recognition was implicit in Winograd's decision to give SHRDLU something to do in order to have something to talk about. "A computer whose only input and output was verbal would always be blind to the meaning of what was written" (Dennett 1969, p. 182). The idea has been around for a long time. So, many if not all supporters of strong AI would simply agree with Searle that in his initial version of the Chinese room, no one and nothing could be said to understand Chinese, except perhaps in some very strained, elliptical, and attenuated sense. Hence what Searle calls "the robot reply (Yale)" is no surprise, though its coming from Yale suggests that even Schank and his school are now attuned to this point.
Searle's response to the robot reply is to revise his thought experiment, claiming it will make no difference. Let our hero in the Chinese room also (unbeknownst to him) control the nonlinguistic actions of, and receive the perceptual informings of, a robot. Still (Searle asks you to consult your intuitions at this point) no one and nothing will really understand Chinese. But Searle does not dwell on how vast a difference this modification makes to what we are being asked to imagine.
Nor does Searle stop to provide vivid detail when he again revises his thought experiment to meet the "systems reply." The systems reply suggests, entirely correctly in my opinion, that Searle has confused different levels of explanation (and attribution). I understand English: my brain doesn't - nor, more particularly, does the proper part of it (if such can be isolated) that operates to "process" incoming sentences and to execute my speech act intentions. Searle's portrayal and discussion of the systems reply is not sympathetic, but he is prepared to give ground in any case; his proposal is that we may again modify his Chinese room example, if we wish, to accommodate the objection. We are to imagine our hero in the Chinese room to "internalize all of these elements of the system" so that he "incorporates the entire system." Our hero is now no longer an uncomprehending sub-personal part of a supersystem to which understanding of Chinese might be properly attributed, since there is no part of the supersystem external to his skin. Still Searle insists (in another plea for our intuitional support) that no one - not our hero or any other person he may in some metaphysical sense now be a part of - can be said to understand Chinese.
But will our intuitions support Searle when we imagine this case in detail? Putting both modifications together, we are to imagine our hero controlling both the linguistic and nonlinguistic behavior of a robot who is - himself! When the Chinese words for "Hands up! This is a stickup!" are intoned directly in his ear, he will uncomprehendingly (and at breathtaking speed) hand simulate the program, which leads him to do things (what things - is he to order himself in Chinese to stimulate his own motor neurons and then obey the order?) that lead to his handing over his own wallet while begging for mercy, in Chinese, with his own lips. Now is it at all obvious that, imagined this way, no one in the situation understands Chinese? In point of fact, Searle has simply not told us how he intends us to imagine this case, which we are licensed to do by his two modifications. Are we to suppose that if the words had been in English, our hero would have responded (appropriately) in his native English? Or is he so engrossed in his massive homuncular task that he responds with the (simulated) incomprehension that would be the program-driven response to this bit of incomprehensible ("to the robot") input? If the latter, our hero has taken leave of his English-speaking friends for good, drowned in the engine room of a Chinese-speaking "person" inhabiting his body. If the former, the situation is drastically in need of further description by Searle, for just what he is imagining is far from clear. There are several radically different alternatives - all so outlandishly unrealizable as to caution us not to trust our gut reactions about them in any case. When we imagine our hero "incorporating the entire system" are we to imagine that he pushes buttons with his fingers in order to get his own arms to move? Surely not, since all the buttons are now internal.
Are we to imagine that when he responds to the Chinese for "pass the salt, please" by getting his hand to grasp the salt and move it in a certain direction, he doesn't notice that this is what he is doing? In short, could anyone who became accomplished in this imagined exercise fail to become fluent in Chinese in the process? Perhaps, but it all depends on details of this, the only crucial thought experiment in Searle's kit, that Searle does not provide.
Searle tells us that when he first presented versions of this paper to AI audiences, objections were raised that he was prepared to meet, in part, by modifying his thought experiment. Why then did he not present us, his subsequent audience, with the modified thought experiment in the first place, instead of first leading us on a tour of red herrings? Could it be because it is impossible to tell the doubly modified story in anything approaching a cogent and detailed manner without provoking the unwanted intuitions? Told in detail, the doubly modified story suggests either that there are two people, one of whom understands Chinese, inhabiting one body, or that one English-speaking person has, in effect, been engulfed within another person, a person who understands Chinese (among many other things).
These and other similar considerations convince me that we may turn our backs on the Chinese room at least until a better version is deployed. In its current state of disrepair I can get it to pump my contrary intuitions at least as plentifully as Searle's. What, though, of his positive view? In the conclusion of his paper, Searle observes: "No one would suppose that we could produce milk and sugar by running a computer simulation of the formal sequences in lactation and photosynthesis, but where the mind is concerned many people are willing to believe in such a miracle." I don't think this is just a curious illustration of Searle's vision; I think it vividly expresses the feature that most radically distinguishes his view from the prevailing winds of doctrine. For Searle, intentionality is rather like a wonderful substance secreted by the brain the way the pancreas secretes insulin. Brains produce intentionality, he says, whereas other objects, such as computer programs, do not, even if they happen to be designed to mimic the input-output behavior of (some) brain. There is, then, a major disagreement about what the product of the brain is. Most people in AI (and most functionalists in the philosophy of mind) would say that its product is something like control: what a brain is for is for governing the right, appropriate, intelligent input-output relations, where these are deemed to be, in the end, relations between sensory inputs and behavioral outputs of some sort. That looks to Searle like some sort of behaviorism, and he will have none of it. Passing the Turing test may be prima facie evidence that something has intentionality - really has a mind - but "as soon as we knew that the behavior was the result of a formal program, and that the actual causal properties of the physical substance were irrelevant we would abandon the assumption of intentionality."
So on Searle's view the "right" input-output relations are symptomatic but not conclusive or criterial evidence of intentionality; the proof of the pudding is in the presence of some (entirely unspecified) causal properties that are internal to the operation of the brain. This internality needs highlighting. When Searle speaks of causal properties one may think at first that those causal properties crucial for intentionality are those that link the activities of the system (brain or computer) to the things in the world with which the system interacts - including, preeminently, the active, sentient body whose behavior the system controls. But Searle insists that these are not the relevant causal properties. He concedes the possibility in principle of duplicating the input-output competence of a human brain with a "formal program," which (suitably attached) would guide a body through the world exactly as that body's brain would, and thus would acquire all the relevant extra systemic causal properties of the brain. But such a brain substitute would utterly fail to produce intentionality in the process, Searle holds, because it would lack some other causal properties of the brain's internal operation.¹
How, though, would we know that it lacked these properties, if all we knew was that it was (an implementation of) a formal program? Since Searle concedes that the operation of anything - and hence a human brain - can be described in terms of the execution of a formal program, the mere existence of such a level of description of a system would not preclude its having intentionality. It seems that it is only when we can see that the system in question is only the implementation of a formal program that we can conclude that it doesn't make a little intentionality on the side. But nothing could be only the implementation of a formal program; computers exude heat and noise in the course of their operations - why not intentionality too?
Besides, which is the major product and which the byproduct? Searle can hardly deny that brains do in fact produce lots of reliable and appropriate bodily control. They do this, he thinks, by producing intentionality, but he concedes that something - such as a computer with the right input-output rules - could produce the control without making or using any intentionality. But then control is the main product and intentionality just one (no doubt natural) means of obtaining it. Had our ancestors been nonintentional mutants with mere control systems, nature would just as readily have selected them instead. (I owe this point to Bob Moore.) Or, to look at the other side of the coin, brains with lots of intentionality but no control competence would be producers of an ecologically irrelevant product, which evolution would not protect. Luckily for us, though, our brains make intentionality; if they didn't, we'd behave just as we now do, but of course we wouldn't mean it!
Surely Searle does not hold the view I have just ridiculed, although it seems as if he does. He can't really view intentionality as a marvelous mental fluid, so what is he trying to get at? I think his concern with internal properties of control systems is a misconceived attempt to capture the interior point of view of a conscious agent. He does not see how any mere computer, chopping away at a formal program, could harbor such a point of view. But that is because he is looking too deep. It is just as mysterious if we peer into the synapse-filled jungles of the brain and wonder where consciousness is hiding. It is not at that level of description that a proper subject of consciousness will be found. That is the systems reply, which Searle does not yet see to be a step in the right direction away from his updated version of élan vital.

Note

  1. For an intuition pump involving exactly this case - a prosthetic brain - but designed to pump contrary intuitions, see "Where Am I?" in Dennett (1978).

by John C. Eccles

Cá a lá Gra, Contra (Locarno) CH-6611, Switzerland

A dualist-interactionist perspective

Searle clearly states that the basis of his critical evaluation of strong AI is dependent on two propositions. The first is: "Intentionality in human beings (and animals) is a product of causal features of the brain." He supports this proposition by an unargued statement that it "is an empirical fact about the actual causal relations between mental processes and brains. It says simply that certain brain processes are sufficient for intentionality" (my italics).
This is a dogma of the psychoneural identity theory, which is one variety of the materialist theories of the mind. There is no mention of the alternative hypothesis of dualist interactionism that Popper and I published some time ago (1977) and that I have further developed more recently (Eccles 1978; 1979). According to that hypothesis intentionality is a property of the self-conscious mind (World 2 of Popper), the brain being used as an instrument in the realization of intentions. I refer to Fig. E 7-2 of Popper and Eccles (1977), where intentions appear in the box (inner senses) of World 2, with arrows indicating the flow of information by which intentions in the mind cause changes in the liaison brain and so eventually in voluntary movements.
I have no difficulty with proposition 2, but I would suggest that 3, 4, and 5 be rewritten with "mind" substituted for "brain." Again the statement: "only a machine could think, and only very special kinds of machines ... with internal causal powers equivalent to those of brains" is the identity theory dogma. I say dogma because it is unargued and without empirical support. The identity theory is very weak empirically, being merely a theory of promise.
So long as Searle speaks about human performance without regarding intentionality as a property of the brain, I can appreciate that he has produced telling arguments against the strong AI theory. The story of the hamburger with the Gedankenexperiment of the Chinese symbols is related to Premack's attempts to teach the chimpanzee Sarah a primitive level of human language as expressed in symbols [See Premack: "Does the Chimpanzee Have a Theory of Mind?" BBS 1(4) 1978]. The criticism of Lenneberg (1975) was that, by conditioning, Sarah had learnt a symbol game, using symbols instrumentally, but had no idea that it was related to human language. He trained high school students with the procedures described by Premack, closely replicating Premack's study. The human subjects were quickly able to obtain considerably lower error scores than those reported for the chimpanzee. However, they were unable to translate correctly a single one of their completed sentences into English. In fact, they did not understand that there was any correspondence between the plastic symbols and language; instead they were under the impression that their task was to solve puzzles.
I think this simple experiment indicates a fatal flaw in all the AI work. No matter how complex the performance instantiated by the computer, it can be no more than a triumph for the computer designer in simulation. The Turing machine is a magician's dream - or nightmare!
It was surprising that after the detailed brain-mind statements of the abstract, I did not find the word "brain" in Searle's text through the whole of his opening three pages of argument, where he uses mind, mental states, human understanding, and cognitive states exactly as would be done in a text on dualist interactionism. Not until "the robot reply" does brain appear as "computer 'brain.' " However, from "the brain simulator reply" in the statements and criticisms of the various other replies, brain, neuron firings, synapses, and the like are profusely used in a rather naive way. For example "imagine the computer programmed with all the synapses of a human brain" is more than I can do by many orders of magnitude! So "the combination reply" reads like fantasy - and to no purpose!
I agree that it is a mistake to confuse simulation with duplication. But I do not object to the idea that the distinction between the program and its realization in the hardware seems to be parallel to the distinction between the mental operations and the level of brain operations. However, Searle believes that the equation "mind is to brain as program is to hardware" breaks down at several points. I would prefer to substitute programmer for program, because as a dualist interactionist I accept the analogy that as conscious beings we function as programmers of our brains. In particular I regret Searle's third argument: "Mental states and events are literally a product of the operation of the brain, but the program is not in that way a product of the computer," and so later we are told "whatever else intentionality is, it is a biological phenomenon, and it is as likely to be causally dependent on the specific biochemistry of its origins as lactation, photosynthesis, or any other biological phenomenon." I have the feeling of being transported back to the nineteenth century, where, as derisorily recorded by Sherrington (1950): "the oracular Professor Tyndall, presiding over the British Association at Belfast, told his audience that as the bile is a secretion of the liver, so the mind is a secretion of the brain."
In summary, my criticisms arise from fundamental differences in respect of beliefs in relation to the brain-mind problem. So long as Searle is referring to human intentions and performances without reference to the brain-mind problem, I can appreciate the criticisms that he marshals against the AI beliefs that an appropriately programmed computer is a mind literally understanding and having other cognitive states. Most of Searle's criticisms are acceptable for dualist interactionism. It is high time that strong AI was discredited.

by J. A. Fodor

Department of Psychology, Massachusetts Institute of Technology, Cambridge, Mass. 02139

Searle on what only brains can do

1. Searle is certainly right that instantiating the same program that the brain does is not, in and of itself, a sufficient condition for having those propositional attitudes characteristic of the organism that has the brain. If some people in AI think that it is, they're wrong. As for the Turing test, it has all the usual difficulties with predictions of "no difference"; you can't distinguish the truth of the prediction from the insensitivity of the test instrument.¹
  2. However, Searle's treatment of the "robot reply" is quite unconvincing. Given that there are the right kinds of causal linkages between the symbols that the device manipulates and things in the world including the afferent and efferent transducers of the device - it is quite unclear that intuition rejects ascribing propositional attitudes to it. All that Searle's example shows is that the kind of causal linkage he imagines - one that is, in effect, mediated by a man sitting in the head of a robot - is, unsurprisingly, not the right kind.
3. We don't know how to say what the right kinds of causal linkage are. This, also, is unsurprising since we don't know how to answer the closely related question as to what kinds of connection between a formula and the world determine the interpretation under which the formula is employed. We don't have an answer to this question for any symbolic system; a fortiori, not for mental representations. These questions are closely related because, given the mental representation view, it is natural to assume that what makes mental states intentional is primarily that they involve relations to semantically interpreted mental objects; again, relations of the right kind.
4. It seems to me that Searle has misunderstood the main point about the treatment of intentionality in representational theories of the mind; this is not surprising since proponents of the theory - especially in AI - have been notably unlucid in expounding it. For the record, then, the main point is this: intentional properties of propositional attitudes are viewed as inherited from semantic properties of mental representations (and not from the functional role of mental representations, unless "functional role" is construed broadly enough to include symbol-world relations). In effect, what is proposed is a reduction of the problem what makes mental states intentional to the problem what bestows semantic properties on (fixes the interpretation of) a symbol. This reduction looks promising because we're going to have to answer the latter question anyhow (for example, in constructing theories of natural languages); and we need the notion of mental representation anyhow (for example, to provide appropriate domains for mental processes).
It may be worth adding that there is nothing new about this strategy. Locke, for example, thought (a) that the intentional properties of mental states are inherited from the semantic (referential) properties of mental representations; (b) that mental processes are formal (associative); and (c) that the objects from which mental states inherit their intentionality are the same ones over which mental processes are defined: namely ideas. It's my view that no serious alternative to this treatment of propositional attitudes has ever been proposed.
5. To say that a computer (or a brain) performs formal operations on symbols is not the same thing as saying that it performs operations on formal (in the sense of "uninterpreted") symbols. This equivocation occurs repeatedly in Searle's paper, and causes considerable confusion. If there are mental representations they must, of course, be interpreted objects; it is because they are interpreted objects that mental states are intentional. But the brain might be a computer for all that.
6. This situation - needing a notion of causal connection, but not knowing which notion of causal connection is the right one - is entirely familiar in philosophy. It is, for example, extremely plausible that "a perceives b" can be true only where there is the right kind of causal connection between a and b. And we don't know what the right kind of causal connection is here either.
Demonstrating that some kinds of causal connection are the wrong kinds would not, of course, prejudice the claim. For example, suppose we interpolated a little man between a and b, whose function it is to report to a on the presence of b. We would then have (inter alia) a sort of causal link from a to b, but we wouldn't have the sort of causal link that is required for a to perceive b. It would, of course, be a fallacy to argue from the fact that this causal linkage fails to reconstruct perception to the conclusion that no causal linkage would succeed. Searle's argument against the "robot reply" is a fallacy of precisely that sort.
7. It is entirely reasonable (indeed it must be true) that the right kind of causal relation is the kind that holds between our brains and our transducer mechanisms (on the one hand) and between our brains and distal objects (on the other). It would not begin to follow that only our brains can bear such relations to transducers and distal objects; and it would also not follow that being the same sort of thing our brain is (in any biochemical sense of "same sort") is a necessary condition for being in that relation; and it would also not follow that formal manipulations of symbols are not among the links in such causal chains. And, even if our brains are the only sorts of things that can be in that relation, the fact that they are might quite possibly be of no particular interest; that would depend on why it's true.²
Searle gives no clue as to why he thinks the biochemistry is important for intentionality and, prima facie, the idea that what counts is how the organism is connected to the world seems far more plausible. After all, it's easy enough to imagine, in a rough and ready sort of way, how the fact that my thought is causally connected to a tree might bear on its being a thought about a tree. But it's hard to imagine how the fact that (to put it crudely) my thought is made out of hydrocarbons could matter, except on the unlikely hypothesis that only hydrocarbons can be causally connected to trees in the way that brains are.
8. The empirical evidence for believing that "manipulation of symbols" is involved in mental processes derives largely from the considerable success of work in linguistics, psychology, and AI that has been grounded in that assumption. Little of the relevant data concerns the simulation of behavior or the passing of Turing tests, though Searle writes as though all of it does. Searle gives no indication at all of how the facts that this work accounts for are to be explained if not on the mental-processes-are-formal-processes view. To claim that there is no argument that symbol manipulation is necessary for mental processing while systematically ignoring all the evidence that has been alleged in favor of the claim strikes me as an extremely curious strategy on Searle's part.
    认为 "操纵符号 "涉及心理过程的经验证据,主要来自语言学、心理学和人工智能领域以这一假设为基础而取得的巨大成功。相关数据很少涉及行为的模拟或图灵测试的通过,尽管苏尔写得好像所有数据都与此有关。塞尔完全没有说明,如果不按照心理过程即形式过程的观点,该如何解释这项工作所说明的事实。一方面声称符号操纵对于心理过程是必要的,另一方面却系统性地忽略了所有支持这一说法的证据,这让我觉得塞尔的策略非常奇怪。
9. Some necessary conditions are more interesting than others. While connections to the world and symbol manipulations are both presumably necessary for intentional processes, there is no reason (so far) to believe that the former provide a theoretical domain for a science; whereas, there is considerable a posteriori reason to suppose that the latter do. If this is right, it provides some justification for AI practice, if not for AI rhetoric.
10. Talking involves performing certain formal operations on symbols: stringing words together. Yet, not everything that can string words together can talk. It does not follow from these banal observations that what we utter are uninterpreted sounds, or that we don't understand what we say, or that whoever talks talks nonsense, or that only hydrocarbons can assert - similarly, mutatis mutandis, if you substitute "thinking" for "talking."

Notes

  1. I assume, for simplicity, that there is only one program that the brain instantiates (which, of course, there isn't). Notice, by the way, that even passing the Turing test requires doing more than just manipulating symbols. A device that can't run a typewriter can't play the game.
2. For example, it might be that, in point of physical fact, only things that have the same simultaneous values of weight, density, and shade of gray that brains have can do the things that brains can. This would be surprising, but it's hard to see why a psychologist should care much. Not even if it turned out - still in point of physical fact - that brains are the only things that can have that weight, density, and color. If that's dualism, I imagine we can live with it.

by John Haugeland

Center for Advanced Study in the Behavioral Sciences, Stanford, Calif. 94305

Programs, causal powers, and intentionality

Searle is in a bind. He denies that any Turing test for intelligence is adequate - that is, that behaving intelligently is a sufficient condition for being intelligent. But he dare not deny that creatures physiologically very different from people might be intelligent nonetheless - smart green saucer pilots, say. So he needs an intermediate criterion: not so specific to us as to rule out the aliens, yet not so dissociated from specifics as to admit any old object with the right behavior. His suggestion is that only objects (made of stuff) with "the right causal powers" can have intentionality, and hence, only such objects can genuinely understand anything or be intelligent. This suggestion, however, is incompatible with the main argument of his paper.
Ostensibly, that argument is against the claim that working according to a certain program can ever be sufficient for understanding anything - no matter how cleverly the program is contrived so as to make the relevant object (computer, robot, or whatever) behave as if it understood. The crucial move is replacing the central processor (c.p.u.) with a superfast person - whom we might as well call "Searle's demon." And Searle argues that an English-speaking demon could perfectly well follow a program for simulating a Chinese speaker, without itself understanding a word of Chinese.
The trouble is that the same strategy will work as well against any specification of "the right causal powers." Instead of manipulating formal tokens according to the specifications of some computer program, the demon will manipulate physical states or variables according to the specification of the "right" causal interactions. Just to be concrete, imagine that the right ones are those powers that our neuron tips have to titillate one another with neurotransmitters. The green aliens can be intelligent, even though they're based on silicon chemistry, because their (silicon) neurons have the same power of intertitillation. Now imagine covering each of the neurons of a Chinese criminal with a thin coating, which has no effect, except that it is impervious to neurotransmitters. And imagine further that Searle's demon can see the problem, and comes to the rescue; he peers through the coating at each neural tip, determines which transmitter (if any) would have been emitted, and then massages the adjacent tips in a way that has the same effect as if they had received that transmitter. Basically, instead of replacing the c.p.u., the demon is replacing the neurotransmitters.
By hypothesis, the victim's behavior is unchanged; in particular, she still acts as if she understood Chinese. Now, however, none of her neurons has the right causal powers - the demon has them, and he still understands only English. Therefore, having the right causal powers (even while embedded in a system such that the exercise of these powers leads to "intelligent" behavior) cannot be sufficient for understanding. Needless to say, a corresponding variation will work, whatever the relevant causal powers are.
None of this should come as a surprise. A computer program just is a specification of the exercise of certain causal powers: the powers to manipulate various formal tokens (physical objects or states of some sort) in certain specified ways, depending on the presence of certain other such tokens. Of course, it is a particular way of specifying causal exercises of a particular sort - that's what gives the "computational paradigm" its distinctive character. But Searle makes no use of this particularity; his argument depends only on the fact that causal powers can be specified independently of whatever it is that has the power. This is precisely what makes it possible to interpose the demon, in both the token-interaction (program) and neuron-interaction cases.
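Haugeland's characterization of a program - a specification for manipulating formal tokens in certain ways, depending on which other tokens are present - can be made concrete with a toy sketch (my illustration, not from the commentary; the token names are invented). Nothing in the mechanism consults what, if anything, the tokens mean:

```python
# A toy Chinese-room-style rulebook: purely formal token manipulation.
# The rule table pairs uninterpreted input-token sequences with output
# tokens; the mechanism depends only on the shapes of the tokens.
RULES = {
    ("SQUIGGLE", "SQUOGGLE"): "SQUAGGLE",
    ("SQUOGGLE",): "SQUIGGLE",
}

def formal_responder(tokens):
    """Look up the output token for a recognized input sequence."""
    return RULES.get(tuple(tokens), "UNKNOWN-TOKEN")

print(formal_responder(["SQUIGGLE", "SQUOGGLE"]))  # SQUAGGLE
```

Whether such rule following, however elaborated, could ever amount to understanding is of course exactly what is at issue between Searle and his critics.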

There is no escape in urging that this is a "dualistic" view of causal powers, not intrinsically connected with "the actual properties" of physical objects. To speak of causal powers in any way that allows for generalization (to green aliens, for example) is ipso facto to abstract from the particulars of any given "realization." The point is independent of the example - it works just as well for photosynthesis. Thus, flesh-colored plantlike organisms on the alien planet might photosynthesize (I take it, in a full and literal sense) so long as they contain some chemical (not necessarily chlorophyll) that absorbs light and uses the energy to make sugar and free oxygen out of carbon dioxide (or silicon dioxide?) and water. This is what it means to specify photosynthesis as a causal power, rather than just a property that is, by definition, idiosyncratic to chlorophyll. But now, of course, the demon can enter, replacing both chlorophyll and its alien substitute: he devours photons, and thus energized, makes sugar from CO₂ and H₂O. It seems to me that the demon is photosynthesizing.
Let's set aside the demon argument, however. Searle also suggests that "there is no reason to suppose" that understanding (or intentionality) "has anything to do with" computer programs. This too, I think, rests on his failure to recognize that specifying a program is (in a distinctive way) specifying a range of causal powers and interactions.
The central issue is what differentiates original intentionality from derivative intentionality. The former is intentionality that a thing (system, state, process) has "in its own right"; the latter is intentionality that is "borrowed from" or "conferred by" something else. Thus (on standard assumptions, which I will not question here), the intentionality of conscious thought and perception is original, whereas the intentionality (meaning) of linguistic tokens is merely conferred upon them by language users - that is, words don't have any meaning in and of themselves, but only in virtue of our giving them some. These are paradigm cases; many other cases will fall clearly on one side or the other, or be questionable, or perhaps even marginal. No one denies that if AI systems don't have original intentionality, then they at least have derivative intentionality, in a nontrivial sense - because they have nontrivial interpretations. What Searle objects to is the thesis, held by many, that good-enough AI systems have (or will eventually have) original intentionality.
Thought tokens, such as articulate beliefs and desires, and linguistic tokens, such as the expressions of articulate beliefs and desires, seem to have a lot in common - as pointed out, for example, by Searle (1979c). In particular, except for the original/derivative distinction, they have (or at least appear to have) closely parallel semantic structures and variations. There must be some other principled distinction between them, then, in virtue of which the former can be originally intentional, but the latter only derivatively so. A conspicuous candidate for this distinction is that thoughts are semantically active, whereas sentence tokens, written out, say, on a page, are semantically inert. Thoughts are constantly interacting with one another and the world, in ways that are semantically appropriate to their intentional content. The causal interactions of written sentence tokens, on the other hand, do not consistently reflect their content (except when they interact with people).
Thoughts are embodied in a "system" that provides "normal channels" for them to interact with the world, and such that these normal interactions tend to maximize the "fit" between them and the world; that is, via perception, beliefs tend toward the truth; and, via action, the world tends toward what is desired. And there are channels of interaction among thoughts (various kinds of inference) via which the set of them tends to become more coherent, and to contain more consequences of its members. Naturally, other effects introduce aberrations and "noise" into the system; but the normal channels tend to predominate in the long run. There are no comparable channels of interaction for written tokens. In fact (according to this same standard view), the only semantically sensitive interactions that written tokens ever have are with thoughts; insofar as they tend to express truths, it is because they express beliefs, and insofar as they tend to bring about their own satisfaction conditions, it is because they tend to bring about desires. Thus, the only semantically significant interactions that written tokens have with the world are via thoughts; and this, the suggestion goes, is why their intentionality is derivative.
The interactions that thoughts have among themselves (within a single "system") are particularly important, for it is in virtue of these that thought can be subtle and indirect, relative to its interactions with the world - that is, not easily fooled or thwarted. Thus, we tend to consider more than the immediately present evidence in making judgments, and more than the immediately present options in making plans. We weigh desiderata, seek further information, try things to see if they'll work, formulate general maxims and laws, estimate results and costs, go to the library, cooperate, manipulate, scheme, test, and reflect on what we're doing. All of these either are or involve a lot of thought-thought interaction, and tend, in the long run, to broaden and improve the "fit" between thought and world. And they are typical as manifestations both of intelligence and of independence.
I take it for granted that all of the interactions mentioned are, in some sense, causal - hence, that it is among the system's "causal powers" that it can have (instantiate, realize, produce) thoughts that interact with the world and each other in these ways. It is hard to tell whether these are the sorts of causal powers that Searle has in mind, both because he doesn't say, and because they don't seem terribly similar to photosynthesis and lactation. But, in any case, they strike me as strong candidates for the kinds of powers that would distinguish systems with intentionality - that is, original intentionality - from those without. The reason is that these are the only powers that consistently reflect the distinctively intentional character of the interactors: namely, their "content" or "meaning" (except, so to speak, passively, as in the case of written tokens being read). That is, the power to have states that are semantically active is the "right" causal power for intentionality.
It is this plausible claim that underlies the thesis that (sufficiently developed) AI systems could actually be intelligent, and have original intentionality. For a case can surely be made that their "representations" are semantically active (or, at least, that they would be if the system were built into a robot). Remember, we are conceding them at least derivative intentionality, so the states in question do have a content, relative to which we can gauge the "semantic appropriateness" of their causal interactions. And the central discovery of all computer technology is that devices can be contrived such that, relative to a certain interpretation, certain of their states will always interact (causally) in semantically appropriate ways, so long as the devices perform as designed electromechanically - that is, these states can have "normal channels" of interaction (with each other and with the world) more or less comparable to those that underlie the semantic activity of thoughts. This point can hardly be denied, so long as it is made in terms of the derivative intentionality of computing systems; but what it seems to add to the archetypical (and "inert") derivative intentionality of, say, written text is, precisely, semantic activity. So, if (sufficiently rich) semantic activity is what distinguishes original from derivative intentionality (in other words, it's the "right" causal power), then it seems that (sufficiently rich) computing systems can have original intentionality.
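The "central discovery" invoked here - that formal state transitions can be contrived so that, under a fixed interpretation, they are always semantically appropriate - is well illustrated by the oldest case, binary addition. The sketch below is my own toy example, not anything from the commentary: the machine shuffles bits by a purely syntactic rule, yet under the interpretation of bit lists as numbers the shuffle invariably tracks arithmetic.

```python
def add_bits(a, b):
    """Ripple-carry addition on equal-length bit lists (LSB first):
    pure symbol manipulation, no 'numbers' anywhere inside."""
    out, carry = [], 0
    for x, y in zip(a, b):
        total = x + y + carry
        out.append(total % 2)   # sum bit
        carry = total // 2      # carry bit
    out.append(carry)
    return out

def interpret(bits):
    """The external interpretation: read the bit list as a number."""
    return sum(bit << i for i, bit in enumerate(bits))

# However the "gates" are realized, the syntactic rule tracks the semantics:
a, b = [1, 0, 1], [1, 1, 0]   # 5 and 3 under the interpretation
assert interpret(add_bits(a, b)) == interpret(a) + interpret(b)
```

This is the sense in which a device's states can have "normal channels" of semantically appropriate causal interaction, so long as it performs as designed.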
Now, like Searle, I am inclined to dispute this conclusion; but for entirely different reasons. I don't believe there is any conceptual confusion in supposing that the right causal powers for original intentionality are the ones that would be captured by specifying a program (that is, a virtual machine). Hence, I don't think the above plausibility argument can be dismissed out of hand ("no reason to suppose," and so on); nor can I imagine being convinced that, no matter how good AI got, it would still be "weak" - that is, would not have created a "real" intelligence - because it still proceeded by specifying programs. It seems to me that the interesting question is much more nitty-gritty empirical than that: given that programs might be the right way to express the relevant causal structure, are they in fact so? It is to this question that I expect the answer is no. In other words, I don't much care about Searle's demon working through a program for perfect simulation of a native Chinese speaker - not because there's no such demon, but because there's no such program. Or rather, whether there is such a program, and if not, why not, are, in my view, the important questions.

by Douglas R. Hofstadter

Computer Science Department, Indiana University, Bloomington, Ind. 47405

Reductionism and religion

This religious diatribe against AI, masquerading as a serious scientific argument, is one of the wrongest, most infuriating articles I have ever read in my life. It is matched in its power to annoy only by the famous article "Minds, Machines, and Gödel" by J. R. Lucas (1961).
Searle's trouble is one that I can easily identify with. Like me, he has deep difficulty in seeing how mind, soul, "I," can come out of brain, cells, atoms. To show his puzzlement, he gives some beautiful paraphrases of this mystery. One of my favorites is the water-pipe simulation of a brain. It gets straight to the core of the mind-body problem. The strange thing is that Searle simply dismisses any possibility of such a system's being conscious with a hand wave of "absurd." (I actually think he radically misrepresents the complexity of such a water-pipe system both to readers and in his own mind, but that is a somewhat separable issue.)
The fact is, we have to deal with a reality of nature - and realities of nature sometimes are absurd. Who would have believed that light consists of spinning massless wave particles obeying an uncertainty principle while traveling through a curved four-dimensional universe? The fact that intelligence, understanding, mind, consciousness, soul all do spring from an unlikely source - an enormously tangled web of cell bodies, axons, synapses, and dendrites - is absurd, and yet undeniable. How this can create an "I" is hard to understand, but once we accept that fundamental, strange, disorienting fact, then it should seem no more weird to accept a water-pipe "I."
Searle's way of dealing with this reality of nature is to claim he accepts it - but then he will not accept its consequences. The main consequence is that "intentionality" - his name for soul - is an outcome of formal processes. I admit that I have slipped one extra premise in here: that physical processes are formal, that is, rule governed. To put it another way, the extra premise is that there is no intentionality at the level of particles. (Perhaps I have misunderstood Searle. He may be a mystic and claim that there is intentionality at that level. But then how does one explain why it seems to manifest itself in consciousness only when the particles are arranged in certain special configurations - brains - but not, say, in water-pipe arrangements of any sort and size?) The conjunction of these two beliefs seems to me to compel one to admit the possibility of all the hopes of artificial intelligence, despite the fact that it will always baffle us to think of ourselves as, at bottom, formal systems.
To people who have never programmed, the distinction between levels of a computer system - programs that run "on" other programs or on hardware - is an elusive one. I believe Searle doesn't really understand this subtle idea, and thus blurs many distinctions while creating other artificial ones to take advantage of human emotional responses that are evoked in the process of imagining unfamiliar ideas.
He begins with what sounds like a relatively innocent situation: a man in a room with a set of English instructions ("bits of paper") for manipulating some Chinese symbols. At first, you think the man is answering questions (although unbeknown to him) about restaurants, using Schankian scripts. Then Searle casually slips in the idea that this program can pass the Turing test! This is an incredible jump in complexity - perhaps a millionfold increase if not more. Searle seems not to be aware of how radically it changes the whole picture to have that "little" assumption creep in. But even the initial situation, which sounds plausible enough, is in fact highly unrealistic.
Imagine a human being, hand simulating a complex AI program, such as a script-based "understanding" program. To digest a full story, to go through the scripts and to produce the response, would probably take a hard eight-hour day for a human being. Actually, of course, this hand-simulated program is supposed to be passing the Turing test, not just answering a few stereotyped questions about restaurants. So let's jump up to a week per question, since the program would be so complex. (We are being unbelievably generous to Searle.)
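The scale of this estimate can be made vivid with back-of-envelope arithmetic. All of the figures below are illustrative assumptions of mine (not measurements from the commentary): some assumed number of primitive steps per reply, and a plausible hand-simulation rate for a person working through a rule book.

```python
# Hypothetical figures for hand-simulating one reply of a complex AI program.
OPS_PER_RESPONSE = 10_000_000    # assumed primitive steps for a single answer
HAND_STEPS_PER_HOUR = 500        # assumed pace of a person following rules
HOURS_PER_DAY = 8                # one hard working day

days = OPS_PER_RESPONSE / (HAND_STEPS_PER_HOUR * HOURS_PER_DAY)
print(f"{days:,.0f} working days per answer")   # prints "2,500 working days per answer"
```

Even under these charitable assumptions, a single answer costs years of labor - which is exactly the millionfold shift of scale Hofstadter says the reader is never told about.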
Now Searle asks you to identify with this poor slave of a human (he doesn't actually ask you to identify with him - he merely knows you will project yourself onto this person, and vicariously experience the indescribably boring nightmare of that hand simulation). He knows your reaction will be: "This is not understanding the story - this is some sort of formal process!" But remember: any time some phenomenon is looked at on a scale a million times different from its familiar scale, it doesn't seem the same! When I imagine myself feeling my brain running a hundred times too slowly (of course that is paradoxical but it is what Searle wants me to do), then of course it is agonizing, and presumably I would not even recognize the feelings at all. Throw in yet another factor of a thousand and one cannot even imagine what it would feel like.
Now this is what Searle is doing. He is inviting you to identify with a nonhuman which he lightly passes off as a human, and by doing so he asks you to participate in a great fallacy. Over and over again he uses this ploy, this emotional trickery, to get you to agree with him that surely, an intricate system of water pipes can't think! He forgets to tell you that a water-pipe simulation of the brain would take, say, a few trillion water pipes with a few trillion workers standing at faucets turning them when needed, and he forgets to tell you that to answer a question it would take a year or two. He forgets to tell you, because if you remembered that, and then on your own, imagined taking a movie and speeding it up a million times, and imagined changing your level of description of the thing from the faucet level to the pipe-cluster level, and on through a series of ever higher levels until you reached some sort of eventual symbolic level, why then you might say, "Hey, when I imagine what this entire system would be like when perceived at this time scale and level of description, I can see how it might be conscious after all!"
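The speeding-up step of this thought experiment is simple arithmetic, worth writing out (the two-year figure and the millionfold speedup are the ones the passage itself uses; the rest is my own calculation):

```python
SECONDS_PER_YEAR = 365 * 24 * 3600
answer_time = 2 * SECONDS_PER_YEAR   # ~63 million seconds per answer, in "pipe time"
speedup = 1_000_000                  # run the movie a million times faster

print(answer_time / speedup)         # ~63 seconds: roughly conversational tempo
```

At the sped-up timescale and a suitably high level of description, the water-pipe system answers in about a minute - which is why the familiar intuition pumped at the faucet level is untrustworthy.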
Searle is representative of a class of people who have an instinctive horror of any "explaining away" of the soul. I don't know why certain people have this horror while others, like me, find in reductionism the ultimate religion. Perhaps my lifelong training in physics and science in general has given me a deep awe at seeing how the most substantial and familiar of objects or experiences fades away, as one approaches the infinitesimal scale, into an eerily insubstantial ether, a myriad of ephemeral swirling vortices of nearly incomprehensible mathematical activity. This in me evokes a kind of cosmic awe. To me, reductionism does not "explain away"; rather, it adds mystery. I know that this journal is not the place for philosophical and religious commentary, yet it seems to me that what Searle and I have is, at the deepest level, a religious disagreement, and I doubt that anything I say could ever change his mind. He insists on things he calls "causal intentional properties" which seem to vanish as soon as you analyze them, find rules for them, or simulate them. But what those things are, other than epiphenomena, or "innocently emergent" qualities, I don't know.

by B. Libet

Department of Physiology, University of California, San Francisco, Calif. 94143

Mental phenomena and behavior

Searle states that the main argument of his paper is directed at establishing his second proposition, that "instantiating a computer program is never by itself a sufficient condition of intentionality" (that is, of a mental state that includes beliefs, desires, and intentions). He accomplishes this with a Gedankenexperiment to show that even "a human agent could instantiate the program and still not have the relevant intentionality"; that is, Searle shows, in a masterful and convincing manner, that the behavior of the appropriately programmed computer could transpire in the absence of a cognitive mental state. I believe it is also possible to establish the proposition by means of an argument based on simple formal logic.
We start with the knowledge that we are dealing with two different systems: system A is the computer, with its appropriate program; system B is the human being, particularly his brain. Even if system A could be arranged to behave and even to look like system B, in a manner that might make them indistinguishable to an external observer, system A must be at least internally different from B. If A and B were identical, they would both be human beings and there would be no thesis to discuss.
Let us accept the proposal that, on an input-output basis, system A and system B could be made to behave alike, properties that we may group together under category T. The possession of the relevant mental states (including understanding, beliefs, desires, intentions, and the like) may be called property M. We know that system B has property M. Remembering that systems A and B are known to be different, it is an error in logic to argue that because systems A and B both have property T, they must also both have property M.
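The logical error can be displayed with a one-line countermodel. Writing A for the computer, B for the human, T for the shared behavioral property, and M for the mental property, the toy assignment below (my own illustration) makes all the premises true while the conclusion fails:

```python
# A countermodel: premises "T(A), T(B), M(B), A != B" all hold, yet M(A) is false,
# so the inference "both have T, therefore both have M" is invalid.
systems = {
    "A": {"T": True, "M": False},   # the computer: behaves alike, lacks mental states
    "B": {"T": True, "M": True},    # the human: behaves alike and has mental states
}

premises = systems["A"]["T"] and systems["B"]["T"] and systems["B"]["M"]
conclusion = systems["A"]["M"]
print(premises, conclusion)   # prints "True False": true premises, false conclusion
```

One consistent assignment on which the premises are true and the conclusion false is all it takes to show the argument form is invalid.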
The foregoing leads to a more general proposition - that no behavior of a computer, regardless of how successful it may be in simulating human behavior, is ever by itself sufficient evidence of any mental state. Indeed, Searle also appears to argue for this more general case when, later in the discussion, he notes: (a) To get computers to feel pain or fall in love would be neither harder nor easier than to get them to have cognition. (b) "For simulation, all you need is the right input and output and a program in the middle that transforms the former into the latter." And, (c) "to confuse simulation with duplication is the same mistake, whether it is pain, love, cognition." On the other hand, Searle seems not to maintain this general proposition with consistency. In his discussion of "IV. The combination reply" (to his analytical example or thought experiment), Searle states: "If we could build a robot whose behavior was indistinguishable over a large range from human behavior, we would ... find it rational and indeed irresistible to ... attribute intentionality to it, pending some reason not to." On the basis of my argument, one would not have to know that the robot had a formal program (or whatever) that accounts for its behavior, in order not to have to attribute intentionality to it. All we need to know is that the robot's internal control apparatus is not made in the same way and out of the same stuff as is the human brain, to reject the thesis that the robot must possess the mental states of intentionality, and so on.
Now, it is true that neither my nor Searle's argument excludes the possibility that an appropriately programmed computer could also have mental states (property M); the argument merely states it is not warranted to propose that the robot must have mental states M. However, Searle goes on to contribute a valuable analysis of why so many people have believed that computer programs do impart a kind of mental process or state to the computer. Searle notes that, among other factors, a residual behaviorism or operationalism underlies the willingness to accept input-output patterns as sufficient for postulating human mental states in appropriately programmed computers. I would add that there are still many psychologists and perhaps philosophers who are similarly burdened with residual behaviorism or operationalism even when dealing with criteria for existence of a conscious subjective experience in human subjects (see Libet 1973; 1979).

by William G. Lycan

Department of Philosophy, Ohio State University, Columbus, Ohio 43210

The functionalist reply (Ohio State)

Most versions of philosophical behaviorism have had the consequence that if an organism or device D passes the Turing test, in the sense of systematically manifesting all the same outward behavioral dispositions that a normal human does, then D has all the same sorts of contentful or intentional states that humans do. In light of fairly obvious counterexamples to this thesis, materialist philosophers of mind have by and large rejected behaviorism in favor of a more species-chauvinistic view: D's manifesting all the same sorts of behavioral dispositions we do does not alone suffice for D's having intentional states; it is necessary in addition that D produce behavior from stimuli in roughly the way that we do - that D's inner functional organization be not unlike ours and that D process the stimulus input by analogous inner procedures. On this "functionalist" theory, to be in a mental state of such and such a kind is to incorporate a functional component or system of components of type so and so which is in a certain distinctive state of its own. "Functional components" are individuated according to the roles they play within their owners' overall functional organization.
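Individuation by functional role can be sketched with a toy state machine of my own invention (nothing so simple could be a mental state; it only shows the shape of the claim). The specification fixes what each state does - how it maps inputs to outputs and successor states - while saying nothing about what realizes the states.

```python
# Functional organization as a transition table: (state, input) -> (next state, output).
SPEC = {
    ("idle", "ping"): ("busy", "ack"),
    ("busy", "ping"): ("busy", "wait"),
    ("busy", "done"): ("idle", "ok"),
}

def run(spec, inputs, state="idle"):
    """Drive any realization of the spec through a sequence of inputs."""
    outputs = []
    for symbol in inputs:
        state, out = spec[(state, symbol)]
        outputs.append(out)
    return outputs

# Any device whose transitions fit SPEC - silicon, water pipes, neurons -
# realizes these functional states; "busy" just is the state that plays this role.
print(run(SPEC, ["ping", "ping", "done"]))   # prints "['ack', 'wait', 'ok']"
```

Two physically dissimilar devices that both satisfy SPEC are, on this theory, in the same functional states, which is the sense of "functionally isomorphic" used in what follows.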
Searle offers a number of cases of entities that manifest the behavioral dispositions we associate with intentional states but that rather plainly do not have any such states. I accept his intuitive judgments about most of these cases. Searle plus rule book plus pencil and paper presumably does not understand Chinese, nor does Searle with memorized rule book or Searle with TV camera or the robot with Searle inside. Neither my stomach nor Searle's liver nor a thermostat nor a light switch has beliefs and desires. But none of these cases is a counterexample to the functionalist hypothesis. The systems in the former group are pretty obviously not functionally isomorphic at the relevant level to human beings who do understand Chinese: a native Chinese carrying on a conversation is implementing procedures of his own, not those procedures that would occur in a mockup containing the cynical, English-speaking, American-acculturated homuncular Searle. Therefore they are not counterexamples to a functionalist theory of language understanding, and accordingly they leave it open that a computer that was functionally isomorphic to a real Chinese speaker would indeed understand Chinese also. Stomachs, thermostats, and the like, because of their brutish simplicity, are even more clearly dissimilar to humans. (The same presumably is true of Schank's existing language-understanding programs.)
I have hopes for a sophisticated version of the "brain simulator" (or the "combination" machine) that Searle illustrates with his plumbing example. Imagine a hydraulic system of this type that does replicate, perhaps not the precise neuroanatomy of a Chinese speaker, but all that is relevant of the Chinese speaker's higher functional organization; individual water pipes are grouped into organ systems precisely analogous to those found in the speaker's brain, and the device processes linguistic input in just the way that the speaker does. (It does not merely simulate or describe this processing.) Moreover, the system is automatic and does all this without the intervention of Searle or any other deus in machina. Under these conditions and given a suitable social context, I think it would be plausible to accept the functionalist consequence that the hydraulic system does understand Chinese.
Searle's paper suggests two objections to this claim. First: "Where is the understanding in this system?" All Searle sees is pipes and valves and flowing water. Reply: Looking around the fine detail of the system's hardware, you are too small to see that the system is understanding Chinese sentences. If you were a tiny, cell-sized observer inside a real Chinese speaker's brain, all you would see would be neurons stupidly, mechanically transmitting electrical charge, and in the same tone you would ask, "Where is the understanding in this system?" But you would be wrong in concluding that the system you were observing did not understand Chinese; in like manner you may well be wrong about the hydraulic device.
Second, even if a computer were to replicate all of the Chinese speaker's relevant functional organization, all the computer is really doing is performing computational operations on formally specified elements. A purely formally or syntactically characterized element has no meaning or content in itself, obviously, and no amount of mindless syntactic manipulation of it will endow it with any. Reply: The premise is correct, and I agree it shows that no computer has or could have intentional states merely in virtue of performing syntactic operations on formally characterized elements. But that does not suffice to prove that no computer can have intentional states at all. Our brain states do not have the contents they do just in virtue of having their purely formal properties either; a brain state described "syntactically" has no meaning or content on its own. In virtue of what, then, do brain states (or mental states however construed) have the meanings they do? Recent theory advises that the content of a mental representation is not determined within its owner's head (Putnam 1975a; Fodor 1980); rather, it is determined in part by the objects in the environment that actually figure in the representation's etiology and in part by social and contextual factors of several other sorts (Stich, in preparation). Now, present-day computers live in highly artificial and stifling environments. They receive carefully and tendentiously preselected input; their software is adventitiously manipulated by uncaring programmers; and they are isolated in laboratories and offices, deprived of any normal interaction within a natural or appropriate social setting. For this reason and several others, Searle is surely right in saying that present-day computers do not really have the intentional states that we fancifully incline toward attributing to them.
But nothing Searle has said impugns the thesis that if a sophisticated future computer not only replicated human functional organization but harbored its inner representations as a result of the right sort of causal history and had also been nurtured within a favorable social setting, we might correctly ascribe intentional states to it. This point may or may not afford lasting comfort to the AI community.

Notes

  1. This characterization is necessarily crude and vague. For a very useful survey of different versions of functionalism and their respective foibles, see Block (1978); I have developed and defended what I think is the most promising version of functionalism in Lycan (forthcoming).
  2. For further discussion of cases of this kind, see Block (forthcoming).
  3. A much expanded version of this reply appears in section 4 of Lycan (forthcoming).
  4. I do not understand Searle's positive suggestion as to the source of intentionality in our own brains. What "neurobiological causal properties"?
  5. As Fodor (forthcoming) remarks, SHRDLU as we interpret him is the victim of a Cartesian evil demon; the "blocks" he manipulates do not exist in reality.

by John McCarthy

Artificial Intelligence Laboratory, Stanford University, Stanford, Calif. 94305

Beliefs, machines, and theories

John Searle's refutation of the Berkeley answer that the system understands Chinese proposes that a person (call him Mr. Hyde) carry out in his head a process (call it Dr. Jekyll) for carrying out a written conversation in Chinese. Everyone will agree with Searle that Mr. Hyde does not understand Chinese, but I would contend, and I suppose his Berkeley interlocutors would also, that provided certain other conditions for understanding are met, Dr. Jekyll understands Chinese. In Robert Louis Stevenson's story, it seems assumed that Dr. Jekyll and Mr. Hyde time-share the body, while in Searle's case, one interprets a program specifying the other.
Searle's dismissal of the idea that thermostats may be ascribed belief is based on a misunderstanding. It is not a pantheistic notion that all machinery including telephones, light switches, and calculators believe. Belief may usefully be ascribed only to systems about which someone's knowledge can best be expressed by ascribing beliefs that satisfy axioms such as those in McCarthy (1979). Thermostats are sometimes such systems. Telling a child, "If you hold the candle under the thermostat, you will fool it into thinking the room is too hot, and it will turn off the furnace" makes proper use of the child's repertoire of intentional concepts.
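McCarthy's criterion can be made concrete with a toy sketch (my own illustration, not the formalism of McCarthy 1979): belief ascription earns its keep when it lets an observer, like the child with the candle, predict the device's behavior. The class and belief names below are invented for this illustration.

```python
from dataclasses import dataclass

@dataclass
class Thermostat:
    setpoint: float
    sensed_temp: float  # the sensor reading, not necessarily the true room temperature

    def beliefs(self):
        """The belief set it is useful to ascribe, in McCarthy's sense."""
        if self.sensed_temp > self.setpoint:
            return {"room_too_hot"}
        if self.sensed_temp < self.setpoint:
            return {"room_too_cold"}
        return {"room_ok"}

    def action(self):
        # Action follows ascribed belief: the furnace runs unless the
        # device "thinks" the room is too hot.
        return "furnace_off" if "room_too_hot" in self.beliefs() else "furnace_on"

t = Thermostat(setpoint=20.0, sensed_temp=18.0)
print(t.beliefs(), t.action())   # {'room_too_cold'} furnace_on
t.sensed_temp = 35.0             # the candle held under the thermostat
print(t.beliefs(), t.action())   # {'room_too_hot'} furnace_off
```

Fooling the sensor changes the ascribed belief, and the action follows, which is exactly the prediction the child's intentional vocabulary supports.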
Formalizing belief requires treating simple cases as well as more interesting ones. Ascribing beliefs to thermostats is analogous to including 0 and 1 in the number system even though we would not need a number system to treat the null set or sets with just one element; indeed we wouldn't even need the concept of set.
However, a program that understands should not be regarded as a theory of understanding any more than a man who understands is a theory. A program can only be an illustration of a theory, and a useful theory will contain much more than an assertion that "the following program understands about restaurants." I can't decide whether this last complaint applies to Searle or just to some of the AI researchers he criticizes.

by John C. Marshall

Neuropsychology Unit, University Department of Clinical Neurology, The Radcliffe Infirmary, Oxford, England

Artificial intelligence - the real thing?

Searle would have us believe that the present-day inhabitants of respectable universities have succumbed to the Faustian dream. (Mephistopheles: "What is it then?" Wagner: "A man is in the making.") He assures us, with a straight face, that some contemporary scholars think that "an appropriately programmed computer really is a mind," that such artificial creatures "literally have cognitive states." The real thing indeed! But surely no one could believe this? I mean, if
someone did, then wouldn't he want to give his worn-out IBM a decent burial and say Kaddish for it? And even if some people at Yale, Berkeley, Stanford, and so forth do instantiate these weird belief states, what conceivable scientific interest could that hold? Imagine that they were right, and that their computers really do perceive, understand, and think. All that our Golem makers have done on Searle's story is to create yet another mind. If the sole aim is to "reproduce" (Searle's term, not mine) mental phenomena, there is surely no need to buy a computer.
Frankly, I just don't care what some members of the AI community think about the ontological status of their creations. What I do care about is whether anyone can produce principled, revealing accounts of, say, the perception of tonal music (Longuet-Higgins 1979), the properties of stereo vision (Marr & Poggio 1979), and the parsing of natural language sentences (Thorne 1968). Everyone that I know who tinkers around with computers does so because he has an attractive theory of some psychological capacity and wishes to explore certain consequences of the theory algorithmically. Searle refers to such activity as "weak AI," but I would have thought that theory construction and testing was one of the stronger enterprises that a scientist could indulge in. Clearly, there must be some radical misunderstanding here.
The problem appears to lie in Searle's (or his AI informants') strange use of the term 'theory.' Thus Searle writes in his shorter abstract: "According to strong AI, appropriately programmed computers literally have cognitive states, and therefore the programs are psychological theories." Ignoring momentarily the "and therefore," which introduces a simple non sequitur, how could a program be a theory? As Moor (1978) points out, a theory is (at least) a collection of related propositions which may be true or false, whereas a program is (or was) a pile of punched cards. For all I know, maybe suitably switched-on computers do "literally" have cognitive states, but even if they did, how could that possibly licence the inference that the program per se was a psychological theory? What would one make of an analogous claim applied to physics rather than psychology? "Appropriately programmed computers literally have physical states, and therefore the programs are theories of matter" doesn't sound like a valid inference to me. Moor's exposition of the distinction between program and theory is particularly clear and thus worth quoting at some length:
A program must be interpreted in order to generate a theory. In the process of interpreting, it is likely that some of the program will be discarded as irrelevant since it will be devoted to the technicalities of making the program acceptable to the computer. Moreover, the remaining parts of the program must be organized in some coherent fashion with perhaps large blocks of the computer program taken to represent specific processes. Abstracting a theory from the program is not a simple matter, for different groupings of the program can generate different theories. Therefore, to the extent that a program, understood as a model, embodies one theory, it may well embody many theories. (Moor 1978, p. 215)
Searle reports that some of his informants believe that running programs are other minds, albeit artificial ones; if that were so, would these scholars not attempt to construct theories of artificial minds, just as we do for natural ones? Considerable muddle then arises when Searle's informants ignore their own claim and use the terms 'reproduce' and 'explain' synonymously: "The project is to reproduce and explain the mental by designing programs." One can see how hopelessly confused this is by transposing the argument back from computers to people. Thus I have noticed that many of my daughter's mental states bear a marked resemblance to my own; this has arisen, no doubt, because part of my genetic plan was used to build her hardware and because I have shared in the responsibility of programming her. All well and good, but it would be straining credulity to regard my daughter as "explaining" me, as being a "theory" of me.
What one would like is an elucidation of the senses in which programs, computers and other machines do and don't figure in the explanation of behavior (Cummins 1977; Marshall 1977). It is a pity that Searle disregards such questions in order to discuss the everyday use of mental vocabulary, an enterprise best left to lexicographers. Searle writes: "The study of the mind starts with such facts as that humans have beliefs, while thermostats, telephones, and adding machines don't." Well, perhaps it does start there, but that is no reason to suppose it must finish there. How would such an "argument" fare in natural philosophy? "The study of physics starts with such facts as that tables are solid objects without holes in them, whereas Gruyere cheese. ..." Would Searle now continue that "If you get a theory that denies this point, you have produced a counterexample to the theory and the theory is wrong"? Of course a thermostat's "belief" that the temperature should be a little higher is not the same kind of thing as my "belief" that it should. It would be totally uninteresting if they were the same. Surely the theorist who compares the two must be groping towards a deeper parallel; he has seen an analogy that may illuminate certain aspects of the control and regulation of complex systems. The notion of positive and negative feedback is what makes thermostats so appealing to Alfred Wallace and Charles Darwin, to Claude Bernard and Walter Cannon, to Norbert Wiener and Kenneth Craik. Contemplation of governors and thermostats has enabled them to see beyond appearances to a level at which there are profound similarities between animals and artifacts (Marshall 1977). It is Searle, not the theoretician, who doesn't really take the enterprise seriously. 
According to Searle, "what we wanted to know is what distinguishes the mind from thermostats and livers." Yes, but that is not all; we also want to know at what levels of description there are striking resemblances between disparate phenomena.
In the opening paragraphs of Leviathan, Thomas Hobbes (1651, p. 8) gives clear expression to the mechanist's philosophy:
Nature, the art whereby God hath made and governs the world, is by the art of man, as in many other things, in this also imitated, that it can make an artificial animal. ... For what is the heart but a spring, and the nerves so many strings; and joints so many wheels giving motion to the whole body, such as was intended by the artificer? What is the notion of "imitation" that Hobbes is using here? Obviously not the idea of exact imitation or copying. No one would confuse a cranial nerve with a piece of string, a heart with the mainspring of a watch, or an ankle with a wheel. There is no question of trompe l'oeil. The works of the scientist are not in that sense reproductions of nature; rather they are attempts to see behind the phenomenological world to a hidden reality. It was Galileo, of course, who articulated this paradigm most forcefully: sculpture, remarks Galileo,
is "closer to nature" than painting in that the material substratum manipulated by the sculptor shares with the matter manipulated by nature herself the quality of three-dimensionality. But does this fact rebound to the credit of sculpture? On the contrary, says Galileo, it greatly "diminishes its merit": "What will be so wonderful in imitating sculptress Nature by sculpture itself?" And he concludes: "The most artistic imitation is that which represents the three-dimensional by its opposite, which is the plane." (Panofsky 1954, p. 97)
Galileo summarizes his position in the following words: "The further removed the means of imitation are from the thing to be imitated, the more worthy of admiration the imitation will be" (Panofsky 1954). In a footnote to the passage, Panofsky remarks on "the basic affinity between the spirit of this sentence and Galileo's unbounded admiration for Aristarchus and Copernicus 'because they trusted reason rather than sensory experience'" (Panofsky 1954).
Now Searle is quite right in pointing out that in AI one seeks to model cognitive states and their consequences (the real thing) by a formal syntax, the interpretation of which exists only in the eye of the beholder. Precisely therein lies the beauty and significance of the enterprise - to try to provide a counterpart for each substantive distinction with a syntactic one. This is essentially to regard the study of the relationships between physical transactions and symbolic operations as an essay in cryptanalysis (Freud 1895; Cummins 1977). The interesting question then arises as to whether there is a unique mapping between the formal elements of the system and their "meanings" (Householder 1962).
Searle, however, seems to be suggesting that we abandon entirely both the Galilean and the "linguistic" mode in order merely to copy cognitions. He would apparently have us seek mind only in "neurons with axons and dendrites," although he admits, as an empirical possibility, that such objects might "produce consciousness, intentionality and all the rest of it using some other sorts of chemical principles
than human beings use." But this admission gives the whole game away. How would Searle know that he had built a silicon-based mind (rather than our own carbon-based mind) except by having an appropriate abstract (that is, nonmaterial) characterization of what the two life forms hold in common? Searle finesses this problem by simply "attributing" cognitive states to himself, other people, and a variety of domestic animals: "In 'cognitive sciences' one presupposes the reality and knowability of the mental in the same way that in physical sciences one has to presuppose the reality and knowability of physical objects." But this really won't do: we are, after all, a long way from having any very convincing evidence that cats and dogs have "cognitive states" in anything like Searle's use of the term [See "Cognition and Consciousness in Nonhuman Species" BBS 1(4) 1978].
Thomas Huxley (1874, p. 156) poses the question in his paraphrase of Nicholas Malebranche's orthodox Cartesian line: "What proof is there that brutes are other than a superior race of marionettes, which eat without pleasure, cry without pain, desire nothing, know nothing, and only simulate intelligence as a bee simulates a mathematician?" Descartes' friend and correspondent, Marin Mersenne, had little doubt about the answer to this kind of question. In his discussion of the perceptual capacities of animals he forthrightly denies mentality to the beasts:
Animals have no knowledge of these sounds, but only a representation, without knowing whether what they apprehend is a sound or a color or something else; so one can say that they do not act so much as are acted upon, and that the objects make an impression upon their senses from which their action necessarily follows, as the wheels of a clock necessarily follow the weight or spring which drives them. (Mersenne 1636)
For Mersenne, then, the program inside animals is indeed an uninterpreted calculus, a syntax without a semantics [See Fodor: "Methodological Solipsism" BBS 3(1) 1980]. Searle, on the other hand, seems to believe that apes, monkeys, and dogs do "have mental states" because they "are made of similar stuff to ourselves" and have eyes, a nose, and skin. I fail to see how the datum supports the conclusion. One might have thought that some quite intricate reasoning and subtle experimentation would be required to justify the ascription of intentionality to chimpanzees (Marshall 1971; Woodruff & Premack 1979). That chimpanzees look quite like us is a rather weak fact on which to build such a momentous conclusion.
When Jacques de Vaucanson - the greatest of all AI theorists - had completed his artificial duck he showed it, in all its naked glory of wood, string, steel, and wire. However much his audience may have preferred a more cuddly creature, Vaucanson firmly resisted the temptation to clothe it:
Perhaps some Ladies, or some People, who only like the Outside of Animals, had rather have seen the whole cover'd; that is the Duck with Feathers. But besides, that I have been desir'd to make every thing visible; I wou'd not be thought to impose upon the Spectators by any conceal'd or juggling Contrivance (Fryer & Marshall 1979).
For Vaucanson, the theory that he has embodied in the model duck is the real thing.

Acknowledgment

I thank Dr. J. Loew for his comments on earlier versions of this work.

by Grover Maxwell

Center for Philosophy of Science, University of Minnesota, Minneapolis, Minn. 55455

Intentionality: Hardware, not software

It is a rare and pleasant privilege to comment on an article that surely is destined to become, almost immediately, a classic. But, alas, what comments are called for? Following BBS instructions, I'll resist the very strong temptation to explain how Searle makes exactly the right central points and supports them with exactly the right arguments; and I shall leave it to those who, for one reason or another, still disagree with his central contentions to call attention to a few possible weaknesses, perhaps even a mistake or two, in the treatment of some of his ancillary claims. What I shall try to do, is to examine, briefly - and therefore sketchily and inadequately - what seem to me to be some implications of his results for the overall mind-body problem.
Quite prudently, in view of the brevity of his paper, Searle leaves some central issues concerning mind-brain relations virtually untouched. In particular, his main thrust seems compatible with interactionism, with epiphenomenalism, and with at least some versions of the identity thesis. It does count, very heavily, against eliminative materialism, and, equally importantly, it reveals "functionalism" (or "functional materialism") as it is commonly held and interpreted (by, for example, Hilary Putnam and David Lewis) to be just another variety of eliminative materialism (protestations to the contrary notwithstanding). Searle correctly notes that functionalism of this kind (and strong AI, in general) is a kind of dualism. But it is not a mental-physical dualism; it is a form-content dualism, one, moreover, in which the form is the thing and content doesn't matter! [See Fodor: "Methodological Solipsism" BBS 3(1) 1980.]
Now I must admit that in order to find these implications in Searle's results I have read into them a little more than they contain explicitly. Specifically, I have assumed that intentional states are genuinely mental in the what-is-it-like-to-be-a-bat? sense of "mental" (Nagel 1974) as well as, what I suppose is obvious, that eliminative materialism seeks to "eliminate" the genuinely mental in this sense. But it seems to me that it does not take much reading between the lines to see that Searle is sympathetic to my assumptions. For example, he does speak of "genuinely mental [systems]," and he says (in Searle 1979c) that he believes that "only beings capable of conscious states are capable of intentional states" (my italics), although he says that he does not know how to demonstrate this. (How, indeed, could anyone demonstrate such a thing? How could one demonstrate that fire burns?)
The argument that Searle gives for the conclusion that only machines can think (can have intentional states) appears to have two suppressed premises: (1) intentional states must always be causally produced, and (2) any causal network (with a certain amount of organization and completeness, or some such condition) is a machine. I accept for the purposes of this commentary his premises and his conclusion. Next I want to ask: what kind of hardware must a thinking machine incorporate? (By "thinking machine" I mean of course a machine that has genuinely mental thoughts: such a machine, I contend, will also have genuinely mental states or events instantiating sensations, emotions, and so on in all of their subjective, qualitative, conscious, experiential richness.) To continue this line of investigation, I want to employ an "event ontology," discarding substance metaphysics altogether. (Maxwell 1978 provides a sketch of some of the details and of the contentions that contemporary physics, quite independently of philosophy of mind, leads to such an ontology.) An event is (something like) the instancing of a property or the instancing (concrete realization) of a state. A causal network, then, consists entirely of a group of events and the causal links that interconnect them. A fortiori, our "machine" will consist entirely of events and causal connections. In other words, the hardware of this machine (or of any machine, for example, a refrigerator) consists of its constituent events and the machine consists of nothing else (except the causal linkages). Our thinking machine in the only form we know it today is always a brain (or if you prefer, an entire human or other animal body), which, as we have explained, is just a certain causal network of events. The mind-brain identity theory in the version that I defend says that some of the events in this network are (nothing but) genuinely mental events (instances of intentional states, or of pains, or the like).
Epiphenomenalism says that the mental events "dangle" from the main net (the brain) by causal connections which are always one way (from brain to dangler). (Epiphenomenalism is, I believe, obviously, though contingently, false.) Interactionism says that there are causal connections in both directions but that the mental events are somehow in a different realm from the brain events. (How come a "different realm" or whatever? Question: Is there a real difference between interactionism and identity theory in an event ontology?)
Assuming that Searle would accept the event ontology, if no more than for the sake of discussion, would he say that mental events, in general, and instances of intentional states, in particular, are parts of
the machine, or is it his position that they are just products of the machine? That is, would Searle be inclined to accept the identity thesis, or would he lean toward either epiphenomenalism or interactionism? For my money, in such a context, the identity theory seems by far the most plausible, elegant, and economical guess. To be sure, it must face serious and, as yet, completely unsolved problems, such as the "grain objection" (see, for example, Maxwell 1978), and emergence versus panpsychism (see, for example, Popper & Eccles 1977), but I believe that epiphenomenalism and interactionism face even more severe difficulties.
Before proceeding, I should emphasize that contemporary scientific knowledge not only leads us to an event ontology but also that it indicates the falsity of naive realism and "gently persuades" us to accept what I have (somewhat misleadingly, I fear) called "structural realism." According to this, virtually all of our knowledge of the physical world is knowledge of the structure (including space-time structure) of the causal networks that constitute it. (See, for example, Russell 1948 and Maxwell 1976). This holds with full force for knowledge about the brain (except for a very special kind of knowledge, to be discussed soon). We are, therefore, left ignorant as to what the intrinsic (nonstructural) properties of "matter" (or what contemporary physics leaves of it) are. In particular, if only we knew a lot more neurophysiology, we would know the structure of the (immense) causal network that constitutes the brain, but we would not know its content; that is, we still wouldn't know what any of its constituent events are. Identity theory goes a step further and speculates that some of these events just are (instances of) our intentional states, our sensations, our emotions, and so on, in all of their genuinely mentalistic richness, as they are known directly "by acquaintance." This is the "very special knowledge" mentioned above, and if identity theory is true, it is knowledge of what some (probably a very small subset) of the events that constitute the brain are. In this small subset of events we know intrinsic as well as structural properties.
Let us return to one of the questions posed by Searle: "could an artifact, a man-made machine, think?" The answer he gives is, I think, the best possible one, given our present state of unbounded ignorance in the neurosciences, but I'd like to elaborate a little. Since, I have claimed above, thoughts and other (genuinely) mental events are part of the hardware of "thinking machines," such hardware must somehow be got into any such machine we build. At present we have no inkling as to how this could be done. The best bet would seem to be, as Searle indicates, to "build" a machine (out of protoplasm) with neurons, dendrites, and axons like ours, and then to hope that, from this initial hardware, mental hardware would be mysteriously generated (would "emerge"). But this "best bet" seems to me extremely implausible. However, I do not conclude that construction of a thinking machine is (even contingently, much less logically) impossible. I conclude, rather, that we must learn a lot more about physics, neurophysiology, neuropsychology, psychophysiology, and so on - not just more details, but much more about the very foundations of our theoretical knowledge in these areas - before we can even speculate with much sense about building thinking machines. (I have argued in Maxwell 1978 that the foundations of contemporary physics are in such bad shape that we should hope for truly "revolutionary" changes in physical theory, that such changes may very well aid immensely in "solving the mind-brain problems," and that speculations in neurophysiology and perhaps even psychology may very well provide helpful hints for the physicist in his renovation of, say, the foundations of space-time theory.) Be all this as it may, Searle has shown the total futility of the strong AI route to genuine artificial intelligence.

by E.W. Menzel, Jr.

Department of Psychology, State University of New York at Stony Brook, Stony Brook, N.Y. 11794

Is the pen mightier than the computer?

The area of artificial intelligence (AI) differs from that of natural intelligence in at least three respects. First, in AI one is perforce limited to the use of formalized behavioral data or "output" as a basis for making inferences about one's subjects. (The situation is no different, however, in the fields of history and archaeology.) Second, by convention, if nothing else, in AI one must ordinarily assume, until proved otherwise, that one's subject has no more mentality than a rock; whereas in the area of natural intelligence one can often get away with the opposite assumption, namely, that until proved otherwise, one's subject can be considered to be sentient. Third, in AI, analysis is ordinarily limited to questions regarding the "structure" of intelligence, whereas a complete analysis of natural intelligence must also consider questions of function, development, and evolution.
In other respects, however, it seems to me that the problems of inferring mental capacities are very much the same in the two areas. And the whole purpose of the Turing test (or the many counterparts to that test which are the mainstay of comparative psychology) is to devise a clear set of rules for determining the status of subjects of any species, about whose possession of a given capacity we are uncertain. This is admittedly a game, and it cannot be freed of all possible arbitrary aspects any more than can, say, the field of law. Furthermore, unless everyone agrees to the rules of the game, there is no way to prove one's case for (or against) a given capacity with absolute and dead certainty.
As I see it, Searle quite simply refuses to play such games, at least according to the rules proposed by AI. He assigns himself the role of a judge who knows beforehand in most cases what the correct decision should be. And he does not, in my opinion, provide us with any decision rules for the remaining (and most interesting) undecided cases, other than rules of latter-day common sense (whose pitfalls and ambiguities are perhaps the major reason for devising objective tests that are based on performance rather than on physical characteristics such as species, race, sex, and age).
Let me be more specific. First of all, the discussion of "the brain" and "certain brain processes" is not only vague but seems to me to displace and complicate the problems it purports to solve. In saying this I do not imply that physiological data are irrelevant; I only say that their relevance is not made clear, and the problem of deciding where the brain leaves off and nonbrain begins is not as easy as it sounds. Indeed, I doubt that many neuroanatomists would even try to draw any sharp and unalterable line that demarcates exactly where in the animal kingdom "the brain" emerges from "the central nervous system"; and I suspect that some of them would ask: Why single out the brain as crucial to mind or intentionality? Why not the central nervous system or DNA or (to become more restrictive rather than liberal) the human brain or the Caucasian brain? Precisely analogous problems would arise in trying to specify for a single species such as man precisely how much intact brain, or what parts of it, or which of the "certain brain processes," must be taken into account, and when one brain process leaves off and another one begins. Quite coincidentally, I would be curious as to what odds Searle would put on the likelihood that a neurophysiologist could distinguish between the brain processes of Searle during the course of his hypothetical experiment and the brain processes of a professor of Chinese. Also, I am curious as to what mental status he would assign to, say, an earthworm.
Second, it seems to me that, especially in the domain of psychology, there are always innumerable ways to skin a cat, and that these ways are not necessarily commensurate, especially when one is discussing two different species or cultures or eras. Thus, for example, I would be quite willing to concede that to "acquire the calculus" I would not require the intellectual power of Newton or of Leibnitz, who invented the calculus. But how would Searle propose to quantify the relative "causal powers" that are involved here, or how would he establish the relative similarity of the "effects"? The problem is especially difficult when Searle talks about subjects who have "zero understanding," for we possess no absolute scales or ratio scales in this domain, but only relativistic ones. In other words, we can assume by definition that a given subject may be taken as a criterion of "zero understanding," and assess the competence of other subjects by comparing them against this norm; but someone else is always free to invoke some other norm. Thus, for example, Searle uses himself as a standard of comparison and assumes he possesses zero understanding of Chinese. But what if I proposed that a better norm would be, say, a dog? Unless Searle's performance were no better than that of the dog, it seems to me that the student of AI could argue that Searle's understanding must be greater than zero, and that his hypothetical experiment is therefore inconclusive; that is, the computer, which performs as he did, cannot necessarily be said to have zero understanding either.
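Menzel's point about relativistic scales can be made concrete with a toy sketch (entirely my own illustration; the subjects and raw scores are hypothetical, not taken from the commentary): whether a given performance counts as "zero understanding" depends wholly on which subject is chosen as the zero norm.

```python
def relative_understanding(scores, norm):
    """Rescale raw test scores so the chosen norm subject defines zero.

    Anything at or below the norm's raw score counts as 'zero
    understanding'; everything else is measured relative to it.
    """
    baseline = scores[norm]
    return {subject: max(0.0, value - baseline) for subject, value in scores.items()}

# Hypothetical raw scores on some Chinese-comprehension test.
raw = {"rock": 0.0, "dog": 2.0, "Searle-in-room": 5.0, "native speaker": 95.0}

# With Searle himself as the zero norm, the man in the room scores zero...
print(relative_understanding(raw, "Searle-in-room")["Searle-in-room"])  # 0.0
# ...but against a dog norm, the very same performance is greater than zero.
print(relative_understanding(raw, "dog")["Searle-in-room"])  # 3.0
```

The same raw performance flips between "zero" and "nonzero" understanding as the norm changes, which is exactly why Menzel calls the hypothetical experiment inconclusive.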
In addition to these problems, Searle's hypothetical experiment is based on the assumption that AI would be proved "false" if it could be shown that even a single subject on a single occasion could conceivably pass a Turing test despite the fact that he possesses what may be assumed to be zero understanding. This, in my opinion, is a mistaken assumption. No student of AI would, to my knowledge, claim infallibility. His predictions would be at best probabilistic or statistical; and, even apart from problems such as cheating, some errors of classification are to be expected on the basis of "chance" alone. Turing, for example, predicted only that by the year 2000 computers will be able to fool an average interrogator on a Turing test, and be taken for a person, at least 30 times out of 100. In brief, I would agree with Searle if he had said that the position of strong AI is unprovable with dead certainty; but by his criteria no theory in empirical science is provable, and I therefore reject his claim that he has shown AI to be false.
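Menzel's observation that AI's predictions are statistical rather than all-or-none can be illustrated with a small sketch (my own, with made-up trial counts): Turing's 30-in-100 claim defines a binomial success rate, and any single run of trials bears on it only probabilistically.

```python
from math import comb

def binom_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p): chance of k or more successes."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Turing's prediction: a machine fools an average interrogator in at
# least 30 of 100 games. Suppose (hypothetically) a machine manages it
# in only 22 of 100 trials. Is that consistent with the p = 0.30 claim,
# or evidence against it? The answer is a tail probability, not a
# verdict -- no single failed subject "refutes" the statistical claim.
p_observed_or_worse = 1 - binom_tail(100, 23, 0.30)  # P(X <= 22 | p = 0.30)
print(p_observed_or_worse)
```

A tail probability near 0.05 says the 22/100 result is unlikely but not impossible under Turing's claim, which is precisely the sense in which strong AI is "unprovable with dead certainty" yet not thereby shown false.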
Perhaps the central question raised by Searle's paper is, however: Where does mentality lie? Searle tells us that the intelligence of computers lies in our eyes alone. Einstein, however, used to say, "My pencil is more intelligent than I"; and this maxim seems to me to come at least as close to the truth as Searle's position. It is quite true that without a brain to guide it and interpret its output, the accomplishments of a pencil or a computer or of any of our other "tools of thought" would not be very impressive. But, speaking for myself, I confess I'd have to take the same dim view of my own accomplishments. In other words, I am quite sure that I could not even have "had" the thoughts expressed in the present commentary without the assistance of various means of externalizing and objectifying "them" and rendering them accessible not only for further examination but for their very formulation. I presume that there were some causal connections and correspondences between what is now on paper (or is it in the reader's eyes alone?) and what went on in my brain or mind; but it is an open question as to what these causal connections and correspondences were. Furthermore, it is only if one confuses present and past, and internal and external happenings with each other, and considers them a single "thing," that "thinking" or even the causal power behind thought can be allocated to a single "place" or entity. I grant that it is metaphorical if not ludicrous to give my pencil the credit or blame for the accomplishment of "the thoughts in this commentary." But it would be no less metaphorical and ludicrous - at least in science, as opposed to everyday life - to give the credit or blame to my brain as such. Whatever brain processes or mental processes were involved in the writing of this commentary, they have long since been terminated. In the not-too-distant future not even "I" as a body will exist any longer.
Does this mean that the reader of the future will have no valid basis for estimating whether or not I was (or, as a literary figure, "am") any more intelligent than a rock? I am curious as to how Searle would answer this question. In particular, I would like to know whether he would ever infer from an artifact or document alone that its author had a brain or certain brain processes - and, if so, how this is different from making inferences about mentality from a subject's outputs alone.

by Marvin Minsky

Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, Mass. 02139

Decentralized minds

In this essay, Searle asserts without argument: "The study of the mind starts with such facts as that humans have beliefs, while thermostats, telephones, and adding machines don't. If you get a theory that denies this. . .the theory is false."
No. The study of mind is not the study of belief; it is the attempt to discover powerful concepts - be they old or new - that help explain why some machines or animals can do so many things others cannot. I will argue that traditional, everyday, precomputational concepts like believing and understanding are neither powerful nor robust enough for developing or discussing the subject.

In centuries past, biologists argued about machines and life much as Searle argues about programs and mind; one might have heard: "The study of biology begins with such facts as that humans have life, while locomotives and telegraphs don't. If you get a theory that denies this, the theory is false." Yet today a successful biological science is based on energetics and information processing; no notion of "alive" appears in the theory, nor do we miss it. The theory's power comes from replacing, for example, a notion of "self-reproduction" by more sophisticated notions about encoding, translation, and recombination - to explain the reproduction of sexual animals that do not, exactly, "reproduce."
Similarly in mind science, though prescientific idea germs like "believe," "know," and "mean" are useful in daily life, they seem technically too coarse to support powerful theories; we need to supplant, rather than to support and explicate them. Real as "self" or "understand" may seem to us today, they are not (like milk and sugar) objective things our theories must accept and explain; they are only first steps toward better concepts. It would be inappropriate here to put forth my own ideas about how to proceed; instead consider a fantasy in which our successors recount our present plight: "The ancient concept of 'belief' proved inadequate until replaced by a continuum in which, it turned out, stones placed near zero, and thermostats scored 0.52. The highest human score measured so far is 67.9. Because it is theoretically possible for something to be believed as intensely as 3600, we were chagrined to learn that men in fact believe so poorly. Nor, for that matter, are they very proficient (on an absolute scale) at intending. Still, they are comfortably separated from the thermostats." [Olivaw, R.D. (2063) Robotic reflections. Phenomenological Science 67:60.] A joke, of course; I doubt any such one-dimensional idea would help much. Understanding how parts of a program or mind can relate to things outside of it - or to other parts within - is complicated, not only because of the intricacies of hypothetical intentions and meanings, but because different parts of the mind do different things - both with regard to externals and to one another.

This raises another issue: "In employing formulas like 'A believes that B means C,' our philosophical precursors became unwittingly entrapped in the 'single-agent fallacy' - the belief that inside each mind is a single believer (or meaner) who does the believing. It is strange how long this idea survived, even after Freud published his first clumsy portraits of our inconsistent and adversary mental constitutions.
To be sure, that myth of 'self' is indispensable both for social purposes, and in each infant's struggle to make simplified models of his own mind's structure. But it has not otherwise been of much use in modern applied cognitive theory, such as our work to preserve, rearrange, and recombine those aspects of a brain's mind's parts that seem of value." [Byerly (2080) New hope for the Dead. Reader's Digest, March 13.]
Searle talks of letting "the individual internalize all of these elements of the system" and then complains that "there isn't anything in the system that isn't in him." Just as our predecessors sought "life" in the study of biology, Searle still seeks "him" in the study of mind, and holds strong AI to be impotent to deal with the phenomenology of understanding. Because this is so subjective a topic, I feel it not inappropriate to introduce some phenomenology of my own. While reading about Searle's hypothetical person who incorporates into his mind - "without understanding" - the hypothetical "squiggle squoggle" process that appears to understand Chinese, I found my own experience to have some of the quality of a double exposure: "The text makes sense to some parts of my mind but, to other parts of my mind, it reads much as though it were itself written in Chinese. I understand its syntax, can parse the sentences, and can follow the technical deductions. But the terms and assumptions themselves - what words like 'intend' and 'mean' intend and mean - escape me. They seem suspiciously like Searle's 'formally specified symbols' because their principal meanings engage only certain older parts of my mind that are not in harmonious, agreeable contact with just those newer parts that seem better able to deal with such issues (precisely because they know how to exploit the new concepts of strong AI)."
Searle considers such internalizations - ones not fully integrated in the whole mind - to be counterexamples, or reductiones ad absurdum of some sort, setting programs somehow apart from minds. I see them as illustrating the usual condition of normal minds, in which different fragments of structure interpret - and misinterpret - the fragments of activity they "see" in the others. There is absolutely no reason why programs, too, cannot contain such conflicting elements. To be sure, the excessive clarity of Searle's example saps its strength; the man's Chinese has no contact at all with his other knowledge - while even the parts of today's computer programs are scarcely ever jammed together in so simple a manner.
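Minsky's claim that programs, like minds, can contain conflicting and partial fragments can be sketched in a few lines (a hypothetical toy of my own devising, not anything from Minsky's actual systems): two "fragments" interpret the same symbol stream through different, incomplete lexicons, so each sees part sense and part squiggles.

```python
class Fragment:
    """One partial sub-agent of a decentralized 'mind'."""

    def __init__(self, name, lexicon):
        self.name = name
        self.lexicon = lexicon  # the only symbols this part understands

    def interpret(self, symbols):
        # Symbols outside this part's lexicon are mere squiggles to it.
        return {s: self.lexicon.get(s, "<squiggle>") for s in symbols}

# Two fragments with different, non-overlapping competences.
syntax_part = Fragment("syntax", {"ma": "NOUN", "chi": "VERB"})
meaning_part = Fragment("meaning", {"ma": "horse"})

message = ["ma", "chi", "fan"]
# The parts disagree, and neither covers the whole message:
print(syntax_part.interpret(message))   # {'ma': 'NOUN', 'chi': 'VERB', 'fan': '<squiggle>'}
print(meaning_part.interpret(message))  # {'ma': 'horse', 'chi': '<squiggle>', 'fan': '<squiggle>'}
```

No fragment "understands" the message whole; whatever understanding there is lives in the interactions and conflicts among the parts, which is the condition Minsky says Searle's too-clean example fails to capture.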
In the case of a mind so split into two parts that one merely executes some causal housekeeping for the other, I should suppose that each part - the Chinese rule computer and its host - would then have its own separate phenomenologies - perhaps along different time scales. No wonder, then, that the host can't "understand" Chinese very fluently - here I agree with Searle. But (for language, if not for walking or breathing) surely the most essential nuances of the experience of intending and understanding emerge, not from naked data bases of assertions and truth values, but from the interactions - the consonances and conflicts among different reactions within various partial selves and self-images.
What has this to do with Searle's argument? Well, I can see that if one regards intentionality as an all-or-none attribute, which each machine has or doesn't have, then Searle's idea - that intentionality emerges from some physical semantic principle - might seem plausible. But in my view this idea (of intentionality as a simple attribute) results from an oversimplification - a crude symbolization - of complex and subtle introspective activities. In short, when we construct our simplified models of our minds, we need terms to represent whole classes of such consonances and conflicts - and, I conjecture, this is why we create omnibus terms like "mean" and "intend." Then, those terms become reified.
It is possible that only a machine as decentralized yet interconnected as a human mind would have anything very like a human phenomenology. Yet even this supports no Searle-like thesis that mind's character depends on more than abstract information processing - on, for example, the "particular causal properties" of the substances of the brain in which those processes are embodied. And here I find Searle's arguments harder to follow. He criticizes dualism, yet complains about fictitious antagonists who suppose mind to be as substantial as sugar. He derides "residual operationalism" - yet he goes on to insist that, somehow, the chemistry of a brain can contribute to the quality or flavor of its mind with no observable effect on its behavior.
Strong AI enthusiasts do not maintain, as Searle suggests, that "what is specifically mental about the mind has no intrinsic connection with the actual properties of the brain." Instead, they make a much more discriminating scientific hypothesis: about which such causal properties are important in mindlike processes - namely computation-supporting properties. So, what Searle mistakenly sees as a difference in kind is really one of specific detail. The difference is important because what might appear to Searle as careless inattention to vital features is actually a deliberate - and probably beneficial - scientific strategy! For, as Putnam points out:
What is our intellectual form? is the question, not what the matter is. Small effects may have to be explained in terms of the actual physics of the brain. But when we are not even at the level of an idealized description of the functional organization of the brain, to talk about the importance of small perturbations seems decidedly premature. Now, many strong AI enthusiasts go on to postulate that functional organization is the only such dependency, and it is this bold assumption that leads directly to the conclusion Searle seems to dislike so: that nonorganic machines may have the same kinds of experience as people do. That seems fine with me. I just can't see why Searle is so opposed to the idea that a really big pile of junk might have feelings like ours. He proposes no evidence whatever against it; he merely tries to portray it as absurd to imagine machines with minds like ours - intentions and all - made from stones and paper instead of electrons and atoms. But I remain left to wonder how Searle, divested of despised dualism and operationalism, proposes to distinguish the authentic intentions of carbon compounds from their behaviorally identical but mentally counterfeit imitations.

I feel that I have dealt with the arguments about Chinese, and those about substantiality. Yet a feeling remains that there is something deeply wrong with all such discussions (as this one) of other minds; nothing ever seems to get settled in them. From the finest minds, on all sides, emerge thoughts and methods of low quality and little power. Surely this stems from a burden of traditional ideas inadequate to this tremendously difficult enterprise. Even our logic may be suspect. Thus, I could even agree with Searle that modern computational ideas are of little worth here - if, with him, I could agree to judge those ideas by their coherence and consistency with earlier constructions in philosophy. However, once one suspects that there are other bad apples in the logistic barrel, rigorous consistency becomes much too fragile a standard - and we must humbly turn to what evidence we can gather. So, because this is still a formative period for our ideas about mind, I suggest that we must remain especially sensitive to the empirical power that each new idea may give us to pursue the subject further. And, as Searle nowhere denies, computationalism is the principal source of the new machines and programs that have produced for us the first imitations, however limited and shabby, of mindlike activity.

by Thomas Natsoulas

Department of Psychology, University of California, Davis, Calif. 95616

The primary source of intentionality

I have shared Searle's belief: the level of description that computer programs exemplify is not one adequate to the explanation of mind. My remarks about this have appeared in discussions of perceptual theory that make little if any reference to computer programs per se (Natsoulas 1974; 1977; 1978a; 1978b; 1980). Just as Searle argues for the material basis of mind - "that actual human mental phenomena [depend] on actual physical-chemical properties of actual human brains" - I have argued that the particular concrete nature of perceptual awarenesses, as occurrences in a certain perceptual system, is essential to the references they make to objects, events, or situations in the stimulational environment.
In opposition to Gibson (for example, 1966; 1967; 1972), whose perceptual theory amounts to hypotheses concerning the pickup by perceptual systems of abstract entities called "informational invariants" from the stimulus flux, I have stated:
Perceptual systems work in their own modes to ascribe the detected properties which are specified informationally to the actual physical environment around us. The informational invariants to which a perceptual system resonates are defined abstractly (by Gibson) such that the resonating process itself can exemplify them. But the resonance process is not itself abstract. And characterization of it at the level of informational invariants cannot suffice for the theory of perception. It is crucial to the theory of perception that informational invariants are resonated to in concrete modes that are characteristic of the organism as the kind of perceiving physical system that it is. (Natsoulas 1978b, p. 281)
The latter is crucial to perceptual theory, I have argued, if that theory is to explain the intentionality, or aboutness, of perceptual awarenesses. [See also Ullman: "Against direct perception" BBS 3(3) 1980, this issue.]
And just as Searle summarily rejects the attempt to eliminate intentionality, saying that it does no good to "feign anesthesia," I argued as follows against Dennett's effort to treat intentionality as merely a heuristic overlay on the extensional theory of the nervous system and bodily motions.
In knowing that we are knowing subjects, here is one thing that we know: that we are aware of objects in a way other than the "colorless" way in which we sometimes think of them. The experienced presence of objects makes it difficult if not impossible to claim that perceptions involve only the acquisition of information. . . . It is this . . . kind of presence that makes perceptual aboutness something more than an "interpretation" or "heuristic overlay" to be dispensed with once a complete enough extensional account is at hand. The qualitative being-thereness of objects and scenes . . . is about as easy to doubt as our own existence. (Natsoulas 1977; cf. Searle, p. 261, on "presentational immediacy.")
However, I do not know in what perceptual aboutness consists. I have been making some suggestions, and I believe, with Sperry (1969; 1976), that an objective description of subjective experience is possible in terms of brain function. Such a description should include that feature or those features of perceptual awareness that make it be of (or as if it is of, in the hallucinatory instance) an object, occurrence, or situation in the physical environment or in the perceiver's body outside the nervous system. If the description did not include this feature it would be incomplete, in my view, and in need of further development.
In another recent article on intentionality, Searle (1979a) has written of the unavoidability of "the intentional circle," arguing that any explanation of intentionality that we may come up with will presuppose an understanding of intentionality: "There is no analysis of intentionality into logically necessary and sufficient conditions of the form ' is an intentional state if and only if , and ,' where ' , and ' make no use of intentional notions" (p. 195). I take this to say that the intentionality of mental states is not reducible; but I don't think it is meant, by itself, to rule out the possibility that intentionality might be a property of certain brain processes. Searle could still take the view that it is one of their as yet unknown "ground floor" properties.
But Searle's careful characterization, in the target article, of the relation between brain and mental processes as causal, with mental processes consistently said to be produced by brain processes, gives a different impression. Of course brain processes produce other brain processes, but if he had meant to include mental processes among the latter, would he have written about only the causal properties of the brain in a discussion of the material basis of intentionality?
One is tempted to assume that Searle would advocate some form of interactionism with regard to the relation of mind to brain. I think that his analogy of mental processes to products of biological processes, such as sugar and milk, was intended to illuminate the causal basis of mental processes and not their nature. His statement that intentionality is "a biological phenomenon" was prefaced by "whatever else intentionality is" and was followed by a repetition of his earlier point concerning the material basis of mind (mental processes as produced by brain processes). And I feel quite certain that Searle would not equate mental processes with another of the brain's effects, namely behaviors (see Searle 1979b).
Though it may be tempting to construe Searle's position as a form of dualism, there remains the more probable alternative that he has simply chosen not to take, in these recent writings, a position on the ontological question. He has chosen to deal only with those elements that seem already clear to him as regards the problem of intentionality. However, his emphasis in the target article on intentionality's material basis would seem to be an indication that he is now inching toward a position on the ontological question and the view