
Minds, brains, and programs

John R. Searle
Department of Philosophy, University of California, Berkeley, Calif. 94720

Abstract

This article can be viewed as an attempt to explore the consequences of two propositions. (1) Intentionality in human beings (and animals) is a product of causal features of the brain. I assume this is an empirical fact about the actual causal relations between mental processes and brains. It says simply that certain brain processes are sufficient for intentionality. (2) Instantiating a computer program is never by itself a sufficient condition of intentionality. The main argument of this paper is directed at establishing this claim. The form of the argument is to show how a human agent could instantiate the program and still not have the relevant intentionality. These two propositions have the following consequences: (3) The explanation of how the brain produces intentionality cannot be that it does it by instantiating a computer program. This is a strict logical consequence of 1 and 2. (4) Any mechanism capable of producing intentionality must have causal powers equal to those of the brain. This is meant to be a trivial consequence of 1. (5) Any attempt literally to create intentionality artificially (strong AI) could not succeed just by designing programs but would have to duplicate the causal powers of the human brain. This follows from 2 and 4.

"Could a machine think?" On the argument advanced here only a machine could think, and only very special kinds of machines, namely brains and machines with internal causal powers equivalent to those of brains. And that is why strong AI has little to tell us about thinking, since it is not about machines but about programs, and no program by itself is sufficient for thinking
"机器会思考吗?根据这里提出的论点,只有机器才能思考,而且只能是非常特殊的机器,即大脑和内部因果能力与大脑相当的机器。这就是为什么强人工智能对我们的思维没有什么启发,因为它不是关于机器,而是关于程序的,而任何程序本身都不足以进行思维

Keywords: artificial intelligence; brain; intentionality; mind
What psychological and philosophical significance should we attach to recent efforts at computer simulations of human cognitive capacities? In answering this question, I find it useful to distinguish what I will call "strong" AI from "weak" or "cautious" AI (Artificial Intelligence). According to weak AI, the principal value of the computer in the study of the mind is that it gives us a very powerful tool. For example, it enables us to formulate and test hypotheses in a more rigorous and precise fashion. But according to strong AI, the computer is not merely a tool in the study of the mind; rather, the appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states. In strong AI, because the programmed computer has cognitive states, the programs are not mere tools that enable us to test psychological explanations; rather, the programs are themselves the explanations.
I have no objection to the claims of weak AI, at least as far as this article is concerned. My discussion here will be directed at the claims I have defined as those of strong AI, specifically the claim that the appropriately programmed computer literally has cognitive states and that the programs thereby explain human cognition. When I hereafter refer to AI, I have in mind the strong version, as expressed by these two claims.
I will consider the work of Roger Schank and his colleagues at Yale (Schank & Abelson 1977), because I am more familiar with it than I am with any other similar claims, and because it provides a very clear example of the sort of work I wish to examine. But nothing that follows depends upon the details of Schank's programs. The same arguments would apply to Winograd's SHRDLU (Winograd 1973), Weizenbaum's ELIZA (Weizenbaum 1965), and indeed any Turing machine simulation of human mental phenomena.
Very briefly, and leaving out the various details, one can describe Schank's program as follows: the aim of the program is to simulate the human ability to understand stories. It is characteristic of human beings' story-understanding capacity that they can answer questions about the story even though the information that they give was never explicitly stated in the story. Thus, for example, suppose you are given the following story: "A man went into a restaurant and ordered a hamburger. When the hamburger arrived it was burned to a crisp, and the man stormed out of the restaurant angrily, without paying for the hamburger or leaving a tip." Now, if you are asked "Did the man eat the hamburger?" you will presumably answer, "No, he did not." Similarly, if you are given the following story: "A man went into a restaurant and ordered a hamburger; when the hamburger came he was very pleased with it; and as he left the restaurant he gave the waitress a large tip before paying his bill," and you are asked the question, "Did the man eat the hamburger?," you will presumably answer, "Yes, he ate the hamburger." Now Schank's machines can similarly answer questions about restaurants in this fashion. To do this, they have a "representation" of the sort of information that human beings have about restaurants, which enables them to answer such questions as those above, given these sorts of stories. When the machine is given the story and then asked the question, the machine will print out answers of the sort that we would expect human beings to give if told similar stories. Partisans of strong AI claim that in this question and answer sequence the machine is not only simulating a human ability but also
  1. that the machine can literally be said to understand the story and provide the answers to questions, and
  2. that what the machine and its program do explains the human ability to understand the story and answer questions about it.
Both claims seem to me to be totally unsupported by Schank's work, as I will attempt to show in what follows.
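To make the mechanism at issue concrete, here is a minimal sketch of script-based question answering (in Python; this is an invented illustration, not Schank's actual program, and the script entries and blocking rules are assumptions for the sake of the example). A stored "restaurant script" supplies default events that the story never states, and answering amounts to checking whether the story confirms or blocks each default:

```python
# A toy illustration of script-based story "understanding."
# This is a hedged sketch, not Schank & Abelson's system; the script
# structure and the blocking rules are invented for illustration.

RESTAURANT_SCRIPT = {"enter", "order", "served", "eat", "pay", "leave"}

def infer_events(story_events):
    """Fill in the script's default events unless the story blocks them."""
    events = set(story_events)
    if "food inedible" in events or "storms out angry" in events:
        # Interrupted script: the later default steps did not occur.
        events.discard("eat")
        events.discard("pay")
    else:
        # Normal completion: assume the unstated default steps happened.
        events.update(RESTAURANT_SCRIPT)
    return events

def answer_did_eat(story_events):
    if "eat" in infer_events(story_events):
        return "Yes, he ate the hamburger."
    return "No, he did not."

# The burned-hamburger story and the large-tip story from the text:
print(answer_did_eat({"enter", "order", "served", "food inedible",
                      "storms out angry"}))   # -> No, he did not.
print(answer_did_eat({"enter", "order", "served", "tip",
                      "pay", "leave"}))       # -> Yes, he ate the hamburger.
```

Nothing in the procedure goes beyond mechanically filling in or striking out entries in a table; whether such filling-in could amount to understanding is precisely what is in dispute.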
One way to test any theory of the mind is to ask oneself what it would be like if my mind actually worked on the principles that the theory says all minds work on. Let us apply this test to the Schank program with the following Gedankenexperiment. Suppose that I'm locked in a room and given a large batch of Chinese writing. Suppose furthermore

(as is indeed the case) that I know no Chinese, either written or spoken, and that I'm not even confident that I could recognize Chinese writing as Chinese writing distinct from, say, Japanese writing or meaningless squiggles. To me, Chinese writing is just so many meaningless squiggles. Now suppose further that after this first batch of Chinese writing I am given a second batch of Chinese script together with a set of rules for correlating the second batch with the first batch. The rules are in English, and I understand these rules as well as any other native speaker of English. They enable me to correlate one set of formal symbols with another set of formal symbols, and all that "formal" means here is that I can identify the symbols entirely by their shapes. Now suppose also that I am given a third batch of Chinese symbols together with some instructions, again in English, that enable me to correlate elements of this third batch with the first two batches, and these rules instruct me how to give back certain Chinese symbols with certain sorts of shapes in response to certain sorts of shapes given me in the third batch. Unknown to me, the people who are giving me all of these symbols call the first batch "a script," they call the second batch a "story," and they call the third batch "questions." Furthermore, they call the symbols I give them back in response to the third batch "answers to the questions," and the set of rules in English that they gave me, they call "the program." Now just to complicate the story a little, imagine that these people also give me stories in English, which I understand, and they then ask me questions in English about these stories, and I give them back answers in English. Suppose also that after a while I get so good at following the instructions for manipulating the Chinese symbols and the programmers get so good at writing the programs that from the external point of view - that is, from the point of view of somebody outside the room in which I am locked - my answers to the questions are absolutely indistinguishable from those of native Chinese speakers. Nobody just looking at my answers can tell that I don't speak a word of Chinese. Let us also suppose that my answers to the English questions are, as they no doubt would be, indistinguishable from those of other native English speakers, for the simple reason that I am a native English speaker. From the external point of view - from the point of view of someone reading my "answers" - the answers to the Chinese questions and the English questions are equally good. But in the Chinese case, unlike the English case, I produce the answers by manipulating uninterpreted formal symbols. As far as the Chinese is concerned, I simply behave like a computer; I perform computational operations on formally specified elements. For the purposes of the Chinese, I am simply an instantiation of the computer program.
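The purely formal character of the procedure can be given a minimal sketch (again in Python; the "symbols" and the rule table are invented placeholders, not real Chinese). Everything the man consults is a matter of shape; nothing anywhere in the procedure records what any symbol means:

```python
# A minimal sketch of the room's procedure: pure shape matching.
# The symbol names and the rule table are illustrative placeholders;
# no entry anywhere represents the meaning of any symbol.

RULES = {
    ("squiggle", "squoggle"): "splotch",  # these input shapes -> this output shape
    ("squoggle",): "squiggle",
}

def operate(input_symbols):
    """Return an output shape determined entirely by the input shapes."""
    for pattern, output in RULES.items():
        if all(shape in input_symbols for shape in pattern):
            return output
    return "blank card"

# The operator can execute this flawlessly while understanding nothing:
print(operate(["squiggle", "squoggle"]))  # -> splotch
```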
Now the claims made by strong AI are that the programmed computer understands the stories and that the program in some sense explains human understanding. But we are now in a position to examine these claims in light of our thought experiment.
  1. As regards the first claim, it seems to me quite obvious in the example that I do not understand a word of the Chinese stories. I have inputs and outputs that are indistinguishable from those of the native Chinese speaker, and I can have any formal program you like, but I still understand nothing. For the same reasons, Schank's computer understands nothing of any stories, whether in Chinese, English, or whatever, since in the Chinese case the computer is me, and in cases where the computer is not me, the computer has nothing more than I have in the case where I understand nothing.
  2. As regards the second claim, that the program explains human understanding, we can see that the computer and its program do not provide sufficient conditions of understanding since the computer and the program are functioning, and there is no understanding. But does it even provide a necessary condition or a significant contribution to understanding? One of the claims made by the supporters of strong AI is that when I understand a story in English, what I am doing is exactly the same - or perhaps more of the same - as what I was doing in manipulating the Chinese symbols. It is simply more formal symbol manipulation that distinguishes the case in English, where I do understand, from the case in Chinese, where I don't. I have not demonstrated that this claim is false, but it would certainly appear an incredible claim in the example. Such plausibility as the claim has derives from the supposition that we can construct a program that will have the same inputs and outputs as native speakers, and in addition we assume that speakers have some level of description where they are also instantiations of a program. On the basis of these two assumptions we assume that even if Schank's program isn't the whole story about understanding, it may be part of the story. Well, I suppose that is an empirical possibility, but not the slightest reason has so far been given to believe that it is true, since what is suggested - though certainly not demonstrated - by the example is that the computer program is simply irrelevant to my understanding of the story. In the Chinese case I have everything that artificial intelligence can put into me by way of a program, and I understand nothing; in the English case I understand everything, and there is so far no reason at all to suppose that my understanding has anything to do with computer programs, that is, with computational operations on purely formally specified elements. As long as the program is defined in terms of computational operations on purely formally defined elements, what the example suggests is that these by themselves have no interesting connection with understanding. They are certainly not sufficient conditions, and not the slightest reason has been given to suppose that they are necessary conditions or even that they make a significant contribution to understanding. Notice that the force of the argument is not simply that different machines can have the same input and output while operating on different formal principles - that is not the point at all. Rather, whatever purely formal principles you put into the computer, they will not be sufficient for understanding, since a human will be able to follow the formal principles without understanding anything. No reason whatever has been offered to suppose that such principles are necessary or even contributory, since no reason has been given to suppose that when I understand English I am operating with any formal program at all.
Well, then, what is it that I have in the case of the English sentences that I do not have in the case of the Chinese sentences? The obvious answer is that I know what the former mean, while I haven't the faintest idea what the latter mean. But in what does this consist and why couldn't we give it to a machine, whatever it is? I will return to this question later, but first I want to continue with the example.
I have had occasion to present this example to several workers in artificial intelligence, and, interestingly, they do not seem to agree on what the proper reply to it is. I get a surprising variety of replies, and in what follows I will consider the most common of these (specified along with their geographic origins).
But first I want to block some common misunderstandings about "understanding": in many of these discussions one finds a lot of fancy footwork about the word "understanding." My critics point out that there are many different degrees of understanding; that "understanding" is not a simple two-place predicate; that there are even different kinds and levels of understanding, and often the law of excluded middle doesn't even apply in a straightforward way to statements of the form "x understands y"; that in many cases it is a matter for decision and not a simple matter of fact whether x understands y; and so on. To all of these points I want to say: of course, of course. But they have nothing to do with the

points at issue. There are clear cases in which "understanding" literally applies and clear cases in which it does not apply; and these two sorts of cases are all I need for this argument. I understand stories in English; to a lesser degree I can understand stories in French; to a still lesser degree, stories in German; and in Chinese, not at all. My car and my adding machine, on the other hand, understand nothing: they are not in that line of business. We often attribute "understanding" and other cognitive predicates by metaphor and analogy to cars, adding machines, and other artifacts, but nothing is proved by such attributions. We say, "The door knows when to open because of its photoelectric cell," "The adding machine knows how (understands how, is able) to do addition and subtraction but not division," and "The thermostat perceives changes in the temperature." The reason we make these attributions is quite interesting, and it has to do with the fact that in artifacts we extend our own intentionality; our tools are extensions of our purposes, and so we find it natural to make metaphorical attributions of intentionality to them; but I take it no philosophical ice is cut by such examples. The sense in which an automatic door "understands instructions" from its photoelectric cell is not at all the sense in which I understand English. If the sense in which Schank's programmed computers understand stories is supposed to be the metaphorical sense in which the door understands, and not the sense in which I understand English, the issue would not be worth discussing. But Newell and Simon (1963) write that the kind of cognition they claim for computers is exactly the same as for human beings. I like the straightforwardness of this claim, and it is the sort of claim I will be considering. I will argue that in the literal sense the programmed computer understands what the car and the adding machine understand, namely, exactly nothing. The computer understanding is not just (like my understanding of German) partial or incomplete; it is zero.
Now to the replies:
I. The systems reply (Berkeley). "While it is true that the individual person who is locked in the room does not understand the story, the fact is that he is merely part of a whole system, and the system does understand the story. The person has a large ledger in front of him in which are written the rules, he has a lot of scratch paper and pencils for doing calculations, he has 'data banks' of sets of Chinese symbols. Now, understanding is not being ascribed to the mere individual; rather it is being ascribed to this whole system of which he is a part."
My response to the systems theory is quite simple: let the individual internalize all of these elements of the system. He memorizes the rules in the ledger and the data banks of Chinese symbols, and he does all the calculations in his head. The individual then incorporates the entire system. There isn't anything at all to the system that he does not encompass. We can even get rid of the room and suppose he works outdoors. All the same, he understands nothing of the Chinese, and a fortiori neither does the system, because there isn't anything in the system that isn't in him. If he doesn't understand, then there is no way the system could understand because the system is just a part of him.
Actually I feel somewhat embarrassed to give even this answer to the systems theory because the theory seems to me so implausible to start with. The idea is that while a person doesn't understand Chinese, somehow the conjunction of that person and bits of paper might understand Chinese. It is not easy for me to imagine how someone who was not in the grip of an ideology would find the idea at all plausible. Still, I think many people who are committed to the ideology of strong AI will in the end be inclined to say something very much like this; so let us pursue it a bit further. According to one version of this view, while the man in the internalized systems example doesn't understand Chinese in the sense that a native Chinese speaker does (because, for example, he doesn't know that the story refers to restaurants and hamburgers, etc.), still "the man as a formal symbol manipulation system" really does understand Chinese. The subsystem of the man that is the formal symbol manipulation system for Chinese should not be confused with the subsystem for English.
So there are really two subsystems in the man; one understands English, the other Chinese, and "it's just that the two systems have little to do with each other." But, I want to reply, not only do they have little to do with each other, they are not even remotely alike. The subsystem that understands English (assuming we allow ourselves to talk in this jargon of "subsystems" for a moment) knows that the stories are about restaurants and eating hamburgers, he knows that he is being asked questions about restaurants and that he is answering questions as best he can by making various inferences from the content of the story, and so on. But the Chinese system knows none of this. Whereas the English subsystem knows that "hamburgers" refers to hamburgers, the Chinese subsystem knows only that "squiggle squiggle" is followed by "squoggle squoggle." All he knows is that various formal symbols are being introduced at one end and manipulated according to rules written in English, and other symbols are going out at the other end. The whole point of the original example was to argue that such symbol manipulation by itself couldn't be sufficient for understanding Chinese in any literal sense because the man could write "squoggle squoggle" after "squiggle squiggle" without understanding anything in Chinese. And it doesn't meet that argument to postulate subsystems within the man, because the subsystems are no better off than the man was in the first place; they still don't have anything even remotely like what the English-speaking man (or subsystem) has. Indeed, in the case as described, the Chinese subsystem is simply a part of the English subsystem, a part that engages in meaningless symbol manipulation according to rules in English.
Let us ask ourselves what is supposed to motivate the systems reply in the first place; that is, what independent grounds are there supposed to be for saying that the agent must have a subsystem within him that literally understands stories in Chinese? As far as I can tell the only grounds are that in the example I have the same input and output as native Chinese speakers and a program that goes from one to the other. But the whole point of the examples has been to try to show that that couldn't be sufficient for understanding, in the sense in which I understand stories in English, because a person, and hence the set of systems that go to make up a person, could have the right combination of input, output, and program and still not understand anything in the relevant literal sense in which I understand English. The only motivation for saying there must be a subsystem in me that understands Chinese is that I have a program and I can pass the Turing test; I can fool native Chinese speakers. But precisely one of the points at issue is the adequacy of the Turing test. The example shows that there could be two "systems," both of which pass the Turing test, but only one of which understands; and it is no argument against this point to say that since they both pass the Turing test they must both understand, since this claim fails to meet the argument that the system in me that understands English has a great deal more than the system that merely processes Chinese. In short, the systems reply simply begs the question by insisting without argument that the system must understand Chinese.
Furthermore, the systems reply would appear to lead to consequences that are independently absurd. If we are to conclude that there must be cognition in me on the grounds that I have a certain sort of input and output and a program
此外,"系统 "的回答似乎会导致一些独立存在的荒谬后果。如果我们以我有某种输入和输出以及一个程序为由,就得出结论说我一定有认知能力

in between, then it looks like all sorts of noncognitive subsystems are going to turn out to be cognitive. For example, there is a level of description at which my stomach does information processing, and it instantiates any number of computer programs, but I take it we do not want to say that it has any understanding [cf. Pylyshyn: "Computation and Cognition" BBS 3(1) 1980]. But if we accept the systems reply, then it is hard to see how we avoid saying that stomach, heart, liver, and so on, are all understanding subsystems, since there is no principled way to distinguish the motivation for saying the Chinese subsystem understands from saying that the stomach understands. It is, by the way, not an answer to this point to say that the Chinese system has information as input and output and the stomach has food and food products as input and output, since from the point of view of the agent, from my point of view, there is no information in either the food or the Chinese - the Chinese is just so many meaningless squiggles. The information in the Chinese case is solely in the eyes of the programmers and the interpreters, and there is nothing to prevent them from treating the input and output of my digestive organs as information if they so desire.
This last point bears on some independent problems in strong AI, and it is worth digressing for a moment to explain it. If strong AI is to be a branch of psychology, then it must be able to distinguish those systems that are genuinely mental from those that are not. It must be able to distinguish the principles on which the mind works from those on which nonmental systems work; otherwise it will offer us no explanations of what is specifically mental about the mental. And the mental-nonmental distinction cannot be just in the eye of the beholder but it must be intrinsic to the systems; otherwise it would be up to any beholder to treat people as nonmental and, for example, hurricanes as mental if he likes. But quite often in the AI literature the distinction is blurred in ways that would in the long run prove disastrous to the claim that AI is a cognitive inquiry. McCarthy, for example, writes, "Machines as simple as thermostats can be said to have beliefs, and having beliefs seems to be a characteristic of most machines capable of problem solving performance" (McCarthy 1979). Anyone who thinks strong AI has a chance as a theory of the mind ought to ponder the implications of that remark. We are asked to accept it as a discovery of strong AI that the hunk of metal on the wall that we use to regulate the temperature has beliefs in exactly the same sense that we, our spouses, and our children have beliefs, and furthermore that "most" of the other machines in the room - telephone, tape recorder, adding machine, electric light switch - also have beliefs in this literal sense. It is not the aim of this article to argue against McCarthy's point, so I will simply assert the following without argument. The study of the mind starts with such facts as that humans have beliefs, while thermostats, telephones, and adding machines don't. If you get a theory that denies this point you have produced a counterexample to the theory and the theory is false. One gets the impression that people in AI who write this sort of thing think they can get away with it because they don't really take it seriously, and they don't think anyone else will either. I propose, for a moment at least, to take it seriously. Think hard for one minute about what would be necessary to establish that that hunk of metal on the wall over there had real beliefs, beliefs with direction of fit, propositional content, and conditions of satisfaction; beliefs that had the possibility of being strong beliefs or weak beliefs; nervous, anxious, or secure beliefs; dogmatic, rational, or superstitious beliefs; blind faiths or hesitant cogitations; any kind of beliefs. The thermostat is not a candidate. Neither is stomach, liver, adding machine, or telephone. However, since we are taking the idea seriously, notice that its truth would be fatal to strong AI's claim to be a science of the mind. For now the mind is everywhere. What we wanted to know is what distinguishes the mind from thermostats and livers. And if McCarthy were right, strong AI wouldn't have a hope of telling us that.
II. The Robot Reply (Yale). "Suppose we wrote a different kind of program from Schank's program. Suppose we put a computer inside a robot, and this computer would not just take in formal symbols as input and give out formal symbols as output, but rather would actually operate the robot in such a way that the robot does something very much like perceiving, walking, moving about, hammering nails, eating, drinking - anything you like. The robot would, for example, have a television camera attached to it that enabled it to 'see," it would have arms and legs that enabled it to 'act,' and all of this would be controlled by its computer 'brain.' Such a robot would, unlike Schank's computer, have genuine understanding and other mental states."
The first thing to notice about the robot reply is that it tacitly concedes that cognition is not solely a matter of formal symbol manipulation, since this reply adds a set of causal relations with the outside world [cf. Fodor: "Methodological Solipsism" BBS 3(1) 1980]. But the answer to the robot reply is that the addition of such "perceptual" and "motor" capacities adds nothing by way of understanding, in particular, or intentionality, in general, to Schank's original program. To see this, notice that the same thought experiment applies to the robot case. Suppose that instead of the computer inside the robot, you put me inside the room and, as in the original Chinese case, you give me more Chinese symbols with more instructions in English for matching Chinese symbols to Chinese symbols and feeding back Chinese symbols to the outside. Suppose, unknown to me, some of the Chinese symbols that come to me come from a television camera attached to the robot and other Chinese symbols that I am giving out serve to make the motors inside the robot move the robot's legs or arms. It is important to emphasize that all I am doing is manipulating formal symbols: I know none of these other facts. I am receiving "information" from the robot's "perceptual" apparatus, and I am giving out "instructions" to its motor apparatus without knowing either of these facts. I am the robot's homunculus, but unlike the traditional homunculus, I don't know what's going on. I don't understand anything except the rules for symbol manipulation. Now in this case I want to say that the robot has no intentional states at all; it is simply moving about as a result of its electrical wiring and its program. And furthermore, by instantiating the program I have no intentional states of the relevant type. All I do is follow formal instructions about manipulating formal symbols.
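Continuing the placeholder sketch from above, the robot case changes only where the symbols come from and where they go, not what the operator does (the channel names here are invented assumptions, not any real robot interface):

```python
# The same shape-matching loop, now wired to opaque channels.
# From the operator's point of view nothing distinguishes "camera"
# symbols from any other symbols; the names below are illustrative.

RULES = {"squiggle": "squoggle", "splotch": "blip"}

def receive():
    """Stands in for symbols that, unknown to the operator, come from a camera."""
    return "squiggle"

def emit(symbol):
    """Stands in for symbols that, unknown to the operator, drive the motors."""
    print("motor channel receives:", symbol)

# The operator's entire job, with or without the robot around him:
emit(RULES[receive()])  # -> motor channel receives: squoggle
```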
III. The brain simulator reply (Berkeley and M.I.T.). "Suppose we design a program that doesn't represent information that we have about the world, such as the information in Schank's scripts, but simulates the actual sequence of neuron firings at the synapses of the brain of a native Chinese speaker when he understands stories in Chinese and gives answers to them. The machine takes in Chinese stories and questions about them as input, it simulates the formal structure of actual Chinese brains in processing these stories, and it gives out Chinese answers as outputs. We can even imagine that the machine operates, not with a single serial program, but with a whole set of programs operating in parallel, in the manner that actual human brains presumably operate when they process natural language. Now surely in such a case we would have to say that the machine understood the stories; and if we refuse to say that, wouldn't we also have to deny that native Chinese speakers understood the stories? At the level of the synapses, what would or could be different about the program of the computer and the program of the Chinese brain?"
Before countering this reply I want to digress to note that it is an odd reply for any partisan of artificial intelligence (or functionalism, etc.) to make: I thought the whole idea of strong AI is that we don't need to know how the brain works to know how the mind works. The basic hypothesis, or so I had supposed, was that there is a level of mental operations consisting of computational processes over formal elements that constitute the essence of the mental and can be realized in all sorts of different brain processes, in the same way that any computer program can be realized in different computer hardwares: on the assumptions of strong AI, the mind is to the brain as the program is to the hardware, and thus we can understand the mind without doing neurophysiology. If we had to know how the brain worked to do AI, we wouldn't bother with AI. However, even getting this close to the operation of the brain is still not sufficient to produce understanding. To see this, imagine that instead of a monolingual man in a room shuffling symbols we have the man operate an elaborate set of water pipes with valves connecting them. When the man receives the Chinese symbols, he looks up in the program, written in English, which valves he has to turn on and off. Each water connection corresponds to a synapse in the Chinese brain, and the whole system is rigged up so that after doing all the right firings, that is after turning on all the right faucets, the Chinese answers pop out at the output end of the series of pipes.
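What simulating "the formal structure of the sequence of neuron firings" amounts to can be sketched with a toy threshold network (an invented illustration, not real neurophysiology; the wiring, threshold, and encoding are all assumptions). The same formal pattern of firings could be realized in neurons, in water pipes and valves, or in the dictionary entries below:

```python
# A toy simulation of the formal structure of a sequence of "firings."
# Each unit fires when the summed weight of its active inputs exceeds
# a threshold; the network and the encoding are invented for illustration.

CONNECTIONS = {            # unit -> list of (input_unit, weight)
    "u1": [],
    "u2": [],
    "u3": [("u1", 1.0), ("u2", 1.0)],
    "out": [("u3", 1.0)],
}
THRESHOLD = 0.5

def step(active_units):
    """One update of the whole net: purely formal, shape-only bookkeeping."""
    return {
        unit
        for unit, inputs in CONNECTIONS.items()
        if sum(w for src, w in inputs if src in active_units) > THRESHOLD
    }

state = {"u1", "u2"}       # stands in for an encoded Chinese question
for _ in range(2):         # run the firing sequence forward
    state = step(state)
print("out" in state)      # -> True: the encoded "answer" pops out at the end
```

Whether the units are valves or neurons makes no difference to this bookkeeping, which is the point of the water pipe variation.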
Now where is the understanding in this system? It takes Chinese as input, it simulates the formal structure of the synapses of the Chinese brain, and it gives Chinese as output. But the man certainly doesn't understand Chinese, and neither do the water pipes, and if we are tempted to adopt what I think is the absurd view that somehow the conjunction of man and water pipes understands, remember that in principle the man can internalize the formal structure of the water pipes and do all the "neuron firings" in his imagination. The problem with the brain simulator is that it is simulating the wrong things about the brain. As long as it simulates only the formal structure of the sequence of neuron firings at the synapses, it won't have simulated what matters about the brain, namely its causal properties, its ability to produce intentional states. And that the formal properties are not sufficient for the causal properties is shown by the water pipe example: we can have all the formal properties carved off from the relevant neurobiological causal properties.
IV. The combination reply (Berkeley and Stanford). "While each of the previous three replies might not be completely convincing by itself as a refutation of the Chinese room counterexample, if you take all three together they are collectively much more convincing and even decisive. Imagine a robot with a brain-shaped computer lodged in its cranial cavity, imagine the computer programmed with all the synapses of a human brain, imagine the whole behavior of the robot is indistinguishable from human behavior, and now think of the whole thing as a unified system and not just as a computer with inputs and outputs. Surely in such a case we would have to ascribe intentionality to the system."
I entirely agree that in such a case we would find it rational and indeed irresistible to accept the hypothesis that the robot had intentionality, as long as we knew nothing more about it. Indeed, besides appearance and behavior, the other elements of the combination are really irrelevant. If we could build a robot whose behavior was indistinguishable over a large range from human behavior, we would attribute intentionality to it, pending some reason not to. We wouldn't need to know in advance that its computer brain was a formal analogue of the human brain.
But I really don't see that this is any help to the claims of strong AI; and here's why: According to strong AI, instantiating a formal program with the right input and output is a sufficient condition of, indeed is constitutive of, intentionality. As Newell (1979) puts it, the essence of the mental is the operation of a physical symbol system. But the attributions of intentionality that we make to the robot in this example have nothing to do with formal programs. They are simply based on the assumption that if the robot looks and behaves sufficiently like us, then we would suppose, until proven otherwise, that it must have mental states like ours that cause and are expressed by its behavior and it must have an inner mechanism capable of producing such mental states. If we knew independently how to account for its behavior without such assumptions we would not attribute intentionality to it, especially if we knew it had a formal program. And this is precisely the point of my earlier reply to objection II.
Suppose we knew that the robot's behavior was entirely accounted for by the fact that a man inside it was receiving uninterpreted formal symbols from the robot's sensory receptors and sending out uninterpreted formal symbols to its motor mechanisms, and the man was doing this symbol manipulation in accordance with a bunch of rules. Furthermore, suppose the man knows none of these facts about the robot, all he knows is which operations to perform on which meaningless symbols. In such a case we would regard the robot as an ingenious mechanical dummy. The hypothesis that the dummy has a mind would now be unwarranted and unnecessary, for there is now no longer any reason to ascribe intentionality to the robot or to the system of which it is a part (except of course for the man's intentionality in manipulating the symbols). The formal symbol manipulations go on, the input and output are correctly matched, but the only real locus of intentionality is the man, and he doesn't know any of the relevant intentional states; he doesn't, for example, see what comes into the robot's eyes, he doesn't intend to move the robot's arm, and he doesn't understand any of the remarks made to or by the robot. Nor, for the reasons stated earlier, does the system of which man and robot are a part.
To see this point, contrast this case with cases in which we find it completely natural to ascribe intentionality to members of certain other primate species such as apes and monkeys and to domestic animals such as dogs. The reasons we find it natural are, roughly, two: we can't make sense of the animal's behavior without the ascription of intentionality, and we can see that the beasts are made of similar stuff to ourselves - that is an eye, that a nose, this is its skin, and so on. Given the coherence of the animal's behavior and the assumption of the same causal stuff underlying it, we assume both that the animal must have mental states underlying its behavior, and that the mental states must be produced by mechanisms made out of the stuff that is like our stuff. We would certainly make similar assumptions about the robot unless we had some reason not to, but as soon as we knew that the behavior was the result of a formal program, and that the actual causal properties of the physical substance were irrelevant we would abandon the assumption of intentionality. [See "Cognition and Consciousness in Nonhuman Species" BBS I(4) 1978.]
There are two other responses to my example that come up frequently (and so are worth discussing) but really miss the point.
V. The other minds reply (Yale). "How do you know that other people understand Chinese or anything else? Only by their behavior. Now the computer can pass the behavioral tests as well as they can (in principle), so if you are going to attribute cognition to other people you must in principle also attribute it to computers."
This objection really is only worth a short reply. The problem in this discussion is not about how I know that other people have cognitive states, but rather what it is that I am

attributing to them when I attribute cognitive states to them. The thrust of the argument is that it couldn't be just computational processes and their output because the computational processes and their output can exist without the cognitive state. It is no answer to this argument to feign anesthesia. In "cognitive sciences" one presupposes the reality and knowability of the mental in the same way that in physical sciences one has to presuppose the reality and knowability of physical objects.
VI. The many mansions reply (Berkeley). "Your whole argument presupposes that AI is only about analogue and digital computers. But that just happens to be the present state of technology. Whatever these causal processes are that you say are essential for intentionality (assuming you are right), eventually we will be able to build devices that have these causal processes, and that will be artificial intelligence. So your arguments are in no way directed at the ability of artificial intelligence to produce and explain cognition."
I really have no objection to this reply save to say that it in effect trivializes the project of strong AI by redefining it as whatever artificially produces and explains cognition. The interest of the original claim made on behalf of artificial intelligence is that it was a precise, well defined thesis: mental processes are computational processes over formally defined elements. I have been concerned to challenge that thesis. If the claim is redefined so that it is no longer that thesis, my objections no longer apply because there is no longer a testable hypothesis for them to apply to.
Let us now return to the question I promised I would try to answer: granted that in my original example I understand the English and I do not understand the Chinese, and granted therefore that the machine doesn't understand either English or Chinese, still there must be something about me that makes it the case that I understand English and a corresponding something lacking in me that makes it the case that I fail to understand Chinese. Now why couldn't we give those somethings, whatever they are, to a machine?
I see no reason in principle why we couldn't give a machine the capacity to understand English or Chinese, since in an important sense our bodies with our brains are precisely such machines. But I do see very strong arguments for saying that we could not give such a thing to a machine where the operation of the machine is defined solely in terms of computational processes over formally defined elements; that is, where the operation of the machine is defined as an instantiation of a computer program. It is not because I am the instantiation of a computer program that I am able to understand English and have other forms of intentionality (I am, I suppose, the instantiation of any number of computer programs), but as far as we know it is because I am a certain sort of organism with a certain biological (i.e. chemical and physical) structure, and this structure, under certain conditions, is causally capable of producing perception, action, understanding, learning, and other intentional phenomena. And part of the point of the present argument is that only something that had those causal powers could have that intentionality. Perhaps other physical and chemical processes could produce exactly these effects; perhaps, for example, Martians also have intentionality but their brains are made of different stuff. That is an empirical question, rather like the question whether photosynthesis can be done by something with a chemistry different from that of chlorophyll.
But the main point of the present argument is that no purely formal model will ever be sufficient by itself for intentionality because the formal properties are not by themselves constitutive of intentionality, and they have by themselves no causal powers except the power, when instantiated, to produce the next stage of the formalism when the machine is running. And any other causal properties that particular realizations of the formal model have are irrelevant to the formal model, because we can always put the same formal model in a different realization where those causal properties are obviously absent. Even if, by some miracle, Chinese speakers exactly realize Schank's program, we can put the same program in English speakers, water pipes, or computers, none of which understand Chinese, the program notwithstanding.
What matters about brain operations is not the formal shadow cast by the sequence of synapses but rather the actual properties of the sequences. All the arguments for the strong version of artificial intelligence that I have seen insist on drawing an outline around the shadows cast by cognition and then claiming that the shadows are the real thing.
By way of concluding I want to try to state some of the general philosophical points implicit in the argument. For clarity I will try to do it in a question and answer fashion, and I begin with that old chestnut of a question:
"Could a machine think?"
"机器会思考吗?"
The answer is, obviously, yes. We are precisely such machines.
"Yes, but could an artifact, a man-made machine, think?"
"是的,但人造机器能思考吗?"
Assuming it is possible to produce artificially a machine with a nervous system, neurons with axons and dendrites, and all the rest of it, sufficiently like ours, again the answer to the question seems to be obviously, yes. If you can exactly duplicate the causes, you could duplicate the effects. And indeed it might be possible to produce consciousness, intentionality, and all the rest of it using some other sorts of chemical principles than those that human beings use. It is, as I said, an empirical question.
"OK, but could a digital computer think?"
"好吧,但数字计算机会思考吗?"
If by "digital computer" we mean anything at all that has a level of description where it can correctly be described as the instantiation of a computer program, then again the answer is, of course, yes, since we are the instantiations of any number of computer programs, and we can think.
"But could something think, understand, and so on solely in virtue of being a computer with the right sort of program? Could instantiating a program, the right program of course, by itself be a sufficient condition of understanding?"
"但是,某物是否能够思考、理解等,仅仅因为它是一台计算机,拥有正确的程序?将程序实例化,当然是正确的程序,本身就能成为理解的充分条件吗?
This I think is the right question to ask, though it is usually confused with one or more of the earlier questions, and the answer to it is no.
"Why not?" "为什么不呢?"
Because the formal symbol manipulations by themselves don't have any intentionality; they are quite meaningless; they aren't even symbol manipulations, since the symbols don't symbolize anything. In the linguistic jargon, they have only a syntax but no semantics. Such intentionality as computers appear to have is solely in the minds of those who program them and those who use them, those who send in the input and those who interpret the output.
The aim of the Chinese room example was to try to show this by showing that as soon as we put something into the system that really does have intentionality (a man), and we program him with the formal program, you can see that the formal program carries no additional intentionality. It adds nothing, for example, to a man's ability to understand Chinese.
Precisely that feature of AI that seemed so appealing - the distinction between the program and the realization - proves fatal to the claim that simulation could be duplication. The distinction between the program and its realization in the hardware seems to be parallel to the distinction between the level of mental operations and the level of brain operations. And if we could describe the level of mental operations as a formal program, then it seems we could describe what was essential about the mind without doing either introspective psychology or neurophysiology of the brain. But the equation "mind is to brain as program is to hardware" breaks down at several points, among them the following three:
First, the distinction between program and realization has the consequence that the same program could have all sorts of crazy realizations that had no form of intentionality. Weizenbaum (1976, Ch. 2), for example, shows in detail how to construct a computer using a roll of toilet paper and a pile of small stones. Similarly, the Chinese story understanding program can be programmed into a sequence of water pipes, a set of wind machines, or a monolingual English speaker, none of which thereby acquires an understanding of Chinese. Stones, toilet paper, wind, and water pipes are the wrong kind of stuff to have intentionality in the first place - only something that has the same causal powers as brains can have intentionality - and though the English speaker has the right kind of stuff for intentionality you can easily see that he doesn't get any extra intentionality by memorizing the program, since memorizing it won't teach him Chinese.
Second, the program is purely formal, but the intentional states are not in that way formal. They are defined in terms of their content, not their form. The belief that it is raining, for example, is not defined as a certain formal shape, but as a certain mental content with conditions of satisfaction, a direction of fit (see Searle 1979), and the like. Indeed the belief as such hasn't even got a formal shape in this syntactic sense, since one and the same belief can be given an indefinite number of different syntactic expressions in different linguistic systems.
Third, as I mentioned before, mental states and events are literally a product of the operation of the brain, but the program is not in that way a product of the computer.
"Well if programs are in no way constitutive of mental processes, why have so many people believed the converse? That at least needs some explanation."
"好吧,如果程序绝不是心理过程的组成部分,那么为什么有那么多人相信相反的说法呢?这至少需要一些解释。
I don't really know the answer to that one. The idea that computer simulations could be the real thing ought to have seemed suspicious in the first place because the computer isn't confined to simulating mental operations, by any means. No one supposes that computer simulations of a five-alarm fire will burn the neighborhood down or that a computer simulation of a rainstorm will leave us all drenched. Why on earth would anyone suppose that a computer simulation of understanding actually understood anything? It is sometimes said that it would be frightfully hard to get computers to feel pain or fall in love, but love and pain are neither harder nor easier than cognition or anything else. For simulation, all you need is the right input and output and a program in the middle that transforms the former into the latter. That is all the computer has for anything it does. To confuse simulation with duplication is the same mistake, whether it is pain, love, cognition, fires, or rainstorms.
Still, there are several reasons why AI must have seemed - and to many people perhaps still does seem - in some way to reproduce and thereby explain mental phenomena, and I believe we will not succeed in removing these illusions until we have fully exposed the reasons that give rise to them.
First, and perhaps most important, is a confusion about the notion of "information processing": many people in cognitive science believe that the human brain, with its mind, does something called "information processing," and analogously the computer with its program does information processing; but fires and rainstorms, on the other hand, don't do information processing at all. Thus, though the computer can simulate the formal features of any process whatever, it stands in a special relation to the mind and brain because when the computer is properly programmed, ideally with the same program as the brain, the information processing is identical in the two cases, and this information processing is really the essence of the mental. But the trouble with this argument is that it rests on an ambiguity in the notion of "information." In the sense in which people "process information" when they reflect, say, on problems in arithmetic or when they read and answer questions about stories, the programmed computer does not do "information processing." Rather, what it does is manipulate formal symbols. The fact that the programmer and the interpreter of the computer output use the symbols to stand for objects in the world is totally beyond the scope of the computer. The computer, to repeat, has a syntax but no semantics. Thus, if you type into the computer "2 plus 2 equals?" it will type out "4." But it has no idea that "4" means 4 or that it means anything at all. And the point is not that it lacks some second-order information about the interpretation of its first-order symbols, but rather that its first-order symbols don't have any interpretations as far as the computer is concerned. All the computer has is more symbols. The introduction of the notion of "information processing" therefore produces a dilemma: either we construe the notion of "information processing" in such a way that it implies intentionality as part of the process or we don't. If the former, then the programmed computer does not do information processing, it only manipulates formal symbols. If the latter, then, though the computer does information processing, it is only doing so in the sense in which adding machines, typewriters, stomachs, thermostats, rainstorms, and hurricanes do information processing; namely, they have a level of description at which we can describe them as taking information in at one end, transforming it, and producing information as output. But in this case it is up to outside observers to interpret the input and output as information in the ordinary sense. And no similarity is established between the computer and the brain in terms of any similarity of information processing.
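The "2 plus 2" point can be made vivid with a minimal sketch (mine, not Searle's; the rule table and function are invented for illustration). Everything below is shape-matching on uninterpreted marks:

```python
# Illustrative sketch only: a "computer" with a syntax but no semantics.
# It pairs input symbol strings with output symbol strings; nothing in it
# treats "4" as standing for the number four.
RULES = {
    "2 plus 2 equals?": "4",
    "3 plus 4 equals?": "7",
}

def respond(marks: str) -> str:
    # Pure lookup over uninterpreted marks; whatever meaning the exchange
    # has is supplied by the people typing the input and reading the output.
    return RULES.get(marks, "?")

print(respond("2 plus 2 equals?"))  # prints: 4
```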
Second, in much of AI there is a residual behaviorism or operationalism. Since appropriately programmed computers can have input-output patterns similar to those of human beings, we are tempted to postulate mental states in the computer similar to human mental states. But once we see that it is both conceptually and empirically possible for a system to have human capacities in some realm without having any intentionality at all, we should be able to overcome this impulse. My desk adding machine has calculating capacities, but no intentionality, and in this paper I have tried to show that a system could have input and output capabilities that duplicated those of a native Chinese speaker and still not understand Chinese, regardless of how it was programmed. The Turing test is typical of the tradition in being unashamedly behavioristic and operationalistic, and I believe that if AI workers totally repudiated behaviorism and operationalism much of the confusion between simulation and duplication would be eliminated.
Third, this residual operationalism is joined to a residual form of dualism; indeed strong AI only makes sense given the dualistic assumption that, where the mind is concerned, the brain doesn't matter. In strong AI (and in functionalism, as well) what matters are programs, and programs are independent of their realization in machines; indeed, as far as AI is concerned, the same program could be realized by an electronic machine, a Cartesian mental substance, or a Hegelian world spirit. The single most surprising discovery that I have made in discussing these issues is that many AI workers are quite shocked by my idea that actual human mental phenomena might be dependent on actual physical-chemical properties of actual human brains. But if you think about it a minute you can see that I should not have been surprised; for unless you accept some form of dualism, the strong AI project hasn't got a chance. The project is to reproduce and explain the mental by designing programs, but unless the mind is not only conceptually but empirically independent of the brain you couldn't carry out the project, for the program is completely independent of any realization. Unless you believe that the mind is separable from the brain both conceptually and empirically - dualism in a strong form - you cannot hope to reproduce the mental by writing and running programs since programs must be independent of brains or any other particular forms of instantiation. If mental operations consist in computational operations on formal symbols, then it follows that they have no interesting connection with the brain; the only connection would be that the brain just happens to be one of the indefinitely many types of machines capable of instantiating the program. This form of dualism is not the traditional Cartesian variety that claims there are two sorts of substances, but it is Cartesian in the sense that it insists that what is specifically mental about the mind has no intrinsic connection with the actual properties of the brain. This underlying dualism is masked from us by the fact that AI literature contains frequent fulminations against "dualism"; what the authors seem to be unaware of is that their position presupposes a strong version of dualism.
"Could a machine think?" My own view is that only a machine could think, and indeed only very special kinds of machines, namely brains and machines that had the same causal powers as brains. And that is the main reason strong AI has had little to tell us about thinking, since it has nothing to tell us about machines. By its own definition, it is about programs, and programs are not machines. Whatever else intentionality is, it is a biological phenomenon, and it is as likely to be as causally dependent on the specific biochemistry of its origins as lactation, photosynthesis, or any other biological phenomena. No one would suppose that we could produce milk and sugar by running a computer simulation of the formal sequences in lactation and photosynthesis, but where the mind is concerned many people are willing to believe in such a miracle because of a deep and abiding dualism: the mind they suppose is a matter of formal processes and is independent of quite specific material causes in the way that milk and sugar are not.
"机器会思考吗?"我自己的观点是,只有机器才会思考,事实上只有非常特殊的机器才会思考,即大脑和与大脑具有相同因果能力的机器。这也是强人工智能对我们的思维几乎没有帮助的主要原因,因为它对机器没有任何帮助。根据它自己的定义,它是关于程序的,而程序不是机器。无论意向性是什么,它都是一种生物现象,它就像哺乳、光合作用或任何其他生物现象一样,很可能在因果关系上依赖于其起源的特定生物化学。没有人会认为我们可以通过在计算机上模拟泌乳和光合作用的形式序列来生产牛奶和糖,但就心智而言,许多人却愿意相信这样的奇迹,因为他们有一种深刻而持久的二元论:他们认为心智是形式过程的问题,独立于相当具体的物质原因,而牛奶和糖却不是。
In defense of this dualism the hope is often expressed that the brain is a digital computer (early computers, by the way, were often called "electronic brains"). But that is no help. Of course the brain is a digital computer. Since everything is a digital computer, brains are too. The point is that the brain's causal capacity to produce intentionality cannot consist in its instantiating a computer program, since for any program you like it is possible for something to instantiate that program and still not have any mental states. Whatever it is that the brain does to produce intentionality, it cannot consist in instantiating a program since no program, by itself, is sufficient for intentionality.

ACKNOWLEDGMENTS

I am indebted to a rather large number of people for discussion of these matters and for their patient attempts to overcome my ignorance of artificial intelligence. I would especially like to thank Ned Block, Hubert Dreyfus, John Haugeland, Roger Schank, Robert Wilensky, and Terry Winograd.

NOTES

  1. I am not, of course, saying that Schank himself is committed to these claims.
  2. Also, "understanding" implies both the possession of mental (intentional) states and the truth (validity, success) of these states. For the purposes of this discussion we are concerned only with the possession of the states.
    此外,"理解 "既意味着拥有心理(意向)状态,也意味着这些状态的真实性(有效性、成功性)。在本讨论中,我们只关注状态的拥有。
  3. Intentionality is by definition that feature of certain mental states by which they are directed at or about objects and states of affairs in the world. Thus, beliefs, desires, and intentions are intentional states; undirected forms of anxiety and depression are not. For further discussion see Searle (1979c).

Open Peer Commentary

Commentaries submitted by the qualified professional readership of this journal will be considered for publication in a later issue as Continuing Commentary on this article.

by Robert P. Abelson

Department of Psychology, Yale University, New Haven, Conn. 06520

Searle's argument is just a set of Chinese symbols

Searle claims that the apparently commonsensical programs of the Yale AI project really don't display meaningful understanding of text. For him, the computer processing a story about a restaurant visit is just a Chinese symbol manipulator blindly applying uncomprehended rules to uncomprehended text. What is missing, Searle says, is the presence of intentional states.
Searle is misguided in this criticism in at least two ways. First of all, it is no trivial matter to write rules to transform the "Chinese symbols" of a story text into the "Chinese symbols" of appropriate answers to questions about the story. To dismiss this programming feat as mere rule mongering is like downgrading a good piece of literature as something that British Museum monkeys can eventually produce. The programmer needs a very crisp understanding of the real world to write the appropriate rules. Mediocre rules produce feeble-minded output, and have to be rewritten. As rules are sharpened, the output gets more and more convincing, so that the process of rule development is convergent. This is a characteristic of the understanding of a content area, not of blind exercise within it.
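A toy sketch may make this concrete (the script and rules here are invented stand-ins; the Yale programs were far richer). Rules map story "symbols" to answer "symbols" via a restaurant script, and sharpening the rules visibly sharpens the answers:

```python
# Illustrative only: answering a question about a restaurant story purely
# by rules over a script of expected events, in the spirit Abelson describes.
RESTAURANT_SCRIPT = ["enter", "order", "eat", "pay", "leave"]

def answer(story_events, asked):
    if asked in story_events:
        return "yes (stated in the story)"
    mentioned = [e for e in RESTAURANT_SCRIPT if e in story_events]
    # Rule: infer an unmentioned script step if the story mentions steps on
    # both sides of it. Mediocre rules here produce feeble-minded answers.
    if asked in RESTAURANT_SCRIPT and len(mentioned) >= 2:
        i = RESTAURANT_SCRIPT.index(asked)
        if RESTAURANT_SCRIPT.index(mentioned[0]) < i < RESTAURANT_SCRIPT.index(mentioned[-1]):
            return "yes (inferred from the script)"
    return "unknown"

print(answer(["enter", "order", "pay"], "eat"))  # yes (inferred from the script)
```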
Ah, but Searle would say that such understanding is in the programmer and not in the computer. Well, yes, but what's the issue? Most precisely, the understanding is in the programmer's rule set, which the computer exercises. No one I know of (at Yale, at least) has claimed autonomy for the computer. The computer is not even necessary to the representational theory; it is just very, very convenient and very, very vivid.
But just suppose that we wanted to claim that the computer itself understood the story content. How could such a claim be defended, given that the computer is merely crunching away on statements in program code and producing other statements in program code which (following translation) are applauded by outside observers as being correct and perhaps even clever? What kind of understanding is that? It is, I would assert, very much the kind of understanding that people display in exposure to new content via language or other symbol systems. When a child learns to add, what does he do except apply rules? Where does "understanding" enter? Is it understanding that the results of addition apply independent of content, so that m + n = p means that if you have m things and you assemble them with n things, then you'll have p things? But that's a rule, too. Is it understanding that the units place can be translated into pennies, the tens place into dimes, and the hundreds place into dollars, so that additions of numbers are isomorphic with additions of money? But that's a rule connecting rule systems. In general, it seems that as more and more rules about a given content are incorporated, especially if they connect with other content domains, we have a sense that understanding is increasing. At what point does a person graduate from "merely" manipulating rules to "really" understanding?
Educationists would love to know, and so would I, but I would be willing to bet that by the Chinese symbol test, most of the people reading this don't really understand the transcendental number e, economic inflation, or nuclear power plant safety, or how sailboats can sail upwind. (Be honest with yourself!) Searle's argument itself, sallying forth as it does into a symbol-laden domain that is intrinsically difficult to "understand," could well be seen as mere symbol manipulation. His main rule is that if you see the Chinese symbols for "formal computational operations," then you output the Chinese symbols for "no understanding at all."
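Abelson's money isomorphism above can be written out as a worked example (the arithmetic is mine, not his): the carry in column addition is just the exchange of ten pennies for a dime,

$$27 + 54 = (2\ \text{dimes} + 7\ \text{pennies}) + (5\ \text{dimes} + 4\ \text{pennies}) = 8\ \text{dimes} + 1\ \text{penny} = 81,$$

and a child can run either version of the rule without any grasp of what it preserves.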
Given the very common exercise in human affairs of linguistic interchange in areas where it is not demonstrable that we know what we are talking about, we might well be humble and give the computer the benefit of the doubt when and if it performs as well as we do. If we credit people with understanding by virtue of their apparently competent verbal performances, we might extend the same courtesy to the machine. It is a conceit, not an insight, to give ourselves more credit for a comparable performance.
But Searle airily dismisses this "other minds" argument, and still insists that the computer lacks something essential. Chinese symbol rules only go so far, and for him, if you don't have everything, you don't have anything. I should think rather that if you don't have everything, you don't have everything. But in any case, the missing ingredient for Searle is his concept of intentionality. In his paper, he does not justify why this is the key factor. It seems more obvious that what the manipulator of Chinese symbols misses is extensional validity. Not to know that the symbol for "menu" refers to that thing out in the world that you can hold and fold and look at closely is to miss some real understanding of what is meant by menu. I readily acknowledge the importance of such sensorimotor knowledge. The understanding of how a sailboat sails upwind gained through the feel of sail and rudder is certainly valid, and is not the same as a verbal explanation.
Verbal-conceptual computer programs lacking sensorimotor connection with the world may well miss things. Imagine the following piece of a story: "John told Harry he couldn't find the book. Harry rolled his eyes toward the ceiling." Present common sense inference models can make various predictions about Harry's relation to the book and its unfindability. Perhaps he loaned it to John, and therefore would be upset that it seemed lost. But the unique and nondecomposable meaning of eye rolling is hard for a model to capture except by a clumsy, concrete dictionary entry. A human understander, on the other hand, can imitate Harry's eye roll overtly or in imagination and experience holistically the resigned frustration that Harry must feel. It is important to explore the domain of examples like this.
But why instead is "intentionality" so important for Searle? If we recite his litany, "hopes, fears, and desires," we don't get the point. A computer or a human certainly need not have hopes or fears about the customer in order to understand a story about a restaurant visit. And inferential use of these concepts is well within the capabilities of computer understanding models. Goal-based inferences, for example, are a standard mechanism in programs of the Yale AI project. Rather, the crucial state of "intentionality" for knowledge is the appreciation of the conditions for its falsification. In what sense does the computer realize that the assertion, "John read the menu" might or might not be true, and that there are ways in the real world to find out?
Well, Searle has a point there, although I do not see it as the trump card he thinks he is playing. The computer operates in a gullible fashion: it takes every assertion to be true. There are thus certain knowledge problems that have not been considered in artificial intelligence programs for language understanding, for example, the question of what to do when a belief about the world is contradicted by data: should the belief be modified, or the data called into question? These questions have been discussed by psychologists in the context of human knowledge-handling proclivities, but the issues are beyond present AI capability. We shall have to see what happens in this area. The naivete of computers about the validity of what we tell them is perhaps touching, but it would hardly seem to justify the total scorn exhibited by Searle. There are many areas of knowledge within which questions of falsifiability are quite secondary - the understanding of literary fiction, for example. Searle has not made a convincing case for the fundamental essentiality of intentionality in understanding. My Chinese symbol processor, at any rate, is not about to output the symbol for "surrender."

by Ned Block

Department of Linguistics and Philosophy, Massachusetts Institute of Technology, Cambridge, Mass. 02139

What intuitions about homunculi don't show

Searle's argument depends for its force on intuitions that certain entities do not think. There are two simple objections to his argument that are based on general considerations about what can be shown by intuitions that something can't think.

First, we are willing, and rightly so, to accept counterintuitive consequences of claims for which we have substantial evidence. It once seemed intuitively absurd to assert that the earth was whirling through space at breakneck speed, but in the face of the evidence for the Copernican view, such an intuition should be (and eventually was) rejected as irrelevant to the truth of the matter. More relevantly, a grapefruit-sized head-enclosed blob of gray protoplasm seems, at least at first blush, a most implausible seat of mentality. But if your intuitions still balk at brains as seats of mentality, you should ignore your intuitions as irrelevant to the truth of the matter, given the remarkable evidence for the role of the brain in our mental life. Searle presents some alleged counterintuitive consequences of the view of cognition as formal symbol manipulation. But his argument does not even have the right form, for in order to know whether we should reject the doctrine because of its alleged counterintuitive consequences, we must know what sort of evidence there is in favor of the doctrine. If the evidence for the doctrine is overwhelming, then incompatible intuitions should be ignored, just as should intuitions that the brain couldn't be the seat of mentality. So Searle's argument has a missing premise to the effect that the evidence isn't sufficient to overrule the intuitions.
Well, is such a missing premise true? I think that anyone who takes a good undergraduate cognitive psychology course would see enough evidence to justify tentatively disregarding intuitions of the sort that Searle appeals to. Many theories in the tradition of thinking as formal symbol manipulation have a moderate (though admittedly not overwhelming) degree of empirical support.
A second point against Searle has to do with another aspect of the logic of appeals to intuition. At best, intuition reveals facts about our concepts (at worst, facts about a motley of factors such as our prejudices, ignorance, and, still worse, our lack of imagination - as when people accepted the deliverance of intuition that two straight lines cannot cross twice). So even if we were to accept Searle's appeal to intuitions as showing that homunculus heads that formally manipulate symbols do not think, what this would show is that our formal symbol-manipulation theories do not provide a sufficient condition for the application of our ordinary intentional concepts. The more interesting issue, however, is whether the homunculus head's formal symbol manipulation falls in the same scientific natural kind (see Putnam 1975a) as our intentional processes. If so, then the homunculus head does think in a reasonable scientific sense of the term - and so much the worse for the ordinary concept. Moreover, if we are very concerned with ordinary intentional concepts, we can give sufficient conditions for their application by building in ad hoc conditions designed to rule out the putative counterexamples. A first stab (inadequate, but improvable - see Putnam 1975b, p. 435; Block 1978, p. 292) would be to add the condition that in order to think, realizations of the symbol-manipulating system must not have operations mediated by entities that themselves have symbol manipulation typical of intentional systems. The ad hocness of such a condition is not an objection to it, given that what we are trying to do is "reconstruct" an everyday concept out of a scientific one; we can expect the everyday concept to be scientifically characterizable only in an unnatural way. (See Fodor's commentary on Searle, this issue.) Finally, there is good reason for thinking that the Putnam-Kripke account of the semantics of "thought" and other intentional terms is correct. If so, and if the formal symbol manipulation of the homunculus head falls in the same natural kind as our cognitive processes, then the homunculus head does think, in the ordinary sense as well as in the scientific sense of the term.
The upshot of both these points is that the real crux of the debate rests on a matter that Searle does not so much as mention: what the evidence is for the formal symbol-manipulation point of view.
Recall that Searle's target is the doctrine that cognition is formal symbol manipulation, that is, manipulation of representations by mechanisms that take account only of the forms (shapes) of the representations. Formal symbol-manipulation theories of cognition postulate a variety of mechanisms that generate, transform, and compare representations. Once one sees this doctrine as Searle's real target, one can simply ignore his objections to Schank. The idea that a machine programmed à la Schank has anything akin to mentality is not worth taking seriously, and casts as much doubt on the symbol-manipulation theory of thought as Hitler casts on doctrine favoring a strong executive branch of government. Any plausibility attaching to the idea that a Schank machine thinks would seem to derive from a crude Turing test version of behaviorism that is anathema to most who view cognition as formal symbol manipulation.¹
Consider a robot akin to the one sketched in Searle's reply II (omitting features that have to do with his criticism of Schank). It simulates your input-output behavior by using a formal symbol-manipulation theory of the sort just sketched of your cognitive processes (together with a theory of your noncognitive mental processes, a qualification omitted from now on). Its body is like yours except that instead of a brain it has a computer equipped with a cognitive theory true of you. You receive an input: "Who is your favorite philosopher?" You cogitate a bit and reply "Heraclitus." If your robot doppelgänger receives the same input, a mechanism converts the input into a description of the input. The computer uses its description of your cognitive mechanisms to deduce a description of the product of your cogitation. This description is then transmitted to a device that transforms the description into the noise "Heraclitus."
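A minimal sketch of this arrangement (the "theory" and its format are invented for illustration): nothing below cogitates; it only shuffles descriptions of cogitation:

```python
# Illustrative only: the doppelgänger manipulates *descriptions* of your
# cognitive processes. COGNITIVE_THEORY stands in for an ideally complete
# formal theory of you.
COGNITIVE_THEORY = {
    "description of input: 'Who is your favorite philosopher?'":
        "description of output: utter 'Heraclitus'",
}

def doppelganger(stimulus: str) -> str:
    described = f"description of input: {stimulus!r}"  # transduce input into a description
    derived = COGNITIVE_THEORY[described]              # deduce a description of your response
    # Transform the derived description into the output "noise."
    return derived.removeprefix("description of output: utter ").strip("'")

print(doppelganger("Who is your favorite philosopher?"))  # Heraclitus
```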
While the robot just described behaves just as you would given any input, it is not obvious that it has any mental states. You cogitate in response to the question, but what goes on in the robot is manipulation of descriptions of your cogitation so as to produce the same response. It isn't obvious that the manipulation of descriptions of cogitation in this way is itself cogitation.
My intuitions agree with Searle about this kind of case (see Block, forthcoming), but I have encountered little agreement on the matter. In the absence of widely shared intuition, I ask the reader to pretend to have Searle's and my intuition on this question. Now I ask another favor, one that should be firmly distinguished from the first: take the leap from intuition to fact (a leap that, as I argued in the first four paragraphs of this commentary, Searle gives us no reason to take). Suppose, for the sake of argument, that the robot described above does not in fact have intentional states.
What I want to point out is that even if we grant Searle all this, the doctrine that cognition is formal symbol manipulation remains utterly unscathed. For it is no part of the symbol-manipulation view of cognition that the kind of manipulation attributed to descriptions of our symbol-manipulating cognitive processes is itself a cognitive process. Those who believe formal symbol-manipulation theories of intentionality must assign intentionality to anything of which the theories are true, but the theories cannot be expected to be true of devices that use them to mimic beings of which they are true.
Thus far, I have pointed out that intuitions that Searle's sort of homunculus head does not think do not challenge the doctrine that thinking is formal symbol manipulation. But a variant of Searle's example, similar to his in its intuitive force, but that avoids the criticism I just sketched, can be described.
Recall that it is the aim of cognitive psychology to decompose mental processes into combinations of processes in which mechanisms generate representations, other mechanisms transform representations, and still other mechanisms compare representations, issuing reports to still other mechanisms, the whole network being appropriately connected to sensory input transducers and motor output devices. The goal of such theorizing is to decompose these processes to the point at which the mechanisms that carry out the operations have no internal goings on that are themselves decomposable into symbol manipulation by still further mechanisms. Such ultimate mechanisms are described as "primitive," and are often pictured in flow diagrams as "black boxes" whose realization is a matter of "hardware" and whose operation is to be explained by the physical sciences, not psychology. (See Fodor 1968; 1980; Dennett 1975.)
Now consider an ideally completed theory along these lines, a theory of your cognitive mechanisms. Imagine a robot whose body is like yours, but whose head contains an army of homunculi, one for each black box. Each homunculus does the symbol-manipulating job of the black box he replaces, transmitting his "output" to other homunculi by telephone in accordance with the cognitive theory. This homunculi head is just a variant of one that Searle uses, and it completely avoids the criticism I sketched above, because the cognitive theory it implements is actually true of you. Call this robot the cognitive homunculi head. (The cognitive homunculi head is discussed in more detail in Block 1978, pp. 305-10.) I shall argue that even if you have the intuition that the cognitive homunculi head has no intentionality, you should not regard this intuition as casting doubt on the truth of symbol-manipulation theories of thought.
One line of argument against the cognitive homunculi head is that its persuasive power may be due to a "not seeing the forest for the trees" illusion (see Lycan's commentary, this issue, and Lycan, forthcoming). Another point is that brute untutored intuition tends to balk at assigning intentionality to any physical system, including Searle's beloved brains. Does Searle really think that it is an initially congenial idea that a hunk of gray jelly is the seat of his intentionality? (Could one imagine a less likely candidate?) What makes gray jelly so intuitively satisfying to Searle is obviously his knowledge that brains are the seat of our intentionality. But here we see the difficulty in relying on considered intuitions, namely that they depend on our beliefs, and among the beliefs most likely to play a role in the case at hand are precisely our doctrines about whether the formal symbol-manipulation theory of thinking is true or false.
Let me illustrate this and another point via another example (Block 1978, p. 291). Suppose there is a part of the universe that contains matter that is infinitely divisible. In that part of the universe, there are intelligent creatures much smaller than our elementary particles who decide to devote the next few hundred years to creating out of their matter substances with the chemical and physical characteristics (except at the subelementary particle level) of our elements. They build hordes of space ships of different varieties about the sizes of our electrons, protons, and other elementary particles, and fly the ships in such a way as to mimic the behavior of these elementary particles. The ships contain apparatus to produce and detect the type of radiation elementary particles give off. They do this to produce huge (by our standards) masses of substances with the chemical and physical characteristics of oxygen, carbon, and other elements. You go off on an expedition to that part of the universe, and discover the "oxygen" and "carbon." Unaware of its real nature, you set up a colony, using these "elements" to grow plants for food, provide "air" to breathe, and so on. Since one's molecules are constantly being exchanged with the environment, you and other colonizers come to be composed mainly of the "matter" made of the tiny people in space ships.
If any intuitions about homunculi heads are clear, it is clear that coming to be made of the homunculi-infested matter would not affect your mentality. Thus we see that intuition need not balk at assigning intentionality to a being whose intentionality owes crucially to the actions of internal homunculi. Why is it so obvious that coming to be made of homunculi-infested matter would not affect our sapience or sentience? I submit that it is because we have all absorbed enough neurophysiology to know that changes in particles in the brain that do not affect the brain's basic (electrochemical) mechanisms do not affect mentality.
Our intuitions about the mentality of homunculi heads are obviously influenced (if not determined) by what we believe. If so, then the burden of proof lies with Searle to show that the intuition that the cognitive homunculi head has no intentionality (an intuition that I and many others do not share) is not due to doctrine hostile to the symbol-manipulation account of intentionality.
In sum, an argument such as Searle's requires a careful examination of the source of the intuition that the argument depends on, an examination Searle does not begin.

Acknowledgment

I am grateful to Jerry Fodor and Georges Rey for comments on an earlier draft.

Note

  1. While the crude version of behaviorism is refuted by well-known arguments, there is a more sophisticated version that avoids them; however, it can be refuted using an example akin to the one Searle uses against Schank. Such an example is sketched in Block 1978, p. 294, and elaborated in Block, forthcoming.

by Bruce Bridgeman

Psychology Board of Studies, University of California, Santa Cruz, Calif. 95064

Brains + programs = minds

There are two sides to this commentary, the first that machines can embody somewhat more than Searle imagines, and the other that humans embody somewhat less. My conclusion will be that the two systems can in principle achieve similar levels of function.
My response to Searle's Gedankenexperiment is a variant of the "robot reply": the robot simply needs more information, both environmental and a priori, than Searle is willing to give to it. The robot can internalize meaning only if it can receive information relevant to a definition of meaning, that is, information with a known relationship to the outside world. First it needs some Kantian innate ideas, such as the fact that some input lines (for instance, inputs from the two eyes or from locations in the same eye) are topographically related to one another. In biological brains this is done with labeled lines. Some of the inputs, such as visual inputs, will be connected primarily with spatial processing programs while others such as auditory ones will be more closely related to temporal processing. Further, the system will be built to avoid some input strings (those representing pain, for example) and to seek others (water when thirsty). These properties and many more are built into the structure of human brains genetically, but can be built into a program as a data base just as well. It may be that the homunculus represented in this program would not know what's going on, but it would soon learn, because it has all of the information necessary to construct a representation of events in the outside world.
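One way to picture the innate "data base" Bridgeman has in mind (all names below are invented for the sketch): labeled lines whose topography, modality routing, and valence are fixed before any learning occurs:

```python
# Hypothetical sketch of a priori structure: labeled input lines carrying
# built-in neighbor relations (topography), modality, and valence.
INNATE_LINES = {
    "retina_L_37":    {"modality": "vision",   "neighbors": ["retina_L_36", "retina_L_38"]},
    "cochlea_12":     {"modality": "audition", "neighbors": ["cochlea_11", "cochlea_13"]},
    "nociceptor_07":  {"modality": "pain",     "valence": -1},  # input strings to avoid
    "osmoreceptor_1": {"modality": "thirst",   "valence": +1},  # seek water when active
}

def route(line: str) -> str:
    """Which processor a signal reaches depends only on which labeled line fired."""
    table = {"vision": "spatial_processing", "audition": "temporal_processing"}
    return table.get(INNATE_LINES[line]["modality"], "homeostatic_processing")

print(route("retina_L_37"))  # spatial_processing
```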
My super robot would learn about the number five, for instance, in the same way that a child does, by interaction with the outside world where the occurrence of the string of symbols representing "five" in its visual or auditory inputs corresponds with the more direct experience of five of something. The fact that numbers can be coded in the computer in more economical ways is no more relevant than the fact that the number five is coded in the digits of a child's hand. Both a priori knowledge and environmental knowledge could be made similar in quantity and quality to that available to a human.
Now I will try to show that human intentionality is not as qualitatively different from machine states as it might seem to an introspectionist. The brain is similar to a computer program in that it too receives only strings of input and produces only strings of output. The inputs are small 0.1-volt signals entering in great profusion along afferent nerves, and the outputs are physically identical signals leaving the central nervous system on efferent nerves. The brain is deaf, dumb, and blind, so that the electrical signals (and a few hormonal messages which need not concern us here) are the only ways that the brain has of knowing about its world or acting upon it.
The exception to this rule is the existing information stored in the brain, both that given in genetic development and that added by experience. But it too came without intentionality of the sort that Searle seems to require, the genetic information being received from long strings of DNA base sequences (clearly there is no intentionality here), and previous inputs being made up of the same streams of 0.1-volt signals that constitute the present input. Now it is clear that no neuron receiving any of these signals or similar signals generated inside the brain has any idea of what is going on. The neuron is only a humble machine which receives inputs and generates outputs as a function of the temporal and spatial relations of the inputs, and its own structural properties. To assert any further properties of brains is the worst sort of dualism.
Searle grants that humans have intentionality, and toward the end of his article he also admits that many animals might have intentionality also. But how far down the phylogenetic scale is he willing to go [see "Cognition and Consciousness in Nonhuman Species" BBS 1(4) 1978]? Does a single-celled animal have intentionality? Clearly not, for it is only a simple machine which receives physically identifiable inputs and "automatically" generates reflex outputs. The hydra with a few dozen neurons might be explained in the same way, a simple nerve network with inputs and outputs that are restricted, relatively easy to understand, and processed according to fixed patterns. Now what about the mollusc with a few hundred neurons, the insect with a few thousand, the amphibian with a few million, or the mammal with billions? To make his argument convincing, Searle needs a criterion for a dividing line in his implicit dualism.
We are left with a human brain that has an intention-free, genetically determined structure, on which are superimposed the results of storms of tiny nerve signals. From this we somehow introspect an intentionality that cannot be assigned to machines. Searle uses the example of arithmetic manipulations to show how humans "understand" something that machines don't. I submit that neither humans nor machines understand numbers in the sense Searle intends. The understanding of numbers greater than about five is always an illusion, for humans can deal with larger numbers only by using memorized tricks rather than true understanding. If I want to add 27 and 54, I don't use some direct numerical understanding or even a spatial or electrical analogue in my brain. Instead, I apply rules that I memorized in elementary school without really knowing what they meant, and combine these rules with memorized facts of addition of one-digit numbers to arrive at an answer without understanding the numbers themselves. Though I have the feeling that I am performing operations on numbers, in terms of the algorithms I use there is nothing numerical about it. In the same way I can add numbers in the billions, although neither I nor anyone else has any concept of what these numbers mean in terms of perceptually meaningful quantities. Any further understanding of the number system that I possess is irrelevant, for it is not used in performing simple computations.
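Bridgeman's description of schoolroom addition invites a sketch (mine, not his): the procedure below adds 27 and 54 by consulting a table of rote one-digit "facts" and a carry rule, never treating the marks as quantities:

```python
# Column addition by memorized symbol rules alone. FACTS stands in for the
# rote-learned one-digit sums; int() is used only as a shortcut for writing
# out the hundred memorized entries by hand.
FACTS = {(a, b): str(int(a) + int(b)) for a in "0123456789" for b in "0123456789"}

def add_by_rote(x: str, y: str) -> str:
    width = max(len(x), len(y))
    x, y = x.zfill(width), y.zfill(width)
    out, carry = "", "0"
    for a, b in zip(reversed(x), reversed(y)):    # rightmost column first
        col = FACTS[(a, b)]                       # e.g. ('7', '4') -> "11"
        col_and_carry = FACTS[(col[-1], carry)]   # fold in the carried mark
        out = col_and_carry[-1] + out
        carry = "1" if len(col) == 2 or len(col_and_carry) == 2 else "0"
    return ("1" + out) if carry == "1" else out

print(add_by_rote("27", "54"))  # 81
```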
The illusion of having a consciousness of numbers is similar to the illusion of having a full-color, well focused visual field; such a concept exists in our consciousness, but the physiological reality falls far short of the introspection. High-quality color information is available only in about the central thirty degrees of the visual field, and the best spatial information in only one or two degrees. I suggest that the feeling of intentionality is a cognitive illusion similar to the feeling of the highquality visual image. Consciousness is a neurological system like any other, with functions such as the long-term direction of behavior (intentionality?), access to long-term memories, and several other characteristics that make it a powerful, though limited-capacity. processor of biologically useful information.
All of Searle's replies to his Gedankenexperiment are variations on the theme that I have described here, that an adequately designed machine could include intentionality as an emergent quality even though individual parts (transistors, neurons, or whatever) have none. All of the replies have an element of truth, and their shortcomings are more in their failure to communicate the similarity of brains and machines to Searle than in any internal weaknesses. Perhaps the most important difference between brains and machines lies not in their instantiation but in their history, for humans have evolved to perform a variety of poorly understood functions including reproduction and survival in a complex social and ecological context. Programs, being designed without extensive evolution, have more restricted goals and motivations.
Searle's accusation of dualism in AI falls wide of the mark because the mechanist does not insist on a particular mechanism in the organism, but only that "mental" processes be represented in a physical system when the system is functioning. A program lying on a tape spool in a corner is no more conscious than a brain preserved in a glass jar, and insisting that the program, if read into an appropriate computer, would function with intentionality asserts only that the adequate machine consists of an organization imposed on a physical substrate. The organization is no more mentalistic than the substrate itself. Artificial intelligence is about programs rather than machines only because the process of organizing information and inputs and outputs into an information system has been largely solved by digital computers. Therefore, the program is the only step in the process left to worry about.
Searle may well be right that present programs (as in Schank & Abelson 1977) do not instantiate intentionality according to his definition. The issue is not whether present programs do this but whether it is possible in principle to build machines that make plans and achieve goals. Searle has given us no evidence that this is not possible.

by Arthur C. Danto

Department of Philosophy, Columbia University, New York, N. Y. 10027

The use and mention of terms and the simulation of linguistic understanding

In the ballet Coppélia, a dancer mimics a clockwork dancing doll simulating a dancer. The imitating movements, dancing twice removed, are predictably "mechanical," given the discrepancies of outward resemblance between clockwork dancers and real ones. These discrepancies may diminish to zero with the technological progress of clockwork, until a dancer mimicking a clockwork dancer simulating a dancer may present a spectacle of three indiscernible dancers engaged in a pas de trois. By behavioral criteria, nothing would enable us to identify which is the doll, and the lingering question of whether the clockwork doll is really dancing or only seeming to seems merely verbal - unless we adopt a criterion of meaning much favored by behaviorism that makes the question itself nonsensical.
The question of whether machines instantiate mental predicates has been cast in much the same terms since Turing, and by tacit appeal to outward indiscernibility the question of whether machines understand is either dissolved or trivialized. It is in part a protest against assimilating the meaning of mental predicates to mere behavioral criteria - an assimilation of which Abelson and Schank are clearly guilty, making them behaviorists despite themselves - that animates Searle's effort to mimic a clockwork thinker simulating understanding: to the degree that he instantiates the same program it does and fails to understand what is understood by those whom the machine is designed to simulate - even if the output of the three of them cannot be discriminated - the machine itself fails to understand. The argumentation is picturesque, and may not be compelling for those resolved to define such terms as "understanding" by outward criteria. So I shall recast Searle's thesis in logical terms which must force his opponents either to concede that machines do not understand or else, in order to maintain that they might understand, to abandon the essentially behaviorist theory of meaning for mental predicates.
Consider, as does Searle, a language one does not understand but that one can in a limited sense be said to read. Thus I cannot read Greek with understanding, but I know the Greek letters and their associated phonetic values, and am able to pronounce Greek words. Milton's daughters were able to read aloud to their blind father from Greek, Latin, and Hebrew texts though they had no idea what they were saying. And they could, as can I, answer certain questions about Greek words, if only how many letters there are, what their names are, and how they sound when voiced. Briefly, in terms of the distinction logicians draw between the use and mention of a term, they knew, as I know, such properties of Greek words as may be identified by someone who is unable to use Greek words in Greek sentences. Let us designate these as M-properties, in contrast to U-properties, the latter being those properties one must know in order to use Greek (or any) words. The question then is whether a machine programmed to simulate understanding is restricted to M-properties, that is, whether the program is such that the machine cannot use the words it otherwise may be said to manipulate under M-rules and M-laws. If so, the machine exercises its powers over what we can recognize in the words of a language we do not understand, without, as it were, thinking in that language. There is some evidence that in fact the machine operates pretty much by pattern recognition, much in the manner of Milton's unhappy daughters.
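A toy program makes the restriction to M-properties vivid. In the sketch below (an illustration only; the drastically simplified letter-name and phonetic tables are my stand-in for real Greek), every question the program can answer is an M-question, and nothing in it could use a Greek word in a Greek sentence:

```python
# A toy "M-competent" reader of Greek: it can count, name, and voice
# the letters of a word, yet nothing here touches the word's meaning.
# The tables are simplified stand-ins for real Greek letter lore.

LETTER_NAMES = {"λ": "lambda", "ο": "omicron", "γ": "gamma", "ς": "final sigma"}
PHONETIC_VALUES = {"λ": "l", "ο": "o", "γ": "g", "ς": "s"}

def m_properties(word: str) -> dict:
    """Answer the M-questions: length, letter names, pronunciation."""
    return {
        "letter_count": len(word),
        "letter_names": [LETTER_NAMES.get(c, "?") for c in word],
        "voiced_as": "".join(PHONETIC_VALUES.get(c, "?") for c in word),
    }

# Like Milton's daughters, the program can read "λογος" aloud ("logos")
# without any U-competence: it can mention the word, but never use it.
print(m_properties("λογος"))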
Now I shall suppose it granted that we cannot define the U-properties of words exhaustively through their M-properties. If this is true, Schank's machines, restricted to M-properties, cannot think in the languages they simulate thinking in. One can ask whether it is possible for the machines to exhibit the output they do exhibit if all they have is M-competence. If not, then they must have some sort of U-competence. But the difficulty with putting the question thus is that there are two ways in which the output can be appreciated: as showing understanding or as only seeming to, and as such the structure of the problem is of a piece with the structure of the mind-body problem in the following respect. Whatever outward behavior, even of a human being, we would want to describe with a psychological (or mental) predicate - say that the action of raising an arm was performed - has a physical description that is true whether or not the psychological description is true - for example, that the arm went up. The physical description then underdetermines the distinction between bodily movements and actions, or between actions and bodily movements that exactly resemble them. So whatever outward behavior takes a psychological predicate takes a physical predicate that underdetermines whether the former is true or false of what the latter is true of. So we cannot infer from a physical description whether or not a psychological description applies. To be sure, we can ruthlessly define psychological terms as physical terms, in which case the inference is easy but trivial, but then we cannot any longer, as Schank and Abelson wish to do, explain outward behavior with such concepts as understanding. In any case, the distinction between U-properties and M-properties is exactly parallel: anything by way of output we would be prepared to describe in U-terms has an M-description true of it, which underdetermines whether the U-description is true or not.
So no pattern of outputs entails that language is being used, nor hence that the source of the output understands, inasmuch as it may have been cleverly designed to emit a pattern exhaustively describable in M-terms. The problem is perfectly Cartesian. We may worry about whether any of our fellows is an automaton. The question is whether the Schank machine (SAM) is so programmed that only M-properties apply to its output. Then, however closely (exactly) it simulates what someone with understanding would show in his behavior, not one step has been taken toward constructing a machine that understands. And Searle is really right. For while U-competence cannot be defined in M-terms, an M-specified simulation can be given of any U-performance, however protracted and intricate. The simulator will only show, not have, the properties of the U-performance. The performances may be indiscriminable, but one constitutes a use of language only if that which emits it in fact uses language. But it cannot be said to use language if its program, as it were, is written solely in M-terms.
The principles on the basis of which a user of language structures a story or text are so different from the principles on the basis of which one could predict, from certain M-properties, what further M-properties to expect, that even if the outputs are indiscernible, the principles must be discernible. And to just the degree that they deviate does a program employing the latter sorts of principles fail to simulate the principles employed in understanding stories or texts. The degree of deviation determines the degree to which the strong claims of AI are false. This is all the more the case if the M-principles are not to be augmented with U-principles.
Any of us can predict what sounds a person may make when he answers certain questions that he understands, but that is because we understand where he is going. If we had to develop the ability to predict sounds only on the basis of other sounds, we might attain an astounding congruence with what our performance would have been if we knew what was going on. Even if no one could tell we didn't, understanding would be nil. On the other hand, the question remains as to whether the Schank machine uses words. If it does, Searle has failed as a simulator of something that does not simulate but genuinely possesses understanding. If he is right, there is a pretty consequence: M-properties yield, as it were, pictures of words; and machines, if they encode propositions, do so pictorially.

by Daniel Dennett

Center for Advanced Study in the Behavioral Sciences, Stanford, Calif. 94305

The milk of human intentionality

I want to distinguish Searle's arguments, which I consider sophistry, from his positive view, which raises a useful challenge to AI, if only because it should induce a more thoughtful formulation of AI's foundations. First, I must support the charge of sophistry by diagnosing, briefly, the tricks with mirrors that give his case a certain spurious plausibility. Then I will comment briefly on his positive view.
Searle's form of argument is a familiar one to philosophers: he has constructed what one might call an intuition pump, a device for provoking a family of intuitions by producing variations on a basic thought experiment. An intuition pump is not, typically, an engine of discovery, but a persuader or pedagogical tool - a way of getting people to see things your way once you've seen the truth, as Searle thinks he has. I would be the last to disparage the use of intuition pumps - I love to use them myself - but they can be abused. In this instance I think Searle relies almost entirely on ill-gotten gains: favorable intuitions generated by misleadingly presented thought experiments.
Searle begins with a Schank-style AI task, where both the input and output are linguistic objects, sentences of Chinese. In one regard, perhaps, this is fair play, since Schank and others have certainly allowed enthusiastic claims of understanding for such programs to pass their lips, or go uncorrected; but from another point of view it is a cheap shot, since it has long been a familiar theme within AI circles that such programs - I call them bedridden programs since their only modes of perception and action are linguistic - tackle at best a severe truncation of the interesting task of modeling real understanding. Such programs exhibit no "language-entry" and "language-exit" transitions, to use Wilfrid Sellars's terms, and have no capacity for nonlinguistic perception or bodily action. The shortcomings of such models have been widely recognized for years in AI; for instance, the recognition was implicit in Winograd's decision to give SHRDLU something to do in order to have something to talk about. "A computer whose only input and output was verbal would always be blind to the meaning of what was written" (Dennett 1969, p. 182). The idea has been around for a long time. So, many if not all supporters of strong AI would simply agree with Searle that in his initial version of the Chinese room, no one and nothing could be said to understand Chinese, except perhaps in some very strained, elliptical, and attenuated sense. Hence what Searle calls "the robot reply (Yale)" is no surprise, though its coming from Yale suggests that even Schank and his school are now attuned to this point.
Searle's response to the robot reply is to revise his thought experiment, claiming it will make no difference. Let our hero in the Chinese room also (unbeknownst to him) control the nonlinguistic actions of, and receive the perceptual informings of, a robot. Still (Searle asks you to consult your intuitions at this point) no one and nothing will really understand Chinese. But Searle does not dwell on how vast a difference this modification makes to what we are being asked to imagine.
Nor does Searle stop to provide vivid detail when he again revises his thought experiment to meet the "systems reply." The systems reply suggests, entirely correctly in my opinion, that Searle has confused different levels of explanation (and attribution). I understand English: my brain doesn't - nor, more particularly, does the proper part of it (if such can be isolated) that operates to "process" incoming sentences and to execute my speech act intentions. Searle's portrayal and discussion of the systems reply is not sympathetic, but he is prepared to give ground in any case; his proposal is that we may again modify his Chinese room example, if we wish, to accommodate the objection. We are to imagine our hero in the Chinese room to "internalize all of these elements of the system" so that he "incorporates the entire system." Our hero is now no longer an uncomprehending sub-personal part of a supersystem to which understanding of Chinese might be properly attributed, since there is no part of the supersystem external to his skin. Still Searle insists (in another plea for our intuitional support) that no one - not our hero or any other person he may in some metaphysical sense now be a part of - can be said to understand Chinese.
But will our intuitions support Searle when we imagine this case in detail? Putting both modifications together, we are to imagine our hero controlling both the linguistic and nonlinguistic behavior of a robot who is - himself! When the Chinese words for "Hands up! This is a stickup!" are intoned directly in his ear, he will uncomprehendingly (and at breathtaking speed) hand-simulate the program, which leads him to do things (what things - is he to order himself in Chinese to stimulate his own motor neurons and then obey the order?) that lead to his handing over his own wallet while begging for mercy, in Chinese, with his own lips. Now is it at all obvious that, imagined this way, no one in the situation understands Chinese? In point of fact, Searle has simply not told us how he intends us to imagine this case, which we are licensed to do by his two modifications. Are we to suppose that if the words had been in English, our hero would have responded (appropriately) in his native English? Or is he so engrossed in his massive homuncular task that he responds with the (simulated) incomprehension that would be the program-driven response to this bit of incomprehensible ("to the robot") input? If the latter, our hero has taken leave of his English-speaking friends for good, drowned in the engine room of a Chinese-speaking "person" inhabiting his body. If the former, the situation is drastically in need of further description by Searle, for just what he is imagining is far from clear. There are several radically different alternatives - all so outlandishly unrealizable as to caution us not to trust our gut reactions about them in any case. When we imagine our hero "incorporating the entire system" are we to imagine that he pushes buttons with his fingers in order to get his own arms to move? Surely not, since all the buttons are now internal. Are we to imagine that when he responds to the Chinese for "pass the salt, please" by getting his hand to grasp the salt and move it in a certain direction, he doesn't notice that this is what he is doing? In short, could anyone who became accomplished in this imagined exercise fail to become fluent in Chinese in the process? Perhaps, but it all depends on details of this, the only crucial thought experiment in Searle's kit, that Searle does not provide.
Searle tells us that when he first presented versions of this paper to AI audiences, objections were raised that he was prepared to meet, in part, by modifying his thought experiment. Why then did he not present us, his subsequent audience, with the modified thought experiment in the first place, instead of first leading us on a tour of red herrings? Could it be because it is impossible to tell the doubly modified story in anything approaching a cogent and detailed manner without provoking the unwanted intuitions? Told in detail, the doubly modified story suggests either that there are two people, one of whom understands Chinese, inhabiting one body, or that one English-speaking person has, in effect, been engulfed within another person, a person who understands Chinese (among many other things).
These and other similar considerations convince me that we may turn our backs on the Chinese room at least until a better version is deployed. In its current state of disrepair I can get it to pump my contrary intuitions at least as plentifully as Searle's. What, though, of his positive view? In the conclusion of his paper, Searle observes: "No one would suppose that we could produce milk and sugar by running a computer simulation of the formal sequences in lactation and photosynthesis, but where the mind is concerned many people are willing to believe in such a miracle." I don't think this is just a curious illustration of Searle's vision; I think it vividly expresses the feature that most radically distinguishes his view from the prevailing winds of doctrine. For Searle, intentionality is rather like a wonderful substance secreted by the brain the way the pancreas secretes insulin. Brains produce intentionality, he says, whereas other objects, such as computer programs, do not, even if they happen to be designed to mimic the input-output behavior of (some) brain. There is, then, a major disagreement about what the product of the brain is. Most people in AI (and most functionalists in the philosophy of mind) would say that its product is something like control: what a brain is for is for governing the right, appropriate, intelligent input-output relations, where these are deemed to be, in the end, relations between sensory inputs and behavioral outputs of some sort. That looks to Searle like some sort of behaviorism, and he will have none of it. Passing the Turing test may be prima facie evidence that something has intentionality - really has a mind - but "as soon as we knew that the behavior was the result of a formal program, and that the actual causal properties of the physical substance were irrelevant we would abandon the assumption of intentionality."
So on Searle's view the "right" input-output relations are symptomatic but not conclusive or criterial evidence of intentionality; the proof of the pudding is in the presence of some (entirely unspecified) causal properties that are internal to the operation of the brain. This internality needs highlighting. When Searle speaks of causal properties one may think at first that those causal properties crucial for intentionality are those that link the activities of the system (brain or computer) to the things in the world with which the system interacts - including, preeminently, the active, sentient body whose behavior the system controls. But Searle insists that these are not the relevant causal properties. He concedes the possibility in principle of duplicating the input-output competence of a human brain with a "formal program," which (suitably attached) would guide a body through the world exactly as that body's brain would, and thus would acquire all the relevant extrasystemic causal properties of the brain. But such a brain substitute would utterly fail to produce intentionality in the process, Searle holds, because it would lack some other causal properties of the brain's internal operation.¹
How, though, would we know that it lacked these properties, if all we knew was that it was (an implementation of) a formal program? Since Searle concedes that the operation of anything - and hence a human brain - can be described in terms of the execution of a formal program, the mere existence of such a level of description of a system would not preclude its having intentionality. It seems that it is only when we can see that the system in question is only the implementation of a formal program that we can conclude that it doesn't make a little intentionality on the side. But nothing could be only the implementation of a formal program; computers exude heat and noise in the course of their operations - why not intentionality too?
Besides, which is the major product and which the byproduct? Searle can hardly deny that brains do in fact produce lots of reliable and appropriate bodily control. They do this, he thinks, by producing intentionality, but he concedes that something - such as a computer with the right input-output rules - could produce the control without making or using any intentionality. But then control is the main product and intentionality just one (no doubt natural) means of obtaining it. Had our ancestors been nonintentional mutants with mere control systems, nature would just as readily have selected them instead. (I owe this point to Bob Moore.) Or, to look at the other side of the coin, brains with lots of intentionality but no control competence would be producers of an ecologically irrelevant product, which evolution would not protect. Luckily for us, though, our brains make intentionality; if they didn't, we'd behave just as we now do, but of course we wouldn't mean it!
Surely Searle does not hold the view I have just ridiculed, although it seems as if he does. He can't really view intentionality as a marvelous mental fluid, so what is he trying to get at? I think his concern with internal properties of control systems is a misconceived attempt to capture the interior point of view of a conscious agent. He does not see how any mere computer, chopping away at a formal program, could harbor such a point of view. But that is because he is looking too deep. It is just as mysterious if we peer into the synapse-filled jungles of the brain and wonder where consciousness is hiding. It is not at that level of description that a proper subject of consciousness will be found. That is the systems reply, which Searle does not yet see to be a step in the right direction away from his updated version of élan vital.

Note

  1. For an intuition pump involving exactly this case - a prosthetic brain - but designed to pump contrary intuitions, see "Where Am I?" in Dennett (1978).

by John C. Eccles

Cá a lá Gra, Contra (Locarno) CH-6611, Switzerland

A dualist-interactionist perspective

Searle clearly states that the basis of his critical evaluation of AI is dependent on two propositions. The first is: "Intentionality in human beings (and animals) is a product of causal features of the brain." He supports this proposition by an unargued statement that it "is an empirical fact about the actual causal relations between mental processes and brains. It says simply that certain brain processes are sufficient for intentionality" (my italics).
This is a dogma of the psychoneural identity theory, which is one variety of the materialist theories of the mind. There is no mention of the alternative hypothesis of dualist interactionism that Popper and I published some time ago (1977) and that I have further developed more recently (Eccles 1978; 1979). According to that hypothesis intentionality is a property of the self-conscious mind (World 2 of Popper), the brain being used as an instrument in the realization of intentions. I refer to Fig. E 7-2 of Popper and Eccles (1977), where intentions appear in the box (inner senses) of World 2, with arrows indicating the flow of information by which intentions in the mind cause changes in the liaison brain and so eventually in voluntary movements.
I have no difficulty with proposition 2, but I would suggest that 3, 4, and 5 be rewritten with "mind" substituted for "brain." Again the statement: "only a machine could think, and only very special kinds of machines ... with internal causal powers equivalent to those of brains" is the identity theory dogma. I say dogma because it is unargued and without empirical support. The identity theory is very weak empirically, being merely a theory of promise.
So long as Searle speaks about human performance without regarding intentionality as a property of the brain, I can appreciate that he has produced telling arguments against the strong AI theory. The story of the hamburger with the Gedankenexperiment of the Chinese symbols is related to Premack's attempts to teach the chimpanzee Sarah a primitive level of human language as expressed in symbols [see Premack: "Does the Chimpanzee Have a Theory of Mind?" BBS 1(4) 1978]. The criticism of Lenneberg (1975) was that, by conditioning, Sarah had learnt a symbol game, using symbols instrumentally, but had no idea that it was related to human language. He trained high school students with the procedures described by Premack, closely replicating Premack's study. The human subjects were quickly able to obtain considerably lower error scores than those reported for the chimpanzee. However, they were unable to translate correctly a single one of their completed sentences into English. In fact, they did not understand that there was any correspondence between the plastic symbols and language; instead they were under the impression that their task was to solve puzzles.
I think this simple experiment indicates a fatal flaw in all the AI work. No matter how complex the performance instantiated by the computer, it can be no more than a triumph for the computer designer in simulation. The Turing machine is a magician's dream - or nightmare!
It was surprising that after the detailed brain-mind statements of the abstract, I did not find the word "brain" in Searle's text through the whole of his opening three pages of argument, where he uses mind, mental states, human understanding, and cognitive states exactly as would be done in a text on dualist interactionism. Not until "the robot reply" does brain appear as "computer 'brain.'" However, from "the brain simulator reply" onward, in the statements and criticisms of the various other replies, brain, neuron firings, synapses, and the like are profusely used in a rather naive way. For example, "imagine the computer programmed with all the synapses of a human brain" is more than I can do by many orders of magnitude! So "the combination reply" reads like fantasy - and to no purpose!
I agree that it is a mistake to confuse simulation with duplication. But I do not object to the idea that the distinction between the program and its realization in the hardware seems to be parallel to the distinction between the mental operations and the level of brain operations. However, Searle believes that the equation "mind is to brain as program is to hardware" breaks down at several points. I would prefer to substitute programmer for program, because as a dualist interactionist I accept the analogy that as conscious beings we function as programmers of our brains. In particular I regret Searle's third argument: "Mental states and events are literally a product of the operation of the brain, but the program is not in that way a product of the computer," and so later we are told "whatever else intentionality is, it is a biological phenomenon, and it is as likely to be causally dependent on the specific biochemistry of its origins as lactation, photosynthesis, or any other biological phenomenon." I have the feeling of being transported back to the nineteenth century, where, as derisorily recorded by Sherrington (1950): "the oracular Professor Tyndall, presiding over the British Association at Belfast, told his audience that as the bile is a secretion of the liver, so the mind is a secretion of the brain."
In summary, my criticisms arise from fundamental differences in respect of beliefs in relation to the brain-mind problem. So long as Searle is referring to human intentions and performances without reference to the brain-mind problem, I can appreciate the criticisms that he marshals against the AI beliefs that an appropriately programmed computer is a mind literally understanding and having other cognitive states. Most of Searle's criticisms are acceptable for dualist interactionism. It is high time that strong AI was discredited.

by J. A. Fodor

Department of Psychology, Massachusetts Institute of Technology, Cambridge, Mass. 02139

Searle on what only brains can do

1. Searle is certainly right that instantiating the same program that the brain does is not, in and of itself, a sufficient condition for having those propositional attitudes characteristic of the organism that has the brain. If some people in AI think that it is, they're wrong. As for the Turing test, it has all the usual difficulties with predictions of "no difference"; you can't distinguish the truth of the prediction from the insensitivity of the test instrument.¹
2. However, Searle's treatment of the "robot reply" is quite unconvincing. Given that there are the right kinds of causal linkages between the symbols that the device manipulates and things in the world - including the afferent and efferent transducers of the device - it is quite unclear that intuition rejects ascribing propositional attitudes to it. All that Searle's example shows is that the kind of causal linkage he imagines - one that is, in effect, mediated by a man sitting in the head of a robot - is, unsurprisingly, not the right kind.
3. We don't know how to say what the right kinds of causal linkage are. This, also, is unsurprising since we don't know how to answer the closely related question as to what kinds of connection between a formula and the world determine the interpretation under which the formula is employed. We don't have an answer to this question for any symbolic system; a fortiori, not for mental representations. These questions are closely related because, given the mental representation view, it is natural to assume that what makes mental states intentional is primarily that they involve relations to semantically interpreted mental objects; again, relations of the right kind.
4. It seems to me that Searle has misunderstood the main point about the treatment of intentionality in representational theories of the mind; this is not surprising since proponents of the theory - especially in AI - have been notably unlucid in expounding it. For the record, then, the main point is this: intentional properties of propositional attitudes are viewed as inherited from semantic properties of mental representations (and not from the functional role of mental representations, unless "functional role" is construed broadly enough to include symbol-world relations). In effect, what is proposed is a reduction of the problem what makes mental states intentional to the problem what bestows semantic properties on (fixes the interpretation of) a symbol. This reduction looks promising because we're going to have to answer the latter question anyhow (for example, in constructing theories of natural languages); and we need the notion of mental representation anyhow (for example, to provide appropriate domains for mental processes).
It may be worth adding that there is nothing new about this strategy. Locke, for example, thought (a) that the intentional properties of mental states are inherited from the semantic (referential) properties of mental representations; (b) that mental processes are formal (associative); and (c) that the objects from which mental states inherit their intentionality are the same ones over which mental processes are defined: namely ideas. It's my view that no serious alternative to this treatment of propositional attitudes has ever been proposed.
5. To say that a computer (or a brain) performs formal operations on symbols is not the same thing as saying that it performs operations on formal (in the sense of "uninterpreted") symbols. This equivocation occurs repeatedly in Searle's paper, and causes considerable confusion. (A short sketch of the distinction follows these numbered remarks.) If there are mental representations they must, of course, be interpreted objects; it is because they are interpreted objects that mental states are intentional. But the brain might be a computer for all that.
6. This situation - needing a notion of causal connection, but not knowing which notion of causal connection is the right one - is entirely familiar in philosophy. It is, for example, extremely plausible that "a perceives b" can be true only where there is the right kind of causal connection between a and b. And we don't know what the right kind of causal connection is here either.
Demonstrating that some kinds of causal connection are the wrong kinds would not, of course, prejudice the claim. For example, suppose we interpolated a little man between a and b, whose function it is to report to a on the presence of b. We would then have (inter alia) a sort of causal link from a to b, but we wouldn't have the sort of causal link that is required for a to perceive b. It would, of course, be a fallacy to argue from the fact that this causal linkage fails to reconstruct perception to the conclusion that no causal linkage would succeed. Searle's argument against the "robot reply" is a fallacy of precisely that sort.
7. It is entirely reasonable (indeed it must be true) that the right kind of causal relation is the kind that holds between our brains and our transducer mechanisms (on the one hand) and between our brains and distal objects (on the other). It would not begin to follow that only our brains can bear such relations to transducers and distal objects; and it would also not follow that being the same sort of thing our brain is (in any biochemical sense of "same sort") is a necessary condition for being in that relation; and it would also not follow that formal manipulations of symbols are not among the links in such causal chains. And, even if our brains are the only sorts of things that can be in that relation, the fact that they are might quite possibly be of no particular interest; that would depend on why it's true.
Searle gives no clue as to why he thinks the biochemistry is important for intentionality and, prima facie, the idea that what counts is how the organism is connected to the world seems far more plausible. After all, it's easy enough to imagine, in a rough and ready sort of way, how the fact that my thought is causally connected to a tree might bear on its being a thought about a tree. But it's hard to imagine how the fact that (to put it crudely) my thought is made out of hydrocarbons could matter, except on the unlikely hypothesis that only hydrocarbons can be causally connected to trees in the way that brains are.
8. The empirical evidence for believing that "manipulation of symbols" is involved in mental processes derives largely from the considerable success of work in linguistics, psychology, and AI that has been grounded in that assumption. Little of the relevant data concerns the simulation of behavior or the passing of Turing tests, though Searle writes as though all of it does. Searle gives no indication at all of how the facts that this work accounts for are to be explained if not on the mental-processes-are-formal-processes view. To claim that there is no argument that symbol manipulation is necessary for mental processing while systematically ignoring all the evidence that has been alleged in favor of the claim strikes me as an extremely curious strategy on Searle's part.
9. Some necessary conditions are more interesting than others. While connections to the world and symbol manipulations are both presumably necessary for intentional processes, there is no reason (so far) to believe that the former provide a theoretical domain for a science; whereas, there is considerable a posteriori reason to suppose that the latter do. If this is right, it provides some justification for AI practice, if not for AI rhetoric.
10. Talking involves performing certain formal operations on symbols: stringing words together. Yet, not everything that can string words together can talk. It does not follow from these banal observations that what we utter are uninterpreted sounds, or that we don't understand what we say, or that whoever talks talks nonsense, or that only hydrocarbons can assert - similarly, mutatis mutandis, if you substitute "thinking" for "talking."
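The equivocation flagged in point 5 can be displayed in a few lines. In the sketch below (an illustration only, not Fodor's), the operation is formal in the first sense - it consults only the shapes of its arguments - while the question whether the symbols themselves are interpreted is settled entirely outside it:

```python
# A formal operation on symbols: it looks only at the shapes of its
# arguments, never at what (if anything) they are taken to mean.

def conjoin(p: str, q: str) -> str:
    return f"({p} & {q})"

# Applied to interpreted symbols (we, not the machine, supply the
# interpretation) and to uninterpreted ones, the operation is the same:
print(conjoin("snow is white", "grass is green"))
print(conjoin("XQ7", "ZV2"))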

Notes

  1. I assume, for simplicity, that there is only one program that the brain instantiates (which, of course, there isn't). Notice, by the way, that even passing the Turing test requires doing more than just manipulating symbols. A device that can't run a typewriter can't play the game.
2. For example, it might be that, in point of physical fact, only things that have the same simultaneous values of weight, density, and shade of gray that brains have can do the things that brains can. This would be surprising, but it's hard to see why a psychologist should care much. Not even if it turned out - still in point of physical fact - that brains are the only things that can have that weight, density, and color. If that's dualism, I imagine we can live with it.

by John Haugeland

Center for Advanced Study in the Behavioral Sciences, Stanford, Calif. 94305

Programs, causal powers, and intentionality

Searle is in a bind. He denies that any Turing test for intelligence is adequate - that is, that behaving intelligently is a sufficient condition for being intelligent. But he dare not deny that creatures physiologically very different from people might be intelligent nonetheless - smart green saucer pilots, say. So he needs an intermediate criterion: not so specific to us as to rule out the aliens, yet not so dissociated from specifics as to admit any old object with the right behavior. His suggestion is that only objects (made of stuff) with "the right causal powers" can have intentionality, and hence, only such objects can genuinely understand anything or be intelligent. This suggestion, however, is incompatible with the main argument of his paper.
Ostensibly, that argument is against the claim that working according to a certain program can ever be sufficient for understanding anything - no matter how cleverly the program is contrived so as to make the relevant object (computer, robot, or whatever) behave as if it understood. The crucial move is replacing the central processor (c.p.u.) with a superfast person - whom we might as well call "Searle's demon." And Searle argues that an English-speaking demon could perfectly well follow a program for simulating a Chinese speaker, without itself understanding a word of Chinese.
The trouble is that the same strategy will work as well against any specification of "the right causal powers." Instead of manipulating formal tokens according to the specifications of some computer program, the demon will manipulate physical states or variables according to the specification of the "right" causal interactions. Just to be concrete, imagine that the right ones are those powers that our neuron tips have to titillate one another with neurotransmitters. The green aliens can be intelligent, even though they're based on silicon chemistry, because their (silicon) neurons have the same power of intertitillation. Now imagine covering each of the neurons of a Chinese criminal with a thin coating, which has no effect, except that it is impervious to neurotransmitters. And imagine further that Searle's demon can see the problem, and comes to the rescue; he peers through the coating at each neural tip, determines which transmitter (if any) would have been emitted, and then massages the adjacent tips in a way that has the same effect as if they had received that transmitter. Basically, instead of replacing the c.p.u., the demon is replacing the neurotransmitters.
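The shape of the scenario can be put schematically. In the sketch below (an illustration only; the two-transmitter model and all numbers are invented), the "normal" step and the "demon" step produce exactly the same downstream effects, which is what makes the victim's behavior unchanged by construction:

```python
# A toy schema for Haugeland's demon. Whether a transmitter molecule
# actually crosses the synapse (normal) or the demon applies the same
# effect by hand (coated), downstream behavior is identical.

TRANSMITTER_EFFECT = {"glutamate": +1.0, "GABA": -1.0}   # invented toy values

class Neuron:
    def __init__(self, transmitter: str, threshold: float = 1.0):
        self.transmitter = transmitter
        self.threshold = threshold
        self.potential = 0.0
        self.targets: list["Neuron"] = []

def normal_step(n: Neuron) -> None:
    """Normal channel: the released transmitter carries the effect."""
    if n.potential >= n.threshold:
        released = n.transmitter
        for t in n.targets:
            t.potential += TRANSMITTER_EFFECT[released]

def demon_step(n: Neuron) -> None:
    """Coated neuron: no molecule crosses. The demon determines which
    transmitter *would* have been emitted and massages the adjacent
    tips to the very same effect."""
    if n.potential >= n.threshold:
        would_emit = n.transmitter
        for t in n.targets:
            t.potential += TRANSMITTER_EFFECT[would_emit]

a, b = Neuron("glutamate"), Neuron("GABA")
a.targets, a.potential = [b], 1.0
demon_step(a)          # normal_step(a) would leave b in the same state
print(b.potential)     # 1.0 either way: same causal powers, different bearer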
By hypothesis, the victim's behavior is unchanged; in particular, she still acts as if she understood Chinese. Now, however, none of her neurons has the right causal powers - the demon has them, and he still understands only English. Therefore, having the right causal powers (even while embedded in a system such that the exercise of these powers leads to "intelligent" behavior) cannot be sufficient for understanding. Needless to say, a corresponding variation will work, whatever the relevant causal powers are.
None of this should come as a surprise. A computer program just is a specification of the exercise of certain causal powers: the powers to manipulate various formal tokens (physical objects or states of some sort) in certain specified ways, depending on the presence of certain other such tokens. Of course, it is a particular way of specifying causal exercises of a particular sort - that's what gives the "computational paradigm" its distinctive character. But Searle makes no use of this particularity; his argument depends only on the fact that causal powers can be specified independently of whatever it is that has the power. This is precisely what makes it possible to interpose the demon, in both the token-interaction (program) and neuron-interaction cases.
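That gloss on "program" can be written down directly. The sketch below (an illustration only; the tokens and rules are arbitrary) manipulates formal tokens in specified ways, depending on the presence of other such tokens, and nothing about it fixes what, if anything, the tokens mean:

```python
# A minimal token-manipulation machine in Haugeland's sense: rules that
# produce tokens depending on which other tokens are present. The rules
# are arbitrary, and the tokens mean nothing to the machine itself.

RULES = [
    ({"A", "B"}, "C"),   # if tokens A and B are both present, produce C
    ({"C"}, "D"),        # if token C is present, produce D
]

def run(tokens: set[str]) -> set[str]:
    """Fire rules until no rule adds a new token."""
    changed = True
    while changed:
        changed = False
        for required, produced in RULES:
            if required <= tokens and produced not in tokens:
                tokens.add(produced)
                changed = True
    return tokens

# The same causal powers could be realized in voltages, beads, or marks
# on paper; the program specifies only the token interactions.
print(sorted(run({"A", "B"})))   # ['A', 'B', 'C', 'D']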

There is no escape in urging that this is a "dualistic" view of causal powers, not intrinsically connected with "the actual properties" of physical objects. To speak of causal powers in any way that allows for generalization (to green aliens, for example) is ipso facto to abstract from the particulars of any given "realization." The point is independent of the example - it works just as well for photosynthesis. Thus, flesh-colored plantlike organisms on the alien planet might photosynthesize (I take it, in a full and literal sense) so long as they contain some chemical (not necessarily chlorophyll) that absorbs light and uses the energy to make sugar and free oxygen out of carbon dioxide (or silicon dioxide?) and water. This is what it means to specify photosynthesis as a causal power, rather than just a property that is, by definition, idiosyncratic to chlorophyll. But now, of course, the demon can enter, replacing both chlorophyll and its alien substitute: he devours photons, and thus energized, makes sugar from CO₂ and H₂O. It seems to me that the demon is photosynthesizing.
Let's set aside the demon argument, however. Searle also suggests that "there is no reason to suppose" that understanding (or intentionality) "has anything to do with" computer programs. This too, I think, rests on his failure to recognize that specifying a program is (in a distinctive way) specifying a range of causal powers and interactions.
The central issue is what differentiates original intentionality from derivative intentionality. The former is intentionality that a thing (system, state, process) has "in its own right"; the latter is intentionality that is "borrowed from" or "conferred by" something else. Thus (on standard assumptions, which I will not question here), the intentionality of conscious thought and perception is original, whereas the intentionality (meaning) of linguistic tokens is merely conferred upon them by language users - that is, words don't have any meaning in and of themselves, but only in virtue of our giving them some. These are paradigm cases; many other cases will fall clearly on one side or the other, or be questionable, or perhaps even marginal. No one denies that if AI systems don't have original intentionality, then they at least have derivative intentionality, in a nontrivial sense - because they have nontrivial interpretations. What Searle objects to is the thesis, held by many, that good-enough AI systems have (or will eventually have) original intentionality.
Thought tokens, such as articulate beliefs and desires, and linguistic tokens, such as the expressions of articulate beliefs and desires, seem to have a lot in common - as pointed out, for example, by Searle (1979c). In particular, except for the original/derivative distinction, they have (or at least appear to have) closely parallel semantic structures and variations. There must be some other principled distinction between them, then, in virtue of which the former can be originally intentional, but the latter only derivatively so. A conspicuous candidate for this distinction is that thoughts are semantically active, whereas sentence tokens, written out, say, on a page, are semantically inert. Thoughts are constantly interacting with one another and the world, in ways that are semantically appropriate to their intentional content. The causal interactions of written sentence tokens, on the other hand, do not consistently reflect their content (except when they interact with people).
Thoughts are embodied in a "system" that provides "normal channels" for them to interact with the world, and such that these normal interactions tend to maximize the "fit" between them and the world; that is, via perception, beliefs tend toward the truth; and, via action, the world tends toward what is desired. And there are channels of interaction among thoughts (various kinds of inference) via which the set of them tends to become more coherent, and to contain more consequences of its members. Naturally, other effects introduce aberrations and "noise" into the system; but the normal channels tend to predominate in the long run. There are no comparable channels of interaction for written tokens. In fact (according to this same standard view), the only semantically sensitive interactions that written tokens ever have are with thoughts; insofar as they tend to express truths, it is because they express beliefs, and insofar as they tend to bring about their own satisfaction conditions, it is because they tend to bring about desires. Thus, the only semantically significant interactions that written tokens have with the world are via thoughts; and this, the suggestion goes, is why their intentionality is derivative.
The interactions that thoughts have among themselves (within a single "system") are particularly important, for it is in virtue of these that thought can be subtle and indirect, relative to its interactions with the world - that is, not easily fooled or thwarted. Thus, we tend to consider more than the immediately present evidence in making judgments, and more than the immediately present options in making plans. We weigh desiderata, seek further information, try things to see if they'll work, formulate general maxims and laws, estimate results and costs, go to the library, cooperate, manipulate, scheme, test, and reflect on what we're doing. All of these either are or involve a lot of thought-thought interaction, and tend, in the long run, to broaden and improve the "fit" between thought and world. And they are typical as manifestations both of intelligence and of independence.
I take it for granted that all of the interactions mentioned are, in some sense, causal - hence, that it is among the system's "causal powers" that it can have (instantiate, realize, produce) thoughts that interact with the world and each other in these ways. It is hard to tell whether these are the sorts of causal powers that Searle has in mind, both because he doesn't say, and because they don't seem terribly similar to photosynthesis and lactation. But, in any case, they strike me as strong candidates for the kinds of powers that would distinguish systems with intentionality - that is, original intentionality - from those without. The reason is that these are the only powers that consistently reflect the distinctively intentional character of the interactors: namely, their "content" or "meaning" (except, so to speak, passively, as in the case of written tokens being read). That is, the power to have states that are semantically active is the "right" causal power for intentionality.
It is this plausible claim that underlies the thesis that (sufficiently developed) AI systems could actually be intelligent, and have original intentionality. For a case can surely be made that their "representations" are semantically active (or, at least, that they would be if the system were built into a robot). Remember, we are conceding them at least derivative intentionality, so the states in question do have a content, relative to which we can gauge the "semantic appropriateness" of their causal interactions. And the central discovery of all computer technology is that devices can be contrived such that, relative to a certain interpretation, certain of their states will always interact (causally) in semantically appropriate ways, so long as the devices perform as designed electromechanically - that is, these states can have "normal channels" of interaction (with each other and with the world) more or less comparable to those that underlie the semantic activity of thoughts. This point can hardly be denied, so long as it is made in terms of the derivative intentionality of computing systems; but what it seems to add to the archetypical (and "inert") derivative intentionality of, say, written text is, precisely, semantic activity. So, if (sufficiently rich) semantic activity is what distinguishes original from derivative intentionality (in other words, it's the "right" causal power), then it seems that (sufficiently rich) computing systems can have original intentionality.
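The "central discovery" can be illustrated with the simplest possible case, a sketch of my own: a ripple-carry adder whose purely formal bit manipulations, read under the interpretation "bit lists denote integers," always interact in semantically appropriate ways, so long as the device performs as designed.

```python
def interpret(bits):
    """The interpretation imposed from outside: read a bit list
    (little-endian) as a nonnegative integer."""
    return sum(b << i for i, b in enumerate(bits))

def machine_add(a_bits, b_bits):
    """Purely formal token manipulation: ripple-carry over 0s and 1s,
    with no reference to what the bits are taken to mean."""
    out, carry = [], 0
    for a, b in zip(a_bits, b_bits):
        s = a + b + carry
        out.append(s & 1)
        carry = s >> 1
    out.append(carry)
    return out

a, b = [1, 0, 1, 0], [1, 1, 0, 0]  # denote 5 and 3 under the interpretation
# Relative to that interpretation, the state transitions are always
# semantically appropriate: the formal operation tracks addition.
assert interpret(machine_add(a, b)) == interpret(a) + interpret(b)
```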
Now, like Searle, I am inclined to dispute this conclusion; but for entirely different reasons. I don't believe there is any conceptual confusion in supposing that the right causal powers for original intentionality are the ones that would be captured by specifying a program (that is, a virtual machine). Hence, I don't think the above plausibility argument can be dismissed out of hand ("no reason to suppose," and so on); nor can I imagine being convinced that, no matter how good AI got, it would still be "weak" - that is, would not have created a "real" intelligence - because it still proceeded by specifying programs. It seems to me that the interesting question is much more nitty-gritty empirical than that: given that programs might be the right way to express the relevant causal structure, are they in fact so? It is to this question that I expect the answer is no. In other words, I don't much care about Searle's demon working through a program for perfect simulation of a native Chinese speaker - not because there's no such demon, but because there's no such program. Or rather, whether there is such a program, and if not, why not, are, in my view, the important questions.

by Douglas R. Hofstadter

Computer Science Department, Indiana University, Bloomington, Ind. 47405

Reductionism and religion

This religious diatribe against AI, masquerading as a serious scientific argument, is one of the wrongest, most infuriating articles I have ever read in my life. It is matched in its power to annoy only by the famous article "Minds, Machines, and Gödel" by J. R. Lucas (1961).
Searle's trouble is one that I can easily identify with. Like me, he has deep difficulty in seeing how mind, soul, "I," can come out of brain, cells, atoms. To show his puzzlement, he gives some beautiful paraphrases of this mystery. One of my favorites is the water-pipe simulation of a brain. It gets straight to the core of the mind-body problem. The strange thing is that Searle simply dismisses any possibility of such a system's being conscious with a hand wave of "absurd." (I actually think he radically misrepresents the complexity of such a water-pipe system both to readers and in his own mind, but that is a somewhat separable issue.)
The fact is, we have to deal with a reality of nature - and realities of nature sometimes are absurd. Who would have believed that light consists of spinning massless wave particles obeying an uncertainty principle while traveling through a curved four-dimensional universe? The fact that intelligence, understanding, mind, consciousness, soul all do spring from an unlikely source - an enormously tangled web of cell bodies, axons, synapses, and dendrites - is absurd, and yet undeniable. How this can create an "I" is hard to understand, but once we accept that fundamental, strange, disorienting fact, then it should seem no more weird to accept a water-pipe "I."
Searle's way of dealing with this reality of nature is to claim he accepts it - but then he will not accept its consequences. The main consequence is that "intentionality" - his name for soul - is an outcome of formal processes. I admit that I have slipped one extra premise in here: that physical processes are formal, that is, rule governed. To put it another way, the extra premise is that there is no intentionality at the level of particles. (Perhaps I have misunderstood Searle. He may be a mystic and claim that there is intentionality at that level. But then how does one explain why it seems to manifest itself in consciousness only when the particles are arranged in certain special configurations - brains - but not, say, in water-pipe arrangements of any sort and size?) The conjunction of these two beliefs seems to me to compel one to admit the possibility of all the hopes of artificial intelligence, despite the fact that it will always baffle us to think of ourselves as, at bottom, formal systems.
To people who have never programmed, the distinction between levels of a computer system - programs that run "on" other programs or on hardware - is an elusive one. I believe Searle doesn't really understand this subtle idea, and thus blurs many distinctions while creating other artificial ones to take advantage of human emotional responses that are evoked in the process of imagining unfamiliar ideas.
He begins with what sounds like a relatively innocent situation: a man in a room with a set of English instructions ("bits of paper") for manipulating some Chinese symbols. At first, you think the man is answering questions (although unbeknown to him) about restaurants, using Schankian scripts. Then Searle casually slips in the idea that this program can pass the Turing test! This is an incredible jump in complexity - perhaps a millionfold increase if not more. Searle seems not to be aware of how radically it changes the whole picture to have that "little" assumption creep in. But even the initial situation, which sounds plausible enough, is in fact highly unrealistic.
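The level distinction Hofstadter appeals to can be shown in miniature. In the sketch below (my example, not his), a tiny "machine language" program runs on an interpreter, which itself runs on Python, which runs on hardware; each level is a genuine program running "on" the one beneath, yet invisible from the level above.

```python
# A two-level system in miniature: PROGRAM is one program; run() is
# another program that PROGRAM runs "on"; Python and the hardware
# are further levels below.

PROGRAM = [("push", 2), ("push", 3), ("add", None), ("print", None)]

def run(program):
    """A tiny stack-machine interpreter."""
    stack = []
    for op, arg in program:
        if op == "push":
            stack.append(arg)
        elif op == "add":
            stack.append(stack.pop() + stack.pop())
        elif op == "print":
            print(stack[-1])

run(PROGRAM)  # prints 5; nothing at the PROGRAM level mentions the stack
```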
Imagine a human being, hand simulating a complex AI program, such as a script-based "understanding" program. To digest a full story, to go through the scripts and to produce the response, would probably take a hard eight-hour day for a human being. Actually, of course, this hand-simulated program is supposed to be passing the Turing test, not just answering a few stereotyped questions about restaurants. So let's jump up to a week per question, since the program would be so complex. (We are being unbelievably generous to Searle.)
Now Searle asks you to identify with this poor slave of a human (he doesn't actually ask you to identify with him - he merely knows you will project yourself onto this person, and vicariously experience the indescribably boring nightmare of that hand simulation). He knows your reaction will be: "This is not understanding the story - this is some sort of formal process!" But remember: any time some phenomenon is looked at on a scale a million times different from its familiar scale, it doesn't seem the same! When I imagine myself feeling my brain running a hundred times too slowly (of course that is paradoxical, but it is what Searle wants me to do), then of course it is agonizing, and presumably I would not even recognize the feelings at all. Throw in yet another factor of a thousand and one cannot even imagine what it would feel like.
Now this is what Searle is doing. He is inviting you to identify with a nonhuman which he lightly passes off as a human, and by doing so he asks you to participate in a great fallacy. Over and over again he uses this ploy, this emotional trickery, to get you to agree with him that surely, an intricate system of water pipes can't think! He forgets to tell you that a water-pipe simulation of the brain would take, say, a few trillion water pipes with a few trillion workers standing at faucets turning them when needed, and he forgets to tell you that to answer a question it would take a year or two. He forgets to tell you, because if you remembered that, and then on your own, imagined taking a movie and speeding it up a million times, and imagined changing your level of description of the thing from the faucet level to the pipe-cluster level, and on through a series of ever higher levels until you reached some sort of eventual symbolic level, why then you might say, "Hey, when I imagine what this entire system would be like when perceived at this time scale and level of description, I can see how it might be conscious after all!"
Searle is representative of a class of people who have an instinctive horror of any "explaining away" of the soul. I don't know why certain people have this horror while others, like me, find in reductionism the ultimate religion. Perhaps my lifelong training in physics and science in general has given me a deep awe at seeing how the most substantial and familiar of objects or experiences fades away, as one approaches the infinitesimal scale, into an eerily insubstantial ether, a myriad of ephemeral swirling vortices of nearly incomprehensible mathematical activity. This in me evokes a kind of cosmic awe. To me, reductionism does not "explain away"; rather, it adds mystery. I know that this journal is not the place for philosophical and religious commentary, yet it seems to me that what Searle and I have is, at the deepest level, a religious disagreement, and I doubt that anything I say could ever change his mind. He insists on things he calls "causal intentional properties" which seem to vanish as soon as you analyze them, find rules for them, or simulate them. But what those things are, other than epiphenomena, or "innocently emergent" qualities, I don't know.

by B. Libet

Department of Physiology, University of California, San Francisco, Calif. 94143

Mental phenomena and behavior

Searle states that the main argument of his paper is directed at establishing his second proposition, that "instantiating a computer program is never by itself a sufficient condition of intentionality" (that is, of a mental state that includes beliefs, desires, and intentions). He accomplishes this with a Gedankenexperiment to show that even "a human agent could instantiate the program and still not have the relevant intentionality"; that is, Searle shows, in a masterful and convincing manner, that the behavior of the appropriately programmed computer could transpire in the absence of a cognitive mental state. I believe it is also possible to establish the proposition by means of an argument based on simple formal logic.
We start with the knowledge that we are dealing with two different systems: system C is the computer, with its appropriate program; system H is the human being, particularly his brain. Even if system C could be arranged to behave and even to look like system H, in a manner that might make them indistinguishable to an external observer, system C must be at least internally different from H. If C and H were identical, they would both be human beings and there would be no thesis to discuss.
Let us accept the proposal that, on an input-output basis, system C and system H could be made to behave alike, properties that we may group together under category B. The possession of the relevant mental states (including understanding, beliefs, desires, intentions, and the like) may be called property M. We know that system H has property M. Remembering that systems C and H are known to be different, it is an error in logic to argue that because systems C and H both have property B, they must also both have property M.
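Libet's schema can be stated compactly. The following is a sketch of the fallacy in symbols (my formalization, using C for the computer, H for the human, B for the shared behavioral properties, and M for the mental states):

```latex
% B(x): x has the input-output behavioral properties (category B)
% M(x): x has the relevant mental states (property M)
% C: the programmed computer;  H: the human, with C \neq H
\[
  B(C) \;\wedge\; B(H) \;\wedge\; M(H) \;\not\Rightarrow\; M(C)
\]
% Inferring M(C) here would require the further premiss that whatever
% has B has M, which is exactly what is at issue.
```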
The foregoing leads to a more general proposition - that no behavior of a computer, regardless of how successful it may be in simulating human behavior, is ever by itself sufficient evidence of any mental state. Indeed, Searle also appears to argue for this more general case when, later in the discussion, he notes: (a) To get computers to feel pain or fall in love would be neither harder nor easier than to get them to have cognition. (b) "For simulation, all you need is the right input and output and a program in the middle that transforms the former into the latter." And, (c) "to confuse simulation with duplication is the same mistake, whether it is pain, love, cognition." On the other hand, Searle seems not to maintain this general proposition with consistency. In his discussion of "IV. The combination reply" (to his analytical example or thought experiment), Searle states: "If we could build a robot whose behavior was indistinguishable over a large range from human behavior, we would ... find it rational and indeed irresistible to ... attribute intentionality to it, pending some reason not to." On the basis of my argument, one would not have to know that the robot had a formal program (or whatever) that accounts for its behavior, in order not to have to attribute intentionality to it. All we need to know is that the robot's internal control apparatus is not made in the same way and out of the same stuff as is the human brain, to reject the thesis that the robot must possess the mental states of intentionality, and so on.
Now, it is true that neither my nor Searle's argument excludes the possibility that an appropriately programmed computer could also have mental states (property M); the argument merely states it is not warranted to propose that the robot must have mental states M. However, Searle goes on to contribute a valuable analysis of why so many people have believed that computer programs do impart a kind of mental process or state to the computer. Searle notes that, among other factors, a residual behaviorism or operationalism underlies the willingness to accept input-output patterns as sufficient for postulating human mental states in appropriately programmed computers. I would add that there are still many psychologists and perhaps philosophers who are similarly burdened with residual behaviorism or operationalism even when dealing with criteria for existence of a conscious subjective experience in human subjects (see Libet 1973; 1979).

by William G. Lycan

Department of Philosophy, Ohio State University, Columbus, Ohio 43210

The functionalist reply (Ohio State)

Most versions of philosophical behaviorism have had the consequence that if an organism or device D passes the Turing test, in the sense of systematically manifesting all the same outward behavioral dispositions that a normal human does, then D has all the same sorts of contentful or intentional states that humans do. In light of fairly obvious counterexamples to this thesis, materialist philosophers of mind have by and large rejected behaviorism in favor of a more species-chauvinistic view: D's manifesting all the same sorts of behavioral dispositions we do does not alone suffice for D's having intentional states; it is necessary in addition that D produce behavior from stimuli in roughly the way that we do - that D's inner functional organization be not unlike ours and that D process the stimulus input by analogous inner procedures. On this "functionalist" theory, to be in a mental state of such and such a kind is to incorporate a functional component or system of components of type so and so which is in a certain distinctive state of its own. "Functional components" are individuated according to the roles they play within their owners' overall functional organization.
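A minimal sketch (my own, with invented states and stimuli) of what "functionally isomorphic" means here: two devices with quite different innards realize the same transition structure, and it is that structure, not the substrate, by which functionalism individuates mental states.

```python
# One functional specification, two realizations. Functionalism counts
# the two devices as being in the same "state" because the states play
# the same roles in the same transition structure.

TRANSITIONS = {("calm", "insult"): "agitated",
               ("agitated", "apology"): "calm"}

class TableRealization:
    def __init__(self): self.state = "calm"
    def stimulate(self, stimulus):
        self.state = TRANSITIONS.get((self.state, stimulus), self.state)

class BranchRealization:
    def __init__(self): self.state = "calm"
    def stimulate(self, stimulus):
        if self.state == "calm" and stimulus == "insult":
            self.state = "agitated"
        elif self.state == "agitated" and stimulus == "apology":
            self.state = "calm"

for device in (TableRealization(), BranchRealization()):
    device.stimulate("insult")
    assert device.state == "agitated"  # same role structure, different innards
```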
Searle offers a number of cases of entities that manifest the behavioral dispositions we associate with intentional states but that rather plainly do not have any such states. I accept his intuitive judgments about most of these cases. Searle plus rule book plus pencil and paper presumably does not understand Chinese, nor does Searle with memorized rule book or Searle with TV camera or the robot with Searle inside. Neither my stomach nor Searle's liver nor a thermostat nor a light switch has beliefs and desires. But none of these cases is a counterexample to the functionalist hypothesis. The systems in the former group are pretty obviously not functionally isomorphic at the relevant level to human beings who do understand Chinese: a native Chinese carrying on a conversation is implementing procedures of his own, not those procedures that would occur in a mockup containing the cynical, English-speaking, American-acculturated homuncular Searle. Therefore they are not counterexamples to a functionalist theory of language understanding, and accordingly they leave it open that a computer that was functionally isomorphic to a real Chinese speaker would indeed understand Chinese also. Stomachs, thermostats, and the like, because of their brutish simplicity, are even more clearly dissimilar to humans. (The same presumably is true of Schank's existing language-understanding programs.)
I have hopes for a sophisticated version of the "brain simulator" (or the "combination" machine) that Searle illustrates with his plumbing example. Imagine a hydraulic system of this type that does replicate, perhaps not the precise neuroanatomy of a Chinese speaker, but all that is relevant of the Chinese speaker's higher functional organization; individual water pipes are grouped into organ systems precisely analogous to those found in the speaker's brain, and the device processes linguistic input in just the way that the speaker does. (It does not merely simulate or describe this processing.) Moreover, the system is automatic and does all this without the intervention of Searle or any other deus in machina. Under these conditions and given a suitable social context, I think it would be plausible to accept the functionalist consequence that the hydraulic system does understand Chinese.
Searle's paper suggests two objections to this claim. First: "Where is the understanding in this system?" All Searle sees is pipes and valves and flowing water. Reply: Looking around the fine detail of the system's hardware, you are too small to see that the system is understanding Chinese sentences. If you were a tiny, cell-sized observer inside a real Chinese speaker's brain, all you would see would be neurons stupidly, mechanically transmitting electrical charge, and in the same tone you would ask, "Where is the understanding in this system?" But you would be wrong in concluding that the system you were observing did not understand Chinese; in like manner you may well be wrong about the hydraulic device.
Second, even if a computer were to replicate all of the Chinese speaker's relevant functional organization, all the computer is really doing is performing computational operations on formally specified elements. A purely formally or syntactically characterized element has no meaning or content in itself, obviously, and no amount of mindless syntactic manipulation of it will endow it with any. Reply: The premise is correct, and I agree it shows that no computer has or could have intentional states merely in virtue of performing syntactic operations on formally characterized elements. But that does not suffice to prove that no computer can have intentional states at all. Our brain states do not have the contents they do just in virtue of having their purely formal properties either; a brain state described "syntactically" has no meaning or content on its own. In virtue of what, then, do brain states (or mental states however construed) have the meanings they do? Recent theory advises that the content of a mental representation is not determined within its owner's head (Putnam 1975a; Fodor 1980); rather, it is determined in part by the objects in the environment that actually figure in the representation's etiology and in part by social and contextual factors of several other sorts (Stich, in preparation). Now, present-day computers live in highly artificial and stifling environments. They receive carefully and tendentiously preselected input; their software is adventitiously manipulated by uncaring programmers; and they are isolated in laboratories and offices, deprived of any normal interaction within a natural or appropriate social setting. For this reason and several others, Searle is surely right in saying that present-day computers do not really have the intentional states that we fancifully incline toward attributing to them. But nothing Searle has said impugns the thesis that if a sophisticated future computer not only replicated human functional organization but harbored its inner representations as a result of the right sort of causal history and had also been nurtured within a favorable social setting, we might correctly ascribe intentional states to it. This point may or may not afford lasting comfort to the AI community.

Notes

  1. This characterization is necessarily crude and vague. For a very useful survey of different versions of functionalism and their respective foibles, see Block (1978); I have developed and defended what I think is the most promising version of functionalism in Lycan (forthcoming).
  2. For further discussion of cases of this kind, see Block (forthcoming).
  3. A much expanded version of this reply appears in section 4 of Lycan (forthcoming).
  4. I do not understand Searle's positive suggestion as to the source of intentionality in our own brains. What "neurobiological causal properties"?
  5. As Fodor (forthcoming) remarks, SHRDLU as we interpret him is the victim of a Cartesian evil demon; the "blocks" he manipulates do not exist in reality.

by John McCarthy

Artificial Intelligence Laboratory, Stanford University, Stanford, Calif. 94305

Beliefs, machines, and theories

John Searle's refutation of the Berkeley answer that the system understands Chinese proposes that a person (call him Mr. Hyde) carry out in his head a process (call it Dr. Jekyll) for carrying out a written conversation in Chinese. Everyone will agree with Searle that Mr. Hyde does not understand Chinese, but I would contend, and I suppose his Berkeley interlocutors would also, that provided certain other conditions for understanding are met, Dr. Jekyll understands Chinese. In Robert Louis Stevenson's story, it seems assumed that Dr. Jekyll and Mr. Hyde time-share the body, while in Searle's case, one interprets a program specifying the other.
Searle's dismissal of the idea that thermostats may be ascribed belief is based on a misunderstanding. It is not a pantheistic notion that all machinery, including telephones, light switches, and calculators, believes. Belief may usefully be ascribed only to systems about which someone's knowledge can best be expressed by ascribing beliefs that satisfy axioms such as those in McCarthy (1979). Thermostats are sometimes such systems. Telling a child, "If you hold the candle under the thermostat, you will fool it into thinking the room is too hot, and it will turn off the furnace" makes proper use of the child's repertoire of intentional concepts.
Formalizing belief requires treating simple cases as well as more interesting ones. Ascribing beliefs to thermostats is analogous to including 0 and 1 in the number system even though we would not need a number system to treat the null set or sets with just one element; indeed we wouldn't even need the concept of set.
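A toy rendering (mine, not McCarthy's 1979 formalism) of why the candle story is a proper use of intentional concepts: the ascribed "belief" is just the most compact predictor of the device's behavior.

```python
# Ascribing a belief to a thermostat: "it thinks the room is too hot"
# is shorthand for a state that predicts its behavior. The candle trick
# works by manipulating exactly that state.

class Thermostat:
    def __init__(self, setpoint: float):
        self.setpoint = setpoint

    def believes_room_too_hot(self, sensed_temp: float) -> bool:
        return sensed_temp > self.setpoint  # the ascribed "belief"

    def furnace_on(self, sensed_temp: float) -> bool:
        return not self.believes_room_too_hot(sensed_temp)

t = Thermostat(setpoint=20.0)
print(t.furnace_on(sensed_temp=18.0))  # True: "thinks the room is too cold"
print(t.furnace_on(sensed_temp=35.0))  # False: the candle has "fooled" it
```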
However, a program that understands should not be regarded as a theory of understanding any more than a man who understands is a theory. A program can only be an illustration of a theory, and a useful theory will contain much more than an assertion that "the following program understands about restaurants." I can't decide whether this last complaint applies to Searle or just to some of the AI researchers he criticizes.

by John C. Marshall

Neuropsychology Unit, University Department of Clinical Neurology, The Radcliffe Infirmary, Oxford, England

Artificial intelligence - the real thing?

Searle would have us believe that the present-day inhabitants of respectable universities have succumbed to the Faustian dream. (Mephistopheles: "What is it then?" Wagner: "A man is in the making.") He assures us, with a straight face, that some contemporary scholars think that "an appropriately programmed computer really is a mind," that such artificial creatures "literally have cognitive states." The real thing indeed! But surely no one could believe this? I mean, if someone did, then wouldn't he want to give his worn-out IBM a decent burial and say Kaddish for it? And even if some people at Yale, Berkeley, Stanford, and so forth do instantiate these weird belief states, what conceivable scientific interest could that hold? Imagine that they were right, and that their computers really do perceive, understand, and think. All that our Golem makers have done on Searle's story is to create yet another mind. If the sole aim is to "reproduce" (Searle's term, not mine) mental phenomena, there is surely no need to buy a computer.
Frankly, I just don't care what some members of the AI community think about the ontological status of their creations. What I do care about is whether anyone can produce principled, revealing accounts of, say, the perception of tonal music (Longuet-Higgins 1979), the properties of stereo vision (Marr & Poggio 1979), and the parsing of natural language sentences (Thorne 1968). Everyone that I know who tinkers around with computers does so because he has an attractive theory of some psychological capacity and wishes to explore certain consequences of the theory algorithmically. Searle refers to such activity as "weak AI," but I would have thought that theory construction and testing was one of the stronger enterprises that a scientist could indulge in. Clearly, there must be some radical misunderstanding here.
The problem appears to lie in Searle's (or his AI informants') strange use of the term 'theory.' Thus Searle writes in his shorter abstract: "According to strong AI, appropriately programmed computers literally have cognitive states, and therefore the programs are psychological theories." Ignoring momentarily the "and therefore," which introduces a simple non sequitur, how could a program be a theory? As Moor (1978) points out, a theory is (at least) a collection of related propositions which may be true or false, whereas a program is (or was) a pile of punched cards. For all I know, maybe suitably switched-on computers do "literally" have cognitive states, but even if they did, how could that possibly license the inference that the program per se was a psychological theory? What would one make of an analogous claim applied to physics rather than psychology? "Appropriately programmed computers literally have physical states, and therefore the programs are theories of matter" doesn't sound like a valid inference to me. Moor's exposition of the distinction between program and theory is particularly clear and thus worth quoting at some length:
A program must be interpreted in order to generate a theory. In the process of interpreting, it is likely that some of the program will be discarded as irrelevant since it will be devoted to the technicalities of making the program acceptable to the computer. Moreover, the remaining parts of the program must be organized in some coherent fashion with perhaps large blocks of the computer program taken to represent specific processes. Abstracting a theory from the program is not a simple matter, for different groupings of the program can generate different theories. Therefore, to the extent that a program, understood as a model, embodies one theory, it may well embody many theories. (Moor 1978, p. 215)
Searle reports that some of his informants believe that running programs are other minds, albeit artificial ones; if that were so, would these scholars not attempt to construct theories of artificial minds, just as we do for natural ones? Considerable muddle then arises when Searle's informants ignore their own claim and use the terms 'reproduce' and 'explain' synonymously: "The project is to reproduce and explain the mental by designing programs." One can see how hopelessly confused this is by transposing the argument back from computers to people. Thus I have noticed that many of my daughter's mental states bear a marked resemblance to my own; this has arisen, no doubt, because part of my genetic plan was used to build her hardware and because I have shared in the responsibility of programming her. All well and good, but it would be straining credulity to regard my daughter as "explaining" me, as being a "theory" of me.
What one would like is an elucidation of the senses in which programs, computers and other machines do and don't figure in the explanation of behavior (Cummins 1977; Marshall 1977). It is a pity that Searle disregards such questions in order to discuss the everyday use of mental vocabulary, an enterprise best left to lexicographers. Searle writes: "The study of the mind starts with such facts as that humans have beliefs, while thermostats, telephones, and adding machines don't." Well, perhaps it does start there, but that is no reason to suppose it must finish there. How would such an "argument" fare in natural philosophy? "The study of physics starts with such facts as that tables are solid objects without holes in them, whereas Gruyere cheese. ..." Would Searle now continue that "If you get a theory that denies this point, you have produced a counterexample to the theory and the theory is wrong"? Of course a thermostat's "belief" that the temperature should be a little higher is not the same kind of thing as my "belief" that it should. It would be totally uninteresting if they were the same. Surely the theorist who compares the two must be groping towards a deeper parallel; he has seen an analogy that may illuminate certain aspects of the control and regulation of complex systems. The notion of positive and negative feedback is what makes thermostats so appealing to Alfred Wallace and Charles Darwin, to Claude Bernard and Walter Cannon, to Norbert Wiener and Kenneth Craik. Contemplation of governors and thermostats has enabled them to see beyond appearances to a level at which there are profound similarities between animals and artifacts (Marshall 1977). It is Searle, not the theoretician, who doesn't really take the enterprise seriously. According to Searle, "what we wanted to know is what distinguishes the mind from thermostats and livers." Yes, but that is not all; we also want to know at what levels of description there are striking resemblances between disparate phenomena.
In the opening paragraphs of Leviathan, Thomas Hobbes (1651, p. 8) gives clear expression to the mechanist's philosophy:
Nature, the art whereby God hath made and governs the world, is by the art of man, as in many other things, in this also imitated, that it can make an artificial animal. ... For what is the heart but a spring, and the nerves so many strings; and joints so many wheels giving motion to the whole body, such as was intended by the artificer?

What is the notion of "imitation" that Hobbes is using here? Obviously not the idea of exact imitation or copying. No one would confuse a cranial nerve with a piece of string, a heart with the mainspring of a watch, or an ankle with a wheel. There is no question of trompe l'oeil. The works of the scientist are not in that sense reproductions of nature; rather they are attempts to see behind the phenomenological world to a hidden reality. It was Galileo, of course, who articulated this paradigm most forcefully: sculpture, remarks Galileo, is "closer to nature" than painting in that the material substratum manipulated by the sculptor shares with the matter manipulated by nature herself the quality of three-dimensionality. But does this fact redound to the credit of sculpture? On the contrary, says Galileo, it greatly "diminishes its merit": "What will be so wonderful in imitating sculptress Nature by sculpture itself?" And he concludes: "The most artistic imitation is that which represents the three-dimensional by its opposite, which is the plane." (Panofsky 1954, p. 97)
Galileo summarizes his position in the following words: "The further removed the means of imitation are from the thing to be imitated, the more worthy of admiration the imitation will be" (Panofsky 1954). In a footnote to the passage, Panofsky remarks on "the basic affinity between the spirit of this sentence and Galileo's unbounded admiration for Aristarchus and Copernicus 'because they trusted reason rather than sensory experience'" (Panofsky 1954).
Now Searle is quite right in pointing out that in AI one seeks to model cognitive states and their consequences (the real thing) by a formal syntax, the interpretation of which exists only in the eye of the beholder. Precisely therein lies the beauty and significance of the enterprise - to try to provide a counterpart for each substantive distinction with a syntactic one. This is essentially to regard the study of the relationships between physical transactions and symbolic operations as an essay in cryptanalysis (Freud 1895; Cummins 1977). The interesting question then arises as to whether there is a unique mapping between the formal elements of the system and their "meanings" (Householder 1962).
Searle, however, seems to be suggesting that we abandon entirely both the Galilean and the "linguistic" mode in order merely to copy cognitions. He would apparently have us seek mind only in "neurons with axons and dendrites," although he admits, as an empirical possibility, that such objects might "produce consciousness, intentionality and all the rest of it using some other sorts of chemical principles than human beings use." But this admission gives the whole game away. How would Searle know that he had built a silicon-based mind (rather than our own carbon-based mind) except by having an appropriate abstract (that is, nonmaterial) characterization of what the two life forms hold in common? Searle finesses this problem by simply "attributing" cognitive states to himself, other people, and a variety of domestic animals: "In 'cognitive sciences' one presupposes the reality and knowability of the mental in the same way that in physical sciences one has to presuppose the reality and knowability of physical objects." But this really won't do: we are, after all, a long way from having any very convincing evidence that cats and dogs have "cognitive states" in anything like Searle's use of the term [See "Cognition and Consciousness in Nonhuman Species" BBS 1(4) 1978].
Thomas Huxley (1874, p. 156) poses the question in his paraphrase of Nicholas Malebranche's orthodox Cartesian line: "What proof is there that brutes are other than a superior race of marionettes, which eat without pleasure, cry without pain, desire nothing, know nothing, and only simulate intelligence as a bee simulates a mathematician?" Descartes' friend and correspondent, Marin Mersenne, had little doubt about the answer to this kind of question. In his discussion of the perceptual capacities of animals he forthrightly denies mentality to the beasts:
Animals have no knowledge of these sounds, but only a representation, without knowing whether what they apprehend is a sound or a color or something else; so one can say that they do not act so much as are acted upon, and that the objects make an impression upon their senses from which their action necessarily follows, as the wheels of a clock necessarily follow the weight or spring which drives them. (Mersenne 1636)
For Mersenne, then, the program inside animals is indeed an uninterpreted calculus, a syntax without a semantics [See Fodor: "Methodological Solipsism" BBS 3(1) 1980]. Searle, on the other hand, seems to believe that apes, monkeys, and dogs do "have mental states" because they "are made of similar stuff to ourselves" and have eyes, a nose, and skin. I fail to see how the datum supports the conclusion. One might have thought that some quite intricate reasoning and subtle experimentation would be required to justify the ascription of intentionality to chimpanzees (Marshall 1971; Woodruff & Premack 1979). That chimpanzees look quite like us is a rather weak fact on which to build such a momentous conclusion.
When Jacques de Vaucanson - the greatest of all AI theorists - had completed his artificial duck he showed it, in all its naked glory of wood, string, steel, and wire. However much his audience may have preferred a more cuddly creature, Vaucanson firmly resisted the temptation to clothe it:
Perhaps some Ladies, or some People, who only like the Outside of Animals, had rather have seen the whole cover'd; that is the Duck with Feathers. But besides, that I have been desir'd to make every thing visible; I wou'd not be thought to impose upon the Spectators by any conceal'd or juggling Contrivance (Fryer & Marshall 1979).
For Vaucanson, the theory that he has embodied in the model duck is the real thing.

Acknowledgment

I thank Dr. J. Loew for his comments on earlier versions of this work.

by Grover Maxwell

Center for Philosophy of Science, University of Minnesota, Minneapolis, Minn. 55455

Intentionality: Hardware, not software

It is a rare and pleasant privilege to comment on an article that surely is destined to become, almost immediately, a classic. But, alas, what comments are called for? Following BBS instructions, I'll resist the very strong temptation to explain how Searle makes exactly the right central points and supports them with exactly the right arguments; and I shall leave it to those who, for one reason or another, still disagree with his central contentions to call attention to a few possible weaknesses, perhaps even a mistake or two, in the treatment of some of his ancillary claims. What I shall try to do, is to examine, briefly - and therefore sketchily and inadequately - what seem to me to be some implications of his results for the overall mind-body problem.
Quite prudently, in view of the brevity of his paper, Searle leaves some central issues concerning mind-brain relations virtually untouched. In particular, his main thrust seems compatible with interactionism, with epiphenomenalism, and with at least some versions of the identity thesis. It does count, very heavily, against eliminative materialism, and, equally importantly, it reveals "functionalism" (or "functional materialism") as it is commonly held and interpreted (by, for example, Hilary Putnam and David Lewis) to be just another variety of eliminative materialism (protestations to the contrary notwithstanding). Searle correctly notes that functionalism of this kind (and strong , in general) is a kind of dualism. But it is not a mental-physical dualism; it is a form-content dualism, one, moreover, in which the form is the thing and content doesn't matter! [See Fodor: "Methodological Solipsism" BBS 3(1) 1980 .]
Now I must admit that in order to find these implications in Searle's results I have read into them a little more than they contain explicitly. Specifically, I have assumed that intentional states are genuinely mental in the what-is-it-like-to-be-a-bat? sense of "mental" (Nagel 1974) as well as, what I suppose is obvious, that eliminative materialism seeks to "eliminate" the genuinely mental in this sense. But it seems to me that it does not take much reading between the lines to see that Searle is sympathetic to my assumptions. For example, he does speak of "genuinely mental [systems]," and he says (in Searle 1979c) that he believes that "only beings capable of conscious states are capable of intentional states" (my italics), although he says that he does not know how to demonstrate this. (How, indeed, could anyone demonstrate such a thing? How could one demonstrate that fire burns?)
The argument that Searle gives for the conclusion that only machines can think (can have intentional states) appears to have two suppressed premisses: (1) intentional states must always be causally produced, and (2) any causal network (with a certain amount of organization and completeness, or some such condition) is a machine. I accept for the purposes of this commentary his premisses and his conclusion. Next I want to ask: what kind of hardware must a thinking machine incorporate? (By "thinking machine" I mean of course a machine that has genuinely mental thoughts: such a machine, I contend, will also have genuinely mental states or events instantiating sensations, emotions, and so on in all of their subjective, qualitative, conscious, experiential richness.) To continue this line of investigation, I want to employ an "event ontology," discarding substance metaphysics altogether. (Maxwell 1978 provides a sketch of some of the details and of the contention that contemporary physics, quite independently of philosophy of mind, leads to such an ontology.) An event is (something like) the instancing of a property or the instancing (concrete realization) of a state. A causal network, then, consists entirely of a group of events and the causal links that interconnect them. A fortiori, our "machine" will consist entirely of events and causal connections. In other words, the hardware of this machine (or of any machine, for example, a refrigerator) consists of its constituent events, and the machine consists of nothing else (except the causal linkages). Our thinking machine in the only form we know it today is always a brain (or, if you prefer, an entire human or other animal body), which, as we have explained, is just a certain causal network of events. The mind-brain identity theory in the version that I defend says that some of the events in this network are (nothing but) genuinely mental events (instances of intentional states, or of pains, or the like). Epiphenomenalism says that the mental events "dangle" from the main net (the brain) by causal connections which are always one way (from brain to dangler). (Epiphenomenalism is, I believe, obviously, though contingently, false.) Interactionism says that there are causal connections in both directions but that the mental events are somehow in a different realm from the brain events. (How come a "different realm" or whatever? Question: Is there a real difference between interactionism and identity theory in an event ontology?)
Assuming that Searle would accept the event ontology, if no more than for the sake of discussion, would he say that mental events, in general, and instances of intentional states, in particular, are parts of

the machine, or is it his position that they are just products of the machine? That is, would Searle be inclined to accept the identity thesis, or would he lean toward either epiphenomenalism or interactionism? For my money, in such a context, the identity theory seems by far the most plausible, elegant, and economical guess. To be sure, it must face serious and, as yet, completely unsolved problems, such as the "grain objection" (see, for example, Maxwell 1978), and emergence versus panpsychism (see, for example, Popper & Eccles 1977), but I believe that epiphenomenalism and interactionism face even more severe difficulties.
Before proceeding, I should emphasize that contemporary scientific knowledge not only leads us to an event ontology but also indicates the falsity of naive realism and "gently persuades" us to accept what I have (somewhat misleadingly, I fear) called "structural realism." According to this, virtually all of our knowledge of the physical world is knowledge of the structure (including space-time structure) of the causal networks that constitute it. (See, for example, Russell 1948 and Maxwell 1976.) This holds with full force for knowledge about the brain (except for a very special kind of knowledge, to be discussed soon). We are, therefore, left ignorant as to what the intrinsic (nonstructural) properties of "matter" (or what contemporary physics leaves of it) are. In particular, if only we knew a lot more neurophysiology, we would know the structure of the (immense) causal network that constitutes the brain, but we would not know its content; that is, we still wouldn't know what any of its constituent events are. Identity theory goes a step further and speculates that some of these events just are (instances of) our intentional states, our sensations, our emotions, and so on, in all of their genuinely mentalistic richness, as they are known directly "by acquaintance." This is the "very special knowledge" mentioned above, and if identity theory is true, it is knowledge of what some (probably a very small subset) of the events that constitute the brain are. In this small subset of events we know intrinsic as well as structural properties.
Let us return to one of the questions posed by Searle: "could an artifact, a man-made machine, think?" The answer he gives is, I think, the best possible one, given our present state of unbounded ignorance in the neurosciences, but I'd like to elaborate a little. Since, I have claimed above, thoughts and other (genuinely) mental events are part of the hardware of "thinking machines," such hardware must somehow be got into any such machine we build. At present we have no inkling as to how this could be done. The best bet would seem to be, as Searle indicates, to "build" a machine (out of protoplasm) with neurons, dendrites, and axons like ours, and then to hope that, from this initial hardware, mental hardware would be mysteriously generated (would "emerge"). But this "best bet" seems to me extremely implausible. However, I do not conclude that construction of a thinking machine is (even contingently, much less logically) impossible. I conclude, rather, that we must learn a lot more about physics, neurophysiology, neuropsychology, psychophysiology, and so on, not just more details - but much more about the very foundations of our theoretical knowledge in these areas, before we can even speculate with much sense about building thinking machines. (I have argued in Maxwell 1978 that the foundations of contemporary physics are in such bad shape that we should hope for truly "revolutionary" changes in physical theory, that such changes may very well aid immensely in "solving the mind-brain problems," and that speculations in neurophysiology and perhaps even psychology may very well provide helpful hints for the physicist in his renovation of, say, the foundations of space-time theory.) Be all this as it may, Searle has shown the total futility of the strong AI route to genuine artificial intelligence.

by E.W. Menzel, Jr.

Department of Psychology, State University of New York at Stony Brook, Stony Brook, N.Y. 11794

Is the pen mightier than the computer?

The area of artificial intelligence (AI) differs from that of natural intelligence in at least three respects. First, in AI one is perforce limited to the use of formalized behavioral data or "output" as a basis for making inferences about one's subjects. (The situation is no different, however, in the fields of history and archaeology.) Second, by convention, if nothing else, in AI one must ordinarily assume, until proved otherwise, that one's subject has no more mentality than a rock; whereas in the area of natural intelligence one can often get away with the opposite assumption, namely, that until proved otherwise, one's subject can be considered to be sentient. Third, in AI analysis is ordinarily limited to questions regarding the "structure" of intelligence, whereas a complete analysis of natural intelligence must also consider questions of function, development, and evolution.
In other respects, however, it seems to me that the problems of inferring mental capacities are very much the same in the two areas. And the whole purpose of the Turing test (or the many counterparts to that test which are the mainstay of comparative psychology) is to devise a clear set of rules for determining the status of subjects of any species, about whose possession of a given capacity we are uncertain. This is admittedly a game, and it cannot be freed of all possible arbitrary aspects any more than can, say, the field of law. Furthermore, unless everyone agrees to the rules of the game, there is no way to prove one's case for (or against) a given capacity with absolute and dead certainty.
As I see it, Searle quite simply refuses to play such games, at least according to the rules proposed by AI. He assigns himself the role of a judge who knows beforehand in most cases what the correct decision should be. And he does not, in my opinion, provide us with any decision rules for the remaining (and most interesting) undecided cases, other than rules of latter-day common sense (whose pitfalls and ambiguities are perhaps the major reason for devising objective tests that are based on performance rather than physical characteristics such as species, race, sex, and age).
Let me be more specific. First of all, the discussion of "the brain" and "certain brain processes" is not only vague but seems to me to displace and complicate the problems it purports to solve. In saying this I do not imply that physiological data are irrelevant; I only say that their relevance is not made clear, and the problem of deciding where the brain leaves off and nonbrain begins is not as easy as it sounds. Indeed, I doubt that many neuroanatomists would even try to draw any sharp and unalterable line that demarcates exactly where in the animal kingdom "the brain" emerges from "the central nervous system"; and I suspect that some of them would ask: Why single out the brain as crucial to mind or intentionality? Why not the central nervous system or DNA or (to become more restrictive rather than liberal) the human brain or the Caucasian brain? Precisely analogous problems would arise in trying to specify for a single species such as man precisely how much intact brain, or what parts of it, or which of the "certain brain processes," must be taken into account, and when one brain process leaves off and another one begins. Quite coincidentally, I would be curious as to what odds Searle would put on the likelihood that a neurophysiologist could distinguish between the brain processes of Searle during the course of his hypothetical experiment and the brain processes of a professor of Chinese. Also, I am curious as to what mental status he would assign to, say, an earthworm.
Second, it seems to me that, especially in the domain of psychology, there are always innumerable ways to skin a cat, and that these ways are not necessarily commensurate, especially when one is discussing two different species or cultures or eras. Thus, for example, I would be quite willing to concede that to "acquire the calculus" I would not require the intellectual power of Newton or of Leibnitz, who invented the calculus. But how would Searle propose to quantify the relative "causal powers" that are involved here, or how would he establish the relative similarity of the "effects"? The problem is especially difficult when Searle talks about subjects who have "zero understanding," for we possess no absolute scales or ratio scales in this domain, but only relativistic ones. In other words, we can assume by definition that a given subject may be taken as a criterion of "zero understanding," and assess the competence of other subjects by comparing them against this norm; but someone else is always free to invoke some other norm. Thus, for example, Searle uses himself as a standard of comparison and assumes he possesses zero understanding of Chinese. But what if I proposed that a better norm would be, say, a dog? Unless Searle's performance were no better than that of the dog, it seems to me that

the student of AI could argue that Searle's understanding must be greater than zero, and that his hypothetical experiment is therefore inconclusive; that is, the computer, which performs as he did, cannot necessarily be said to have zero understanding either.
In addition to these problems, Searle's hypothetical experiment is based on the assumption that AI would be proved "false" if it could be shown that even a single subject on a single occasion could conceivably pass a Turing test despite the fact that he possesses what may be assumed to be zero understanding. This, in my opinion, is a mistaken assumption. No student of AI would, to my knowledge, claim infallibility. His predictions would be at best probabilistic or statistical; and, even apart from problems such as cheating, some errors of classification are to be expected on the basis of "chance" alone. Turing, for example, predicted only that by the year 2000 computers will be able to fool an average interrogator on a Turing test, and be taken for a person, at least 30 times out of 100. In brief, I would agree with Searle if he had said that the position of strong AI is unprovable with dead certainty; but by his criteria no theory in empirical science is provable, and I therefore reject his claim that he has shown AI to be false.
Perhaps the central question raised by Searle's paper is, however: Where does mentality lie? Searle tells us that the intelligence of computers lies in our eyes alone. Einstein, however, used to say, "My pencil is more intelligent than I"; and this maxim seems to me to come at least as close to the truth as Searle's position. It is quite true that without a brain to guide it and interpret its output, the accomplishments of a pencil or a computer or of any of our other "tools of thought" would not be very impressive. But, speaking for myself, I confess I'd have to take the same dim view of my own accomplishments. In other words, I am quite sure that I could not even have "had" the thoughts expressed in the present commentary without the assistance of various means of externalizing and objectifying "them" and rendering them accessible not only for further examination but for their very formulation. I presume that there were some causal connections and correspondences between what is now on paper (or is it in the reader's eyes alone?) and what went on in my brain or mind; but it is an open question as to what these causal connections and correspondences were. Furthermore, it is only if one confuses present and past, and internal and external happenings with each other, and considers them a single "thing," that "thinking" or even the causal power behind thought can be allocated to a single "place" or entity. I grant that it is metaphorical if not ludicrous to give my pencil the credit or blame for the accomplishment of "the thoughts in this commentary." But it would be no less metaphorical and ludicrous - at least in science, as opposed to everyday life - to give the credit or blame to my brain as such. Whatever brain processes or mental processes were involved in the writing of this commentary, they have long since been terminated. In the not-too-distant future not even "I" as a body will exist any longer. Does this mean that the reader of the future will have no valid basis for estimating whether or not I was (or, as a literary figure, "am") any more intelligent than a rock? I am curious as to how Searle would answer this question. In particular, I would like to know whether he would ever infer from an artifact or document alone that its author had a brain or certain brain processes - and, if so, how this is different from making inferences about mentality from a subject's outputs alone.

by Marvin Minsky

Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, Mass. 02139

Decentralized minds

In this essay, Searle asserts without argument: "The study of the mind starts with such facts as that humans have beliefs, while thermostats, telephones, and adding machines don't. If you get a theory that denies this. . .the theory is false."
No. The study of mind is not the study of belief; it is the attempt to discover powerful concepts - be they old or new - that help explain why some machines or animals can do so many things others cannot. I will argue that traditional, everyday, precomputational concepts like believing and understanding are neither powerful nor robust enough for developing or discussing the subject.

In centuries past, biologists argued about machines and life much as Searle argues about programs and mind; one might have heard: "The study of biology begins with such facts as that humans have life, while locomotives and telegraphs don't. If you get a theory that denies this, the theory is false." Yet today a successful biological science is based on energetics and information processing; no notion of "alive" appears in the theory, nor do we miss it. The theory's power comes from replacing, for example, a notion of "self-reproduction" by more sophisticated notions about encoding, translation, and recombination - to explain the reproduction of sexual animals that do not, exactly, "reproduce."
Similarly in mind science, though prescientific idea germs like "believe," "know," and "mean" are useful in daily life, they seem technically too coarse to support powerful theories; we need to supplant, rather than to support and explicate them. Real as "self" or "understand" may seem to us today, they are not (like milk and sugar) objective things our theories must accept and explain; they are only first steps toward better concepts. It would be inappropriate here to put forth my own ideas about how to proceed; instead consider a fantasy in which our successors recount our present plight: "The ancient concept of 'belief' proved inadequate until replaced by a continuum in which, it turned out, stones placed near zero, and thermostats scored 0.52. The highest human score measured so far is 67.9. Because it is theoretically possible for something to be believed as intensely as 3600, we were chagrined to learn that men in fact believe so poorly. Nor, for that matter, are they very proficient (on an absolute scale) at intending. Still, they are comfortably separated from the thermostats." [Olivaw, R.D. (2063) Robotic reflections, Phenomenological Science 67:60.] A joke, of course; I doubt any such one-dimensional idea would help much. Understanding how parts of a program or mind can relate to things outside of it - or to other parts within - is complicated, not only because of the intricacies of hypothetical intentions and meanings, but because different parts of the mind do different things - both with regard to externals and to one another. This raises another issue: "In employing formulas like 'x believes that y means z,' our philosophical precursors became unwittingly entrapped in the 'single-agent fallacy' - the belief that inside each mind is a single believer (or meaner) who does the believing. It is strange how long this idea survived, even after Freud published his first clumsy portraits of our inconsistent and adversary mental constitutions. To be sure, that myth of 'self' is indispensable both for social purposes, and in each infant's struggle to make simplified models of his own mind's structure. But it has not otherwise been of much use in modern applied cognitive theory, such as our work to preserve, rearrange, and recombine those aspects of a brain's mind's parts that seem of value." [Byerly (2080) New hope for the Dead, Reader's Digest, March 13.]
Searle talks of letting "the individual internalize all of these elements of the system" and then complains that "there isn't anything in the system that isn't in him." Just as our predecessors sought "life" in the study of biology, Searle still seeks "him" in the study of mind, and holds strong AI to be impotent to deal with the phenomenology of understanding. Because this is so subjective a topic, I feel it not inappropriate to introduce some phenomenology of my own. While reading about Searle's hypothetical person who incorporates into his mind - "without understanding" - the hypothetical "squiggle squoggle" process that appears to understand Chinese, I found my own experience to have some of the quality of a double exposure: "The text makes sense to some parts of my mind but, to other parts of my mind, it reads much as though it were itself written in Chinese. I understand its syntax, can parse the sentences, and can follow the technical deductions. But the terms and assumptions themselves - what the words like 'intend' and 'mean' intend and mean - escape me. They seem suspiciously like Searle's 'formally specified symbols' because their principal meanings engage only certain older parts of my mind that are not in harmonious, agreeable contact with just those newer parts that seem better able to deal with such issues (precisely because they know how to exploit the new concepts of strong AI)."
Searle considers such internalizations - ones not fully integrated in the whole mind - to be counterexamples, or reductiones ad absurdum of some sort, setting programs somehow apart from minds. I see them

as illustrating the usual condition of normal minds, in which different fragments of structure interpret - and misinterpret - the fragments of activity they "see" in the others. There is absolutely no reason why programs, too, cannot contain such conflicting elements. To be sure, the excessive clarity of Searle's example saps its strength; the man's Chinese has no contact at all with his other knowledge - while even the parts of today's computer programs are scarcely ever jammed together in so simple a manner.
In the case of a mind so split into two parts that one merely executes some causal housekeeping for the other, I should suppose that each part - the Chinese rule computer and its host - would then have its own separate phenomenologies - perhaps along different time scales. No wonder, then, that the host can't "understand" Chinese very fluently - here I agree with Searle. But (for language, if not for walking or breathing) surely the most essential nuances of the experience of intending and understanding emerge, not from naked data bases of assertions and truth values, but from the interactions - the consonances and conflicts among different reactions within various partial selves and self-images.
What has this to do with Searle's argument? Well, I can see that if one regards intentionality as an all-or-none attribute, which each machine has or doesn't have, then Searle's idea - that intentionality emerges from some physical semantic principle - might seem plausible. But in my view this idea (of intentionality as a simple attribute) results from an oversimplification - a crude symbolization - of complex and subtle introspective activities. In short, when we construct our simplified models of our minds, we need terms to represent whole classes of such consonances and conflicts - and, I conjecture, this is why we create omnibus terms like "mean" and "intend." Then, those terms become reified.
It is possible that only a machine as decentralized yet interconnected as a human mind would have anything very like a human phenomenology. Yet even this supports no Searle-like thesis that mind's character depends on more than abstract information processing - on, for example, the "particular causal properties" of the substances of the brain in which those processes are embodied. And here I find Searle's arguments harder to follow. He criticizes dualism, yet complains about fictitious antagonists who suppose mind to be as substantial as sugar. He derides "residual operationalism" - yet he goes on to insist that, somehow, the chemistry of a brain can contribute to the quality or flavor of its mind with no observable effect on its behavior.
Strong AI enthusiasts do not maintain, as Searle suggests, that "what is specifically mental about the mind has no intrinsic connection with the actual properties of the brain." Instead, they make a much more discriminating scientific hypothesis about which such causal properties are important in mindlike processes - namely, computation-supporting properties. So, what Searle mistakenly sees as a difference in kind is really one of specific detail. The difference is important because what might appear to Searle as careless inattention to vital features is actually a deliberate - and probably beneficial - scientific strategy! For, as Putnam points out:
What is our intellectual form? is the question, not what the matter is. Small effects may have to be explained in terms of the actual physics of the brain. But when we are not even at the level of an idealized description of the functional organization of the brain, to talk about the importance of small perturbations seems decidedly premature. Now, many strong AI enthusiasts go on to postulate that functional organization is the only such dependency, and it is this bold assumption that leads directly to the conclusion Searle seems to dislike so: that nonorganic machines may have the same kinds of experience as people do. That seems fine with me. I just can't see why Searle is so opposed to the idea that a really big pile of junk might have feelings like ours. He proposes no evidence whatever against it; he merely tries to portray it as absurd to imagine machines - with minds like ours, intentions and all - made from stones and paper instead of electrons and atoms. But I remain left to wonder how Searle, divested of despised dualism and operationalism, proposes to distinguish the authentic intentions of carbon compounds from their behaviorally identical but mentally counterfeit imitations.

I feel that I have dealt with the arguments about Chinese, and those about substantiality. Yet a feeling remains that there is something deeply wrong with all such discussions (as this one) of other minds; nothing ever seems to get settled in them. From the finest minds, on all sides, emerge thoughts and methods of low quality and little power. Surely this stems from a burden of traditional ideas inadequate to this tremendously difficult enterprise. Even our logic may be suspect. Thus, I could even agree with Searle that modern computational ideas are of little worth here - if, with him, I could agree to judge those ideas by their coherence and consistency with earlier constructions in philosophy. However, once one suspects that there are other bad apples in the logistic barrel, rigorous consistency becomes much too fragile a standard - and we must humbly turn to what evidence we can gather. So, because this is still a formative period for our ideas about mind, I suggest that we must remain especially sensitive to the empirical power that each new idea may give us to pursue the subject further. And, as Searle nowhere denies, computationalism is the principal source of the new machines and programs that have produced for us the first imitations, however limited and shabby, of mindlike activity.

by Thomas Natsoulas

Department of Psychology, University of California, Davis, Calif. 95616

The primary source of intentionality

I have shared Searle's belief: the level of description that computer programs exemplify is not one adequate to the explanation of mind. My remarks about this have appeared in discussions of perceptual theory that make little if any reference to computer programs per se (Natsoulas 1974; 1977; 1978a; 1978b; 1980). Just as Searle argues for the material basis of mind - "that actual human mental phenomena [depend] on actual physical-chemical properties of actual human brains" - I have argued that the particular concrete nature of perceptual awarenesses, as occurrences in a certain perceptual system, is essential to the references they make to objects, events, or situations in the stimulational environment.
In opposition to Gibson (for example, 1966; 1967; 1972), whose perceptual theory amounts to hypotheses concerning the pickup by perceptual systems of abstract entities called "informational invariants" from the stimulus flux, I have stated:
Perceptual systems work in their own modes to ascribe the detected properties which are specified informationally to the actual physical environment around us. The informational invariants to which a perceptual system resonates are defined abstractly (by Gibson) such that the resonating process itself can exemplify them. But the resonance process is not itself abstract. And characterization of it at the level of informational invariants cannot suffice for the theory of perception. It is crucial to the theory of perception that informational invariants are resonated to in concrete modes that are characteristic of the organism as the kind of perceiving physical system that it is. (Natsoulas 1978b, p. 281)
The latter is crucial to perceptual theory, I have argued, if that theory is to explain the intentionality, or aboutness, of perceptual awarenesses [see also Ullman: "Against direct perception" BBS 3(3) 1980, this issue].
And just as Searle summarily rejects the attempt to eliminate intentionality, saying that it does no good to "feign anesthesia," I argued as follows against Dennett's effort to treat intentionality as merely a heuristic overlay on the extensional theory of the nervous system and bodily motions.
In knowing that we are knowing subjects, here is one thing that we know: that we are aware of objects in a way other than the "colorless" way in which we sometimes think of them. The experienced presence of objects makes it difficult if not impossible to claim that perceptions involve only the acquisition of information... It is this... kind of presence that makes perceptual aboutness something more than an "interpretation" or "heuristic overlay" to be dispensed with once a complete enough extensional account is at hand. The qualitative being-thereness of objects and scenes... is about as easy to doubt as our own existence. (Natsoulas 1977; cf. Searle 1979b, p. 261, on "presentational immediacy.")
However, I do not know in what perceptual aboutness consists. I have been making some suggestions, and I believe, with Sperry (1969; 1976), that an objective description of subjective experience is possible in terms of brain function. Such a description should include that feature or those features of perceptual awareness that make it be of (or as if it is of, in the hallucinatory instance) an object, occurrence, or situation in the physical environment or in the perceiver's body outside the nervous system. If the description did not include this feature it would be incomplete, in my view, and in need of further development.
In another recent article on intentionality, Searle (1979a) has written of the unavoidability of "the intentional circle," arguing that any explanation of intentionality that we may come up with will presuppose an understanding of intentionality: "There is no analysis of intentionality into logically necessary and sufficient conditions of the form 'X is an intentional state if and only if p, q, and r,' where 'p,' 'q,' and 'r' make no use of intentional notions" (p. 195). I take this to say that the intentionality of mental states is not reducible; but I don't think it is meant, by itself, to rule out the possibility that intentionality might be a property of certain brain processes. Searle could still take the view that it is one of their as yet unknown "ground floor" properties.
But Searle's careful characterization, in the target article, of the relation between brain and mental processes as causal, with mental processes consistently said to be produced by brain processes, gives a different impression. Of course brain processes produce other brain processes, but if he had meant to include mental processes among the latter, would he have written about only the causal properties of the brain in a discussion of the material basis of intentionality?
One is tempted to assume that Searle would advocate some form of interactionism with regard to the relation of mind to brain. I think that his analogy of mental processes to products of biological processes, such as sugar and milk, was intended to illuminate the causal basis of mental processes and not their nature. His statement that intentionality is "a biological phenomenon" was prefaced by "whatever else intentionality is" and was followed by a repetition of his earlier point concerning the material basis of mind (mental processes as produced by brain processes). And I feel quite certain that Searle would not equate mental processes with another of the brain's effects, namely behaviors (see Searle 1979b).
Though it may be tempting to construe Searle's position as a form of dualism, there remains the more probable alternative that he has simply chosen not to take, in these recent writings, a position on the ontological question. He has chosen to deal only with those elements that seem already clear to him as regards the problem of intentionality. However, his emphasis in the target article on intentionality's material basis would seem to be an indication that he is now inching toward a position on the ontological question and the view that this position matters to an understanding of intentionality.
I emphasize the latter because of what Searle has written on "the form of realization" of mental states in still another recent article on intentionality. In this article (Searle 1979c), he made the claim that the "traditional ontological problems about mental states are for the most part simply irrelevant to their Intentional features" (p. 81). It does not matter how a mental state is realized, Searle suggested, so long as in being realized it is realized so as to have those features. To know what an intentional state is requires only that we know its "representative content" and its "psychological mode."
But this would not tell us actually what the state is, only which one it is, or what kind it is. For example, I may know that the mental state that just occurred to me was a passing thought to the effect that it is raining in London at this moment. It is an it-is-raining-in-London-right-now kind of thought that just occurred to me. Knowing of this thought's occurrence and of that of many others, which is to know their representative contents and psychological modes, would not be to know what the thought is, what the mental occurrence is that is the passing thought.
Moreover, for a mental state or occurrence to have its intentional features, it must have a form of realization that gives them to it. Searle has stated: "It doesn't matter how an Intentional state is realized, as long as the realization is a realization of its Intentionality" (1979c, p. 81). The second part of this sentence is an admission that it does matter how it is realized. The explanation of intentionality remains incomplete in the absence of an account of its source and of its relation to that source. This point becomes vivid when considered in the context of another discussion contained in the same article.
In this part of the article, Searle gave some attention to what he called "the primary form of Intentionality," namely perception (cf. Searle 1979b, pp. 260-61). One's visual experience of a table was said to be a "presentation" of the table, as opposed to a representation of it. Still, such presentations are intrinsically intentional, for whenever they occur, the person thereby perceives or hallucinates a table. The visual experience is of a table, even though it is not a representation of a table, because it is satisfied by the physical presence of a table there where the table visually seems to be located.
Concluding this discussion, Searle added,
To say that I have a visual experience whenever I perceive the table visually is to make an ontological claim, a claim about the form of realization of an intentional state, in a way that to say all my beliefs have a representational content is not to make an ontological claim. (1979c, p. 91)
Since Searle would say that he, and the people reading his article, and animals, and so on, have visual experiences, the question he needs to answer, as the theorist of intentionality he is, is: what is the ontological claim he is making in doing so? Or, what is the "form of realization" of our visual experiences that Searle is claiming when he attributes them to us?

by Roland Puccetti

Department of Philosophy, Dalhousie University, Halifax, Nova Scotia, Canada B3H

The chess room: further demythologizing of strong AI

On the grounds he has staked out, which are considerable, Searle seems to me completely victorious. What I shall do here is to lift the sights of his argument and train them on a still larger, very tempting target.
Suppose we have an intelligent human from a chess-free culture, say Chad in Central Africa, and we introduce him to the chess room. There he confronts a computer console on which are displayed numbers 1-8 and letters , and , plus the words WHITE and BLACK. He is told WHITE goes first, then BLACK, alternately, until the console lights go out. There is, of course, a binary machine representation of the chessboard that prevents illegal moves, but he need know nothing of that. He is instructed to identify himself with WHITE, hence to move first, and that the letter-number combination P-K4 is a good beginning. So he presses P-K4 and waits.
假设我们有一个来自无国际象棋文化的智能人,比如中非的乍得,我们把他介绍到国际象棋室。在那里,他面对着一个电脑控制台,上面显示着数字 1-8、字母 , 和 , 以及 WHITE 和 BLACK 两个单词。他被告知先下 "白",再下 "黑",交替进行,直到控制台的灯熄灭。当然,棋盘的二进制机器表示法可以防止非法走棋,但他对此一无所知。他得到的指令是,他要认定自己是白棋,因此要先走,字母数字组合 P-K4 是一个好的开始。于是他按下 P-K4 并等待。
BLACK appears on the console, followed by three alternative letter-number combinations, P-K4, P-QB4, and P-K3. If this were a "depth-first" program, each of these replies would be searched two plies further and a static evaluation provided. Thus to BLACK's P-K4, WHITE could try either N-KB3 or B-B4. If N-KB3, BLACK's rejoinder or both yield an evaluation for WHITE of +1 ; whereas if B-B4, BLACK's reply of either B-B4 or N-KB3 produces an evaluation of, respectively, +0 and +3 . Since our Chadian has been instructed to reject any letter-number combinations yielding an evaluation of less than +1 , he will not pursue B-B4, but is prepared to follow N-KB3 unless a higher evaluation turns up. And in fact it does. The BLACK response allows , and to that, BLACK's best countermoves , and -QB3 produce evaluations of , and +8 . On the other hand, if this were a "breadth-first" program, in which all nodes (the point at which one branching move or half-move subdivides into many smaller branches in the game tree) at one level are examined prior to nodes at a deeper level, WHITE's continuations would proceed more statically; but again this does not matter to the Chadian in the chess room, who, in instantiating either kind of program, hasn't the foggiest notion what he is doing
黑 "出现在控制台上,接着是三个可供选择的字母数字组合:P-K4、P-QB4 和 P-K3。如果这是一个 "深度优先 "程序,那么每个回复都要再搜索两层,并提供静态评估。因此,对于黑棋的 P-K4,白棋可以尝试 N-KB3 或 B-B4。如果是 N-KB3,黑棋的回子 都会给白棋带来+1 的评价;而如果是 B-B4,黑棋的回子 B-B4 或 N-KB3 分别会给白棋带来+0 和+3 的评价。由于我们的乍得棋手已接到指令,拒绝接受任何结果小于+1的字母数字组合,因此他不会下B-B4,而是准备下N-KB3,除非出现更高的结果。事实上确实如此。黑棋的回应 允许走 ,而黑棋最好的反击棋步 -QB3 产生的评估结果是 和 +8 。另一方面,如果这是一个 "广度优先 "程序,即先检查一个层次上的所有节点(在对局树中,一步分支棋或半步棋细分为许多更小分支的点),然后再检查更深一层的节点,那么白方的连续就会更静态地进行;但这对棋室里的乍得人来说并不重要,因为他在实例化这两种程序时,根本不知道自己在做什么。
We must get perfectly clear what this implies. Both programs described here play chess (Frey 1977), and the latter with striking success in recent competition when run on a more powerful computer than before, a large scale Control Data Cyber 170 system (Frey 1977 Appendix). Yet there is not the slightest reason to believe either program understands chess play. Each performs "computational operations on purely formally specified elements," but so would the uncomprehending Chadian in our chess room, although of course much more slowly (we could probably use him only for postal chess, for this reason). Such operations, by themselves cannot, then, constitute understanding the game, no matter how intelligently played.
我们必须非常清楚这意味着什么。这里描述的两个程序都会下国际象棋(弗雷 1977 年),而且后者在最近的比赛中取得了惊人的成功,因为它是在一台比以前更强大的计算机上运行的,这台计算机就是大型的 Control Data Cyber 170 系统(弗雷 1977 年附录)。然而,我们没有丝毫理由相信这两个程序懂国际象棋。每个程序都在执行 "纯粹形式化元素的计算操作",但我们棋室里那个不懂下棋的乍得人也会这样做,当然速度要慢得多(出于这个原因,我们可能只能用他来下帖子棋)。因此,无论下棋者多么聪明,这种操作本身并不能构成对棋局的理解。
It is surprising that this has not been noticed before. For example, the authors of the most successful program to date (Slate & Atkin 1977) write that the evaluative function of CHESS 4.5 understands that it is bad to make a move that leaves one of one's own pieces attacked and undefended, it is good to make a move that forks two enemy pieces, and good to make a move that prevents a forking maneuver by the opponent (p.114). Yet in a situation in which the same program is playing WHITE to move with just the WHITE king on KB5, the BLACK king on KR6, and BLACK's sole pawn advancing from KR4 to a possible queening, the initial evaluation of WHITE's six legal moves is as follows:
令人惊讶的是,这种情况以前从未被注意到。例如,迄今为止最成功的棋谱(Slate & Atkin 1977)的作者写道,CHESS 4.5的评估功能认为,下一步棋让自己的一颗棋子被攻击而无法防御是不好的,下一步棋叉住敌方两颗棋子是好的,下一步棋阻止对方的叉子行动是好的(第114页)。然而,在白方下白方王在KB5,黑方王在KR6,黑方唯一的棋子从KR4推进到可能的后的局面中,白方的六步合法棋步的初步评估如下:"白方王在KB5,黑方王在KR6,黑方唯一的棋子从KR4推进到可能的后:
Move PREL score
K-K4 -116
K-B4 -114
K-N4 -106
K-K5 -121
K-K6 -129
K-B6 -127
In other words, with a one-ply search the program gives a slightly greater preference to WHITE moving K-N4 because one of its evaluators encourages the king to be near the surviving enemy pawn, and is as close as the WHITE king can legally get. This preliminary score does not differ much from that of the other moves since, as the authors admit, "the evaluation function does not understand that the pawn will be captured two half-moves later (p. 111)." It is only after a two-ply and then a three-ply iteration of K-N4 that the program finds that all possible replies are met. The authors candidly conclude:
换句话说,在单步搜索中,程序略微偏向于白方下K-N4,因为其中一个评价器鼓励王靠近残存的敌方兵,而 是白方王可以合法接近的距离。这一初步得分与其他棋步的得分差别不大,因为正如作者所承认的,"评估函数并不了解两步半之后兵将被吃掉(第111页)"。只有在K-N4的两步和三步迭代之后,程序才会发现所有可能的答复都得到了满足。作者坦率地总结道
The whole 3-ply search here was completed in about 100 milliseconds. In a tournament the search would have gone out to perhaps 12 phy to get the same result, since the program lacks the sense to see that since White can force a position in which all material is gone, the game is necessarily drawn. (p. 113).
这里的整个3层搜索大约在100毫秒内完成。在比赛中,要得到同样的结果,搜索可能要花费12费时,因为程序没有意识到,既然白方能逼出一个所有棋材都已耗尽的局面,那么对局必然是和棋。(p. 113).
But then if CHESS 4.5 does not understand even this about chess, why say it "understands" forking maneuvers, and the like? All this can mean is that the program has built-in evaluators that discourage it from getting into forked positions and encourage it to look for ways to fork its opponent. That is not understanding, since as we saw, our Chadian in the chess room could laboriously achieve the same result on the console in blissful ignorance of chess boards, chess positions, or indeed how the game is played. Intelligent chess play is of course simulated this way, but chess understanding is not thereby duplicated.
但是,如果 CHESS 4.5 连这一点都不了解国际象棋,为什么又说它 "了解 "分叉等操作呢?这只能说明,程序内置的评估器不鼓励它进入分叉局面,而鼓励它寻找分叉对手的方法。这并不令人理解,因为正如我们所看到的,我们在棋室里的乍得人可以在控制台上费力地取得同样的结果,而对棋盘、棋势或游戏的玩法却一无所知。智能下棋当然是通过这种方式模拟出来的,但对国际象棋的理解并没有因此被复制。
Up until the middle of this century, chess-playing machines were automata with cleverly concealed human players inside them (Carroll 1975). We now have much more complex automata, and while the programs they run on are inside them, they do not have the intentionality towards the chess moves they make that midget humans had in the hoaxes of yesteryear. They simply know not what they do.
直到本世纪中叶,下棋机都是巧妙隐藏了人类棋手的自动机(卡罗尔,1975 年)。现在,我们有了更复杂的自动机,虽然它们体内运行着程序,但它们并不像侏儒人类在昔日的骗局中那样,对自己所走的棋步具有意向性。它们根本不知道自己在做什么。
Searle quite unnecessarily mars his argument near the end of the target article by offering the observation, perhaps to disarm hardnosed defenders of strong , that we humans are "thinking machines." But surely if he was right to invoke literal meaning against claims that, for example, thermostats have beliefs, he is wrong to say humans are machines of any kind. There were literally no machines on this planet 10,000 years ago, whereas the species Homo sapiens has existed here for at least 100,000 years, so it cannot be that men are machines.

by Zenon W. Pylyshyn

Center for Advanced Study in the Behavioral Sciences, Stanford, Calif. 94305.

The "causal power" of machines

What kind of stuff can refer? Searle would have us believe that computers, qua formal symbol manipulators, necessarily lack the quality of intentionality, or the capacity to understand and to refer, because they have different "causal powers" from us. Although just what having different causal powers amounts to (other than not being capable of intentionality) is not spelled out, it appears at least that systems that are functionally identical need not have the same "causal powers." Thus the relation of equivalence with respect to causal powers is a refinement of the relation of equivalence with respect to function. What Searle wants to claim is that only systems that are equivalent to humans in this stronger sense can have intentionality. His thesis thus hangs on the assumption that intentionality is tied very closely to specific material properties - indeed, that it is literally caused by them. From that point of view it would be extremely unlikely that any system not made of protoplasm - or something essentially identical to protoplasm - can have intentionality. Thus if more and more of the cells in your brain were to be replaced by integrated circuit chips, programmed in such a way as to keep the input-output function of each unit identical to that of the unit being replaced, you would in all likelihood just keep right on speaking exactly as you are doing now except that you would eventually stop meaning anything by it. What we outside observers might take to be words would become for you just certain noises that circuits caused you to make.
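The chip-replacement thought experiment trades on a precise notion: input-output equivalence of each replaced unit. A minimal sketch (mine, with an invented stand-in function for whatever a unit computes over a finite domain) shows how a replacement can be input-output indistinguishable by construction.

    # Replace a "unit" by a lookup table with the identical input-output
    # function over a finite domain; downstream units cannot tell the
    # difference, which is all the thought experiment requires.
    def biological_unit(x):
        # Invented stand-in for whatever the original unit computes.
        return (3 * x + 1) % 7

    domain = range(7)
    chip = {x: biological_unit(x) for x in domain}  # the replacement "chip"
    assert all(chip[x] == biological_unit(x) for x in domain)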
Searle presents a variety of seductive metaphors and appeals to intuition in support of this rather astonishing view. For example, he asks: why should we find the view that intentionality is tied to detailed properties of the material composition of the system so surprising, when we so readily accept the parallel claim in the case of lactation? Surely it's obvious that only a system with certain causal powers can produce milk; but then why should the same not be true of the ability to refer? Why this example should strike Searle as even remotely relevant is not clear, however. The product of lactation is a substance, milk, whose essential defining properties are, naturally, physical and chemical ones (although nothing prevents the production of synthetic milk using a process that is materially very different from mammalian lactation). Is Searle then proposing that intentionality is a substance secreted by the brain, and that a possible test for intentionality might involve, say, titrating the brain tissue that realized some putative mental episodes?
Similarly, Searle says that it's obvious that merely having a program can't possibly be a sufficient condition for intentionality since you can implement that program on a Turing machine made out of "a roll of toilet paper and a pile of small stones." Such a machine would not have intentionality because such objects "are the wrong kind of stuff to have intentionality in the first place." But what is the right kind of stuff? Is it cell assemblies, individual neurons, protoplasm, protein molecules, atoms of carbon and hydrogen, elementary particles? Let Searle name the level, and it can be simulated perfectly well using "the wrong kind of stuff." Clearly it isn't the stuff that has the intentionality. Your brain cells don't refer any more than do the water pipes, bits of paper, computer operations, or the homunculus in the Chinese room examples. Searle presents no argument for the assumption that what makes the difference between being able to refer and not being able to refer - or to display any other capacity - is a "finer grained" property of the system than can be captured in a functional description. Furthermore, it's obvious from Searle's own argument that the nature of the stuff cannot be what is relevant, since the monolingual English speaker who has memorized the formal rules is supposed to be an example of a system made of the right stuff and yet it allegedly still lacks the relevant intentionality.
Having said all this, however, one might still want to maintain that in some cases - perhaps in the case of Searle's example - it might be appropriate to say that nothing refers, or that the symbols are not being used in a way that refers to something. But if we wanted to deny that these symbols referred, it would be appropriate to ask what licenses us ever to say that a symbol refers. There are at least three different approaches to answering that question: Searle's view that it is the nature of the embodiment of the symbol (of the brain substance itself), the traditional functionalist view that it is the functional role that the symbol plays in the overall behavior of the system, and the view associated with philosophers like Kripke and Putnam, that it is in the nature of the causal connection that the symbol has with certain past events. The latter two are in fact compatible insofar as specifying the functional role of a symbol in the behavior of a system does not preclude specifying its causal interactions with an environment. It is noteworthy that Searle does not even consider the possibility that a purely formal computational model might constitute an essential part of an adequate theory, where the latter also contained an account of the system's transducers and an account of how the symbols came to acquire the role that they have in the functioning of the system.
Functionalism and reference. The functionalist view is currently the dominant one in both AI and information-processing psychology. In the past, mentalism often assumed that reference was established by relations of similarity; an image referred to a horse if it looked sufficiently like a horse. Mediational behaviorism took it to be a simple causal remnant of perception: a brain event referred to a certain object if it shared some of the properties of brain events that occur when that object is perceived. But information-processing psychology has opted for a level of description that deals with the informational, or encoded, aspects of the environment's effects on the organism. On this view it has typically been assumed that what a symbol represents can be seen by examining how the symbol enters into relations with other symbols and with transducers. It is this position that Searle is quite specifically challenging. My own view is that although Searle is right in pointing out that some versions of the functionalist answer are in a certain sense incomplete, he is off the mark both in his diagnosis of where the problem lies and in his prognosis as to just how impoverished a view of mental functioning the cognitivist position will have to settle for (that is, his "weak AI").
The sense in which a functionalist answer might be incomplete is if it failed to take the further step of specifying what it was about the system that warranted the ascription of one particular semantic content to the functional states (or to the symbolic expressions that express that state) rather than some other logically possible content. A cognitive theory claims that the system behaves in a certain way because certain expressions represent certain things (that is, have a certain semantic interpretation). It is, furthermore, essential that it do so: otherwise we would not be able to subsume certain classes of regular behaviors in a single generalization of the sort "the system does X because the state S represents such and such" (for example, the person ran out of the building because he believed that it was on fire). (For a discussion of this issue, see Pylyshyn 1980b.) But the particular interpretation appears to be extrinsic to the theory inasmuch as the system would behave in exactly the same way without the interpretation. Thus Searle concludes that it is only we, the theorists, who take the expression to represent, say, that the building is on fire. The system doesn't take it to represent anything because it literally doesn't know what the expression refers to: only we theorists do. That being the case, the system can't be said to behave in a certain way because of what it represents. This is in contrast with the way in which our behavior is determined: we do behave in certain ways because of what our thoughts are about. And that, according to Searle, adds up to weak AI; that is, a functionalist account in which formal analogues "stand in" for, but themselves neither have, nor explain, mental contents.
The last few steps, however, are non sequiturs. The fact that it was we, the theorists, who provided the interpretation of the expressions doesn't by itself mean that such an interpretation is simply a matter of convenience, or that there is a sense in which the interpretation is ours rather than the system's. Of course it's logically possible that the interpretation is only in the mind of the theorist and that the system behaves the way it does for entirely different reasons. But even if that happened to be true, it wouldn't follow simply from the fact that the AI theorist was the one who came up with the interpretation. Much depends on his reasons for coming up with that interpretation. In any case, the question of whether the semantic interpretation resides in the head of the programmer or in the machine is the wrong question to ask. A more relevant question would be: what fixes the semantic interpretation of functional states, or what latitude does the theorist have in assigning a semantic interpretation to the states of the system?
When a computer is viewed as a self-contained device for processing formal symbols, we have a great deal of latitude in assigning semantic interpretations to states. Indeed, we routinely change our interpretation of the computer's functional states, sometimes viewing them as numbers, sometimes as alphabetic characters, sometimes as words or descriptions of a scene, and so on. Even where it is difficult to think of a coherent interpretation that is different from the one the programmer had in mind, such alternatives are always possible in principle. However, if we equip the machine with transducers and allow it to interact freely with both natural and linguistic environments, and if we endow it with the power to make (syntactically specified) inferences, it is anything but obvious what latitude, if any, the theorist (who knows how the transducers operate, and therefore knows what they respond to) would still have in assigning a coherent interpretation to the functional states in such a way as to capture psychologically relevant regularities in behavior.
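A small illustration of the latitude described above (the example is mine, not Pylyshyn's): the same four bytes of machine state can be read as text, as an integer, or as a floating-point number, and nothing in the bytes themselves selects one reading as the interpretation.

    # One bit pattern, three coherent "semantic interpretations".
    import struct

    state = b"Fire"                           # four bytes of machine state
    as_text = state.decode("ascii")           # a word
    as_int = int.from_bytes(state, "big")     # a number
    as_float = struct.unpack(">f", state)[0]  # a (strange) real quantity
    print(as_text, as_int, as_float)

As the paragraph above argues, it is transducers and free interaction with an environment, not the bytes, that would begin to shrink this latitude.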
The role of intuitions. Suppose such connections between the system and the world as mentioned above (and possibly other considerations that no one has yet considered) uniquely constrained the possible interpretations that could be placed on representational states. Would this solve the problem of justifying the ascription of particular semantic contents to these states? Here I suspect that one would run into differences of opinion that may well be unresolvable, simply because they are grounded on different intuitions. For example there immediately arises the question of whether we possess a privileged interpretation of our own thoughts that must take precedence over such functional analyses. And if so, then there is the further question of whether being conscious is what provides the privileged access; and hence the question of what one is to do about the apparent necessity of positing unconscious mental processes. So far as I can see the only thing that recommends that particular view is the intuition that, whatever may be true of other creatures, I at least know what my thoughts refer to because I have direct experiential access to the referents of my thoughts. Even if we did have strong intuitions about such cases, there is good reason to believe that such intuitions should be considered as no more than secondary sources of constraint, whose validity should be judged by how well theoretical systems based on them perform. We cannot take as sacred anyone's intuitions about such things as whether another creature has intentionality - especially when such intuitions rest (as Searle's do, by his own admission) on knowing what the creature (or machine) is made of (for instance, Searle is prepared to admit that other creatures might have intentionality if "we can see that the beasts are made of similar stuff to ourselves"). Clearly, intuitions based on nothing but such anthropocentric chauvinism cannot form the foundation of a science of cognition [See "Cognition and Consciousness in Nonhuman Species" BBS 1(4) 1978].
A major problem in science - especially in a developing science like cognitive psychology - is to decide what sorts of phenomena "go together," in the sense that they will admit of a uniform set of explanatory principles. Information-processing theories have achieved some success in accounting for aspects of problem solving, language processing, perception, and so on, by deliberately glossing over the conscious-unconscious distinction; by grouping under common principles a wide range of rule-governed processes necessary to account for functioning, independent of whether or not people are aware of them. These theories have also placed to one side questions as to what constitute consciously experienced qualia or "raw feels," dealing only with some of their reliable functional correlates (such as the belief that one is in pain, as opposed to the experience of the pain itself) - and they have to a large extent deliberately avoided the question of what gives symbols their semantics. Because AI has chosen to carve up phenomena in this way, people like Searle are led to conclude that what is being done is weak AI - or the modelling of the abstract functional structure of the brain without regard for what its states represent. Yet there is no reason to think that this program does not in fact lead to strong AI in the end. There is no reason to doubt that at asymptote (for example, when and if a robot is built) the ascription of intentionality to programmed machines will be just as warranted as its ascription to people, and for reasons that have absolutely nothing to do with the issue of consciousness.
What is frequently neglected in discussions of intentionality is that we cannot state with any degree of precision what it is that entitles us to claim that people refer (though there are one or two general ideas, such as those discussed above), and therefore that arguments against the intentionality of computers typically reduce to "argument from ignorance." If we knew what it was that warranted our saying that people refer, we might also be in a position to claim that the ascription of semantic content to formal computational expressions - though it is in fact accomplished in practice by "inference to the best explanation" - was in the end warranted in exactly the same way. Humility, if nothing else, should prompt us to admit that there's a lot we don't know about how we ought to describe the capacities of future robots and other computing machines, even when we do know how their electronic circuits operate.

Note

  1. Current address: Department of Psychology, University of Western Ontario, London, Canada, N6A 5C2.

by Howard Rachlin

Department of Psychology, State University of New York at Stony Brook, Stony Brook, N.Y. 11794

The behaviorist reply (Stony Brook)

It is easy to agree with the negative point Searle makes about mind and AI in this stimulating paper. What is difficult to accept is Searle's own conception of mind.
His negative point is that the mind can never be a computer program. Of course, that is what behaviorists have said all along ("residual behaviorism" in the minds of AI researchers notwithstanding). His positive point is that the mind is the same thing as the brain. But this is just as clearly false as the strong AI position that he criticizes.
Perhaps the behaviorist viewpoint can best be understood through two examples, one considered by Searle and one (although fairly obvious) not considered. The combination robot example is essentially a behavioral one. A robot behaves exactly like a man. Does it think? Searle says "If the robot looks and behaves sufficiently like us, then we would suppose, until proven otherwise, that it must have mental states like ours" (italics mine). Of course we would. And let us be clear about what this robot would be required to do. It might answer questions about a story that it hears, but it should also laugh and cry in the right places; it should be able to tell when the story is over. If the story is a moral one the robot might change its subsequent behavior in situations similar to the ones the story describes. The robot might ask questions about the story itself, and the answers it receives might change its behavior later. The list of typically human behaviors in "response" to stories is endless. With a finite number of tests we can never be absolutely positive that the robot understood the story. But proof otherwise can only come from one place - the robot's subsequent behavior. That is, the robot may prove that it did not understand a story told to it at one time by doing or saying something at a later time that a normal human would not do who heard a similar story under similar conditions. If it passes all our behavior tests we would say that, pending future behavioral evidence, the robot understood the story. And we would say this even if we were to open up the robot and find a man translating Chinese, a computer, a dog, a monkey, or a piece of stale cheese.
The appropriate test is to see whether the robot, upon hearing the story, behaves like a normal human being. How does a normal human being behave when told a story? That is a valid question - one in which behaviorists have been interested and one to which Searle and his fellow mentalists might also profitably devote their attention when they finish fantasizing about what goes on inside the head. The neural mythology that Searle suggests is no better than the computer-program mythology of the AI researchers.
Searle is willing to abandon the assumption of intentionality (in a robot) as soon as he discovers that a computer was running it after all. Here is a perfect example of how cognitive concepts can serve as a mask for ignorance. The robot is said to think until we find out how it works. Then it is said not to think. But suppose, contrary to anyone's expectations, all of the functional properties of the human brain were discovered. Then the "human robot" would be unmasked, and we might as well abandon the assumption of intentionality for humans too. It is only the behaviorist, it seems, who is willing to preserve terms such as thought, intentionality, and the like (as patterns of behavior). But there are no "mental states underlying . . . behavior" in the way that a skeleton underlies our bodily structure. The pattern of the behavior is the mental state. These patterns are results of internal and external factors in the present and in the past - not of a controlling mental state - even one identified with the brain.
That the identification of mind with brain will not hold up is obvious from the consideration of another example which I daresay will be brought up by other commentators - even AI researchers - so obvious is it. Let's call it "The Donovan's brain reply (Hollywood)." A brain is removed from a normal adult human. That brain is placed inside a computer console with the familiar input-output machinery - tape recorders, teletypewriters, and so on. The brain is connected to the machinery by a series of interface mechanisms that can stimulate all nerves that the body can normally stimulate and measure the state of all nerves that normally affect muscular movement. The brain, designed to interact with a body, will surely do no better (and probably a lot worse) at operating the interface equipment than a standard computer mechanism designed for such equipment. This "robot" meets Searle's criterion for a thinking machine - indeed it is an ideal thinking machine from his point of view. But it would be ridiculous to say that it could think. A machine that cannot behave like a human being cannot, by definition, think.

by Martin Ringle

Computer Science Department, Vassar College, Poughkeepsie, N.Y. 12601

Mysticism as a philosophy of artificial intelligence

Searle identifies a weakness in AI methodology that is certainly worth investigating. He points out that by focusing attention at such a high level of cognitive analysis, AI ignores the foundational role that physical properties play in the determination of intentionality. The case may be stated thus: in human beings, the processing of cognized features of the world involves direct physical activity of neural structures and substructures as well as causal interactions between the nervous system and external physical phenomena. When we stipulate a "program" as an explanation (or, minimally, a description) of a cognitive process we abstract information-processing-type elements at some arbitrary level of resolution and we presuppose the constraints and contributions made at lower levels (for example, physical instantiation). AI goes wrong, according to Searle, by forgetting the force of this presupposition and thereby assuming that the computer implementation of the stipulated program will, by itself, display the intentional properties of the original human phenomenon.
AI doctrine, of course, holds that the lower-level properties are irrelevant to the character of the higher-level cognitive processes, thus following the grand old tradition inaugurated by Turing (1964) and Putnam (1960).
If this is in fact the crux of the dispute between Searle and AI, it is of relatively small philosophical interest. For it amounts to saying nothing more than that there may be important information processes occurring at the intraneuronal and subneuronal levels - a question that can only be decided empirically. If it turns out that such processes do not exist, then current approaches in AI are vindicated; if, on the other hand, Searle's contention is correct, then AI must accommodate lower-level processes in its cognitive models. Pragmatically, the simulation of subneuronal processes on a scale large enough to be experimentally significant might prove to be impossible (at least with currently envisioned technology). This is all too likely and would, if proven to be the case, spell methodological doom for AI as we now know it. Nevertheless, this would have little philosophical import, since the inability to model the interface between complex subneural and interneuronal processes adequately would constitute a technical, not a theoretical, failure.
But Searle wants much more than this. He bases his denial of the adequacy of AI models on the belief that the physical properties of neuronal systems are such that they cannot in principle be simulated by a nonprotoplasmic computer system. This is where Searle takes refuge in what can only be termed mysticism.
Searle refers to the privileged properties of protoplasmic neuronal systems as "causal powers." I can discover at least two plausible interpretations of this term, but neither will satisfy Searle's argument. The first reading of "causal power" pertains to the direct linkage of the nervous system to physical phenomena of the external world. For example, when a human being processes visual images, the richness of the internal information results from direct physical interaction with the world. When a computer processes a scene, there need be no actual link between light phenomena in the world and an internal "representation" in the machine. Because the internal "representation" is the result of some stipulated program, one could (and often does in AI) input the "representation" by hand, that is, without any physical, visual apparatus. In such a case, the causal link between states of the world and internal states of the machine is merely stipulated. Going one step further, we can argue that without such a causal link, the internal states cannot be viewed as cognitive states since they lack any programmer-independent semantic content. AI workers might try to remedy the situation by introducing appropriate sensory transducers and effector mechanisms (such as "hands") into their systems, but I suspect that Searle could still press his point by arguing that the causal powers of such a system would still fail to mirror the precise causal powers of the human nervous system. The suppressed premise that Searle is trading on, however, is that nothing but a system that shared the physical properties of our systems would display precisely the same sort of causal links.
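The stipulational point can be put in concrete, if hypothetical, form (the sketch and all its names are mine, not Ringle's): the scene description below could have arrived from a camera-and-vision pipeline or been typed in by a programmer, and the program consuming it cannot tell the difference.

    # A hand-stipulated "percept": nothing in it marks whether it was
    # caused by light striking a sensor or by a programmer's keystrokes.
    scene = [
        {"object": "block", "color": "red", "on": "table"},
        {"object": "pyramid", "color": "green", "on": "block"},
    ]

    def describe(scene):
        # Consumes the representation exactly as it would a transduced one.
        return ["the %s %s is on the %s" % (s["color"], s["object"], s["on"])
                for s in scene]

    print(describe(scene))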
Yet if the causality with which Searle is concerned involves nothing more than direct connectivity between internal processes and sensorimotor states, it would seem that he is really talking about functional properties, not physical ones. He cannot make his case that a photo-electric cell is incapable of capturing the same sort of information as an organic rod or cone in a human retina unless he can specifically identify a (principled) deficiency of the former with respect to the latter. And this he does not do. We may sum up by saying that "causal powers," in this interpretation, does presuppose embodiment but that no particular physical makeup for a body is demanded. Connecting actual sensorimotor mechanisms to a perceptronlike internal processor should, therefore, satisfy causality requirements of this sort (by removing the stipulational character of the internal states).
Under the second interpretation, the term "causal powers" refers to the capacities of protoplasmic neurons to produce phenomenal states, such as felt sensations, pains, and the like. Here, Searle argues that things like automobiles and typewriters, because of their inorganic physical composition, are categorically incapable of causing felt sensations, and that this aspect of consciousness is crucial to intentionality.
There are two responses to this claim. First, arguing with Dennett, Schank, and others, we might say that Searle is mistaken in his view that intentionality necessarily requires felt sensations, that in fact the functional components of sensations are all that is required for a cognitive model. But, even if we accept Searle's account of intentionality, the claim still seems to be untenable. The mere fact that mental phenomena such as felt sensations have been, historically speaking, confined to protoplasmic organisms in no way demonstrates that such phenomena could not arise in a nonprotoplasmic system. Such an assertion is on a par with a claim (made in antiquity) that only organic creatures such as birds or insects could fly. Searle explicitly and repeatedly announces that intentionality "is a biological phenomenon," but he never explains what sort of biological phenomenon it is, nor does he ever give us a reason to believe that there is a property or set of properties inherent in protoplasmic neural matter that could not, in principle, be replicated in an alternative physical substrate.
One can only conclude that the knowledge of the necessary connection between intentionality and protoplasmic embodiment is obtained through some sort of mystical revelation. This, of course, shouldn't be too troublesome to AI researchers who, after all, trade on mysticism as much as anyone in cognitive science does these days. And so it goes.

by Richard Rorty

Department of Philosophy, Princeton University, Princeton, N.J. 08544

Searle and the special powers of the brain

Searle sets up the issues as would a fundamentalist Catholic defending transubstantiation. Suppose a demythologizing Tillichian theologian urges that we think of the Eucharist not in terms of substantial change, but rather in terms of significance in the lives of the faithful. The defender of orthodoxy will reply that the "natural-supernatural distinction cannot be just in the eye of the beholder but must be intrinsic: otherwise it would be up to any beholder to treat anything as supernatural." (Compare Searle on the mental-nonmental distinction, p. 420.) Theology, the orthodox say, starts with such facts as that the Catholic Eucharist is a supernatural event whereas a Unitarian minister handing around glasses of water isn't. Searle says that "the study of the mind starts with such facts as that humans have beliefs, while thermostats . . . and adding machines don't." In theology, the orthodox continue, one presupposes the reality and knowability of the supernatural. Searle says, "In 'cognitive sciences' one presupposes the reality and knowability of the mental." The orthodox think that the demythologizers are just changing the subject, since we know in advance that the distinction between the natural and the supernatural is a distinction between two sorts of entities having different special causal powers. We know that we can't interpret the Eucharist "functionally" in terms of its utility as an expression of our ultimate concern, for there could be such an expression without the elements actually changing their substantial form. Similarly, Searle knows in advance that a cognitive state "couldn't be just computational processes and their output because the computational processes and their output can exist without the cognitive state." Both the orthodox theologian and Searle criticize their opponents as "unashamedly behavioristic and operationalist."
Searle uses the example of being trained to answer enough questions in Chinese to pass a Turing test. The defender of transubstantiation would use the example of a layman disguised as a priest reciting the mass and fooling the parishioners. The initial reply to Searle's example is that if the training kept on for years and years so that Searle became able to answer all possible Chinese questions in Chinese, then he jolly well would understand Chinese. If you can fool all the people all of the time, behaviorists and operationalists say, it doesn't count as fooling any more. The initial reply to the orthodox theologian's example is that when Anglican priests perform Eucharist services what happens is functionally identical with what happens in Roman churches, despite the "defect of intention" in Anglican orders. When you get a body of worshippers as large as the Anglican communion to take the Eucharist without the necessary "special causal powers" having been present, that shows you that those powers weren't essential to the sacrament. Sufficiently widely accepted simulation is the real thing. The orthodox, however, will reply that a "consecrated" Anglican Host is no more the Body of Christ than a teddy bear is a bear, since the "special causal powers" are the essence of the matter. Similarly, Searle knows in advance that "only something that has the same causal powers as brains can have intentionality."
How does Searle know that? In the same way, presumably, as the orthodox theologian knows things. Searle knows what "mental" and "cognitive" and such terms mean, and so he knows that they can't be properly applied in the absence of brains - or, perhaps, in the absence of something that is much like a brain in respect to "causal powers." How would we tell whether something was sufficiently like? behaviorists and operationalists ask. Presumably they will get no answer until we discover enough about how the brain works to distinguish intentionality from mere simulations of intentionality. How might a neutral party judge the dispute between Anglicans and Roman Catholics about the validity of Anglican orders? Presumably he will have to wait until we discover more about God.
But perhaps the analogy is faulty: we moderns believe in brains but not in God. Still, even if we dismiss the theological analogue, we may have trouble knowing just what brain research is supposed to look for. We must discover content rather than merely form, Searle tells us, for mental states are "literally a product of the operation of the brain" and hence no conceivable program description (which merely gives a form, instantiable by many different sorts of hardware) will do. Behaviorists and operationalists, however, think the form-content and program-hardware distinctions merely heuristic, relative, and pragmatic. This is why they are, if not shocked, at least wary, when Searle claims "that actual human mental phenomena might be dependent on actual physical-chemical properties of actual human brains." If this claim is to be taken in a controversial sense, then it seems just a device for ensuring that the secret powers of the brain will move further and further back out of sight every time a new model of brain functioning is proposed. For Searle can tell us that every such model is merely a discovery of formal patterns, and that "mental content" has still escaped us. (He could buttress such a suggestion by citing Henri Bergson and Thomas Nagel on the ineffable inwardness of even the brute creation.) There is, after all, no great difference - as far as the form-content distinction goes - between building models for the behavior of humans and for that of their brains. Without further guidance about how to tell content when we finally encounter it, we may well feel that all research in the area

is an arch wherethro'
Gleams that untravell'd world whose margin fades
For ever and for ever when I move. (Tennyson: Ulysses)
My criticisms of Searle should not, however, be taken as indicating sympathy with AI. In 1960 Putnam remarked that the mind-program analogy did not show that we can use computers to help philosophers solve the mind-body problem, but that there wasn't any mind-body problem for philosophers to solve. The last twenty years' worth of work in AI has reinforced Putnam's claim. Nor, alas, has it done anything to help the neurophysiologists - something it actually might, for all we could have known, have done. Perhaps it was worth it to see whether programming computers could produce some useful models of the brain, if not of "thought" or "the mind." Perhaps, however, the money spent playing Turing games with expensive computers should have been used to subsidize relatively cheap philosophers like Searle and me. By now we might have worked out exactly which kinds of operationalism and behaviorism to be ashamed of and which not. Granted that some early dogmatic forms of these doctrines were a bit gross, Peirce was right in saying something like them has got to be true if we are to shrug off arguments about transubstantiation. If Searle's present pre-Wittgensteinian attitude gains currency, the good work of Ryle and Putnam will be undone and "the mental" will regain its numinous Cartesian glow. This will boomerang in favor of AI. "Cognitive scientists" will insist that only lots more simulation and money will shed empirical light upon these deep "philosophical" mysteries. Surely Searle doesn't want that.

by Roger C. Schank

Department of Computer Science, Yale University, New Haven, Conn. 06520

Understanding Searle

What is understanding? What is consciousness? What is meaning? What does it mean to think? These, of course, are philosopher's questions. They are the bread and butter of philosophy. But what of the role of such questions in AI? Shouldn't AI researchers be equally concerned with such questions? I believe the answer to be yes and no.

According to the distinction between weak and strong AI, I would have to place myself in the weak AI camp with a will to move to the strong side. In a footnote, Searle mentions that he is not saying that I am necessarily committed to the two "AI claims" he cites. He states that claims that computers can understand stories or that programs can explain understanding in humans are unsupported by my work.
Searle is certainly right in that statement. No program we have written can be said to truly understand yet. Because of that, no program we have written "explains the human ability to understand."
I agree with Searle on this for two reasons. First, we are by no means finished with building understanding machines. Our programs are at this stage partial and incomplete. They cannot be said to be truly understanding. Because of this they cannot be anything more than partial explanations of human abilities.
Of course, I realize that Searle is making a larger claim than this. He means that our programs never will be able to understand or explain human abilities. On the latter claim he is clearly quite wrong. Our programs have provided successful embodiments of theories that were later tested on human subjects. All experimental work in psychology to date has shown, for example, that our notion of a script (Schank & Abelson 1977) is very much an explanation of human abilities (see Nelson & Gruendel 1978; Gruendel 1980; Smith, Adams, & Schorr 1978; Bower, Black, & Turner 1979; Graesser et al. 1979; Anderson 1980).
All of the above papers are reports of experiments on human subjects that support the notion of a script. Of course, Searle can hedge here and say that it was our theories rather than our programs that explained human abilities in that instance. In that case, I can only attempt to explain carefully my view of what AI is all about. We cannot have theories apart from our computer implementations of those theories. The range of the phenomena to be explained is too broad and detailed to be covered by a theory written in English. We can only know if our theories of understanding are plausible if they can work by being tested on a machine.
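For readers who have not met the script notion cited above, here is a minimal sketch of the idea (the structure and names are illustrative assumptions, not Schank & Abelson's actual notation): a script is a stereotyped event sequence that licenses default inferences about steps a story leaves unstated.

    # A toy restaurant script and the default inference it supports.
    restaurant_script = {
        "roles": ["customer", "waiter", "cook"],
        "props": ["table", "menu", "food", "check"],
        "scenes": ["enter", "order", "eat", "pay", "leave"],
    }

    def unstated_steps(mentioned, script):
        # Scenes the story skips are inferred to have happened by default.
        return [s for s in script["scenes"] if s not in mentioned]

    story = ["enter", "order", "leave"]  # "John went in, ordered, and left."
    print(unstated_steps(story, restaurant_script))  # ['eat', 'pay']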
Searle is left with objecting to psychological experiments themselves as adequate tests of theories of human abilities. Does he regard psychology as irrelevant? The evidence suggests that he does, although he is not so explicit on this point. This brings me back to his first argument. "Can a machine understand?" Or, to put it another way, can a process model of understanding tell us something about understanding? This question applies whether the target of attack is AI or psychology.
To answer this question I will attempt to draw an analogy. Try to explain what "life" is. We can give various biological explanations of life. But, in the end, I ask, what is the essence of life? What is it that distinguishes a dead body that is physically intact from a live body? Yes, of course, the processes are ongoing in the live one and not going (or "dead") in the dead one. But how to start them up again? The jolt of electricity from Dr. Frankenstein? What is the "starter"? What makes life?
Biologists can give various process explanations of life, but in the end that elusive "starter of life" remains unclear. And so it is with understanding and consciousness.
We attribute understanding, consciousness, and life to others on the grounds that we ourselves have these commodities. We really don't know if anyone else "understands," "thinks," or even is "alive." We assume it on the rather unscientific basis that since we are all these things, others must be also.
We cannot give scientific explanations for any of these phenomena. Surely the answers, formulated in chemical terms, should not satisfy Searle. I find it hard to believe that what philosophers have been after for centuries were chemical explanations for the phenomena that pervade our lives.
Yet, that is the position that Searle forces himself into. Because, apart from chemical explanation, what is left? We need explanations in human terms, in terms of the entities that we meet and deal with in our daily lives, that will satisfy our need to know about these things.
Now I will return to my analogy. Can we get at the process explanation of "life"? Yes, of course, we could build a model that functioned "as if it were alive," a robot. Would it be alive?
The same argument can be made with respect to consciousness and understanding. We could build programs that functioned as if they understood or had free conscious thought. Would they be conscious? Would they really understand?
I view these questions somewhat differently from most of my AI colleagues. I do not attribute beliefs to thermostats, car engines, or computers. My answers to the above questions are tentative no's. A robot is not alive. Our story-understanding systems do not understand in the sense of the term that means true empathy of feeling and expression.
Can we ever hope to get our programs to "understand" at that level? Can we ever create "life"? Those are, after all, empirical questions.
In the end, my objection to Searle's remarks can be formulated this way. Does the brain understand? Certainly we humans understand, but does that lump of matter we call our brain understand? All that is going on there is so many chemical reactions and electrical impulses, just so many Chinese symbols.
Understanding means finding the system behind the Chinese symbols, whether written for brains or for computers. The person who wrote the rules for Searle to use to put out the correct Chinese symbols at the appropriate time - now that was a linguist worth hiring. What rules did he write? The linguist who wrote the rules "understood" in the deep sense how the Chinese language works. And, the rules he wrote embodied that understanding.
Searle wants to call into question the enterprise of AI, but in the end, even he must appreciate that the rules for manipulating Chinese symbols would be a great achievement. To write them would require a great understanding of the nature of language. Such rules would satisfy many of the questions of philosophy, linguistics, psychology, and AI.
Does Searle, who is using those rules, understand? No. Does the hardware configuration of the computer understand? No. Does the hardware configuration of the brain understand? No.
Who understands then? Why, the person who wrote the rules of course. And who is he? He is what is called an AI researcher.

by Aaron Sloman and Monica Croucher

School of Social Sciences, University of Sussex, Brighton BN1 9QN, England

How to turn an information processor into an understander

Searle's delightfully clear and provocative essay contains a subtle mistake, which is also often made by AI researchers who use familiar mentalistic language to describe their programs. The mistake is a failure to distinguish form from function.
That some mechanism or process has properties that would, in a suitable context, enable it to perform some function, does not imply that it already performs that function. For a process to be understanding, or thinking, or whatever, it is not enough that it replicate some of the structure of the processes of understanding, thinking, and so on. It must also fulfil the functions of those processes. This requires it to be causally linked to a larger system in which other states and processes exist. Searle is therefore right to stress causal powers. However, it is not the causal powers of brain cells that we need to consider, but the causal powers of computational processes. The reason the processes he describes do not amount to understanding is not that they are not produced by things with the right causal powers, but that they do not have the right causal powers, since they are not integrated with the right sort of total system.
That certain operations on symbols occurring in a computer, or even in another person's mind, happen to be isomorphic with certain formal operations in your mind does not entail that they serve the same function in the political economy of your mind. When you read a sentence, a complex, mostly unconscious, process of syntactic and semantic analysis occurs, along with various inferences, alterations to your long-term memory, perhaps changes in your current plans, or even in your likes, dislikes, or emotional state. Someone else reading the sentence will at most share a subset of these processes. Even if there is a subset of formal symbolic manipulations common to all those who hear the sentence, the existence of those formal processes will not, in isolation, constitute understanding the sentence. Understanding can occur only in a context in which the process has the opportunity to interact with such things as beliefs, motives, perceptions, inferences, and decisions - because it is embedded in an appropriate way in an appropriate overall system.
This may look like what Searle calls "The robot reply" attributed to Yale. However, it is not enough to say that the processes must occur in some physical system which it causes to move about, make noises, and so on. We claim that it doesn't even have to be a physical system: the properties of the larger system required for intentionality are computational not physical. (This, unlike Searle's position, explains why it makes sense to ordinary folk to attribute mental states to disembodied souls, angels, and the like, though not to thermostats.)
What sort of larger system is required? This is not easy to answer. There is the beginning of an exploration of the issues in chapters 6 and 10 of Sloman (1978) and in Sloman (1979). (See also Dennett 1978.) One of the central problems is to specify the conditions under which it could be correct to describe a computational system, whether embodied in a human brain or not, as possessing its own desires, preferences, tastes, and other motives. The conjecture we are currently exploring is that such motives are typically instantiated in symbolic representations of states of affairs, events, processes, or selection criteria, which play a role in controlling the operations of the system, including operations that change the contents of the store of motives, as happens when we manage (often with difficulty) to change our own likes and dislikes, or when an intention is abandoned because it is found to conflict with a principle. More generally, motives will control the allocation of resources, including the direction of attention in perceptual processes, the creation of goals and subgoals, the kind of information that is processed and stored for future use, and the inferences that are made, as well as controlling external actions if the system is connected to a set of 'motors' (such as muscles) sensitive to signals transmitted during the execution of plans and strategies. Some motives will be capable of interacting with beliefs to produce the complex disturbances characteristic of emotional states, such as fear, anger, embarrassment, shame, and disgust. A precondition for the system to have its own desires and purposes is that its motives should evolve as a result of a feedback process during a lengthy sequence of experiences, in which beliefs, skills (programs), sets of concepts, and the like also develop. This, in turn, requires the system of motives to have a multilevel structure, which we shall not attempt to analyse further here.
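As a purely illustrative gloss on the sort of architecture gestured at above, the toy Python fragment below (our invention, not Sloman and Croucher's actual proposal; names such as Motive, Agent, and deliberate are made up) shows motives as symbolic structures that help control processing: the strongest motive claims attention, and a motive found to conflict with a believed principle is dropped from the store.

```python
# A toy illustration (not Sloman & Croucher's proposal; all names invented).
# Motives are symbolic structures that help control processing: they direct
# attention and can be dropped when they conflict with a believed principle.
from dataclasses import dataclass, field

@dataclass
class Motive:
    goal: str        # symbolic description of a state of affairs
    priority: float  # crude stand-in for control of attention/resources

@dataclass
class Agent:
    beliefs: set = field(default_factory=set)
    motives: list = field(default_factory=list)

    def deliberate(self):
        """Drop motives that conflict with a principle, then pick the strongest."""
        self.motives = [m for m in self.motives
                        if ("forbidden: " + m.goal) not in self.beliefs]
        return max(self.motives, key=lambda m: m.priority, default=None)

agent = Agent(beliefs={"forbidden: read diary"},
              motives=[Motive("read diary", 0.9), Motive("answer letter", 0.4)])
print(agent.deliberate().goal)  # -> answer letter
```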
This account looks circular because it uses mentalistic terminology, but our claim, and this is a claim not considered by Searle, is that further elaboration of these ideas can lead to a purely formal specification of the computational architecture of the required system. Fragments can already be found in existing operating systems (driven in part by priorities and interrupts), and in AI programs that interpret images, build and debug programs, and make and execute plans. But no existing system comes anywhere near combining all the intricacies required before the familiar mental processes can occur. Some of the forms are already there, but not yet the functions.
Searle's thought experiment, in which he performs uncomprehending operations involving Chinese symbols, does not involve operations linked into an appropriate system in the appropriate way. The news, in Chinese, that his house is on fire will not send him scurrying home, even though in some way he operates correctly with the symbols. But, equally, none of the so-called understanding programs produced so far is linked to an appropriate larger system of beliefs and decisions. Thus, as far as the ordinary meanings of the words are concerned, it is incorrect to say that any existing AI programs understand, believe, learn, perceive, or solve problems. Of course, it might be argued (though not by us) that they already have the potential to be so linked: they have a form that is adequate for the function in question. If this were so, they might perhaps be used as extensions of people - for example, as aids for the deaf or blind or the mentally handicapped, and they could then be part of an understanding system.
It could be argued that mentalistic language should be extended to encompass all systems with the potential for being suitably linked into a complete mind. That is, it could be argued that the meanings of words like "understand," "perceive," "intend," "believe" should have their functional preconditions altered, as if we were to start calling things screwdrivers or speed controllers if they happened to have the appropriate structure to perform the functions, whether or not they were ever used or even intended to be used with the characteristic functions of screwdrivers and speed controllers. The justification for extending the usage of intentional and other mental language in this way would be the discovery that some aspects of the larger architecture (such as the presence of subgoal mechanisms or inference mechanisms) seem to be required within such isolated subsystems to enable them to satisfy even the formal preconditions. However, our case against Searle does not depend on altering meanings of familiar words.
Is it necessary that a mental system be capable of controlling the operations of a physical body or that it be linked to physical sensors capable of receiving information about the physical environment? This is close to the question whether a totally paralysed, deaf, blind person without any functioning sense organs might nevertheless be conscious, with thoughts, hopes, and fears. (Notice that this is not too different from the state normal people enter temporarily each night.) We would argue that there is no reason (apart from unsupportable behaviourist considerations) to deny that this is a logical possibility. However, if the individual had never interacted with the external world in the normal way, then he could not think of President Carter, Paris, the battle of Hastings, or even his own body: at best his thoughts and experiences would refer to similar nonexistent entities in an imaginary world. This is because successful reference presupposes causal relationships which would not hold in the case of our disconnected mind.
It might be thought that we have missed the point of Searle's argument since whatever the computational architecture we finally posit for a mind, connected or disconnected, he will always be able to repeat his thought experiment to show that a purely formal symbol-manipulating system with that structure would not necessarily have motives, beliefs, or percepts. For he could execute all the programs himself (at least in principle) without having any of the alleged desires, beliefs, perceptions, emotions, or whatever.
At this point the "other minds" argument takes on a curious twist. Searle is assuming that he is a final authority on such questions as whether what is going on in his mental activities includes seeing (or appearing to see) pink elephants, thinking about Pythagoras's theorem, being afraid of being burnt at the stake, or understanding Chinese sentences. In other words, he assumes, without argument, that it is impossible for another mind to be based on his mental processes without his knowing. However, we claim (compare the discussion of consciousness in Sloman 1978, chapter 10) that if he really does faithfully execute all the programs, providing suitable time sharing between parallel subsystems where necessary, then a collection of mental processes will occur of whose nature he will be ignorant, if all he thinks he is doing is manipulating meaningless symbols. He will have no more basis for denying the existence of such mental processes than he would have if presented with a detailed account of the low-level internal workings of another person's mind, which he can only understand in terms of electrical and chemical processes, or perhaps sequences of abstract patterns embedded in such processes.
If the instructions Searle is executing require him to use information about things he perceives in the environment as a basis for selecting some of the formal operations, then it would even be possible for the "passenger" to acquire information about Searle (by making inferences from Searle's behaviour and from what other people say about him) without Searle ever realising what is going on. Perhaps this is not too unlike what happens in some cases of multiple personalities?

by William E. Smythe

Department of Psychology, University of Toronto, Toronto, Ontario,
Canada M5S 1A1

Simulation games

Extensive use of intentional idioms is now common in discussions of the capabilities and functioning of AI systems. Often these descriptions are to be taken no more substantively than in much of ordinary programming where one might say, for example, that a statistical regression program "wants" to minimize the sum of squared deviations or "believes" it has found a best-fitting function when it has done so. In other cases, the intentional account is meant to be taken more literally. This practice requires at least some commitment to the claim that intentional states can be achieved in a machine just in virtue of its performing certain computations. Searle's article serves as a cogent and timely indicator of some of the pitfalls that attend such a claim.
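For concreteness, here is a minimal sketch (ours, not Smythe's) of the regression example: the sense in which such a program "wants" to minimize the sum of squared deviations, or "believes" it has found the best-fitting line, is exhausted by ordinary arithmetic.

```python
# A minimal simple-linear-regression fit, for illustration only. The program's
# "desire" to minimize the sum of squared deviations is just this arithmetic.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # Closed-form least-squares slope and intercept: the unique values that
    # minimize sum((y - (a*x + b))**2) over the data.
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

a, b = fit_line([1, 2, 3, 4], [2.1, 3.9, 6.2, 7.8])
print(f"best fit: y = {a:.2f}x + {b:.2f}")  # the sense in which it "believes" it has the best fit
```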
If certain AI systems are to possess intentionality, while other computational systems do not, then it ought to be in virtue of some set of purely computational principles. However, as Searle points out, no such principles have yet been forthcoming from AI. Moreover, there is reason to believe that they never will be. A sketch of one sort of argument is as follows: intentional states are, by definition, "directed at" objects and states of affairs in the world. Hence the first requirement for any theory about them would be to specify the relation between the states and the world they are "about." However it is precisely this relation that is not part of the computational account of mental states (cf. Fodor 1980). A computational system can be interfaced with an external environment in any way a human user may choose. There is no dependence of this relation on any ontogenetic or phylogenetic history of interaction with the environment. In fact, the relation between system and environment can be anything at all without affecting the computations performed on symbols that purportedly refer to it. This fact casts considerable doubt on whether any purely computational theory of intentionality is possible.
Searle attempts to establish an even stronger conclusion: his argument is that the computational realization of intentional states is, in fact, impossible on a priori grounds. The argument is based on a "simulation game" - a kind of dual of Turing's imitation game - in which man mimics computer. In the simulation game, a human agent instantiates a computer program by performing purely syntactic operations on meaningless symbols. The point of the demonstration is that merely following rules for the performance of such operations is not sufficient for manifesting the right sort of intentionality. In particular, a given set of rules could create an effective mimicking of some intelligent activity without bringing the rule-following agent any closer to having intentional states pertaining to the domain in question.
One difficulty with this argument is that it does not distinguish between two fundamentally different ways of instantiating a computer program or other explicit procedure in a physical system. One way is to imbed the program in a system that is already capable of interpreting and following rules. This requires that the procedure be expressed in a "language" that the imbedding system can already "understand." A second way is to instantiate the program directly by realizing its "rules" as primitive hardware operations. In this case a rule is followed, not by "interpreting" it, but by just running off whatever procedure the rule denotes. Searle's simulation game is germane to the first kind of instantiation but not the second. Following rules in natural language (as the simulation game requires) involves the mediation of other intentional states and so is necessarily an instance of indirect instantiation. To mimic a direct instantiation of a program faithfully, on the other hand, the relevant primitives would have to be realized nonmediately in one's own activity. If such mimicry were possible, it would be done only at the cost of being unable to report on the system's lack of intentional states, if in fact it had none.
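The distinction can be illustrated with a small sketch (invented here for exposition, not drawn from Smythe): the same "rule" instantiated indirectly, as a notation that an already-capable interpreter must read, and directly, as a primitive operation that simply runs.

```python
# Invented names throughout; a sketch of the two ways a "rule" can be realized.

# (1) Indirect instantiation: the rule is a notation that an already-capable
# interpreter must read and follow, as Searle reads his English rule book.
RULES = {"double": "OUTPUT = INPUT + INPUT"}

def interpret(rule_name, value):
    rule = RULES[rule_name]           # the rule is itself a symbol to be read
    if rule == "OUTPUT = INPUT + INPUT":
        return value + value          # the interpreter mediates every step

# (2) Direct instantiation: the rule just is a primitive operation of the
# system; nothing reads or interprets it, it simply runs.
def double(value):
    return value + value

assert interpret("double", 3) == double(3) == 6
```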
The distinction between directly and indirectly instantiating computational procedures is important because both kinds of processes are required to specify a computational system completely. The first comprises its architecture or set of primitives, and the second comprises the algorithms the system can apply (Newell 1973; 1980). Hence Searle's argument is a challenge to strong AI when that view is put forward in terms of the capabilities of programs, but not when it is framed (as, for example, by Pylyshyn 1980a) in terms of computational systems. The claim that the latter cannot have intentional states must therefore proceed along different lines. The approach considered earlier, for example, called attention to the arbitrary relation between computational symbol and referent. Elsewhere the argument has been put forward in more detail that it is an overly restrictive notion of symbol that creates the most serious difficulties for the computational theory (Kolers & Smythe 1979; Smythe 1979). The notion of an independent token subject only to formal syntactic manipulation is neither a sufficient characterization of what a symbol is, nor well motivated in the domain of human cognition. Sound though this argument is, it is not the sort of secure conclusion that Searle's simulation game tries to demonstrate.
However, the simulation game does shed some light on another issue. Why is it that the belief is so pervasive that AI systems are truly constitutive of mental events? One answer is that many people seem to be playing a different version of the simulation game from the one that Searle recommends. The symbols of most AI and cognitive simulation systems are rarely the kind of meaningless tokens that Searle's simulation game requires. Rather, they are often externalized in forms that carry a good deal of surplus meaning to the user, over and above their procedural identity in the system itself, as pictorial and linguistic inscriptions, for example. This sort of realization of the symbols can lead to serious theoretical problems. For example, systems like that of Kosslyn and Shwartz (1977) give the appearance of operating on mental images largely because their internal representations "look" like images when displayed on a cathode ray tube. It is unclear that the system could be said to manipulate images in any other sense. There is a similar problem with language understanding systems. The semantics of such systems is often assessed by means of an informal procedure that Hayes (1977, p. 559) calls "pretend-it's-English." That is, misleading conclusions about the capabilities of these systems can result from the superficial resemblance of their internal representations to statements in natural language. An important virtue of Searle's argument is that it specifies how to play the simulation game correctly. The procedural realization of the symbols is all that should matter in a computational theory; their external appearance ought to be irrelevant.
The game, played this way, may not firmly establish that computational systems lack intentionality. However, it at least undermines one powerful tacit motivation for supposing they have it.

by Donald O. Walter

Brain Research Institute and Department of Psychiatry, University of California, Los Angeles, Calif. 90024

The thermostat and the philosophy professor

Searle: The man certainly doesn't understand Chinese, and neither do the water pipes, and if we are tempted to adopt what I think is the absurd view that somehow the conjunction of man and water pipes understands. . .
Walter: The bimetallic strip by itself certainly doesn't keep the temperature within limits, and neither does the furnace by itself, and if we are tempted to adopt the view that somehow a system of bimetallic strip and furnace will keep the temperature within limits - or (paraphrasing Hanson 1969; or others), Searle's left retina does not see, nor does his right, nor either (or both) optic nerve(s); we can even imagine a "disconnection syndrome" in which Searle's optic cortex no longer connects with the rest of his brain, and so conclude that his optic cortex doesn't see, either. If we then conclude that because no part sees, therefore he cannot see, are we showing consistency, or are we failing to see something about our own concepts?
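Walter's point can be made concrete with a toy closed loop (our sketch, with made-up numbers): the comparison alone moves no heat, the furnace alone knows no limit, yet the loop as a whole keeps the temperature within a band.

```python
# A toy feedback loop, with made-up numbers. No component mentions a "limit":
# the strip is a bare comparison, the furnace a bare heat source; keeping the
# temperature within bounds is a property of the closed loop.
def simulate(steps=24, temp=15.0, target=20.0):
    furnace_on = False
    history = []
    for _ in range(steps):
        if temp < target - 1.0:    # bimetallic strip: comparison with hysteresis
            furnace_on = True
        elif temp > target + 1.0:
            furnace_on = False
        temp += 1.5 if furnace_on else -1.0   # heating vs. heat loss
        history.append(round(temp, 1))
    return history

print(simulate()[-6:])  # settles into a band around the 20-degree target
```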
Searle: No one supposes that computer simulations of a five-alarm fire will burn the neighborhood down... Why on earth would anyone suppose that a computer simulation of understanding actually understood anything?
Walter: No one supposes that a novelist's description of a five-alarm fire will burn the neighborhood down; why would anyone suppose that a novelist writing about understanding actually understood it?
Searle: If we knew independently how to account for its behavior without such assumptions we would not attribute intentionality to it, especially if we knew it had a formal program.
Hofstadter (1979, p. 601): There is a related "Theorem" about progress in AI: once some mental function is programmed, people soon cease to consider it as an essential ingredient of "real thinking."

The ineluctable core of intelligence is always that next thing which hasn't yet been programmed.
Walter: Searle seems to be certain that a program is formal (though he plays, to his own advantage, on the ambiguity between "adequately definable through form or shape" and "completely definable through nothing but form or shape"), whereas "intentionality." "causal powers" and "actual properties" are radically different things that are unarguably present in any (normal? waking?) human brain, and possibly in the quaint brains of "Martians" (if they were "alive," at least in the sense that we did not understand what went on inside them). These radically different things are also not definable in terms of their form but of their content. He asserts this repeatedly, without making anything explicit of this vital alternative. I think it is up to Searle to establish communication with the readers of this journal, which he has not done in his target article. Let us hope that in his Response he will make the mentioned but undescribed alternative more nearly explicit to us.
沃尔特塞尔似乎确信程序是形式的(尽管他为了自己的利益,在 "通过形式或形状充分定义 "与 "除了形式或形状之外完全可以定义 "之间玩弄含糊不清的手法),而 "意向性"、"因果能力 "和 "实际属性 "则是完全不同的东西,它们无可争辩地存在于任何(正常?而 "意向性"、"因果能力 "和 "实际属性 "则是完全不同的东西,它们无可争辩地存在于任何(正常的、清醒的?这些完全不同的东西也不是从形式上而是从内容上可以定义的。他反复断言这一点,但却没有明确指出这一重要的替代方案。我认为,塞尔应该与本刊读者建立沟通,而他在目标文章中却没有做到这一点。让我们希望,在他的回应中,他将向我们更明确地阐述所提及但未描述的替代方案。

by Robert Wilensky

Department of Electrical Engineering and Computer Science, University of California, Berkeley, Calif. 94720

Computers, cognition and philosophy

Searle's arguments on the feasibility of computer understanding contain several simple but fatal logical flaws. I can deal only with the most important difficulties here. However, it is the general thrust of Searle's remarks rather than the technical flaws in his arguments that motivates this commentary. Searle's paper suggests that even the best simulation of intelligent behavior would explain nothing about cognition, and he argues in support of this claim. Since I would like to claim that computer simulation can yield important insights into the nature of human cognitive processes, it is important to show why Searle's arguments do not threaten this enterprise.
My main objection to Searle's argument is one he has termed the "Berkeley systems reply." The position states that the man-in-the-room scenario presents no problem to a strong AI-er who claims that understanding is a property of an information-processing system. The man in the room with the ledger, functioning in the manner prescribed by the cognitive theorist who instructed his behavior, constitutes one such system. The man functioning in his normal everyday manner is another system. The "ordinary man" system may not understand Chinese, but this says nothing about the capabilities of the "man-in-the-room" system, which must therefore remain at least a candidate for consideration as an understander in view of its language-processing capabilities.
Searle's response to this argument is to have the man internalize the "man-in-the-room" system by keeping all the rules and computations in his head. He now encompasses the whole system. Searle argues that if the man "doesn't understand, then there is no way the system could understand because the system is just a part of him."
However, this is just plain wrong. Lots of systems (in fact, most interesting systems) are embodied in other systems of weaker capabilities. For example, the hardware of a computer may not be able to multiply polynomials, or sort lists, or process natural language, although programs written for those computers can; individual neurons probably don't have much - if any - understanding capability, although the systems they constitute may understand quite a bit.
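A minimal sketch of Wilensky's example (ours, for illustration only): the "hardware" level here knows nothing but scalar addition and multiplication, yet a program written over those weak primitives multiplies whole polynomials.

```python
# Coefficient lists, lowest degree first: [1, 2] represents 1 + 2x. The only
# operations used are scalar '+' and '*', the "weak" primitives of the host.
def poly_mul(p, q):
    result = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            result[i + j] += a * b   # nothing at this level "knows" about polynomials
    return result

# (1 + x) * (1 - x) = 1 - x^2
print(poly_mul([1, 1], [1, -1]))     # -> [1, 0, -1]
```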
The difficulty in comprehending the systems position in the case of Searle's paradox is in being able to see the person as two separate systems. The following elaborations may be useful. Suppose we decided to resolve the issue once and for all simply by asking the person involved whether he understands Chinese. We hand the person a piece of paper with Chinese characters that mean (loosely translated) "Do you understand Chinese?" If the man-in-the-room system were to respond, by making the appropriate symbol manipulations, it would return a strip of paper with the message: "Of course I understand Chinese! What do you think I've been doing? Are you joking?" A heated dialogue then transpires, after which we apologize to the man-in-the-room system for our rude innuendos. Immediately thereafter, we approach the man himself (that is, we ask him to stop playing with the pieces of paper and talk to us directly) and ask him if he happens to know Chinese. He will of course deny such knowledge.
Searle's mistake of identifying the experiences of one system with those of its implementing system is one philosophers often make when referring to AI systems. For example, Searle says that the English subsystem knows that "hamburgers" refer to hamburgers, but that the Chinese subsystem knows only about formal symbols. But it is really the homunculus who is conscious of symbol manipulation, and has no idea what higher level task he is engaged in. The parasitic system is involved in this higher level task, and has no knowledge at all that he is implemented via symbol manipulation, any more than we are aware of how our own cognitive processes are implemented.
What's unusual about this situation is not that one system is embedded in a weak one, but that the implementing system is so much more powerful than it need be. That is, the homunculus is a full-fledged understander, operating at a small percentage of its capacity to push around some symbols. If we replace the man by a device that is capable of performing only these operations, the temptation to view the systems as identical greatly diminishes.
It is important to point out, contrary to Searle's claim, that the systems position itself does not constitute a strong AI claim. It simply shows that if it is possible that a system other than a person functioning in the standard manner can understand, then the man-in-the-room argument is not at all problematic. If we deny this possibility to begin with, then the delicate man-in-the-room argument is unnecessary - a computer program is something other than a person functioning normally, and by assumption would not be capable of understanding.
Searle also puts forth an argument about simulation in general. He states that since a simulation of a storm won't leave us wet, why should we assume that a simulation of understanding should understand? Well, the reason is that while simulations don't necessarily preserve all the properties of what they simulate, they don't necessarily violate particular properties either. I could simulate a storm in the lab by spraying water through a hose. If I'm interested in studying particular properties, I don't have to abandon simulations; I merely have to be careful about which properties the simulation I construct is likely to preserve.
So it all boils down to the question, what sort of thing is understanding? If it is an inherently physical thing, like fire or rain or digestion, then preserving the logical properties of understanding will in fact not preserve the essential nature of the phenomenon, and a computer simulation will not understand. If, on the other hand, understanding is essentially a logical or symbolic type of activity, then preserving its logical properties would be sufficient to have understanding, and a computer simulation will literally understand.
Searle's claim is that the term "understanding" refers to a physical phenomenon, much in the same way that the term "photosynthesis" does. His argument here is strictly an appeal to our intuitions about the meaning of this term. My own intuitions simply do not involve the causal properties of biological organisms (although they do involve their logical and behavioral properties). It seems to me that this must be true for most people, as most people could be fooled into thinking that a computer simulation really understands, but a simulation of photosynthesis would not fool anyone into thinking it had actually created sugar from water and carbon dioxide.
A major theme in Searle's paper is that intentionality is really at the bottom of the problem. Computers fail to meet the criteria of true understanding because they just don't have intentional states, with all that entails. This, according to Searle, is in fact what boggles one's intuitions in the man-in-the-room example.
However, it seems to me that Searle's argument has nothing to do with intentionality at all. What causes difficulty in attributing intentional states to machines is the fact that most of these states have a subjective nature as well. If this is the case, then Searle's man-in-the-room example could be used to simulate a person having some nonintentional but subjective state, and still have its desired effect. This is precisely what happens. For example, suppose we simulated someone undergoing undirected anxiety. It's hard to believe that anything - the man doing the simulation or the system he implements - is actually experiencing undirected anxiety, even though this is not an intentional state.
Furthermore, the experience of discomfort seems proportional to subjectivity, but independent of intentionality. It doesn't bother my intuitions much to hear that a computer can understand or know something; that it is believing something is a little harder to swallow, and that it has love, hate, rage, pain, and anxiety are much worse. Notice that the subjectivity seems to increase in each case, but the intentionality remains the same. The point is that Searle's argument has nothing to do with intentionality per se, and sheds no light on the nature of intentional states or on the kinds of mechanisms capable of having them.
I'd like to sum up by saying one last word on Searle's man-in-the-room experiment, as this forms the basis for most of his subsequent arguments. Woody Allen in Without Feathers describes a mythical beast called the Great Roe. The Great Roe has the head of a lion, and the body of a lion - but not the same lion. Searle's Gedankenexperiment is really a Great Roe - the head of an understander and the body of an understander, but not the same understander. Herein lies the difficulty.

Author's Response

by John Searle

Department of Philosophy, University of California, Berkeley, Calif. 94720

Intrinsic intentionality

I am pleased at the amount of interest my target article has aroused and grateful that such a high percentage of the commentaries are thoughtful and forcefully argued. In this response I am going to attempt to answer every major criticism directed at my argument. To do that, however, I need to make fully explicit some of the points that were implicit in the target article, as these points involve recurring themes in the commentaries.
Strong AI. One of the virtues of the commentaries is that they make clear the extreme character of the strong AI thesis. The thesis implies that of all known types of specifically biological processes, from mitosis and meiosis to photosynthesis, digestion, lactation, and the secretion of auxin, one and only one type is completely independent of the biochemistry of its origins, and that one is cognition. The reason it is independent is that cognition consists entirely of computational processes, and since those processes are purely formal, any substance whatever that is capable of instantiating the formalism is capable of cognition. Brains just happen to be one of the indefinite number of different types of computers capable of cognition, but computers made of water pipes, toilet paper and stones, electric wires - anything solid and enduring enough to carry the right program - will necessarily have thoughts, feelings, and the rest of the forms of intentionality, because that is all that intentionality consists in: instantiating the right programs. The point of strong AI is not that if we built a computer big enough or complex enough to carry the actual programs that brains presumably instantiate we would get intentionality as a byproduct (contra Dennett), but rather that there isn't anything to intentionality other than instantiating the right program.
Now I find the thesis of strong AI incredible in every sense of the word. But it is not enough to find a thesis incredible; one has to have an argument, and I offer an argument that is very simple: instantiating a program could not be constitutive of intentionality, because it would be possible for an agent to instantiate the program and still not have the right kind of intentionality. That is the point of the Chinese room example. Much of what follows will concern the force of that argument.
Intuitions. Several commentators (Block, Dennett, Pylyshyn, Marshall) claim that the argument is just based on intuitions of mine, and that such intuitions, things we feel ourselves inclined to say, could never prove the sort of thing I am trying to prove (Block), or that equally valid contrary intuitions can be generated (Dennett), and that the history of human knowledge is full of the refutation of such intuitions as that the earth is flat or that the table is solid, so intuitions here are of no force.
But consider. When I now say that I at this moment do not understand Chinese, that claim does not merely record an intuition of mine, something I find myself inclined to say. It is a plain fact about me that I don't understand Chinese. Furthermore, in a situation in which I am given a set of rules for manipulating uninterpreted Chinese symbols, rules that allow no possibility of attaching any semantic content to these Chinese symbols, it is still a fact about me that I do not understand Chinese. Indeed, it is the very same fact as before. But, Wilensky suggests, suppose that among those rules for manipulating symbols are some that are Chinese for "Do you understand Chinese?," and in response to these I hand back the Chinese symbols for "Of course I understand Chinese." Does that show, as Wilensky implies, that there is a subsystem in me that understands Chinese? As long as there is no semantic content attaching to these symbols, the fact remains that there is no understanding.
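For concreteness, the rule following Searle describes here can be sketched as a bare lookup table (our illustration; the single entry is invented): executing it matches strings by shape alone and attaches no semantic content to either side.

```python
# Pure shape-matching on uninterpreted strings; the table entry is invented.
RULE_BOOK = {
    "你懂中文吗?": "我当然懂中文!",  # question symbol -> answer symbol
}

def chinese_room(symbols):
    # The operator needs no idea what either string means; only its shape
    # is consulted. Nothing here attaches semantic content to the symbols.
    return RULE_BOOK.get(symbols, "")

print(chinese_room("你懂中文吗?"))
```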
The form of Block's argument about intuition is that since there are allegedly empirical data to show that thinking is just formal symbol manipulation, we could not refute the thesis with untutored intuitions. One might as well try to refute the view that the earth is round by appealing to our intuition that it is flat. Now Block concedes that it is not a matter of intuition but a plain fact that our brains are "the seat" of our intentionality. I want to add that it is equally a plain fact that I don't understand Chinese. My paper is an attempt to explore the logical consequences of these and other such plain facts. Intuitions in his deprecatory sense have nothing to do with the argument. One consequence is that the formal symbol manipulations could not be constitutive of thinking. Block never comes to grips with the arguments for this consequence. He simply laments the feebleness of our intuitions.
Dennett thinks that he can generate counterintuitions. Suppose, in the "robot reply," that the robot is my very own body. What then? Wouldn't I understand Chinese then? Well, the trouble is that the case, as he gives it to us, is underdescribed, because we are never told what is going on in the mind of the agent. (Remember, in these discussions, always insist on the first person point of view. The first step in the operationalist sleight of hand occurs when we try to figure out how we would know what it would be like for others.) If we describe Dennett's case sufficiently explicitly it is not hard to see what the facts would be. Suppose that the program contains such instructions as the following: when somebody holds up the squiggle-squiggle sign, pass him the salt. With such instructions it wouldn't take one long to figure out that "squiggle squiggle" probably means pass the salt. But now the agent is starting to learn Chinese from following the program. But this "intuition" doesn't run counter to the facts I was pointing out, for what the agent is doing in such a case is attaching a semantic content to a formal symbol and thus taking a step toward language comprehension. It would be equally possible to describe a case in such a way that it was impossible to attach any semantic content, even though my own body was in question, and in such a case it would be impossible for me to learn Chinese from following the program. Dennett's examples do not generate counterintuitions, they are simply so inadequately described that we can't tell from his description what the facts would be.
At one point Dennett and I really do have contrary intuitions. He says "I understand English; my brain doesn't." I think on the contrary that when I understand English, it is my brain that is doing the work. I find nothing at all odd about saying that my brain understands English, or indeed about saying that my brain is conscious. I find his claim as implausible as insisting, "I digest pizza; my stomach and digestive tract don't."
Marshall suggests that the claim that thermostats don't have beliefs is just as refutable by subsequent scientific discovery as the claim that tables are solid. But notice the difference. In the case of tables we discovered previously unknown facts about the microstructure of apparently solid objects. In the case of thermostats the relevant facts are all quite well known already. Of course such facts as that thermostats don't have beliefs and that I don't speak Chinese are, like all empirical facts, subject to disconfirmation. We might for example discover that, contrary to my deepest beliefs, I am a competent speaker of Mandarin. But think how we would establish such a thing. At a minimum we would have to establish that, quite unconsciously, I know the meanings of a large number of Chinese expressions; and to establish that thermostats had beliefs, in exactly the same sense that I do, we would have to establish, for example, that by some miracle thermostats had nervous systems capable of supporting mental states, and so on. In sum, though in some sense intuition figures in any argument, you will mistake the nature of the present dispute entirely if you think it is a matter of my intuitions against someone else's, or that some set of contrary intuitions has equal validity. The claim that I don't speak Chinese and that my thermostat lacks beliefs aren't just things that I somehow find myself mysteriously inclined to say.
Finally, in response to Dennett (and also Pylyshyn), I do not, of course, think that intentionality is a fluid. Nor does anything I say commit me to that view. I think, on the contrary, that intentional states, processes, and events are precisely that: states, processes, and events. The point is that they are both caused by and realized in the structure of the brain. Dennett assures me that such a view runs counter to "the prevailing winds of doctrine." So much the worse for the prevailing winds.
Intrinsic intentionality and observer-relative ascriptions of intentionality. Why then do people feel inclined to say that, in some sense at least, thermostats have beliefs? I think that in order to understand what is going on when people make such claims we need to distinguish carefully between cases of what I will call intrinsic intentionality, which are cases of actual mental states, and what I will call observer-relative ascriptions of intentionality, which are ways that people have of speaking about entities figuring in our activities but lacking intrinsic intentionality. We can illustrate this distinction with examples that are quite uncontroversial. If I say that I am hungry or that Carter believes he can win the election, the form of intentionality in question is intrinsic. I am discussing, truly or falsely, certain psychological facts about me and Carter. But if I say the word "Carter" refers to the present president, or the sentence "Es regnet" means it's raining, I am not ascribing any mental states to the word "Carter" or the sentence "Es regnet." These are ascriptions of intentionality made to entities that lack any mental states, but in which the ascription is a manner of speaking about the intentionality of the observers. It is a way of saying that people use the name Carter to refer, or that when people say literally "Es regnet" they mean it's raining.
Observer-relative ascriptions of intentionality are always dependent on the intrinsic intentionality of the observers. There are not two kinds of intentional mental states; there is only one kind, those that have intrinsic intentionality; but there are ascriptions of intentionality in which the ascription does not ascribe intrinsic intentionality to the subject of the ascription. Now I believe that a great deal of the present dispute rests on a failure to appreciate this distinction. When McCarthy stoutly maintains that thermostats have beliefs, he is confusing observer-relative ascriptions of intentionality with ascriptions of intrinsic intentionality. To see this point, ask yourself why we make these attributions to thermostats and the like at all. It is not because we suppose they have a mental life very much like our own; on the contrary, we know that they have no mental life at all. Rather, it is because we have designed them (our intentionality) to serve certain of our purposes (more of our intentionality), to perform the sort of functions that we perform on the basis of our intentionality. I believe it is equally clear that our ascription of intentionality to cars, computers, and adding machines is observer relative.
Functionalism, by the way, is an entire system erected on the failure to see this distinction. Functional attributions are always observer relative. There is no such thing as an intrinsic function, in the way that there are intrinsic intentional states.
Natural kinds. This distinction between intrinsic intentionality and observer-relative ascriptions of intentionality might seem less important if we could, as several commentators (Minsky, Block, Marshall) suggest, assimilate intrinsic intentionality to some larger natural kind that would subsume both existing mental phenomena and other natural phenomena under a more general explanatory apparatus. Minsky says that "prescientific idea germs like 'believe'" have no place in the mind science of the future (presumably "mind" will also have no place in the "mind science" of the future). But even if this is true, it is really quite irrelevant to my argument, which is addressed to the mind science of the present. Even if, as Minsky suggests, we eventually come to talk of our present beliefs as if they were on a continuum with things that are not intentional states at all, this does not alter the fact that we do have intrinsic beliefs and computers and thermostats do not. That is, even if some future science comes up with a category that supersedes belief and thus enables us to place thermostats and people on a single continuum, this would not alter the fact that under our present concept of belief, people literally have beliefs and thermostats don't. Nor would it refute my diagnosis of the mistake of attributing intrinsic mental states to thermostats as based on a confusion between intrinsic intentionality and observer-relative ascriptions of intentionality.
Minsky further points out that our own mental operations are often split into parts that are not fully integrated by any "self" and only some of which carry on interpretation. And, he asks, if that is how it is in our own minds, why not in computers as well? The reply is that even if there are parts of our mental processes where processing takes place without any intentional content, there still have to be other parts that attach semantic content to syntactic elements if there is to be any understanding at all. The point of the Chinese room example is that the formal symbol manipulations never by themselves carry any semantic content, and thus instantiating a computer program is not by itself sufficient for understanding.
How the brain works. Several commentators take me to task because I don't explain how the brain works to produce intentionality, and at least two (Dennett and Fodor) object to my claim that where intentionality is concerned - as opposed to the conditions of satisfaction of the intentionality - what matters are the internal and not the external causes. Well, I don't know how the brain produces mental phenomena, and apparently no one else does either, but that it produces mental phenomena and that the internal operations of the brain are causally sufficient for the phenomena is fairly evident from what we do know.
Consider the following case, in which we do know a little about how the brain works. From where I am seated, I can see a tree. Light reflected from the tree in the form of photons strikes my optical apparatus. This sets up a series of sequences of neural firings. Some of these neurons in the visual cortex are in fact remarkably specialized to respond to certain sorts of visual stimuli. When the whole set of sequences occurs, it causes a visual experience, and the visual experience has intentionality. It is a conscious mental event with an intentional content; that is, its conditions of satisfaction are internal to it. Now I could be having exactly that visual experience even if there were no tree there, provided only that something was going on in my brain sufficient to produce the experience. In such a case I would not see the tree but would be having a hallucination. In such a case, therefore, the intentionality is a matter of the internal causes; whether the intentionality is satisfied, that is, whether I actually see a tree as opposed to having a hallucination of the tree, is a matter of the external causes as well. If I were a brain in a vat I could have exactly the same mental states I have now; it is just that most of them would be false or otherwise unsatisfied. Now this simple example of visual experience is designed to make clear what I have in mind when I say that the operation of the brain is causally sufficient for intentionality, and that it is the operation of the brain and not the impact of the outside world that matters for the content of our intentional states, in at least one important sense of "content."
Some of the commentators seem to suppose that I take the causal powers of the brain by themselves to be an argument against strong AI. But that is a misunderstanding. It is an empirical question whether any given machine has causal powers equivalent to the brain. My argument against strong AI is that instantiating a program is not enough to guarantee that it has those causal powers.
Wait till next year. Many authors (Block, Sloman & Croucher, Dennett, Lycan, Bridgeman, Schank) claim that Schank's program is just not good enough but that newer and better programs will defeat my objection. I think this misses the point of the objection. My objection would hold against any program at all, qua formal computer program. Nor does it help the argument to add the causal theory of reference, for even if the formal tokens in the program have some causal connection to their alleged referents in the real world, as long as the agent has no way of knowing that, it adds no intentionality whatever to the formal tokens. Suppose, for example, that the symbol for egg foo yung in the Chinese room is actually causally connected to egg foo yung. Still, the man in the room has no way of knowing that. For him, it remains an uninterpreted formal symbol, with no semantic content whatever. I will return to this last point in the discussion of specific authors, especially Fodor.
Seriatim. I now turn, with the usual apologies for brevity, from these more general considerations to a series of specific arguments.
Haugeland has an argument that is genuinely original. Suppose a Chinese speaker has her neurons coated with a thin coating that prevents neuron firing. Suppose "Searle's demon" fills the gap by stimulating the neurons as if they had been fired. Then she will understand Chinese even though none of her neurons has the right causal powers; the demon has them, and he understands only English.
My objection is only to the last sentence. Her neurons still have the right causal powers; they just need some help from the demon. More generally, if the stimulation of the causes is at a low enough level to reproduce the causes and not merely describe them, the "simulation" will reproduce the effects. If what the demon does is reproduce the right causal phenomena, he will have reproduced the intentionality, which constitutes the effects of those phenomena. And it does not, for example, show that my brain lacks the capacity for consciousness if someone has to wake me up in the morning by massaging my head.
Haugeland's distinction between original and derivative intentionality is somewhat like mine between intrinsic intentionality and observer-relative ascriptions of intentionality. But he is mistaken in thinking that the only distinction is that original intentionality is "sufficiently rich" in its "semantic activity": the semantic activity in question is still observer-relative and hence not sufficient for intentionality. My car engine is, in his observer-relative sense, semantically active in all sorts of "rich" ways, but it has no intentionality. A human infant is semantically rather inactive, but it still has intentionality.
Rorty sets up an argument concerning transubstantiation that is formally parallel to mine concerning intrinsic and observer-relative attributions of intentionality. Since the premises of the transubstantiation argument are presumably false, the parallel is supposed to be an objection to my argument. But the parallel is totally irrelevant. Any valid argument whatever from true premises to true conclusions has exact formal analogues from false premises to false conclusions. Parallel to the familiar "Socrates is mortal" argument we have "Socrates is a dog. All dogs have three heads. Therefore Socrates has three heads." The possibility of such formal parallels does nothing to weaken the original arguments. To show that the parallel was insightful Rorty would have to show that my premises are as unfounded as the doctrine of transubstantiation. But what are my premises? They are such things as that people have mental states such as beliefs, desires, and visual experiences, that they also have brains, and that their mental states are causally the products of the operation of their brains. Rorty says nothing whatever to show that these propositions are false, and I frankly can't suppose that he doubts their truth. Would he like evidence for these three? He concludes by lamenting that if my views gain currency the "good work" of his favorite behaviorist and functionalist authors will be "undone." This is not a prospect I find at all distressing, since implicit in my whole account is the view that people really do have mental states, and to say so is not just to ascribe to them tendencies to behave, or to adopt a certain kind of stance toward them, or to suggest functional explanations of their behavior. This does not give the mental a "numinous Cartesian glow," it just implies that mental processes are as real as any other biological processes.
McCarthy and Wilensky both endorse the "systems reply." The major addition made by Wilensky is to suppose that we ask the Chinese subsystem whether it speaks Chinese and it answers yes. I have already suggested that this adds no plausibility whatever to the claim that there is any Chinese understanding going on in the system. Both Wilensky and McCarthy fail to answer the three objections I made to the systems reply.
  1. The Chinese subsystem still attaches no semantic content whatever to the formal tokens. The English subsystem knows that "hamburger" means hamburger. The Chinese subsystem knows only that squiggle squiggle is followed by squoggle squoggle.
  2. The systems reply is totally unmotivated. Its only motivation is the Turing test, and to appeal to that is precisely to beg the question by assuming what is in dispute.
  3. The systems reply has the consequence that all sorts of systematic input-output relations (for example, digestion) would have to count as understanding, since they warrant as much observer-relative ascription of intentionality as does the Chinese subsystem. (And it is, by the way, no answer to this point to appeal to the cognitive impenetrability of digestion, in Pylyshyn's [1980a] sense, since digestion is cognitively penetrable: the content of my beliefs can upset my digestion.)
Wilensky seems to think that it is an objection that other sorts of mental states besides intentional ones could have been made the subject of the argument. But I quite agree. I could have made the argument about pains, tickles, and anxiety, but these are (a) less interesting to me and (b) less discussed in the AI literature. I prefer to attack strong AI on what its proponents take to be their strongest ground.
Pylyshyn misstates my argument. I offer no a priori proof that a system of integrated circuit chips couldn't have intentionality. That is, as I say repeatedly, an empirical question. What I do argue is that in order to produce intentionality the system would have to duplicate the causal powers of the brain and that simply instantiating a formal program would not be sufficient for that. Pylyshyn offers no answer to the arguments I give for these conclusions.
Since Pylyshyn is not the only one who has this misunderstanding, it is perhaps worth emphasizing just what is at stake. The position of strong AI is that anything with the right program would have to have the relevant intentionality. The circuit chips in his example would necessarily have intentionality, and it wouldn't matter if they were circuit chips or water pipes or paper clips, provided they instantiated the program. Now I argue at some length that they couldn't have intentionality solely in virtue of instantiating the program. Once you see that the program doesn't necessarily add intentionality to a system, it then becomes an empirical question which kinds of systems really do have intentionality, and the condition necessary for that is that they must have causal powers equivalent to those of the brain. I think it is evident that all sorts of substances in the world, like water pipes and toilet paper, are going to lack those powers, but that is an empirical claim on my part. On my account it is a testable empirical claim whether in repairing a damaged brain we could duplicate the electrochemical basis of intentionality using some other substance, say silicon. On the position of strong AI there cannot be any empirical questions about the electrochemical bases necessary for intentionality since any substance whatever is sufficient for intentionality if it has the right program. I am simply trying to lay bare for all to see the full preposterousness of that view.
I believe that Pylyshyn also misunderstands the distinction between intrinsic and observer-relative ascriptions of intentionality. The relevant question is not how much latitude the observer has in making observer-relative ascriptions, but whether there is any intrinsic intentionality in the system to which the ascriptions could correspond.
Schank and I would appear to be in agreement on many issues, but there is at least one small misunderstanding. He thinks I want "to call into question the enterprise of AI." That is not true. I am all in favor of weak AI, at least as a research program. I entirely agree that if someone could write a program that would give the right input and output for Chinese stories it would be a "great achievement" requiring a "great understanding of the nature of language." I am not even sure it can be done. My point is that instantiating the program is not constitutive of understanding.
Abelson, like Schank, points out that it is no mean feat to program computers that can simulate story understanding. But, to repeat, that is an achievement of what I call weak AI, and I would enthusiastically applaud it. He mars this valid point by insisting that since our own understanding of most things, arithmetic for example, is very imperfect, "we might well be humble and give the computer the benefit of the doubt when and if it performs as well as we do." I am afraid that neither this nor his other points meets my arguments to show that, humble as we would wish to be, there is no reason to suppose that instantiating a formal program in the way a computer does is any reason at all for ascribing intentionality to it.
Fodor agrees with my central thesis that instantiating a program is not a sufficient condition of intentionality. He thinks, however, that if we got the right causal links between the formal symbols and things in the world that would be sufficient. Now there is an obvious objection to this variant of the robot reply that I have made several times: the same thought experiment as before applies to this case. That is, no matter what outside causal impacts there are on the formal tokens, these are not by themselves sufficient to give the tokens any intentional content. No matter what caused the tokens, the agent still doesn't understand Chinese. Let the egg foo yung symbol be causally connected to egg foo yung in any way you like, that connection by itself will never enable the agent to interpret the symbol as meaning egg foo yung. To do that he would have to have, for example, some awareness of the causal relation between the symbol and the referent; but now we are no longer explaining intentionality in terms of symbols and causes but in terms of symbols, causes, and intentionality, and we have abandoned both strong AI and the robot reply. Fodor's only answer to this is to say that it shows we haven't yet got the right kind of causal linkage. But what is the right kind, since the above argument applies to any kind? He says he can't tell us, but it is there all the same. Well, I can tell him what it is: it is any form of causation sufficient to produce intentional content in the agent, sufficient to produce, for example, a visual experience, or a memory, or a belief, or a semantic interpretation of some word.
Fodor's variant of the robot reply is therefore confronted with a dilemma. If the causal linkages are just matters of fact about the relations between the symbols and the outside world, they will never by themselves give any interpretation to the symbols; they will carry by themselves no intentional content. If, on the other hand, the causal impact is sufficient to produce intentionality in the agent, it can only be because there is something more to the system than the fact of the causal impact and the symbol, namely the intentional content that the impact produces in the agent. Either the man in the room doesn't learn the meaning of the symbol from the causal impact, in which case the causal impact adds nothing to the interpretation, or the causal impact teaches him the meaning of the word, in which case the cause is relevant only because it produces a form of intentionality that is something in addition to itself and the symbol. In neither case is symbol, or cause and symbol, constitutive of intentionality.
This is not the place to discuss the general role of formal processes in mental processes, but I cannot resist calling attention to one massive use-mention confusion implicit in Fodor's account. From the fact that, for example, syntactical rules concern formal objects, it does not follow that they are formal rules. Like other rules affecting human behavior they are defined by their content, not their form. It just so happens that in this case their content concerns forms.
In what is perhaps his crucial point, Fodor suggests that we should think of the brain or the computer as performing formal operations only on interpreted and not just on formal symbols. But who does the interpreting? And what is an interpretation? If he is saying that for intentionality there must be intentional content in addition to the formal symbols, then I of course agree. Indeed, two of the main points of my argument are that in our own case we have the "interpretation," that is, we have intrinsic intentionality, and that the computer program could never by itself be sufficient for that. In the case of the computer we make observer-relative ascriptions of intentionality, but that should not be mistaken for the real thing since the computer program by itself has no intrinsic intentionality.

Sloman & Croucher claim that the problem in my thought experiment is that the system isn't big enough. To Schank's story understander they would add all sorts of other operations, but they emphasize that these operations are computational and not physical. The obvious objection to their proposal is one they anticipate: I can still repeat my thought experiment with their system no matter how big it is. To this, they reply that I assume "without argument, that it is impossible for another mind to be based on his [my] mental process without his [my] knowing." But that is not what I assume. For all I know, that may be false. Rather, what I assume is that you can't understand Chinese if you don't know the meanings of any of the words in Chinese. More generally, unless a system can attach semantic content to a set of syntactic elements, the introduction of the elements in the system adds nothing by way of intentionality. That goes for me and for all the little subsystems that are being postulated inside me.
Eccles points out quite correctly that I never undertake to refute the dualist-interaction position held by him and Popper. Instead, I argue against strong AI on the basis of what might be called a monist interactionist position. My only excuse for not attacking his form of dualism head-on is that this paper really had other aims. I am concerned directly with strong AI and only incidentally with the "mind-brain problem." He is quite right in thinking that my arguments against strong AI are not by themselves inconsistent with his version of dualist interactionism, and I am pleased to see that we share the belief that "it is high time that strong AI was discredited."
I fear I have nothing original to say about Rachlin's behaviorist response, and if I discussed it I would make only the usual objections to extreme behaviorism. In my own case I have an extra difficulty with behaviorism and functionalism because I cannot imagine anybody actually believing these views. I know that people say they do, but what am I to make of it when Rachlin says that there are no "mental states underlying ... behavior" and "the pattern of the behavior is the mental state"? Are there no pains underlying Rachlin's pain behavior? For my own case I must confess that there unfortunately often are pains underlying my pain behavior, and I therefore conclude that Rachlin's form of behaviorism is not generally true.
Lycan tells us that my counterexamples are not counterexamples to a functionalist theory of language understanding, because the man in my counterexample would be using the wrong programs. Fine. Then tell us what the right programs are, and we will program the man with those programs and still produce a counterexample. He also tells us that the right causal connections will determine the appropriate content to attach to the formal symbols. I believe my reply to Fodor and other versions of the causal or robot reply is relevant to his argument as well, and so I will not repeat it.
Hofstadter cheerfully describes my target article as "one of the wrongest, most infuriating articles I have ever read in my life." I believe that he would have been less (or perhaps more?) infuriated if he had troubled to read the article at all carefully. His general strategy appears to be that whenever I assert p, he says that I assert not-p. For example, I reject dualism, so he says I believe in the soul. I think it is a plain fact of nature that mental phenomena are caused by neurophysiological phenomena, so he says I have "deep difficulty" in accepting any such view. The whole tone of my article is one of treating the mind as part of the (physical) world like anything else, so he says I have an "instinctive horror" of any such reductionism. He misrepresents my views at almost every point, and in consequence I find it difficult to take his commentary seriously. If my text is too difficult I suggest Hofstadter read Eccles, who correctly perceives my rejection of dualism.
Furthermore, Hofstadter's commentary contains the following non sequitur. From the fact that intentionality "springs from" the brain, together with the extra premise that "physical processes are formal, that is, rule governed," he infers that formal processes are constitutive of the mental, that we are "at bottom, formal systems." But that conclusion simply does not follow from the two premises. It does not even follow given his weird interpretation of the second premise: "To put it another way, the extra premise is that there is no intentionality at the level of particles." I can accept all these premises, but they just do not entail the conclusion. They do entail that intentionality is an "outcome of formal processes" in the trivial sense that it is an outcome of processes that have a level of description at which they are the instantiation of a computer program, but the same is true of milk and sugar and countless other "outcomes of formal processes."
Hofstadter also hypothesizes that perhaps a few trillion water pipes might work to produce consciousness, but he fails to come to grips with the crucial element of my argument, which is that even if this were the case it would have to be because the water-pipe system was duplicating the causal powers of the brain and not simply instantiating a formal program.
I think I agree with Smythe's subtle commentary except perhaps on one point. He seems to suppose that to the extent that the program is instantiated by "primitive hardware operations" my objections would not apply. But why? Let the man in my example have the program mapped into his hardware. He still doesn't thereby understand Chinese. Suppose he is so "hard wired" that he automatically comes out with uninterpreted Chinese sentences in response to uninterpreted Chinese stimuli. The case is still the same except that he is no longer acting voluntarily.
Side issues. I felt that some of the commentators missed the point or concentrated on peripheral issues, so my remarks about them will be even briefer.
I believe that Bridgeman has missed the point of my argument when he claims that though the homunculus in my example might not know what was going on, it could soon learn, and that it would simply need more information, specifically "information with a known relationship to the outside world." I quite agree. To the extent that the homunculus has such information it is more than a mere instantiation of a computer program, and thus it is irrelevant to my dispute with strong AI. According to strong AI, if the homunculus has the right program it must already have the information. But I disagree with Bridgeman's claim that the only properties of the brain are the properties it has at the level of neurons. I think all sides to the present dispute would agree that the brain has all sorts of properties that are not ascribable at the level of individual neurons - for example, causal properties (such as the brain's control of breathing).
Similar misgivings apply to the remarks of Marshall. He stoutly denounces the idea that there is anything weak about the great achievements of weak AI, and concludes "Clearly, there must be some radical misunderstanding here." The only misunderstanding was in his supposing that in contrasting weak with strong AI, I was in some way disparaging the former.
Marshall finds it strange that anyone should think that a program could be a theory. But the word program is used ambiguously. Sometimes "program" refers to the pile of punch cards, sometimes to a set of statements. It is in the latter sense that the programs are sometimes supposed to be theories. If Marshall objects to that sense, the dispute is still merely verbal and can be resolved by saying not that the program is a theory, but that the program is an embodiment of a theory. And the idea that programs could be theories is not something I invented. Consider the following. "Occasionally after seeing what a program can do, someone will ask for a specification of the theory behind it. Often the correct response is that the program is the theory" (Winston 1977, p. 259).
Ringle also missed my point. He says I take refuge in mysticism by arguing that "the physical properties of neuronal systems are such that they cannot in principle be simulated by a nonprotoplasmic computer." But that is not even remotely close to my claim. I think that anything can be given a formal simulation, and it is an empirical question in each case whether the simulation duplicated the causal features. The question is whether the formal simulation by itself, without any further causal elements, is sufficient to reproduce the mental. And the answer to that question is no, because of the arguments I have stated repeatedly, and which Ringle does not answer. It is just a fallacy to suppose that because the brain has a program and because the computer could have the same program, that what the brain does is nothing more than what the computer does. It is for each case an empirical question whether a rival system duplicates the causal powers of the brain, but it is a quite different question whether instantiating a formal program is by itself constitutive of the mental.
I also have the feeling, perhaps based on a misunderstanding, that Menzel's discussion is based on a confusion between how one knows that some system has mental states and what it is to have a mental state. He assumes that I am looking for a criterion for the mental, and he cannot see the point in my saying such vague things about the brain. But I am not in any sense looking for a criterion for the mental. I know what mental states are, at least in part, by myself being a system of mental states. My objection to strong AI is not, as Menzel claims, that it might fail in a single possible instance, but rather that in the instance in which it fails, it possesses no more resources than in any other instance; hence if it fails in that instance it fails in every instance.
I fail to detect any arguments in Walter's paper, only a few weak analogies. He laments my failure to make my views on intentionality more explicit. They are so made in the three papers cited by Natsoulas (Searle 1979a; 1979b; 1979c).
Further implications. I can only express my appreciation for the contributions of Danto, Libet, Maxwell, Puccetti, and Natsoulas. In various ways, they each add supporting arguments and commentary to the main thesis. Both Natsoulas and Maxwell challenge me to provide some answers to questions about the relevance of the discussion to the traditional ontological and mind-body issues. I try to avoid as much as possible the traditional vocabulary and categories, and my own - very tentative - picture is this. Mental states are as real as any other biological phenomena. They are both caused by and realized in the brain. That is no more mysterious than the fact that such properties as the elasticity and puncture resistance of an inflated car tire are both caused by and realized in its microstructure. Of course, this does not imply that mental states are ascribable to individual neurons, any more than the properties at the level of the tire are ascribable to individual electrons. To pursue the analogy: the brain operates causally both at the level of the neurons and at the level of the mental states, in the same sense that the tire operates causally both at the level of particles and at the level of its overall properties. Mental states are no more epiphenomenal than are the elasticity and puncture resistance of an inflated tire, and interactions can be described both at the higher and lower levels, just as in the analogous case of the tire.
Some, but not all, mental states are conscious, and the intentional-nonintentional distinction cuts across the conscious-unconscious distinction. At every level the phenomena are causal. I suppose this is "interactionism," and I guess it is also, in some sense, "monism," but I would prefer to avoid this myth-eaten vocabulary altogether.
Conclusion. I conclude that the Chinese room has survived the assaults of its critics. The remaining puzzle to me is this: why do so many workers in AI still want to adhere to strong AI? Surely weak AI is challenging, interesting, and difficult enough.

ACKNOWLEDGMENT

I am indebted to Paul Kube for discussion of these issues.

References

Anderson, J. (1980) Cognitive units. Paper presented at the Society for Philosophy and Psychology, Ann Arbor, Mich. [RCS]
Block, N. J. (1978) Troubles with functionalism. In: Minnesota studies in the philosophy of science, vol. 9, ed. C. W. Savage. Minneapolis: University of Minnesota Press. [NB, WGL]
(forthcoming) Psychologism and behaviorism. Philosophical Review. [NB, WGL]
Bower, G. H.; Black, J. B.; & Turner, T. J. (1979) Scripts in text comprehension and memory. Cognitive Psychology 11: 177-220. [RCS]
Carroll, C. W. (1975) The great chess automaton. New York: Dover. [RP]
Cummins, R. (1977) Programs in the explanation of behavior. Philosophy of Science 44: 269-87. [JCM]
Dennett, D. C. (1969) Content and consciousness. London: Routledge & Kegan Paul. [DD, TN]
(1971) Intentional systems. Journal of Philosophy 68: 87-106. [TN]
(1972) Reply to Arbib and Gunderson. Paper presented at the Eastern Division meeting of the American Philosophical Association, Boston, Mass. [TN]
(1975) Why the law of effect won't go away. Journal for the Theory of Social Behavior 5: 169-87. [NB]
(1978) Brainstorms. Montgomery, Vt.: Bradford Books. [DD, AS]
Eccles, J. C. (1978) A critical appraisal of brain-mind theories. In: Cerebral correlates of conscious experiences, ed. P. A. Buser and A. Rougeul-Buser, pp. 347-55. Amsterdam: North Holland. [JCE]
(1979) The human mystery. Heidelberg: Springer Verlag. [JCE]
Fodor, J. A. (1968) The appeal to tacit knowledge in psychological explanation. Journal of Philosophy 65: 627-40. [NB]
(1980) Methodological solipsism considered as a research strategy in cognitive psychology. The Behavioral and Brain Sciences 3:1. [NB, WGL, WES]
Freud, S. (1895) Project for a scientific psychology. In: The standard edition of the complete psychological works of Sigmund Freud, vol. 1, ed. J. Strachey. London: Hogarth Press, 1966. [JCM]
Frey, P. W. (1977) An introduction to computer chess. In: Chess skill in man and machine, ed. P. W. Frey. New York, Heidelberg, Berlin: Springer-Verlag. [RP]
Fryer, D. M. & Marshall, J. C. (1979) The motives of Jacques de Vaucanson. Technology and Culture 20: 257-69. [JCM]
Gibson, J. J. (1966) The senses considered as perceptual systems. Boston: Houghton Mifflin. [TN]
(1967) New reasons for realism. Synthese 17: 162-72. [TN]
(1972) A theory of direct visual perception. In: The psychology of knowing, ed. S. R. Royce & W. W. Rozeboom. New York: Gordon & Breach. [TN]
Graesser, A. C.; Gordon, S. E.; & Sawyer, J. D. (1979) Recognition memory for typical and atypical actions in scripted activities: tests for a script pointer and tag hypotheses. Journal of Verbal Learning and Verbal Behavior 1: 319-32. [RCS]
Gruendel, J. (1980) Scripts and stories: a study of children's event narratives. Ph.D. dissertation, Yale University. [RCS]
Hanson, N. R. (1969) Perception and discovery. San Francisco: Freeman, Cooper. [DOW]
Hayes, P. J. (1977) In defence of logic. In: Proceedings of the 5th international joint conference on artificial intelligence, ed. R. Reddy. Cambridge, Mass.: M.I.T. Press. [WES]
Hobbes, T. (1651) Leviathan. London: Willis. [JCM]
Hofstadter, D. R. (1979) Gödel, Escher, Bach. New York: Basic Books. [DOW]
Householder, F. W. (1962) On the uniqueness of semantic mapping. Word 18: 173-85. [JCM]
Huxley, T. H. (1874) On the hypothesis that animals are automata and its history. In: Collected Essays, vol. 1. London: Macmillan. [JCM]
Kolers, P. A. & Smythe, W. E. (1979) Images, symbols, and skills. Canadian Journal of Psychology 33: 158-84. [WES]
Kosslyn, S. M. & Shwartz, S. P. (1977) A simulation of visual imagery. Cognitive Science 1: 265-95. [WES]
Lenneberg, E. H. (1975) A neuropsychological comparison between man, chimpanzee and monkey. Neuropsychologia 13: 125. [JCE]
Libet, B. (1973) Electrical stimulation of cortex in human subjects and conscious sensory aspects. In: Handbook of sensory physiology, vol. II, ed. A. Iggo, pp. 743-90. New York: Springer-Verlag. [BL]
Libet, B.; Wright, E. W., Jr.; Feinstein, B.; & Pearl, D. K. (1979) Subjective referral of the timing for a conscious sensory experience: a functional role for the somatosensory specific projection system in man. Brain 102: 191-222. [BL]
Longuet-Higgins, H. C. (1979) The perception of music. Proceedings of the Royal Society of London B 205: 307-22. [JCM]
Lucas, J. R. (1961) Minds, machines, and Gödel. Philosophy 36: 112-127. [DRH]
Lycan, W. G. (forthcoming) Form, function, and feel. Journal of Philosophy. [NB, WGL]
McCarthy, J. (1979) Ascribing mental qualities to machines. In: Philosophical perspectives in artificial intelligence, ed. M. Ringle. Atlantic Highlands, N.J.: Humanities Press. [JM, JRS]
Marr, D. & Poggio, T. (1979) A computational theory of human stereo vision. Proceedings of the Royal Society of London B 204: 301-28. [JCM]
Marshall, J. C. (1971) Can humans talk? In: Biological and social factors in psycholinguistics, ed. J. Morton. London: Logos Press. [JCM]
(1977) Minds, machines and metaphors. Social Studies of Science 7: 475-88. [JCM]
Maxwell, G. (1976) Scientific results and the mind-brain issue. In: Consciousness and the brain, ed. G. G. Globus, G. Maxwell, & I. Savodnik. New York: Plenum Press. [GM]
(1978) Rigid designators and mind-brain identity. In: Perception and cognition: Issues in the foundations of psychology, Minnesota Studies in the Philosophy of Science, vol. 9, ed. C. W. Savage. Minneapolis: University of Minnesota Press. [GM]
Mersenne, M. (1636) Harmonie universelle. Paris: Le Gras. [JCM]
Moor, J. H. (1978) Three myths of computer science. British Journal of the Philosophy of Science 29: 213-22. [JCM]
Nagel, T. (1974) What is it like to be a bat? Philosophical Review 83: 435-50. [GM]
Natsoulas, T. (1974) The subjective, experiential element in perception. Psychological Bulletin 81: 611-31. [TN]
(1977) On perceptual aboutness. Behaviorism 5: 75-97. [TN]
(1978a) Haugeland's first hurdle. Behavioral and Brain Sciences 1: 243. [TN]
(1979b) Residual subjectivity. American Psychologist 33: 269-83. [TN]
(1980) Dimensions of perceptual awareness. Psychology Department, University of California, Davis. Unpublished manuscript. [TN]
Nelson, K. & Gruendel, J. (1978) From person episode to social script: two dimensions in the development of event knowledge. Paper presented at the biennial meeting of the Society for Research in Child Development, San Francisco. [RCS]
Newell, A. (1973) Production systems: models of control structures. In: Visual information processing, ed. W. C. Chase. New York: Academic Press. [WES]
(1979) Physical symbol systems. Lecture at the La Jolla Conference on Cognitive Science. [JRS]
(1980) Harpy, production systems, and human cognition. In: Perception and production of fluent speech, ed. R. Cole. Hillsdale, N.J.: Erlbaum Press. [WES]
Newell, A. & Simon, H. A. (1963) GPS, a program that simulates human thought. In: Computers and thought, ed. A. Feigenbaum & V. Feldman, pp. 279-93. New York: McGraw Hill.
Panofsky, E. (1954) Galileo as a critic of the arts. The Hague: Martinus Nijhoff.
Popper, K. R. & Eccles, J. C. (1977) The self and its brain. Heidelberg: Springer-Verlag. [JCE, GM]
Putnam, H. (1960) Minds and machines. In: Dimensions of mind, ed. S. Hook, pp. 138-64. New York: Collier. [MR, RR]
(1975a) The meaning of "meaning." In: Mind, language and reality. Cambridge: Cambridge University Press. [NB, WGL]
(1975b) The nature of mental states. In: Mind, language and reality. Cambridge: Cambridge University Press. [NB]
(1975c) Philosophy and our mental life. In: Mind, language and reality. Cambridge: Cambridge University Press. [MM]
Pylyshyn, Z. W. (1980a) Computation and cognition: issues in the foundations of cognitive science. Behavioral and Brain Sciences 3. [JRS, WES]
(1980b) Cognitive representation and the process-architecture distinction. Behavioral and Brain Sciences. [ZWP]
Russell, B. (1948) Human knowledge: its scope and limits. New York: Simon and Schuster. [GM]
Schank, R. C. & Abelson, R. P. (1977) Scripts, plans, goals, and understanding. Hillsdale, N.J.: Lawrence Erlbaum Press. [RCS, JRS]
Searle, J. R. (1979a) Intentionality and the use of language. In: Meaning and use, ed. A. Margalit. Dordrecht: Reidel. [TN, JRS]
(1979b) The intentionality of intention and action. Inquiry 22: 253-
(1979c) What is an intentional state? Mind 88: 74-92. [JH, GM, TN, JRS]
Sherrington, C. S. (1950) Introductory. In: The physical basis of mind, ed. P. Laslett. Oxford: Basil Blackwell. [JCE]
Slate, J. S. & Atkin, L. R. (1977) CHESS 4.5 - the Northwestern University chess program. In: Chess skill in man and machine, ed. P. W. Frey. New York, Heidelberg, Berlin: Springer Verlag.
Sloman, A. (1978) The computer revolution in philosophy. Harvester Press and Humanities Press. [AS]
(1979) The primacy of non-communicative language. In: The analysis of meaning (informatics 5), ed. M. McCafferty & K. Gray. London: ASLIB and British Computer Society. [AS]
Smith, E. E.; Adams, N.; & Schorr, D. (1978) Fact retrieval and the paradox of interference. Cognitive Psychology 10: 438-64. [RCS]
Smythe, W. E. (1979) The analogical/propositional debate about mental representation: a Goodmanian analysis. Paper presented at the 5th annual meeting of the Society for Philosophy and Psychology, New York City. [WES]
Sperry, R. W. (1969) A modified concept of consciousness. Psychological Review 76: 532-36. [TN]
(1970) An objective approach to subjective experience: further explanation of a hypothesis. Psychological Review 77: 585-90. [TN]
(1976) Mental phenomena as causal determinants in brain function. In: Consciousness and the brain, ed. G. G. Globus, G. Maxwell, & I. Savodnik. New York: Plenum Press. [TN]
Stich, S. P. (in preparation) On the ascription of content. In: Entertaining thoughts, ed. A. Woodfield. [WGL]
Thorne, J. P. (1968) A computer model for the perception of syntactic structure. Proceedings of the Royal Society of London B 171: 377-86. [JCM]
Turing, A. M. (1964) Computing machinery and intelligence. In: Minds and machines, ed. A. R. Anderson, pp. 4-30. Englewood Cliffs, N.J.: Prentice-Hall. [MR]
Weizenbaum, J. (1965) Eliza - a computer program for the study of natural language communication between man and machine. Communications of the Association for Computing Machinery 9: 36-45. [JRS]
(1976) Computer power and human reason. San Francisco: W. H. Freeman. [JRS]
Winograd, T. (1973) A procedural model of language understanding. In: Computer models of thought and language, ed. R. Schank & K. Colby. San Francisco: W. H. Freeman. [JRS]
Winston, P. H. (1977) Artificial intelligence. Reading, Mass.: Addison-Wesley. [JRS]
Woodruff, G. & Premack, D. (1979) Intentional communication in the chimpanzee: the development of deception. Cognition 7: 333-62. [JCM]
