Preface: This is the most important attempt I have made since I began writing. This translation has been authorized by Professor Daniel Dennett. I will be translating Chapter 7, "Artificial Intelligence as Philosophy and as Psychology," from his 1981 book *Brainstorms: Philosophical Essays on Mind and Psychology*.
Daniel Clement Dennett (born March 28, 1942) is an American philosopher, writer, and cognitive scientist. His research centers on the philosophy of science and the philosophy of biology, particularly topics connected with evolutionary biology and cognitive science. He is currently a professor of philosophy at Tufts University, where he holds the Austin B. Fletcher Chair in Philosophy and serves as co-director of the Center for Cognitive Studies. Dennett is a committed atheist and secularist. (Biographical details from Wikipedia.)
A common view is that the two research strategies can and should proceed simultaneously, but we already have ample evidence that the bottom-up strategy has contributed little of value to psychology. Within the bottom-up strategy there are two relatively mature psychological theories: one is stimulus-response behaviorism, and the other might be called "neuron signal physiological psychology." Most researchers agree that both are outdated. As for the former, the stimulus-response structure cannot clearly be shown to serve as the basic particles at the psychological level. As for the latter, although synapses and impulse trains could serve as perfectly good basic particles, their sheer number is so vast that when researchers give up the study of afferent and efferent nerves and turn to the brain itself, the complexity of the interactions among neurons forces them to abandon the study of single neurons (see Chapters 4 and 5).(2)
Now, one fact still confronts us: most opponents of artificial intelligence are people who feel angered by, and distrustful of, the deceptions described above. Why do AI researchers resort to these little tricks? For several reasons. First, they need these programs to "tell a story," and turning what would otherwise be sober, technical, understated output into vivid, "natural"-sounding dialogue is no great difficulty for the researchers (compare "REASON: PRIOR COMMAND TO DO THAT"). Second, in Winograd's case, what he attempted was to reveal the minimum we can accept when carrying out structural analysis of language in AI (note the fronted pronoun in the sentence of the example above). We can therefore examine the AI's "natural" language output to determine whether our account of natural language and of the input side is sound. Third, putting canned responses into a program is simply good fun, whether for teasing colleagues who will not actually be taken in or laymen who do not understand the underlying principles. As laymen, we must learn how to avoid being fooled by this seemingly genuine artificial intelligence. Its manufactured realism, like the chemist's assortment of glassware, or the fighter planes of World War II painted with rows of teeth, is merely an illusion made to look intimidating.
(*1) George Smith and Barbara Klein pointed out to me that the question posed in the text is ambiguous when understood from certain angles, so that quite different responses can in fact be given to the same question. In what follows, the different strategies I propose for answering this question actually answer somewhat different (but similar) questions. Philosophers who pose such questions sometimes find, to their embarrassment, that the replies they receive are detailed answers to other, related questions.
(*4) Margaret Boden's comments were a great help with my original text. Her *Artificial Intelligence and Natural Man* (Harvester, 1977) provides an introduction that helps philosophers understand artificial intelligence.
1. J. Weizenbaum, Computer Power and Human Reason (San Francisco: Freeman, 1976): p. 179, credits Louis Fein with this term.
2. Cf. also Content and Consciousness.
3. Cf. Zenon Pylyshyn, “Complexity and the Study of Artificial and Human Intelligence,” in Martin Ringle, ed., Philosophical Perspectives on Artificial Intelligence (Humanities Press and Harvester Press, 1978), for a particularly good elaboration of the top-down strategy, a familiar theme in AI and cognitive psychology. Moore and Newell’s “How can MERLIN Understand?” in Lee W. Gregg, Knowledge and Cognition (New York: Academic Press, 1974), is the most clear and self-conscious employment of this strategy I have found.
4. See also Judson Webb, “Gödel’s Theorem and Church’s Thesis: A Prologue to Mechanism,” Boston Studies in the Philosophy of Science, XXXI (Reidel, 1976).
5. Wilfrid Sellars, Science, Perception and Reality (London: Routledge & Kegan Paul, 1963): pp. 182ff.
6. Terry Winograd, Understanding Natural Language (New York: Academic Press, 1972), pp. 12ff.
7. Cf. Correspondence between Weizenbaum, et al. in Communications of the Association for Computing Machinery; Weizenbaum, CACM, XVII, 7 (July 1974): 425; Arbib, CACM, XVII, 9 (Sept. 1974): 543; McLeod, CACM, XVIII, 9 (Sept. 1975): 546; Wilks, CACM, XIX, 2 (Feb. 1976): 108; Weizenbaum and McLeod, CACM, XIX, 6 (June 1976): 362.
8. J. Weizenbaum, “Contextual Understanding by Computers,” CACM, X, 8 (1967): 464–80; also Computer Power and Human Reason.
9. Cf. Pylyshyn, op. cit.
10. Cf. Weizenbaum, Computer Power and Human Reason, for detailed support of this claim.
11. Cf. Jerry Fodor, “The Appeal to Tacit Knowledge in Psychological Explanation,” Journal of Philosophy, LXV (1968); F. Attneave, “In Defense of Homunculi,” in W. Rosenblith, Sensory Communication (Cambridge: MIT Press, 1960); R. DeSousa, “Rational Homunculi,” in A. Rorty, ed., The Identities of Persons (University of California Press, 1976); Elliot Sober, “Mental Representations,” in Synthese, XXXIII (1976).
12. See, e.g., Daniel Bobrow, “Dimensions of Representation,” in D. Bobrow and A. Collins, eds., Representation and Understanding (New York: Academic Press, 1975).
13. W. A. Woods, “What's In a Link?” in Bobrow and Collins, op. cit.; Z. Pylyshyn, “Imagery and Artificial Intelligence,” in C. Wade Savage, ed., Minnesota Studies in the Philosophy of Science, IX (forthcoming), and Pylyshyn, “Complexity and the Study of Human and Artificial Intelligence,” op. cit.; M. Minsky, “A Framework for Representing Knowledge,” in P. Winston, ed., The Psychology of Computer Vision (New York, 1975).
14. Cf. Winograd on the costs and benefits of declarative representations in Bobrow and Collins, op. cit.: 188.
15. See, e.g., Winston, op. cit., and Pylyshyn, “Imagery and Artificial Intelligence,” op. cit.; “What the Mind's Eye Tells the Mind's Brain,” Psychological Bulletin (1972); and the literature referenced in these papers.
16. See e.g., Pylyshyn’s paper op. cit.; Winograd in Bobrow and Collins, op. cit.; Moore and Newell, op. cit.; Minsky, op. cit.
Reference list:
Dennett, D. C. (1981). Artificial Intelligence as Philosophy and as Psychology. In Brainstorms. MIT Press. doi:10.7551/mitpress/11146.003.0013