
Ethics of Artificial Intelligence and Robotics

First published Thu Apr 30, 2020

Artificial intelligence (AI) and robotics are digital technologies that will have significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control these.

After the Introduction to the field (§1), the main themes (§2) of this article are: Ethical issues that arise with AI systems as objects, i.e., tools made and used by humans. This includes issues of privacy (§2.1) and manipulation (§2.2), opacity (§2.3) and bias (§2.4), human-robot interaction (§2.5), employment (§2.6), and the effects of autonomy (§2.7). Then AI systems as subjects, i.e., ethics for the AI systems themselves in machine ethics (§2.8) and artificial moral agency (§2.9). Finally, the problem of a possible future AI superintelligence leading to a “singularity” (§2.10). We close with a remark on the vision of AI (§3).

For each section within these themes, we provide a general explanation of the ethical issues, outline existing positions and arguments, then analyse how these play out with current technologies and finally, what policy consequences may be drawn.

1. Introduction

1.1 Background of the Field

The ethics of AI and robotics is often focused on “concerns” of various sorts, which is a typical response to new technologies. Many such concerns turn out to be rather quaint (trains are too fast for souls); some are predictably wrong when they suggest that the technology will fundamentally change humans (telephones will destroy personal communication, writing will destroy memory, video cassettes will make going out redundant); some are broadly correct but moderately relevant (digital technology will destroy industries that make photographic film, cassette tapes, or vinyl records); but some are broadly correct and deeply relevant (cars will kill children and fundamentally change the landscape). The task of an article such as this is to analyse the issues and to deflate the non-issues.

Some technologies, like nuclear power, cars, or plastics, have caused ethical and political discussion and significant policy efforts to control the trajectory of these technologies, usually only once some damage is done. In addition to such “ethical concerns”, new technologies challenge current norms and conceptual systems, which is of particular interest to philosophy. Finally, once we have understood a technology in its context, we need to shape our societal response, including regulation and law. All these features also exist in the case of new AI and Robotics technologies—plus the more fundamental fear that they may end the era of human control on Earth.

The ethics of AI and robotics has seen significant press coverage in recent years, which supports related research, but also may end up undermining it: the press often talks as if the issues under discussion were just predictions of what future technology will bring, and as though we already know what would be most ethical and how to achieve that. Press coverage thus focuses on risk, security (Brundage et al. 2018, in the Other Internet Resources section below, hereafter [OIR]), and prediction of impact (e.g., on the job market). The result is a discussion of essentially technical problems that focus on how to achieve a desired outcome. Current discussions in policy and industry are also motivated by image and public relations, where the label “ethical” is really not much more than the new “green”, perhaps used for “ethics washing”. For a problem to qualify as a problem for AI ethics would require that we do not readily know what the right thing to do is. In this sense, job loss, theft, or killing with AI is not a problem in ethics, but whether these are permissible under certain circumstances is a problem. This article focuses on the genuine problems of ethics where we do not readily know what the answers are.

A last caveat: The ethics of AI and robotics is a very young field within applied ethics, with significant dynamics, but few well-established issues and no authoritative overviews—though there is a promising outline (European Group on Ethics in Science and New Technologies 2018) and there are beginnings on societal impact (Floridi et al. 2018; Taddeo and Floridi 2018; S. Taylor et al. 2018; Walsh 2018; Bryson 2019; Gibert 2019; Whittlestone et al. 2019), and policy recommendations (AI HLEG 2019 [OIR]; IEEE 2019). So this article cannot merely reproduce what the community has achieved thus far, but must propose an ordering where little order exists.

1.2 AI & Robotics

The notion of “artificial intelligence” (AI) is understood broadly as any kind of artificial computational system that shows intelligent behaviour, i.e., complex behaviour that is conducive to reaching goals. In particular, we do not wish to restrict “intelligence” to what would require intelligence if done by humans, as Minsky had suggested (1985). This means we incorporate a range of machines, including those in “technical AI”, that show only limited abilities in learning or reasoning but excel at the automation of particular tasks, as well as machines in “general AI” that aim to create a generally intelligent agent.

AI somehow gets closer to our skin than other technologies—thus the field of “philosophy of AI”. Perhaps this is because the project of AI is to create machines that have a feature central to how we humans see ourselves, namely as feeling, thinking, intelligent beings. The main purposes of an artificially intelligent agent probably involve sensing, modelling, planning and action, but current AI applications also include perception, text analysis, natural language processing (NLP), logical reasoning, game-playing, decision support systems, data analytics, predictive analytics, as well as autonomous vehicles and other forms of robotics (P. Stone et al. 2016). AI may involve any number of computational techniques to achieve these aims, be that classical symbol-manipulating AI, inspired by natural cognition, or machine learning via neural networks (Goodfellow, Bengio, and Courville 2016; Silver et al. 2018).

Historically, it is worth noting that the term “AI” was used as above ca. 1950–1975, then came into disrepute during the “AI winter”, ca. 1975–1995, and narrowed. As a result, areas such as “machine learning”, “natural language processing” and “data science” were often not labelled as “AI”. Since ca. 2010, the use has broadened again, and at times almost all of computer science and even high-tech is lumped under “AI”. Now it is a name to be proud of, a booming industry with massive capital investment (Shoham et al. 2018), and on the edge of hype again. As Erik Brynjolfsson noted, it may allow us to

virtually eliminate global poverty, massively reduce disease and provide better education to almost everyone on the planet. (quoted in Anderson, Rainie, and Luchsinger 2018)

While AI can be entirely software, robots are physical machines that move. Robots are subject to physical impact, typically through “sensors”, and they exert physical force onto the world, typically through “actuators”, like a gripper or a turning wheel. Accordingly, autonomous cars or planes are robots, and only a minuscule portion of robots is “humanoid” (human-shaped), like in the movies. Some robots use AI, and some do not: Typical industrial robots blindly follow completely defined scripts with minimal sensory input and no learning or reasoning (around 500,000 such new industrial robots are installed each year (IFR 2019 [OIR])). It is probably fair to say that while robotics systems cause more concerns in the general public, AI systems are more likely to have a greater impact on humanity. Also, AI or robotics systems for a narrow set of tasks are less likely to cause new issues than systems that are more flexible and autonomous.

Robotics and AI can thus be seen as covering two overlapping sets of systems: systems that are only AI, systems that are only robotics, and systems that are both. We are interested in all three; the scope of this article is thus not only the intersection, but the union, of both sets.

1.3 A Note on Policy

Policy is only one of the concerns of this article. There is significant public discussion about AI ethics, and there are frequent pronouncements from politicians that the matter requires new policy, which is easier said than done: Actual technology policy is difficult to plan and enforce. It can take many forms, from incentives and funding, infrastructure, taxation, or good-will statements, to regulation by various actors, and the law. Policy for AI will possibly come into conflict with other aims of technology policy or general policy. Governments, parliaments, associations, and industry circles in industrialised countries have produced reports and white papers in recent years, and some have generated good-will slogans (“trusted/responsible/humane/human-centred/good/beneficial AI”), but is that what is needed? For a survey, see Jobin, Ienca, and Vayena (2019) and V. Müller’s list of PT-AI Policy Documents and Institutions.

For people who work in ethics and policy, there might be a tendency to overestimate the impact and threats from a new technology, and to underestimate how far current regulation can reach (e.g., for product liability). On the other hand, there is a tendency for businesses, the military, and some public administrations to “just talk” and do some “ethics washing” in order to preserve a good public image and continue as before. Actually implementing legally binding regulation would challenge existing business models and practices. Actual policy is not just an implementation of ethical theory, but subject to societal power structures—and the agents that do have the power will push against anything that restricts them. There is thus a significant risk that regulation will remain toothless in the face of economical and political power.

Though very little actual policy has been produced, there are some notable beginnings: The latest EU policy document suggests “trustworthy AI” should be lawful, ethical, and technically robust, and then spells this out as seven requirements: human oversight, technical robustness, privacy and data governance, transparency, fairness, well-being, and accountability (AI HLEG 2019 [OIR]). Much European research now runs under the slogan of “responsible research and innovation” (RRI), and “technology assessment” has been a standard field since the advent of nuclear power. Professional ethics is also a standard field in information technology, and this includes issues that are relevant in this article. Perhaps a “code of ethics” for AI engineers, analogous to the codes of ethics for medical doctors, is an option here (Véliz 2019). What data science itself should do is addressed in (L. Taylor and Purtova 2019). We also expect that much policy will eventually cover specific uses or technologies of AI and robotics, rather than the field as a whole. A useful summary of an ethical framework for AI is given in (European Group on Ethics in Science and New Technologies 2018: 13ff). On general AI policy, see Calo (2018) as well as Crawford and Calo (2016); Stahl, Timmermans, and Mittelstadt (2016); Johnson and Verdicchio (2017); and Giubilini and Savulescu (2018). A more political angle of technology is often discussed in the field of “Science and Technology Studies” (STS). As books like The Ethics of Invention (Jasanoff 2016) show, concerns in STS are often quite similar to those in ethics (Jacobs et al. 2019 [OIR]). In this article, we discuss the policy for each type of issue separately rather than for AI or robotics in general.

2. Main Debates

In this section we outline the ethical issues of human use of AI and robotics systems that can be more or less autonomous—which means we look at issues that arise with certain uses of the technologies which would not arise with others. It must be kept in mind, however, that technologies will always cause some uses to be easier, and thus more frequent, and hinder other uses. The design of technical artefacts thus has ethical relevance for their use (Houkes and Vermaas 2010; Verbeek 2011), so beyond “responsible use”, we also need “responsible design” in this field. The focus on use does not presuppose which ethical approaches are best suited for tackling these issues; they might well be virtue ethics (Vallor 2017) rather than consequentialist or value-based (Floridi et al. 2018). This section is also neutral with respect to the question whether AI systems truly have “intelligence” or other mental properties: It would apply equally well if AI and robotics are merely seen as the current face of automation (cf. Müller forthcoming-b).

2.1 Privacy & Surveillance

There is a general discussion about privacy and surveillance in information technology (e.g., Macnish 2017; Roessler 2017), which mainly concerns the access to private data and data that is personally identifiable. Privacy has several well recognised aspects, e.g., “the right to be let alone”, information privacy, privacy as an aspect of personhood, control over information about oneself, and the right to secrecy (Bennett and Raab 2006). Privacy studies have historically focused on state surveillance by secret services but now include surveillance by other state agents, businesses, and even individuals. The technology has changed significantly in the last decades while regulation has been slow to respond (though there is the Regulation (EU) 2016/679)—the result is a certain anarchy that is exploited by the most powerful players, sometimes in plain sight, sometimes in hiding.

The digital sphere has widened greatly: All data collection and storage is now digital, our lives are increasingly digital, most digital data is connected to a single Internet, and there is more and more sensor technology in use that generates data about non-digital aspects of our lives. AI increases both the possibilities of intelligent data collection and the possibilities for data analysis. This applies to blanket surveillance of whole populations as well as to classic targeted surveillance. In addition, much of the data is traded between agents, usually for a fee.

At the same time, controlling who collects which data, and who has access, is much harder in the digital world than it was in the analogue world of paper and telephone calls. Many new AI technologies amplify the known issues. For example, face recognition in photos and videos allows identification and thus profiling and searching for individuals (Whittaker et al. 2018: 15ff). This continues using other techniques for identification, e.g., “device fingerprinting”, which are commonplace on the Internet (sometimes revealed in the “privacy policy”). The result is that “In this vast ocean of data, there is a frighteningly complete picture of us” (Smolan 2016: 1:01). The result is arguably a scandal that still has not received due public attention.

The data trail we leave behind is how our “free” services are paid for—but we are not told about that data collection and the value of this new raw material, and we are manipulated into leaving ever more such data. For the “big 5” companies (Amazon, Google/Alphabet, Microsoft, Apple, Facebook), the main data-collection part of their business appears to be based on deception, exploiting human weaknesses, furthering procrastination, generating addiction, and manipulation (Harris 2016 [OIR]). The primary focus of social media, gaming, and most of the Internet in this “surveillance economy” is to gain, maintain, and direct attention—and thus data supply. “Surveillance is the business model of the Internet” (Schneier 2015). This surveillance and attention economy is sometimes called “surveillance capitalism” (Zuboff 2019). It has caused many attempts to escape from the grasp of these corporations, e.g., in exercises of “minimalism” (Newport 2019), sometimes through the open source movement, but it appears that present-day citizens have lost the degree of autonomy needed to escape while fully continuing with their life and work. We have lost ownership of our data, if “ownership” is the right relation here. Arguably, we have lost control of our data.

These systems will often reveal facts about us that we ourselves wish to suppress or are not aware of: they know more about us than we know ourselves. Even just observing online behaviour allows insights into our mental states (Burr and Cristianini 2019) and manipulation (see below section 2.2). This has led to calls for the protection of “derived data” (Wachter and Mittelstadt 2019). With the last sentence of his bestselling book, Homo Deus, Harari asks about the long-term consequences of AI:

What will happen to society, politics and daily life when non-conscious but highly intelligent algorithms know us better than we know ourselves? (2016: 462)

Robotic devices have not yet played a major role in this area, except for security patrolling, but this will change once they are more common outside of industry environments. Together with the “Internet of things”, the so-called “smart” systems (phone, TV, oven, lamp, virtual assistant, home,…), “smart city” (Sennett 2018), and “smart governance”, they are set to become part of the data-gathering machinery that offers more detailed data, of different types, in real time, with ever more information.

Privacy-preserving techniques that can largely conceal the identity of persons or groups are now a standard staple in data science; they include (relative) anonymisation, access control (plus encryption), and other models where computation is carried out with fully or partially encrypted input data (Stahl and Wright 2018); in the case of “differential privacy”, this is done by adding calibrated noise to encrypt the output of queries (Dwork et al. 2006; Abowd 2017). While requiring more effort and cost, such techniques can avoid many of the privacy issues. Some companies have also seen better privacy as a competitive advantage that can be leveraged and sold at a price.
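
To make the idea of “calibrated noise” concrete, the following is a minimal Python sketch of a differentially private counting query in the spirit of the Laplace mechanism of Dwork et al. (2006): the true count is perturbed with noise whose scale depends on the query’s sensitivity and a privacy parameter epsilon. The function name, the data, and the parameter values are purely illustrative assumptions, not a reference implementation.

```python
import numpy as np

def dp_count(records, predicate, epsilon=0.1, sensitivity=1.0):
    """Return a differentially private count of records satisfying `predicate`.

    A counting query has sensitivity 1: adding or removing one person changes
    the true count by at most 1, so Laplace noise with scale sensitivity/epsilon
    yields epsilon-differential privacy (Dwork et al. 2006).
    """
    true_count = sum(1 for r in records if predicate(r))
    noise = np.random.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# Illustrative data: ages of individuals in a hypothetical database.
ages = [23, 35, 41, 29, 52, 67, 19, 44]
print(dp_count(ages, lambda age: age > 40, epsilon=0.1))
```

Smaller values of epsilon add more noise and thus more privacy at the cost of accuracy, which is one concrete form of the extra effort and cost mentioned above.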

One of the major practical difficulties is to actually enforce regulation, both on the level of the state and on the level of the individual who has a claim. They must identify the responsible legal entity, prove the action, perhaps prove intent, find a court that declares itself competent … and eventually get the court to actually enforce its decision. Well-established legal protection of rights such as consumer rights, product liability, and other civil liability or protection of intellectual property rights is often missing in digital products, or hard to enforce. This means that companies with a “digital” background are used to testing their products on the consumers without fear of liability while heavily defending their intellectual property rights. This “Internet Libertarianism” is sometimes taken to assume that technical solutions will take care of societal problems by themselves (Morozov 2013).

2.2 Manipulation of Behaviour

The ethical issues of AI in surveillance go beyond the mere accumulation of data and direction of attention: They include the use of information to manipulate behaviour, online and offline, in a way that undermines autonomous rational choice. Of course, efforts to manipulate behaviour are ancient, but they may gain a new quality when they use AI systems. Given users’ intense interaction with data systems and the deep knowledge about individuals this provides, they are vulnerable to “nudges”, manipulation, and deception. With sufficient prior data, algorithms can be used to target individuals or small groups with just the kind of input that is likely to influence these particular individuals. A “nudge” changes the environment such that it influences behaviour in a predictable way that is positive for the individual, but easy and cheap to avoid (Thaler & Sunstein 2008). There is a slippery slope from here to paternalism and manipulation.

Many advertisers, marketers, and online sellers will use any legal means at their disposal to maximise profit, including exploitation of behavioural biases, deception, and addiction generation (Costa and Halpern 2019 [OIR]). Such manipulation is the business model in much of the gambling and gaming industries, but it is spreading, e.g., to low-cost airlines. In interface design on web pages or in games, this manipulation uses what is called “dark patterns” (Mathur et al. 2019). At this moment, gambling and the sale of addictive substances are highly regulated, but online manipulation and addiction are not—even though manipulation of online behaviour is becoming a core business model of the Internet.

Furthermore, social media is now the prime location for political propaganda. This influence can be used to steer voting behaviour, as in the Facebook-Cambridge Analytica “scandal” (Woolley and Howard 2017; Bradshaw, Neudert, and Howard 2019) and—if successful—it may harm the autonomy of individuals (Susser, Roessler, and Nissenbaum 2019).

Improved AI “faking” technologies make what once was reliable evidence into unreliable evidence—this has already happened to digital photos, sound recordings, and video. It will soon be quite easy to create (rather than alter) “deep fake” text, photos, and video material with any desired content. Soon, sophisticated real-time interaction with persons over text, phone, or video will be faked, too. So we cannot trust digital interactions while we are at the same time increasingly dependent on such interactions.

One more specific issue is that machine learning techniques in AI rely on training with vast amounts of data. This means there will often be a trade-off between privacy and rights to data vs. technical quality of the product. This influences the consequentialist evaluation of privacy-violating practices.

The policy in this field has its ups and downs: Civil liberties and the protection of individual rights are under intense pressure from businesses’ lobbying, secret services, and other state agencies that depend on surveillance. Privacy protection has diminished massively compared to the pre-digital age when communication was based on letters, analogue telephone communications, and personal conversation and when surveillance operated under significant legal constraints.

While the EU General Data Protection Regulation (Regulation (EU) 2016/679) has strengthened privacy protection, the US and China prefer growth with less regulation (Thompson and Bremmer 2018), likely in the hope that this provides a competitive advantage. It is clear that state and business actors have increased their ability to invade privacy and manipulate people with the help of AI technology and will continue to do so to further their particular interests—unless reined in by policy in the interest of general society.

2.3 Opacity of AI Systems

Opacity and bias are central issues in what is now sometimes called “data ethics” or “big data ethics” (Floridi and Taddeo 2016; Mittelstadt and Floridi 2016). AI systems for automated decision support and “predictive analytics” raise “significant concerns about lack of due process, accountability, community engagement, and auditing” (Whittaker et al. 2018: 18ff). They are part of a power structure in which “we are creating decision-making processes that constrain and limit opportunities for human participation” (Danaher 2016b: 245). At the same time, it will often be impossible for the affected person to know how the system came to this output, i.e., the system is “opaque” to that person. If the system involves machine learning, it will typically be opaque even to the expert, who will not know how a particular pattern was identified, or even what the pattern is. Bias in decision systems and data sets is exacerbated by this opacity. So, at least in cases where there is a desire to remove bias, the analysis of opacity and bias go hand in hand, and political response has to tackle both issues together.

Many AI systems rely on machine learning techniques in (simulated) neural networks that will extract patterns from a given dataset, with or without “correct” solutions provided; i.e., supervised, semi-supervised or unsupervised. With these techniques, the “learning” captures patterns in the data and these are labelled in a way that appears useful to the decision the system makes, while the programmer does not really know which patterns in the data the system has used. In fact, the programs are evolving, so when new data comes in, or new feedback is given (“this was correct”, “this was incorrect”), the patterns used by the learning system change. What this means is that the outcome is not transparent to the user or programmers: it is opaque. Furthermore, the quality of the program depends heavily on the quality of the data provided, following the old slogan “garbage in, garbage out”. So, if the data already involved a bias (e.g., police data about the skin colour of suspects), then the program will reproduce that bias. There are proposals for a standard description of datasets in a “datasheet” that would make the identification of such bias more feasible (Gebru et al. 2018 [OIR]). There is also significant recent literature about the limitations of machine learning systems that are essentially sophisticated data filters (Marcus 2018 [OIR]). Some have argued that the ethical problems of today are the result of technical “shortcuts” AI has taken (Cristianini forthcoming).
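
To illustrate how a learning system captures whatever patterns the training data contain, including a historical bias, and how its fitted parameters do not by themselves state the rule it has learned, here is a small sketch. It assumes scikit-learn and NumPy are available; the hiring scenario, the feature names, and the group variable are invented purely for the example.

```python
# A minimal sketch (assuming scikit-learn and NumPy) of how a learned model
# reproduces a bias present in its training data ("garbage in, garbage out").
# The data below are invented purely for illustration.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 1000
experience = rng.uniform(0, 10, n)   # feature 1: years of experience
group = rng.integers(0, 2, n)        # feature 2: group membership (0 or 1)
X = np.column_stack([experience, group])

# Historical decisions: qualified applicants were hired, but members of
# group 1 were systematically turned down (a "historical bias").
y = ((experience > 5) & (group == 0)).astype(int)

model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X, y)

# Two applicants with identical experience, differing only in group membership:
print(model.predict([[8.0, 0.0], [8.0, 1.0]]))  # the learned pattern likely differs by group
# The fitted weights are just arrays of numbers; they do not state the rule.
print([w.shape for w in model.coefs_])
```

Nothing in the printed weight arrays says that group membership was used as a proxy for suitability; that is the sense in which the outcome is opaque even to the programmer who set up the training.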

There are several technical activities that aim at “explainable AI”, starting with (Van Lent, Fisher, and Mancuso 1999; Lomas et al. 2012) and, more recently, a DARPA programme (Gunning 2017 [OIR]). More broadly, the demand for

a mechanism for elucidating and articulating the power structures, biases, and influences that computational artefacts exercise in society (Diakopoulos 2015: 398)

is sometimes called “algorithmic accountability reporting”. This does not mean that we expect an AI to “explain its reasoning”—doing so would require far more serious moral autonomy than we currently attribute to AI systems (see below §2.10).

The politician Henry Kissinger pointed out that there is a fundamental problem for democratic decision-making if we rely on a system that is supposedly superior to humans, but cannot explain its decisions. He says we may have “generated a potentially dominating technology in search of a guiding philosophy” (Kissinger 2018). Danaher (2016b) calls this problem “the threat of algocracy” (adopting the previous use of ‘algocracy’ from Aneesh 2002 [OIR], 2006). In a similar vein, Cave (2019) stresses that we need a broader societal move towards more “democratic” decision-making to avoid AI being a force that leads to a Kafka-style impenetrable suppression system in public administration and elsewhere. The political angle of this discussion has been stressed by O’Neil in her influential book Weapons of Math Destruction (2016), and by Yeung and Lodge (2019).

In the EU, some of these issues have been taken into account with the (Regulation (EU) 2016/679), which foresees that consumers, when faced with a decision based on data processing, will have a legal “right to explanation”—how far this goes and to what extent it can be enforced is disputed (Goodman and Flaxman 2017; Wachter, Mittelstadt, and Floridi 2016; Wachter, Mittelstadt, and Russell 2017). Zerilli et al. (2019) argue that there may be a double standard here, where we demand a high level of explanation for machine-based decisions despite humans sometimes not reaching that standard themselves.

2.4 Bias in Decision Systems

Automated AI decision support systems and “predictive analytics” operate on data and produce a decision as “output”. This output may range from the relatively trivial to the highly significant: “this restaurant matches your preferences”, “the patient in this X-ray has completed bone growth”, “application to credit card declined”, “donor organ will be given to another patient”, “bail is denied”, or “target identified and engaged”. Data analysis is often used in “predictive analytics” in business, healthcare, and other fields, to foresee future developments—since prediction is easier, it will also become a cheaper commodity. One use of prediction is in “predictive policing” (NIJ 2014 [OIR]), which many fear might lead to an erosion of public liberties (Ferguson 2017) because it can take away power from the people whose behaviour is predicted. It appears, however, that many of the worries about policing depend on futuristic scenarios where law enforcement foresees and punishes planned actions, rather than waiting until a crime has been committed (like in the 2002 film “Minority Report”). One concern is that these systems might perpetuate bias that was already in the data used to set up the system, e.g., by increasing police patrols in an area and discovering more crime in that area. Actual “predictive policing” or “intelligence led policing” techniques mainly concern the question of where and when police forces will be needed most. Also, police officers can be provided with more data, offering them more control and facilitating better decisions, in workflow support software (e.g., “ArcGIS”). Whether this is problematic depends on the appropriate level of trust in the technical quality of these systems, and on the evaluation of aims of the police work itself. Perhaps a recent paper title points in the right direction here: “AI ethics in predictive policing: From models of threat to an ethics of care” (Asaro 2019).

Bias typically surfaces when unfair judgments are made because the individual making the judgment is influenced by a characteristic that is actually irrelevant to the matter at hand, typically a discriminatory preconception about members of a group. So, one form of bias is a learned cognitive feature of a person, often not made explicit. The person concerned may not be aware of having that bias—they may even be honestly and explicitly opposed to a bias they are found to have (e.g., through priming, cf. Graham and Lowery 2004). On fairness vs. bias in machine learning, see Binns (2018).

Apart from the social phenomenon of learned bias, the human cognitive system is generally prone to have various kinds of “cognitive biases”, e.g., the “confirmation bias”: humans tend to interpret information as confirming what they already believe. This second form of bias is often said to impede performance in rational judgment (Kahneman 2011)—though at least some cognitive biases generate an evolutionary advantage, e.g., economical use of resources for intuitive judgment. There is a question whether AI systems could or should have such cognitive bias.

A third form of bias is present in data when it exhibits systematic error, e.g., “statistical bias”. Strictly, any given dataset will only be unbiased for a single kind of issue, so the mere creation of a dataset involves the danger that it may be used for a different kind of issue, and then turn out to be biased for that kind. Machine learning on the basis of such data would then not only fail to recognise the bias, but codify and automate the “historical bias”. Such historical bias was discovered in an automated recruitment screening system at Amazon (discontinued early 2017) that discriminated against women—presumably because the company had a history of discriminating against women in the hiring process. The “Correctional Offender Management Profiling for Alternative Sanctions” (COMPAS), a system to predict whether a defendant would re-offend, was found to be as successful (65.2% accuracy) as a group of random humans (Dressel and Farid 2018) and to produce more false positives and fewer false negatives for black defendants. The problem with such systems is thus bias plus humans placing excessive trust in the systems. The political dimensions of such automated systems in the USA are investigated in Eubanks (2018).
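
Since the COMPAS debate turns on differences in error rates between groups, a small sketch may help to show what is being measured. The records below are invented for illustration (they are not the COMPAS data); the point is only that a system can have the same overall accuracy for two groups while distributing false positives and false negatives very differently between them.

```python
# A minimal sketch of per-group error rates, with invented data (not COMPAS).
# label = 1 means the person actually re-offended; pred = 1 means the system
# predicted re-offence.
def error_rates(records):
    fp = sum(1 for r in records if r["pred"] == 1 and r["label"] == 0)
    fn = sum(1 for r in records if r["pred"] == 0 and r["label"] == 1)
    negatives = sum(1 for r in records if r["label"] == 0)
    positives = sum(1 for r in records if r["label"] == 1)
    return fp / negatives, fn / positives  # false positive rate, false negative rate

data = [
    {"group": "A", "label": 0, "pred": 1}, {"group": "A", "label": 0, "pred": 0},
    {"group": "A", "label": 1, "pred": 1}, {"group": "A", "label": 1, "pred": 1},
    {"group": "B", "label": 0, "pred": 0}, {"group": "B", "label": 0, "pred": 0},
    {"group": "B", "label": 1, "pred": 0}, {"group": "B", "label": 1, "pred": 1},
]
for g in ("A", "B"):
    fpr, fnr = error_rates([r for r in data if r["group"] == g])
    print(g, "FPR:", fpr, "FNR:", fnr)
```

Requiring such rates to be equal across groups is one candidate for the “mathematical notion of fairness” that the next paragraph describes as hard to come by.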

There are significant technical efforts to detect and remove bias from AI systems, but it is fair to say that these are in early stages: see UK Institute for Ethical AI & Machine Learning (Brownsword, Scotford, and Yeung 2017; Yeung and Lodge 2019). It appears that technological fixes have their limits in that they need a mathematical notion of fairness, which is hard to come by (Whittaker et al. 2018: 24ff; Selbst et al. 2019), as is a formal notion of “race” (see Benthall and Haynes 2019). An institutional proposal is in (Veale and Binns 2017).

2.5 Human-Robot Interaction

Human-robot interaction (HRI) is an academic field in its own right, which now pays significant attention to ethical matters, the dynamics of perception from both sides, and both the different interests present in and the intricacy of the social context, including co-working (e.g., Arnold and Scheutz 2017). Useful surveys for the ethics of robotics include Calo, Froomkin, and Kerr (2016); Royakkers and van Est (2016); Tzafestas (2016); a standard collection of papers is Lin, Abney, and Jenkins (2017).

While AI can be used to manipulate humans into believing and doing things (see section 2.2), it can also be used to drive robots that are problematic if their processes or appearance involve deception, threaten human dignity, or violate the Kantian requirement of “respect for humanity”. Humans very easily attribute mental properties to objects, and empathise with them, especially when the outer appearance of these objects is similar to that of living beings. This can be used to deceive humans (or animals) into attributing more intellectual or even emotional significance to robots or AI systems than they deserve. Some parts of humanoid robotics are problematic in this regard (e.g., Hiroshi Ishiguro’s remote-controlled Geminoids), and there are cases that have been clearly deceptive for public-relations purposes (e.g. on the abilities of Hanson Robotics’ “Sophia”). Of course, some fairly basic constraints of business ethics and law apply to robots, too: product safety and liability, or non-deception in advertisement. It appears that these existing constraints take care of many concerns that are raised. There are cases, however, where human-human interaction has aspects that appear specifically human in ways that can perhaps not be replaced by robots: care, love, and sex.

2.5.1 Example (a) Care Robots

The use of robots in health care for humans is currently at the level of concept studies in real environments, but it may become a usable technology in a few years, and has raised a number of concerns for a dystopian future of de-humanised care (A. Sharkey and N. Sharkey 2011; Robert Sparrow 2016). Current systems include robots that support human carers/caregivers (e.g., in lifting patients, or transporting material), robots that enable patients to do certain things by themselves (e.g., eat with a robotic arm), but also robots that are given to patients as company and comfort (e.g., the “Paro” robot seal). For an overview, see van Wynsberghe (2016); Nørskov (2017); Fosch-Villaronga and Albo-Canals (2019), for a survey of users Draper et al. (2014).

One reason why the issue of care has come to the fore is that people have argued that we will need robots in ageing societies. This argument makes problematic assumptions, namely that with longer lifespan people will need more care, and that it will not be possible to attract more humans to caring professions. It may also show a bias about age (Jecker forthcoming). Most importantly, it ignores the nature of automation, which is not simply about replacing humans, but about allowing humans to work more efficiently. It is not very clear that there really is an issue here since the discussion mostly focuses on the fear of robots de-humanising care, but the actual and foreseeable robots in care are assistive robots for classic automation of technical tasks. They are thus “care robots” only in a behavioural sense of performing tasks in care environments, not in the sense that a human “cares” for the patients. It appears that the success of “being cared for” relies on this intentional sense of “care”, which foreseeable robots cannot provide. If anything, the risk of robots in care is the absence of such intentional care—because fewer human carers may be needed. Interestingly, caring for something, even a virtual agent, can be good for the carer themselves (Lee et al. 2019). A system that pretends to care would be deceptive and thus problematic—unless the deception is countered by sufficiently large utility gain (Coeckelbergh 2016). Some robots that pretend to “care” on a basic level are available (Paro seal) and others are in the making. Perhaps feeling cared for by a machine, to some extent, is progress for some patients.

2.5.2 Example (b) Sex Robots

It has been argued by several tech optimists that humans will likely be interested in sex and companionship with robots and be comfortable with the idea (Levy 2007). Given the variation of human sexual preferences, including sex toys and sex dolls, this seems very likely: The question is whether such devices should be manufactured and promoted, and whether there should be limits in this touchy area. It seems to have moved into the mainstream of “robot philosophy” in recent times (Sullins 2012; Danaher and McArthur 2017; N. Sharkey et al. 2017 [OIR]; Bendel 2018; Devlin 2018).

Humans have long had deep emotional attachments to objects, so perhaps companionship or even love with a predictable android is attractive, especially to people who struggle with actual humans, and already prefer dogs, cats, birds, a computer or a tamagotchi. Danaher (2019b) argues against (Nyholm and Frank 2017) that these can be true friendships, and is thus a valuable goal. It certainly looks like such friendship might increase overall utility, even if lacking in depth. In these discussions there is an issue of deception, since a robot cannot (at present) mean what it says, or have feelings for a human. It is well known that humans are prone to attribute feelings and thoughts to entities that behave as if they had sentience, even to clearly inanimate objects that show no behaviour at all. Also, paying for deception seems to be an elementary part of the traditional sex industry.

Finally, there are concerns that have often accompanied matters of sex, namely consent (Frank and Nyholm 2017), aesthetic concerns, and the worry that humans may be “corrupted” by certain experiences. Old fashioned though this may seem, human behaviour is influenced by experience, and it is likely that pornography or sex robots support the perception of other humans as mere objects of desire, or even recipients of abuse, and thus ruin a deeper sexual and erotic experience. In this vein, the “Campaign Against Sex Robots” argues that these devices are a continuation of slavery and prostitution (Richardson 2016).

2.6 Automation and Employment

It seems clear that AI and robotics will lead to significant gains in productivity and thus overall wealth. The attempt to increase productivity has often been a feature of the economy, though the emphasis on “growth” is a modern phenomenon (Harari 2016: 240). However, productivity gains through automation typically mean that fewer humans are required for the same output. This does not necessarily imply a loss of overall employment, however, because available wealth increases and that can increase demand sufficiently to counteract the productivity gain. In the long run, higher productivity in industrial societies has led to more wealth overall. Major labour market disruptions have occurred in the past, e.g., farming employed over 60% of the workforce in Europe and North-America in 1800, while by 2010 it employed ca. 5% in the EU, and even less in the wealthiest countries (European Commission 2013). In the 20 years between 1950 and 1970 the number of hired agricultural workers in the UK was reduced by 50% (Zayed and Loft 2019). Some of these disruptions lead to more labour-intensive industries moving to places with lower labour cost. This is an ongoing process.

Classic automation replaced human muscle, whereas digital automation replaces human thought or information-processing—and unlike physical machines, digital automation is very cheap to duplicate (Bostrom and Yudkowsky 2014). It may thus mean a more radical change on the labour market. So, the main question is: will the effects be different this time? Will the creation of new jobs and wealth keep up with the destruction of jobs? And even if it is not different, what are the transition costs, and who bears them? Do we need to make societal adjustments for a fair distribution of costs and benefits of digital automation?

Responses to the issue of unemployment from AI have ranged from the alarmed (Frey and Osborne 2013; Westlake 2014) to the neutral (Metcalf, Keller, and Boyd 2016 [OIR]; Calo 2018; Frey 2019) to the optimistic (Brynjolfsson and McAfee 2016; Harari 2016; Danaher 2019a). In principle, the labour market effect of automation seems to be fairly well understood as involving two channels:

(i) the nature of interactions between differently skilled workers and new technologies affecting labour demand and (ii) the equilibrium effects of technological progress through consequent changes in labour supply and product markets. (Goos 2018: 362)

What currently seems to happen in the labour market as a result of AI and robotics automation is “job polarisation” or the “dumbbell” shape (Goos, Manning, and Salomons 2009): The highly skilled technical jobs are in demand and highly paid, the low skilled service jobs are in demand and badly paid, but the mid-qualification jobs in factories and offices, i.e., the majority of jobs, are under pressure and reduced because they are relatively predictable, and most likely to be automated (Baldwin 2019).

Perhaps enormous productivity gains will allow the “age of leisure” to be realised, something (Keynes 1930) had predicted to occur around 2030, assuming a growth rate of 1% per annum. Actually, we have already reached the level he anticipated for 2030, but we are still working—consuming more and inventing ever more levels of organisation. Harari explains how this economic development allowed humanity to overcome hunger, disease, and war—and now we aim for immortality and eternal bliss through AI, thus his title Homo Deus (Harari 2016: 75).

In general terms, the issue of unemployment is an issue of how goods in a society should be justly distributed. A standard view is that distributive justice should be rationally decided from behind a “veil of ignorance” (Rawls 1971), i.e., as if one does not know what position in a society one would actually be taking (labourer or industrialist, etc.). Rawls thought the chosen principles would then support basic liberties and a distribution that is of greatest benefit to the least-advantaged members of society. It would appear that the AI economy has three features that make such justice unlikely: First, it operates in a largely unregulated environment where responsibility is often hard to allocate. Second, it operates in markets that have a “winner takes all” feature where monopolies develop quickly. Third, the “new economy” of the digital service industries is based on intangible assets, also called “capitalism without capital” (Haskel and Westlake 2017). This means that it is difficult to control multinational digital corporations that do not rely on a physical plant in a particular location. These three features seem to suggest that if we leave the distribution of wealth to free market forces, the result would be a heavily unjust distribution: And this is indeed a development that we can already see.

One interesting question that has not received too much attention is whether the development of AI is environmentally sustainable: Like all computing systems, AI systems produce waste that is very hard to recycle and they consume vast amounts of energy, especially for the training of machine learning systems (and even for the “mining” of cryptocurrency). Again, it appears that some actors in this space offload such costs to the general society.

2.7 Autonomous Systems

There are several notions of autonomy in the discussion of autonomous systems. A stronger notion is involved in philosophical debates where autonomy is the basis for responsibility and personhood (Christman 2003 [2018]). In this context, responsibility implies autonomy, but not vice versa, so there can be systems that have degrees of technical autonomy without raising issues of responsibility. The weaker, more technical, notion of autonomy in robotics is relative and gradual: A system is said to be autonomous with respect to human control to a certain degree (Müller 2012). There is a parallel here to the issues of bias and opacity in AI since autonomy also concerns a power-relation: who is in control, and who is responsible?

Generally speaking, one question is the degree to which autonomous robots raise issues our present conceptual schemes must adapt to, or whether they just require technical adjustments. In most jurisdictions, there is a sophisticated system of civil and criminal liability to resolve such issues. Technical standards, e.g., for the safe use of machinery in medical environments, will likely need to be adjusted. There is already a field of “verifiable AI” for such safety-critical systems and for “security applications”. Bodies like the IEEE (The Institute of Electrical and Electronics Engineers) and the BSI (British Standards Institution) have produced “standards”, particularly on more technical sub-problems, such as data security and transparency. Among the many autonomous systems on land, on water, under water, in air or space, we discuss two samples: autonomous vehicles and autonomous weapons.

2.7.1 Example (a) Autonomous Vehicles

Autonomous vehicles hold the promise to reduce the very significant damage that human driving currently causes—approximately 1 million humans being killed per year, many more injured, the environment polluted, earth sealed with concrete and tarmac, cities full of parked cars, etc. However, there seem to be questions on how autonomous vehicles should behave, and how responsibility and risk should be distributed in the complicated system the vehicles operate in. (There is also significant disagreement over how long the development of fully autonomous, or “level 5” cars (SAE International 2018) will actually take.)

There is some discussion of “trolley problems” in this context. In the classic “trolley problems” (Thomson 1976; Woollard and Howard-Snyder 2016: section 2) various dilemmas are presented. The simplest version is that of a trolley train on a track that is heading towards five people and will kill them, unless the train is diverted onto a side track, but on that track there is one person, who will be killed if the train takes that side track. The example goes back to a remark in (Foot 1967: 6), who discusses a number of dilemma cases where tolerated and intended consequences of an action differ. “Trolley problems” are not supposed to describe actual ethical problems or to be solved with a “right” choice. Rather, they are thought-experiments where choice is artificially constrained to a small finite number of distinct one-off options and where the agent has perfect knowledge. These problems are used as a theoretical tool to investigate ethical intuitions and theories—especially the difference between actively doing vs. allowing something to happen, intended vs. tolerated consequences, and consequentialist vs. other normative approaches (Kamm 2016). This type of problem has reminded many of the problems encountered in actual driving and in autonomous driving (Lin 2016). It is doubtful, however, that an actual driver or autonomous car will ever have to solve trolley problems (but see Keeling 2020). While autonomous car trolley problems have received a lot of media attention (Awad et al. 2018), they do not seem to offer anything new to either ethical theory or to the programming of autonomous vehicles.

The more common ethical problems in driving, such as speeding, risky overtaking, not keeping a safe distance, etc. are classic problems of pursuing personal interest vs. the common good. The vast majority of these are covered by legal regulations on driving. Programming the car to drive “by the rules” rather than “by the interest of the passengers” or “to achieve maximum utility” is thus deflated to a standard problem of programming ethical machines (see section 2.9). There are probably additional discretionary rules of politeness and interesting questions on when to break the rules (Lin 2016), but again this seems to be more a case of applying standard considerations (rules vs. utility) to the case of autonomous vehicles.
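A toy sketch can make the contrast between driving “by the rules” and driving “to achieve maximum utility” concrete; the manoeuvres, the list of legal options, and the utility numbers below are invented for illustration and do not describe any actual vehicle software:

    # Toy contrast (invented options and numbers): rule-based vs. utility-maximising choice.
    LEGAL = {"keep_lane", "overtake_with_clearance"}   # assumed to be permitted by traffic law
    UTILITY = {"keep_lane": 0.6,                        # assumed utility for the passengers
               "overtake_with_clearance": 0.7,
               "overtake_closely": 0.9}                 # faster, but against the rules

    def choose_by_rules(options):
        """Best option among those that are legal."""
        return max((o for o in options if o in LEGAL), key=UTILITY.get)

    def choose_by_utility(options):
        """Best option for the passengers, rules ignored."""
        return max(options, key=UTILITY.get)

    print(choose_by_rules(UTILITY))    # overtake_with_clearance
    print(choose_by_utility(UTILITY))  # overtake_closely

The point is only structural: once the rules are given, the “ethical” part of the programming problem reduces to deciding which considerations may override which, the standard question of section 2.9.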

Notable policy efforts in this field include the report (German Federal Ministry of Transport and Digital Infrastructure 2017), which stresses that safety is the primary objective. Rule 10 states

In the case of automated and connected driving systems, the accountability that was previously the sole preserve of the individual shifts from the motorist to the manufacturers and operators of the technological systems and to the bodies responsible for taking infrastructure, policy and legal decisions.

(See section 2.10.1 below). The resulting German and EU laws on licensing automated driving are much more restrictive than their US counterparts where “testing on consumers” is a strategy used by some companies—without informed consent of the consumers or their possible victims.

2.7.2 Example (b) Autonomous Weapons

The notion of automated weapons is fairly old:

For example, instead of fielding simple guided missiles or remotely piloted vehicles, we might launch completely autonomous land, sea, and air vehicles capable of complex, far-ranging reconnaissance and attack missions. (DARPA 1983: 1)

This proposal was ridiculed as “fantasy” at the time (Dreyfus, Dreyfus, and Athanasiou 1986: ix), but it is now a reality, at least for more easily identifiable targets (missiles, planes, ships, tanks, etc.), though not for human combatants. The main arguments against (lethal) autonomous weapon systems (AWS or LAWS) are that they support extrajudicial killings, take responsibility away from humans, and make wars or killings more likely—for a detailed list of issues see Lin, Bekey, and Abney (2008: 73–86).

It appears that lowering the hurdle to use such systems (autonomous vehicles, “fire-and-forget” missiles, or drones loaded with explosives) and reducing the probability of being held accountable would increase the probability of their use. The crucial asymmetry where one side can kill with impunity, and thus has few reasons not to do so, already exists in conventional drone wars with remote controlled weapons (e.g., US in Pakistan). It is easy to imagine a small drone that searches, identifies, and kills an individual human—or perhaps a type of human. These are the kinds of cases brought forward by the Campaign to Stop Killer Robots and other activist groups. Some seem to be equivalent to saying that autonomous weapons are indeed weapons …, and weapons kill, but we still make them in gigantic numbers. On the matter of accountability, autonomous weapons might make identification and prosecution of the responsible agents more difficult—but this is not clear, given the digital records that one can keep, at least in a conventional war. The difficulty of allocating punishment is sometimes called the “retribution gap” (Danaher 2016a).

Another question is whether using autonomous weapons in war would make wars worse, or make wars less bad. If robots reduce war crimes and crimes in war, the answer may well be positive and has been used as an argument in favour of these weapons (Arkin 2009; Müller 2016a) but also as an argument against them (Amoroso and Tamburrini 2018). Arguably the main threat is not the use of such weapons in conventional warfare, but in asymmetric conflicts or by non-state agents, including criminals.

It has also been said that autonomous weapons cannot conform to International Humanitarian Law, which requires observance of the principles of distinction (between combatants and civilians), proportionality (of force), and military necessity (of force) in military conflict (A. Sharkey 2019). It is true that the distinction between combatants and non-combatants is hard, but the distinction between civilian and military ships is easy—so all this says is that we should not construct and use such weapons if they do violate Humanitarian Law. Additional concerns have been raised that being killed by an autonomous weapon threatens human dignity, but even the defenders of a ban on these weapons seem to say that these are not good arguments:

There are other weapons, and other technologies, that also compromise human dignity. Given this, and the ambiguities inherent in the concept, it is wiser to draw on several types of objections in arguments against AWS, and not to rely exclusively on human dignity. (A. Sharkey 2019)

A lot has been made of keeping humans “in the loop” or “on the loop” in the military guidance on weapons—these ways of spelling out “meaningful control” are discussed in (Santoni de Sio and van den Hoven 2018). There have been discussions about the difficulties of allocating responsibility for the killings of an autonomous weapon, and a “responsibility gap” has been suggested (esp. Rob Sparrow 2007), meaning that neither the human nor the machine may be responsible. On the other hand, we do not assume that for every event there is someone responsible for that event, and the real issue may well be the distribution of risk (Simpson and Müller 2016). Risk analysis (Hansson 2013) indicates it is crucial to identify who is exposed to risk, who is a potential beneficiary, and who makes the decisions (Hansson 2018: 1822–1824).

2.8 Machine Ethics

Machine ethics is ethics for machines, for “ethical machines”, for machines as subjects, rather than for the human use of machines as objects. It is often not very clear whether this is supposed to cover all of AI ethics or to be a part of it (Floridi and Sanders 2004; Moor 2006; Anderson and Anderson 2011; Wallach and Asaro 2017). Sometimes it looks as though there is the (dubious) inference at play here that if machines act in ethically relevant ways, then we need a machine ethics. Accordingly, some use a broader notion:

machine ethics is concerned with ensuring that the behavior of machines toward human users, and perhaps other machines as well, is ethically acceptable. (Anderson and Anderson 2007: 15)

This might include mere matters of product safety, for example. Other authors sound rather ambitious but use a narrower notion:

AI reasoning should be able to take into account societal values, moral and ethical considerations; weigh the respective priorities of values held by different stakeholders in various multicultural contexts; explain its reasoning; and guarantee transparency. (Dignum 2018: 1, 2)

Some of the discussion in machine ethics makes the very substantial assumption that machines can, in some sense, be ethical agents responsible for their actions, or “autonomous moral agents” (see van Wynsberghe and Robbins 2019). The basic idea of machine ethics is now finding its way into actual robotics where the assumption that these machines are artificial moral agents in any substantial sense is usually not made (Winfield et al. 2019). It is sometimes observed that a robot that is programmed to follow ethical rules can very easily be modified to follow unethical rules (Vanderelst and Winfield 2018).

The idea that machine ethics might take the form of “laws” has famously been investigated by Isaac Asimov, who proposed “three laws of robotics” (Asimov 1942):

First Law—A robot may not injure a human being or, through inaction, allow a human being to come to harm. Second Law—A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. Third Law—A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Asimov then showed in a number of stories how conflicts between these three laws make it problematic to use them despite their hierarchical organisation.
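A minimal sketch (a toy illustration, not a proposal for machine ethics) shows how such a lexical hierarchy of laws is meant to select actions, and why it goes silent when every available option violates the First Law:

    # Toy lexical ordering of Asimov-style laws: violating law 1 is worse than
    # violating law 2, which is worse than violating law 3.
    def severity(option):
        return tuple(int(option["violates"][law]) for law in (1, 2, 3))

    def choose(options):
        return min(options, key=severity)

    options = [
        {"name": "obey the order, harm a human", "violates": {1: True,  2: False, 3: False}},
        {"name": "refuse the order, no harm",    "violates": {1: False, 2: True,  3: False}},
    ]
    print(choose(options)["name"])  # refuses the order: the First Law outranks the Second

    # In a genuine dilemma every option violates the First Law (harm by action vs.
    # harm by inaction), and the ordering no longer singles out an acceptable choice.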

It is not clear that there is a consistent notion of “machine ethics” since weaker versions are in danger of reducing “having an ethics” to notions that would not normally be considered sufficient (e.g., without “reflection” or even without “action”); stronger notions that move towards artificial moral agents may describe a—currently—empty set.

2.9 Artificial Moral Agents

If one takes machine ethics to concern moral agents, in some substantial sense, then these agents can be called “artificial moral agents”, having rights and responsibilities. However, the discussion about artificial entities challenges a number of common notions in ethics and it can be very useful to understand these in abstraction from the human case (cf. Misselhorn 2020; Powers and Ganascia forthcoming).

Several authors use “artificial moral agent” in a less demanding sense, borrowing from the use of “agent” in software engineering in which case matters of responsibility and rights will not arise (Allen, Varner, and Zinser 2000). James Moor (2006) distinguishes four types of machine agents: ethical impact agents (e.g., robot jockeys), implicit ethical agents (e.g., safe autopilot), explicit ethical agents (e.g., using formal methods to estimate utility), and full ethical agents (who “can make explicit ethical judgments and generally is competent to reasonably justify them. An average adult human is a full ethical agent”.) Several ways to achieve “explicit” or “full” ethical agents have been proposed, via programming it in (operational morality), via “developing” the ethics itself (functional morality), and finally full-blown morality with full intelligence and sentience (Allen, Smit, and Wallach 2005; Moor 2006). Programmed agents are sometimes not considered “full” agents because they are “competent without comprehension”, just like the neurons in a brain (Dennett 2017; Hakli and Mäkelä 2019).

In some discussions, the notion of “moral patient” plays a role: Ethical agents have responsibilities while ethical patients have rights because harm to them matters. It seems clear that some entities are patients without being agents, e.g., simple animals that can feel pain but cannot make justified choices. On the other hand, it is normally understood that all agents will also be patients (e.g., in a Kantian framework). Usually, being a person is supposed to be what makes an entity a responsible agent, someone who can have duties and be the object of ethical concerns. Such personhood is typically a deep notion associated with phenomenal consciousness, intention and free will (Frankfurt 1971; Strawson 1998). Torrance (2011) suggests “artificial (or machine) ethics could be defined as designing machines that do things that, when done by humans, are indicative of the possession of ‘ethical status’ in those humans” (2011: 116)—which he takes to be “ethical productivity and ethical receptivity” (2011: 117)—his expressions for moral agents and patients.

2.9.1 Responsibility for Robots

There is broad consensus that accountability, liability, and the rule of law are basic requirements that must be upheld in the face of new technologies (European Group on Ethics in Science and New Technologies 2018, 18), but the issue in the case of robots is how this can be done and how responsibility can be allocated. If the robots act, will they themselves be responsible, liable, or accountable for their actions? Or should the distribution of risk perhaps take precedence over discussions of responsibility?

Traditional distribution of responsibility already occurs: A car maker is responsible for the technical safety of the car, a driver is responsible for driving, a mechanic is responsible for proper maintenance, the public authorities are responsible for the technical conditions of the roads, etc. In general

The effects of decisions or actions based on AI are often the result of countless interactions among many actors, including designers, developers, users, software, and hardware.… With distributed agency comes distributed responsibility. (Taddeo and Floridi 2018: 751).

How this distribution might occur is not a problem that is specific to AI, but it gains particular urgency in this context (Nyholm 2018a, 2018b). In classical control engineering, distributed control is often achieved through a control hierarchy plus control loops across these hierarchies.
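As a rough sketch of what such an arrangement looks like (a generic cascade-control toy, not tied to any particular AI system; gain, step count, and the trivial dynamics are invented), a higher-level controller sets the target that a lower-level control loop then tracks:

    # Toy cascade control: the outer level chooses a setpoint, the inner loop tracks it.
    def inner_loop(state: float, setpoint: float, gain: float = 0.5, steps: int = 20) -> float:
        for _ in range(steps):
            state += gain * (setpoint - state)   # simple proportional correction
        return state

    def outer_loop(state: float, goal: float) -> float:
        setpoint = goal                          # the higher level passes its goal down
        return inner_loop(state, setpoint)

    print(round(outer_loop(0.0, 10.0), 3))       # ~10.0 once the inner loop has settled

Responsibility questions then attach to the different levels: who sets the goals, who designs the loops, and who monitors whether they behave as intended.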

2.9.2 Rights for Robots

Some authors have indicated that it should be seriously considered whether current robots must be allocated rights (Gunkel 2018a, 2018b; Danaher forthcoming; Turner 2019). This position seems to rely largely on criticism of the opponents and on the empirical observation that robots and other non-persons are sometimes treated as having rights. In this vein, a “relational turn” has been proposed: If we relate to robots as though they had rights, then we might be well-advised not to search whether they “really” do have such rights (Coeckelbergh 2010, 2012, 2018). This raises the question how far such anti-realism or quasi-realism can go, and what it means then to say that “robots have rights” in a human-centred approach (Gerdes 2016). On the other side of the debate, Bryson has insisted that robots should not enjoy rights (Bryson 2010), though she considers it a possibility (Gunkel and Bryson 2014).

There is a wholly separate issue of whether robots (or other AI systems) should be given the status of “legal entities” or “legal persons”, in the sense in which natural persons, but also states, businesses, or organisations, are “entities”, namely that they can have legal rights and duties. The European Parliament has considered allocating such status to robots in order to deal with civil liability (EU Parliament 2016; Bertolini and Aiello 2018), but not criminal liability—which is reserved for natural persons. It would also be possible to assign only a certain subset of rights and duties to robots. It has been said that “such legislative action would be morally unnecessary and legally troublesome” because it would not serve the interest of humans (Bryson, Diamantis, and Grant 2017: 273). In environmental ethics there is a long-standing discussion about the legal rights for natural objects like trees (C. D. Stone 1972).

It has also been said that the reasons for developing robots with rights, or artificial moral patients, in the future are ethically doubtful (van Wynsberghe and Robbins 2019). In the community of “artificial consciousness” researchers there is a significant concern about whether it would be ethical to create such consciousness since creating it would presumably imply ethical obligations to a sentient being, e.g., not to harm it and not to end its existence by switching it off—some authors have called for a “moratorium on synthetic phenomenology” (Bentley et al. 2018: 28f).

2.10 Singularity

2.10.1 Singularity and Superintelligence

In some quarters, the aim of current AI is thought to be an “artificial general intelligence” (AGI), contrasted to a technical or “narrow” AI. AGI is usually distinguished from traditional notions of AI as a general purpose system, and from Searle’s notion of “strong AI”:

computers given the right programs can be literally said to understand and have other cognitive states. (Searle 1980: 417)

The idea of singularity is that if the trajectory of artificial intelligence reaches up to systems that have a human level of intelligence, then these systems would themselves have the ability to develop AI systems that surpass the human level of intelligence, i.e., they are “superintelligent” (see below). Such superintelligent AI systems would quickly self-improve or develop even more intelligent systems. This sharp turn of events after reaching superintelligent AI is the “singularity” from which the development of AI is out of human control and hard to predict (Kurzweil 2005: 487).

The fear that “the robots we created will take over the world” had captured human imagination even before there were computers (e.g., Butler 1863) and is the central theme in Čapek’s famous play that introduced the word “robot” (Čapek 1920). This fear was first formulated as a possible trajectory of existing AI into an “intelligence explosion” by Irving Good:

Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion”, and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control. (Good 1965: 33)

The optimistic argument from acceleration to singularity is spelled out by Kurzweil (1999, 2005, 2012) who essentially points out that computing power has been increasing exponentially, i.e., doubling ca. every 2 years since 1970 in accordance with “Moore’s Law” on the number of transistors, and will continue to do so for some time in the future. He predicted in (Kurzweil 1999) that by 2010 supercomputers would reach human computation capacity, by 2030 “mind uploading” would be possible, and by 2045 the “singularity” would occur. Kurzweil talks about an increase in computing power that can be purchased at a given cost—but of course in recent years the funds available to AI companies have also increased enormously: Amodei and Hernandez (2018 [OIR]) thus estimate that in the years 2012–2018 the actual computing power available to train a particular AI system doubled every 3.4 months, resulting in a 300,000x increase—not the 7x increase that doubling every two years would have created.
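These figures can be checked on the back of an envelope (approximately; the exact endpoints of the 2012–2018 period differ slightly between sources):

    # Back-of-the-envelope check of the compute-growth figures cited above.
    import math

    doublings = math.log2(300_000)        # ~18.2 doublings for a 300,000x increase
    years = doublings * 3.4 / 12          # ~5.2 years at one doubling per 3.4 months
    slow = 2 ** (years * 12 / 24)         # same span at a two-year doubling time
    print(round(doublings, 1), round(years, 1), round(slow, 1))  # ~18.2, ~5.2, ~6

So a 3.4-month doubling time packs roughly eighteen doublings into the period, whereas a Moore's-Law-style two-year doubling time would have produced only a single-digit multiple over the same span.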

A common version of this argument (Chalmers 2010) talks about an increase in “intelligence” of the AI system (rather than raw computing power), but the crucial point of “singularity” remains the one where further development of AI is taken over by AI systems and accelerates beyond human level. Bostrom (2014) explains in some detail what would happen at that point and what the risks for humanity are. The discussion is summarised in Eden et al. (2012); Armstrong (2014); Shanahan (2015). There are possible paths to superintelligence other than computing power increase, e.g., the complete emulation of the human brain on a computer (Kurzweil 2012; Sandberg 2013), biological paths, or networks and organisations (Bostrom 2014: 22–51).

Despite obvious weaknesses in the identification of “intelligence” with processing power, Kurzweil seems right that humans tend to underestimate the power of exponential growth. Mini-test: If you walked in steps in such a way that each step is double the previous, starting with a step of one metre, how far would you get with 30 steps? (answer: almost 3 times further than the Earth’s only permanent natural satellite.) Indeed, most progress in AI is readily attributable to the availability of processors that are faster by orders of magnitude, larger storage, and higher investment (Müller 2018). The actual acceleration and its speeds are discussed in (Müller and Bostrom 2016; Bostrom, Dafoe, and Flynn forthcoming); Sandberg (2019) argues that progress will continue for some time.
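The mini-test is quickly verified: with step lengths of 1, 2, 4, … metres, thirty steps cover 2^30 - 1 metres in total, i.e., on the order of a million kilometres, while the average Earth-Moon distance is roughly 384,400 km:

    # Checking the mini-test: total distance after 30 doubling steps, starting at 1 metre.
    total_m = sum(2 ** i for i in range(30))   # = 2**30 - 1 metres
    total_km = total_m / 1000
    print(round(total_km), round(total_km / 384_400, 1))  # ~1,073,742 km, ~2.8 Earth-Moon distances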

The participants in this debate are united by being technophiles in the sense that they expect technology to develop rapidly and bring broadly welcome changes—but beyond that, they divide into those who focus on benefits (e.g., Kurzweil) and those who focus on risks (e.g., Bostrom). Both camps sympathise with “transhuman” views of survival for humankind in a different physical form, e.g., uploaded on a computer (Moravec 1990, 1998; Bostrom 2003a, 2003c). They also consider the prospects of “human enhancement” in various respects, including intelligence—often called “IA” (intelligence augmentation). It may be that future AI will be used for human enhancement, or will contribute further to the dissolution of the neatly defined human single person. Robin Hanson provides detailed speculation about what will happen economically in case human “brain emulation” enables truly intelligent robots or “ems” (Hanson 2016).

The argument from superintelligence to risk requires the assumption that superintelligence does not imply benevolence—contrary to Kantian traditions in ethics that have argued higher levels of rationality or intelligence would go along with a better understanding of what is moral and better ability to act morally (Gewirth 1978; Chalmers 2010: 36f). Arguments for risk from superintelligence say that rationality and morality are entirely independent dimensions—this is sometimes explicitly argued for as an “orthogonality thesis” (Bostrom 2012; Armstrong 2013; Bostrom 2014: 105–109).

Criticism of the singularity narrative has been raised from various angles. Kurzweil and Bostrom seem to assume that intelligence is a one-dimensional property and that the set of intelligent agents is totally-ordered in the mathematical sense—but neither discusses intelligence at any length in their books. Generally, it is fair to say that despite some efforts, the assumptions made in the powerful narrative of superintelligence and singularity have not been investigated in detail. One question is whether such a singularity will ever occur—it may be conceptually impossible, practically impossible or may just not happen because of contingent events, including people actively preventing it. Philosophically, the interesting question is whether singularity is just a “myth” (Floridi 2016; Ganascia 2017), and not on the trajectory of actual AI research. This is something that practitioners often assume (e.g., Brooks 2017 [OIR]). They may do so because they fear the public relations backlash, because they overestimate the practical problems, or because they have good reasons to think that superintelligence is an unlikely outcome of current AI research (Müller forthcoming-a). This discussion raises the question whether the concern about “singularity” is just a narrative about fictional AI based on human fears. But even if one does find negative reasons compelling and the singularity not likely to occur, there is still a significant possibility that one may turn out to be wrong. Philosophy is not on the “secure path of a science” (Kant 1791: B15), and maybe AI and robotics aren’t either (Müller 2020). So, it appears that discussing the very high-impact risk of singularity has justification even if one thinks the probability of such singularity ever occurring is very low.

2.10.2 Existential Risk from Superintelligence

Thinking about superintelligence in the long term raises the question whether superintelligence may lead to the extinction of the human species, which is called an “existential risk” (or XRisk): The superintelligent systems may well have preferences that conflict with the existence of humans on Earth, and may thus decide to end that existence—and given their superior intelligence, they will have the power to do so (or they may happen to end it because they do not really care).

Thinking in the long term is the crucial feature of this literature. Whether the singularity (or another catastrophic event) occurs in 30 or 300 or 3000 years does not really matter (Baum et al. 2019). Perhaps there is even an astronomical pattern such that an intelligent species is bound to discover AI at some point, and thus bring about its own demise. Such a “great filter” would contribute to the explanation of the “Fermi paradox” why there is no sign of life in the known universe despite the high probability of it emerging. It would be bad news if we found out that the “great filter” is ahead of us, rather than an obstacle that Earth has already passed. These issues are sometimes taken more narrowly to be about human extinction (Bostrom 2013), or more broadly as concerning any large risk for the species (Rees 2018)—of which AI is only one (Häggström 2016; Ord 2020). Bostrom also uses the category of “global catastrophic risk” for risks that are sufficiently high up the two dimensions of “scope” and “severity” (Bostrom and Ćirković 2011; Bostrom 2013).

These discussions of risk are usually not connected to the general problem of ethics under risk (e.g., Hansson 2013, 2018). The long-term view has its own methodological challenges but has produced a wide discussion: (Tegmark 2017) focuses on AI and human life “3.0” after singularity while Russell, Dewey, and Tegmark (2015) and Bostrom, Dafoe, and Flynn (forthcoming) survey longer-term policy issues in ethical AI. Several collections of papers have investigated the risks of artificial general intelligence (AGI) and the factors that might make this development more or less risk-laden (Müller 2016b; Callaghan et al. 2017; Yampolskiy 2018), including the development of non-agent AI (Drexler 2019).

2.10.3 Controlling Superintelligence?

In a narrow sense, the “control problem” is how we humans can remain in control of an AI system once it is superintelligent (Bostrom 2014: 127ff). In a wider sense, it is the problem of how we can make sure an AI system will turn out to be positive according to human perception (Russell 2019); this is sometimes called “value alignment”. How easy or hard it is to control a superintelligence depends significantly on the speed of “take-off” to a superintelligent system. This has led to particular attention to systems with self-improvement, such as AlphaZero (Silver et al. 2018).

One aspect of this problem is that we might decide a certain feature is desirable, but then find out that it has unforeseen consequences that are so negative that we would not desire that feature after all. This is the ancient problem of King Midas who wished that all he touched would turn into gold. This problem has been discussed on the occasion of various examples, such as the “paperclip maximiser” (Bostrom 2003b), or the program to optimise chess performance (Omohundro 2014).

Discussions about superintelligence include speculation about omniscient beings, the radical changes on a “latter day”, and the promise of immortality through transcendence of our current bodily form—so sometimes they have clear religious undertones (Capurro 1993; Geraci 2008, 2010; O’Connell 2017: 160ff). These issues also pose a well-known problem of epistemology: Can we know the ways of the omniscient (Danaher 2015)? The usual opponents have already shown up: A characteristic response of an atheist is

People worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid and they’ve already taken over the world (Domingos 2015)

The new nihilists explain that a “techno-hypnosis” through information technologies has now become our main method of distraction from the loss of meaning (Gertz 2018). Both opponents would thus say we need an ethics for the “small” problems that occur with actual AI and robotics (sections 2.1 through 2.9 above), and that there is less need for the “big ethics” of existential risk from AI (section 2.10).

3. Closing

The singularity thus raises the problem of the concept of AI again. It is remarkable how imagination or “vision” has played a central role since the very beginning of the discipline at the “Dartmouth Summer Research Project” (McCarthy et al. 1955 [OIR]; Simon and Newell 1958). And the evaluation of this vision is subject to dramatic change: In a few decades, we went from the slogans “AI is impossible” (Dreyfus 1972) and “AI is just automation” (Lighthill 1973) to “AI will solve all problems” (Kurzweil 1999) and “AI may kill us all” (Bostrom 2014). This created media attention and public relations efforts, but it also raises the problem of how much of this “philosophy and ethics of AI” is really about AI rather than about an imagined technology. As we said at the outset, AI and robotics have raised fundamental questions about what we should do with these systems, what the systems themselves should do, and what risks they have in the long term. They also challenge the human view of humanity as the intelligent and dominant species on Earth. We have seen issues that have been raised and will have to watch technological and social developments closely to catch the new issues early on, develop a philosophical analysis, and draw lessons for traditional problems of philosophy.

Bibliography

NOTE: Citations in the main text annotated “[OIR]” may be found in the Other Internet Resources section below, not in the Bibliography.

  • Abowd, John M, 2017, “How Will Statistical Agencies Operate When All Data Are Private?”, Journal of Privacy and Confidentiality, 7(3): 1–15. doi:10.29012/jpc.v7i3.404
  • Allen, Colin, Iva Smit, and Wendell Wallach, 2005, “Artificial Morality: Top-down, Bottom-up, and Hybrid Approaches”, Ethics and Information Technology, 7(3): 149–155. doi:10.1007/s10676-006-0004-4
  • Allen, Colin, Gary Varner, and Jason Zinser, 2000, “Prolegomena to Any Future Artificial Moral Agent”, Journal of Experimental & Theoretical Artificial Intelligence, 12(3): 251–261. doi:10.1080/09528130050111428
  • Amoroso, Daniele and Guglielmo Tamburrini, 2018, “The Ethical and Legal Case Against Autonomy in Weapons Systems”, Global Jurist, 18(1): art. 20170012. doi:10.1515/gj-2017-0012
  • Anderson, Janna, Lee Rainie, and Alex Luchsinger, 2018, Artificial Intelligence and the Future of Humans, Washington, DC: Pew Research Center.
  • Anderson, Michael and Susan Leigh Anderson, 2007, “Machine Ethics: Creating an Ethical Intelligent Agent”, AI Magazine, 28(4): 15–26.
  • ––– (eds.), 2011, Machine Ethics, Cambridge: Cambridge University Press. doi:10.1017/CBO9780511978036
  • Aneesh, A., 2006, Virtual Migration: The Programming of Globalization, Durham, NC and London: Duke University Press.
  • Arkin, Ronald C., 2009, Governing Lethal Behavior in Autonomous Robots, Boca Raton, FL: CRC Press.
  • Armstrong, Stuart, 2013, “General Purpose Intelligence: Arguing the Orthogonality Thesis”, Analysis and Metaphysics, 12: 68–84.
  • –––, 2014, Smarter Than Us, Berkeley, CA: MIRI.
  • Arnold, Thomas and Matthias Scheutz, 2017, “Beyond Moral Dilemmas: Exploring the Ethical Landscape in HRI”, in Proceedings of the 2017 ACM/IEEE International Conference on Human-Robot Interaction—HRI ’17, Vienna, Austria: ACM Press, 445–452. doi:10.1145/2909824.3020255
  • Asaro, Peter M., 2019, “AI Ethics in Predictive Policing: From Models of Threat to an Ethics of Care”, IEEE Technology and Society Magazine, 38(2): 40–53. doi:10.1109/MTS.2019.2915154
  • Asimov, Isaac, 1942, “Runaround: A Short Story”, Astounding Science Fiction, March 1942. Reprinted in “I, Robot”, New York: Gnome Press 1950, 1940ff.
  • Awad, Edmond, Sohan Dsouza, Richard Kim, Jonathan Schulz, Joseph Henrich, Azim Shariff, Jean-François Bonnefon, and Iyad Rahwan, 2018, “The Moral Machine Experiment”, Nature, 563(7729): 59–64. doi:10.1038/s41586-018-0637-6
  • Baldwin, Richard, 2019, The Globotics Upheaval: Globalisation, Robotics and the Future of Work, New York: Oxford University Press.
  • Baum, Seth D., Stuart Armstrong, Timoteus Ekenstedt, Olle Häggström, Robin Hanson, Karin Kuhlemann, Matthijs M. Maas, James D. Miller, Markus Salmela, Anders Sandberg, Kaj Sotala, Phil Torres, Alexey Turchin, and Roman V. Yampolskiy, 2019, “Long-Term Trajectories of Human Civilization”, Foresight, 21(1): 53–83. doi:10.1108/FS-04-2018-0037
  • Bendel, Oliver, 2018, “Sexroboter aus Sicht der Maschinenethik”, in Handbuch Filmtheorie, Bernhard Groß and Thomas Morsch (eds.), (Springer Reference Geisteswissenschaften), Wiesbaden: Springer Fachmedien Wiesbaden, 1–19. doi:10.1007/978-3-658-17484-2_22-1
  • Bennett, Colin J. and Charles Raab, 2006, The Governance of Privacy: Policy Instruments in Global Perspective, second edition, Cambridge, MA: MIT Press.
  • Benthall, Sebastian and Bruce D. Haynes, 2019, “Racial Categories in Machine Learning”, in Proceedings of the Conference on Fairness, Accountability, and Transparency - FAT* ’19, Atlanta, GA, USA: ACM Press, 289–298. doi:10.1145/3287560.3287575
  • Bentley, Peter J., Miles Brundage, Olle Häggström, and Thomas Metzinger, 2018, “Should We Fear Artificial Intelligence? In-Depth Analysis”, European Parliamentary Research Service, Scientific Foresight Unit (STOA), March 2018, PE 614.547, 1–40. [Bentley et al. 2018 available online]
  • Bertolini, Andrea and Giuseppe Aiello, 2018, “Robot Companions: A Legal and Ethical Analysis”, The Information Society, 34(3): 130–140. doi:10.1080/01972243.2018.1444249
  • Binns, Reuben, 2018, “Fairness in Machine Learning: Lessons from Political Philosophy”, Proceedings of the 1st Conference on Fairness, Accountability and Transparency, in Proceedings of Machine Learning Research, 81: 149–159.
  • Bostrom, Nick, 2003a, “Are We Living in a Computer Simulation?”, The Philosophical Quarterly, 53(211): 243–255. doi:10.1111/1467-9213.00309
  • –––, 2003b, “Ethical Issues in Advanced Artificial Intelligence”, in Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, Volume 2, Iva Smit, Wendell Wallach, and G.E. Lasker (eds), (IIAS-147-2003), Tecumseh, ON: International Institute of Advanced Studies in Systems Research and Cybernetics, 12–17. [Bostrom 2003b revised available online]
  • –––, 2003c, “Transhumanist Values”, in Ethical Issues for the Twenty-First Century, Frederick Adams (ed.), Bowling Green, OH: Philosophical Documentation Center Press.
  • –––, 2012, “The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents”, Minds and Machines, 22(2): 71–85. doi:10.1007/s11023-012-9281-3
  • –––, 2013, “Existential Risk Prevention as Global Priority”, Global Policy, 4(1): 15–31. doi:10.1111/1758-5899.12002
  • –––, 2014, Superintelligence: Paths, Dangers, Strategies, Oxford: Oxford University Press.
  • Bostrom, Nick and Milan M. Ćirković (eds.), 2011, Global Catastrophic Risks, New York: Oxford University Press.
  • Bostrom, Nick, Allan Dafoe, and Carrick Flynn, forthcoming, “Policy Desiderata for Superintelligent AI: A Vector Field Approach (V. 4.3)”, in Ethics of Artificial Intelligence, S Matthew Liao (ed.), New York: Oxford University Press. [Bostrom, Dafoe, and Flynn forthcoming – preprint available online]
  • Bostrom, Nick and Eliezer Yudkowsky, 2014, “The Ethics of Artificial Intelligence”, in The Cambridge Handbook of Artificial Intelligence, Keith Frankish and William M. Ramsey (eds.), Cambridge: Cambridge University Press, 316–334. doi:10.1017/CBO9781139046855.020 [Bostrom and Yudkowsky 2014 available online]
  • Bradshaw, Samantha, Lisa-Maria Neudert, and Phil Howard, 2019, “Government Responses to Malicious Use of Social Media”, Working Paper 2019.2, Oxford: Project on Computational Propaganda. [Bradshaw, Neudert, and Howard 2019 available online]
  • Brownsword, Roger, Eloise Scotford, and Karen Yeung (eds.), 2017, The Oxford Handbook of Law, Regulation and Technology, Oxford: Oxford University Press. doi:10.1093/oxfordhb/9780199680832.001.0001
  • Brynjolfsson, Erik and Andrew McAfee, 2016, The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies, New York: W. W. Norton.
  • Bryson, Joanna J., 2010, “Robots Should Be Slaves”, in Close Engagements with Artificial Companions: Key Social, Psychological, Ethical and Design Issues, Yorick Wilks (ed.), (Natural Language Processing 8), Amsterdam: John Benjamins Publishing Company, 63–74. doi:10.1075/nlp.8.11bry
  • –––, 2019, “The Past Decade and Future of AI’s Impact on Society”, in Towards a New Enlightenment: A Transcendent Decade, Madrid: Turner - BBVA. [Bryson 2019 available online]
  • Bryson, Joanna J., Mihailis E. Diamantis, and Thomas D. Grant, 2017, “Of, for, and by the People: The Legal Lacuna of Synthetic Persons”, Artificial Intelligence and Law, 25(3): 273–291. doi:10.1007/s10506-017-9214-9
  • Burr, Christopher and Nello Cristianini, 2019, “Can Machines Read Our Minds?”, Minds and Machines, 29(3): 461–494. doi:10.1007/s11023-019-09497-4
  • Butler, Samuel, 1863, “Darwin among the Machines: Letter to the Editor”, Letter in The Press (Christchurch), 13 June 1863. [Butler 1863 available online]
  • Callaghan, Victor, James Miller, Roman Yampolskiy, and Stuart Armstrong (eds.), 2017, The Technological Singularity: Managing the Journey, (The Frontiers Collection), Berlin, Heidelberg: Springer Berlin Heidelberg. doi:10.1007/978-3-662-54033-6
  • Calo, Ryan, 2018, “Artificial Intelligence Policy: A Primer and Roadmap”, University of Bologna Law Review, 3(2): 180-218. doi:10.6092/ISSN.2531-6133/8670
  • Calo, Ryan, A. Michael Froomkin, and Ian Kerr (eds.), 2016, Robot Law, Cheltenham: Edward Elgar.
  • Čapek, Karel, 1920, R.U.R., Prague: Aventium. Translated by Peter Majer and Cathy Porter, London: Methuen, 1999.
  • Capurro, Raphael, 1993, “Ein Grinsen Ohne Katze: Von der Vergleichbarkeit Zwischen ‘Künstlicher Intelligenz’ und ‘Getrennten Intelligenzen’”, Zeitschrift für philosophische Forschung, 47: 93–102.
  • Cave, Stephen, 2019, “To Save Us from a Kafkaesque Future, We Must Democratise AI”, The Guardian, 4 January 2019. [Cave 2019 available online]
  • Chalmers, David J., 2010, “The Singularity: A Philosophical Analysis”, Journal of Consciousness Studies, 17(9–10): 7–65. [Chalmers 2010 available online]
  • Christman, John, 2003 [2018], “Autonomy in Moral and Political Philosophy”, Stanford Encyclopedia of Philosophy (Spring 2018 edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/spr2018/entries/autonomy-moral/>
  • Coeckelbergh, Mark, 2010, “Robot Rights? Towards a Social-Relational Justification of Moral Consideration”, Ethics and Information Technology, 12(3): 209–221. doi:10.1007/s10676-010-9235-5
  • –––, 2012, Growing Moral Relations: Critique of Moral Status Ascription, London: Palgrave. doi:10.1057/9781137025968
  • –––, 2016, “Care Robots and the Future of ICT-Mediated Elderly Care: A Response to Doom Scenarios”, AI & Society, 31(4): 455–462. doi:10.1007/s00146-015-0626-3
  • –––, 2018, “What Do We Mean by a Relational Ethics? Growing a Relational Approach to the Moral Standing of Plants, Robots and Other Non-Humans”, in Plant Ethics: Concepts and Applications, Angela Kallhoff, Marcello Di Paola, and Maria Schörgenhumer (eds.), London: Routledge, 110–121.
  • Crawford, Kate and Ryan Calo, 2016, “There Is a Blind Spot in AI Research”, Nature, 538(7625): 311–313. doi:10.1038/538311a
  • Cristianini, Nello, forthcoming, “Shortcuts to Artificial Intelligence”, in Machines We Trust, Marcello Pelillo and Teresa Scantamburlo (eds.), Cambridge, MA: MIT Press. [Cristianini forthcoming – preprint available online]
  • Danaher, John, 2015, “Why AI Doomsayers Are Like Sceptical Theists and Why It Matters”, Minds and Machines, 25(3): 231–246. doi:10.1007/s11023-015-9365-y
  • –––, 2016a, “Robots, Law and the Retribution Gap”, Ethics and Information Technology, 18(4): 299–309. doi:10.1007/s10676-016-9403-3
  • –––, 2016b, “The Threat of Algocracy: Reality, Resistance and Accommodation”, Philosophy & Technology, 29(3): 245–268. doi:10.1007/s13347-015-0211-1
  • –––, 2019a, Automation and Utopia: Human Flourishing in a World without Work, Cambridge, MA: Harvard University Press.
  • –––, 2019b, “The Philosophical Case for Robot Friendship”, Journal of Posthuman Studies, 3(1): 5–24. doi:10.5325/jpoststud.3.1.0005
  • –––, forthcoming, “Welcoming Robots into the Moral Circle: A Defence of Ethical Behaviourism”, Science and Engineering Ethics, first online: 20 June 2019. doi:10.1007/s11948-019-00119-x
  • Danaher, John and Neil McArthur (eds.), 2017, Robot Sex: Social and Ethical Implications, Boston, MA: MIT Press.
  • DARPA, 1983, “Strategic Computing. New-Generation Computing Technology: A Strategic Plan for Its Development and Application to Critical Problems in Defense”, ADA141982, 28 October 1983. [DARPA 1983 available online]
  • Dennett, Daniel C., 2017, From Bacteria to Bach and Back: The Evolution of Minds, New York: W.W. Norton.
  • Devlin, Kate, 2018, Turned On: Science, Sex and Robots, London: Bloomsbury.
  • Diakopoulos, Nicholas, 2015, “Algorithmic Accountability: Journalistic Investigation of Computational Power Structures”, Digital Journalism, 3(3): 398–415. doi:10.1080/21670811.2014.976411
  • Dignum, Virginia, 2018, “Ethics in Artificial Intelligence: Introduction to the Special Issue”, Ethics and Information Technology, 20(1): 1–3. doi:10.1007/s10676-018-9450-z
  • Domingos, Pedro, 2015, The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World, London: Allen Lane.
  • Draper, Heather, Tom Sorell, Sandra Bedaf, Dag Sverre Syrdal, Carolina Gutierrez-Ruiz, Alexandre Duclos, and Farshid Amirabdollahian, 2014, “Ethical Dimensions of Human-Robot Interactions in the Care of Older People: Insights from 21 Focus Groups Convened in the UK, France and the Netherlands”, in International Conference on Social Robotics 2014, Michael Beetz, Benjamin Johnston, and Mary-Anne Williams (eds.), (Lecture Notes in Artificial Intelligence 8755), Cham: Springer International Publishing, 135–145. doi:10.1007/978-3-319-11973-1_14
  • Dressel, Julia and Hany Farid, 2018, “The Accuracy, Fairness, and Limits of Predicting Recidivism”, Science Advances, 4(1): eaao5580. doi:10.1126/sciadv.aao5580
  • Drexler, K. Eric, 2019, “Reframing Superintelligence: Comprehensive AI Services as General Intelligence”, FHI Technical Report, 2019-1, 1-210. [Drexler 2019 available online]
  • Dreyfus, Hubert L., 1972, What Computers Still Can’t Do: A Critique of Artificial Reason, second edition, Cambridge, MA: MIT Press 1992.
  • Dreyfus, Hubert L., Stuart E. Dreyfus, and Tom Athanasiou, 1986, Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer, New York: Free Press.
  • Dwork, Cynthia, Frank McSherry, Kobbi Nissim, and Adam Smith, 2006, “Calibrating Noise to Sensitivity in Private Data Analysis”, in Theory of Cryptography (TCC 2006), (Lecture Notes in Computer Science 3876), Berlin, Heidelberg: Springer, 265–284.
  • Eden, Amnon H., James H. Moor, Johnny H. Søraker, and Eric Steinhart (eds.), 2012, Singularity Hypotheses: A Scientific and Philosophical Assessment, (The Frontiers Collection), Berlin, Heidelberg: Springer Berlin Heidelberg. doi:10.1007/978-3-642-32560-1
  • Eubanks, Virginia, 2018, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor, London: St. Martin’s Press.
  • European Commission, 2013, “How Many People Work in Agriculture in the European Union? An Answer Based on Eurostat Data Sources”, EU Agricultural Economics Briefs, 8 (July 2013). [Anonymous 2013 available online]
  • European Group on Ethics in Science and New Technologies, 2018, “Statement on Artificial Intelligence, Robotics and ‘Autonomous’ Systems”, 9 March 2018, European Commission, Directorate-General for Research and Innovation, Unit RTD.01. [European Group 2018 available online ]
  • Ferguson, Andrew Guthrie, 2017, The Rise of Big Data Policing: Surveillance, Race, and the Future of Law Enforcement, New York: NYU Press.
  • Floridi, Luciano, 2016, “Should We Be Afraid of AI? Machines Seem to Be Getting Smarter and Smarter and Much Better at Human Jobs, yet True AI Is Utterly Implausible. Why?”, Aeon, 9 May 2016. URL = <Floridi 2016 available online>
  • Floridi, Luciano, Josh Cowls, Monica Beltrametti, Raja Chatila, Patrice Chazerand, Virginia Dignum, Christoph Luetge, Robert Madelin, Ugo Pagallo, Francesca Rossi, Burkhard Schafer, Peggy Valcke, and Effy Vayena, 2018, “AI4People—An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations”, Minds and Machines, 28(4): 689–707. doi:10.1007/s11023-018-9482-5
  • Floridi, Luciano and Jeff W. Sanders, 2004, “On the Morality of Artificial Agents”, Minds and Machines, 14(3): 349–379. doi:10.1023/B:MIND.0000035461.63578.9d
  • Floridi, Luciano and Mariarosaria Taddeo, 2016, “What Is Data Ethics?”, Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 374(2083): 20160360. doi:10.1098/rsta.2016.0360
  • Foot, Philippa, 1967, “The Problem of Abortion and the Doctrine of the Double Effect”, Oxford Review, 5: 5–15.
  • Fosch-Villaronga, Eduard and Jordi Albo-Canals, 2019, “‘I’ll Take Care of You,’ Said the Robot”, Paladyn, Journal of Behavioral Robotics, 10(1): 77–93. doi:10.1515/pjbr-2019-0006
  • Frank, Lily and Sven Nyholm, 2017, “Robot Sex and Consent: Is Consent to Sex between a Robot and a Human Conceivable, Possible, and Desirable?”, Artificial Intelligence and Law, 25(3): 305–323. doi:10.1007/s10506-017-9212-y
  • Frankfurt, Harry G., 1971, “Freedom of the Will and the Concept of a Person”, The Journal of Philosophy, 68(1): 5–20.
  • Frey, Carl Benedikt, 2019, The Technology Trap: Capital, Labour, and Power in the Age of Automation, Princeton, NJ: Princeton University Press.
  • Frey, Carl Benedikt and Michael A. Osborne, 2013, “The Future of Employment: How Susceptible Are Jobs to Computerisation?”, Oxford Martin School Working Papers, 17 September 2013. [Frey and Osborne 2013 available online]
  • Ganascia, Jean-Gabriel, 2017, Le Mythe De La Singularité, Paris: Éditions du Seuil.
  • EU Parliament, 2016, “Draft Report with Recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(Inl))”, Committee on Legal Affairs, 10.11.2016. https://www.europarl.europa.eu/doceo/document/A-8-2017-0005_EN.html
  • EU Regulation, 2016/679, “General Data Protection Regulation: Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the Protection of Natural Persons with Regard to the Processing of Personal Data and on the Free Movement of Such Data, and Repealing Directive 95/46/Ec”, Official Journal of the European Union, 119 (4 May 2016), 1–88. [Regulation (EU) 2016/679 available online]
  • Geraci, Robert M., 2008, “Apocalyptic AI: Religion and the Promise of Artificial Intelligence”, Journal of the American Academy of Religion, 76(1): 138–166. doi:10.1093/jaarel/lfm101
  • –––, 2010, Apocalyptic AI: Visions of Heaven in Robotics, Artificial Intelligence, and Virtual Reality, Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780195393026.001.0001
  • Gerdes, Anne, 2016, “The Issue of Moral Consideration in Robot Ethics”, ACM SIGCAS Computers and Society, 45(3): 274–279. doi:10.1145/2874239.2874278
  • German Federal Ministry of Transport and Digital Infrastructure, 2017, “Report of the Ethics Commission: Automated and Connected Driving”, June 2017, 1–36. [GFMTDI 2017 available online]
  • Gertz, Nolen, 2018, Nihilism and Technology, London: Rowman & Littlefield.
  • Gewirth, Alan, 1978, “The Golden Rule Rationalized”, Midwest Studies in Philosophy, 3(1): 133–147. doi:10.1111/j.1475-4975.1978.tb00353.x
  • Gibert, Martin, 2019, “Éthique Artificielle (Version Grand Public)”, in L’Encyclopédie Philosophique, Maxime Kristanek (ed.), accessed: 16 April 2020, URL = <Gibert 2019 available online>
  • Giubilini, Alberto and Julian Savulescu, 2018, “The Artificial Moral Advisor. The ‘Ideal Observer’ Meets Artificial Intelligence”, Philosophy & Technology, 31(2): 169–188. doi:10.1007/s13347-017-0285-z
  • Good, Irving John, 1965, “Speculations Concerning the First Ultraintelligent Machine”, in Advances in Computers 6, Franz L. Alt and Morris Rubinoff (eds.), New York & London: Academic Press, 31–88. doi:10.1016/S0065-2458(08)60418-0
  • Goodfellow, Ian, Yoshua Bengio, and Aaron Courville, 2016, Deep Learning, Cambridge, MA: MIT Press.
  • Goodman, Bryce and Seth Flaxman, 2017, “European Union Regulations on Algorithmic Decision-Making and a ‘Right to Explanation’”, AI Magazine, 38(3): 50–57. doi:10.1609/aimag.v38i3.2741
  • Goos, Maarten, 2018, “The Impact of Technological Progress on Labour Markets: Policy Challenges”, Oxford Review of Economic Policy, 34(3): 362–375. doi:10.1093/oxrep/gry002
  • Goos, Maarten, Alan Manning, and Anna Salomons, 2009, “Job Polarization in Europe”, American Economic Review, 99(2): 58–63. doi:10.1257/aer.99.2.58
  • Graham, Sandra and Brian S. Lowery, 2004, “Priming Unconscious Racial Stereotypes about Adolescent Offenders”, Law and Human Behavior, 28(5): 483–504. doi:10.1023/B:LAHU.0000046430.65485.1f
  • Gunkel, David J., 2018a, “The Other Question: Can and Should Robots Have Rights?”, Ethics and Information Technology, 20(2): 87–99. doi:10.1007/s10676-017-9442-4
  • –––, 2018b, Robot Rights, Boston, MA: MIT Press.
  • Gunkel, David J. and Joanna J. Bryson (eds.), 2014, Machine Morality: The Machine as Moral Agent and Patient special issue of Philosophy & Technology, 27(1): 1–142.
  • Häggström, Olle, 2016, Here Be Dragons: Science, Technology and the Future of Humanity, Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780198723547.001.0001
  • Hakli, Raul and Pekka Mäkelä, 2019, “Moral Responsibility of Robots and Hybrid Agents”, The Monist, 102(2): 259–275. doi:10.1093/monist/onz009
  • Hanson, Robin, 2016, The Age of Em: Work, Love and Life When Robots Rule the Earth, Oxford: Oxford University Press.
  • Hansson, Sven Ove, 2013, The Ethics of Risk: Ethical Analysis in an Uncertain World, New York: Palgrave Macmillan.
  • –––, 2018, “How to Perform an Ethical Risk Analysis (eRA)”, Risk Analysis, 38(9): 1820–1829. doi:10.1111/risa.12978
  • Harari, Yuval Noah, 2016, Homo Deus: A Brief History of Tomorrow, New York: Harper.
  • Haskel, Jonathan and Stian Westlake, 2017, Capitalism without Capital: The Rise of the Intangible Economy, Princeton, NJ: Princeton University Press.
  • Houkes, Wybo and Pieter E. Vermaas, 2010, Technical Functions: On the Use and Design of Artefacts, (Philosophy of Engineering and Technology 1), Dordrecht: Springer Netherlands. doi:10.1007/978-90-481-3900-2
  • IEEE, 2019, Ethically Aligned Design: A Vision for Prioritizing Human Well-Being with Autonomous and Intelligent Systems (First Version), <IEEE 2019 available online>.
  • Jasanoff, Sheila, 2016, The Ethics of Invention: Technology and the Human Future, New York: Norton.
  • Jecker, Nancy S., forthcoming, Ending Midlife Bias: New Values for Old Age, New York: Oxford University Press.
  • Jobin, Anna, Marcello Ienca, and Effy Vayena, 2019, “The Global Landscape of AI Ethics Guidelines”, Nature Machine Intelligence, 1(9): 389–399. doi:10.1038/s42256-019-0088-2
  • Johnson, Deborah G. and Mario Verdicchio, 2017, “Reframing AI Discourse”, Minds and Machines, 27(4): 575–590. doi:10.1007/s11023-017-9417-6
  • Kahneman, Daniel, 2011, Thinking, Fast and Slow, London: Macmillan.
  • Kamm, Frances Myrna, 2016, The Trolley Problem Mysteries, Eric Rakowski (ed.), Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780190247157.001.0001
  • Kant, Immanuel, 1781/1787, Kritik der reinen Vernunft. Translated as Critique of Pure Reason, Norman Kemp Smith (trans.), London: Palgrave Macmillan, 1929.
  • Keeling, Geoff, 2020, “Why Trolley Problems Matter for the Ethics of Automated Vehicles”, Science and Engineering Ethics, 26(1): 293–307. doi:10.1007/s11948-019-00096-1
  • Keynes, John Maynard, 1930, “Economic Possibilities for Our Grandchildren”. Reprinted in his Essays in Persuasion, New York: Harcourt Brace, 1932, 358–373.
  • Kissinger, Henry A., 2018, “How the Enlightenment Ends: Philosophically, Intellectually—in Every Way—Human Society Is Unprepared for the Rise of Artificial Intelligence”, The Atlantic, June 2018. [Kissinger 2018 available online]
  • Kurzweil, Ray, 1999, The Age of Spiritual Machines: When Computers Exceed Human Intelligence, London: Penguin.
  • –––, 2005, The Singularity Is Near: When Humans Transcend Biology, London: Viking.
  • –––, 2012, How to Create a Mind: The Secret of Human Thought Revealed, New York: Viking.
  • Lee, Minha, Sander Ackermans, Nena van As, Hanwen Chang, Enzo Lucas, and Wijnand IJsselsteijn, 2019, “Caring for Vincent: A Chatbot for Self-Compassion”, in Proceedings of the 2019 CHI Conference on Human Factors in Computing Systems—CHI ’19, Glasgow, Scotland: ACM Press, 1–13. doi:10.1145/3290605.3300932
  • Levy, David, 2007, Love and Sex with Robots: The Evolution of Human-Robot Relationships, New York: Harper & Co.
  • Lighthill, James, 1973, “Artificial Intelligence: A General Survey”, Artificial Intelligence: A Paper Symposium, London: Science Research Council. [Lighthill 1973 available online]
  • Lin, Patrick, 2016, “Why Ethics Matters for Autonomous Cars”, in Autonomous Driving, Markus Maurer, J. Christian Gerdes, Barbara Lenz, and Hermann Winner (eds.), Berlin, Heidelberg: Springer Berlin Heidelberg, 69–85. doi:10.1007/978-3-662-48847-8_4
  • Lin, Patrick, Keith Abney, and Ryan Jenkins (eds.), 2017, Robot Ethics 2.0: From Autonomous Cars to Artificial Intelligence, New York: Oxford University Press. doi:10.1093/oso/9780190652951.001.0001
  • Lin, Patrick, George Bekey, and Keith Abney, 2008, “Autonomous Military Robotics: Risk, Ethics, and Design”, ONR report, California Polytechnic State University, San Luis Obispo, 20 December 2008), 112 pp. [Lin, Bekey, and Abney 2008 available online]
  • Lomas, Meghann, Robert Chevalier, Ernest Vincent Cross, Robert Christopher Garrett, John Hoare, and Michael Kopack, 2012, “Explaining Robot Actions”, in Proceedings of the Seventh Annual ACM/IEEE International Conference on Human-Robot Interaction—HRI ’12, Boston, MA: ACM Press, 187–188. doi:10.1145/2157689.2157748
  • Macnish, Kevin, 2017, The Ethics of Surveillance: An Introduction, London: Routledge.
  • Mathur, Arunesh, Gunes Acar, Michael J. Friedman, Elena Lucherini, Jonathan Mayer, Marshini Chetty, and Arvind Narayanan, 2019, “Dark Patterns at Scale: Findings from a Crawl of 11K Shopping Websites”, Proceedings of the ACM on Human-Computer Interaction, 3(CSCW): art. 81. doi:10.1145/3359183
  • Minsky, Marvin, 1985, The Society of Mind, New York: Simon & Schuster.
  • Misselhorn, Catrin, 2020, “Artificial Systems with Moral Capacities? A Research Design and Its Implementation in a Geriatric Care System”, Artificial Intelligence, 278: art. 103179. doi:10.1016/j.artint.2019.103179
  • Mittelstadt, Brent Daniel and Luciano Floridi, 2016, “The Ethics of Big Data: Current and Foreseeable Issues in Biomedical Contexts”, Science and Engineering Ethics, 22(2): 303–341. doi:10.1007/s11948-015-9652-2
  • Moor, James H., 2006, “The Nature, Importance, and Difficulty of Machine Ethics”, IEEE Intelligent Systems, 21(4): 18–21. doi:10.1109/MIS.2006.80
  • Moravec, Hans, 1990, Mind Children, Cambridge, MA: Harvard University Press.
  • –––, 1998, Robot: Mere Machine to Transcendent Mind, New York: Oxford University Press.
  • Morozov, Evgeny, 2013, To Save Everything, Click Here: The Folly of Technological Solutionism, New York: Public Affairs.
  • Müller, Vincent C., 2012, “Autonomous Cognitive Systems in Real-World Environments: Less Control, More Flexibility and Better Interaction”, Cognitive Computation, 4(3): 212–215. doi:10.1007/s12559-012-9129-4
  • –––, 2016a, “Autonomous Killer Robots Are Probably Good News”, In Drones and Responsibility: Legal, Philosophical and Socio-Technical Perspectives on the Use of Remotely Controlled Weapons, Ezio Di Nucci and Filippo Santoni de Sio (eds.), London: Ashgate, 67–81.
  • ––– (ed.), 2016b, Risks of Artificial Intelligence, London: Chapman & Hall - CRC Press. doi:10.1201/b19187
  • –––, 2018, “In 30 Schritten zum Mond? Zukünftiger Fortschritt in der KI”, Medienkorrespondenz, 20: 5–15. [Müller 2018 available online]
  • –––, 2020, “Measuring Progress in Robotics: Benchmarking and the ‘Measure-Target Confusion’”, in Metrics of Sensory Motor Coordination and Integration in Robots and Animals, Fabio Bonsignorio, Elena Messina, Angel P. del Pobil, and John Hallam (eds.), (Cognitive Systems Monographs 36), Cham: Springer International Publishing, 169–179. doi:10.1007/978-3-030-14126-4_9
  • –––, forthcoming-a, Can Machines Think? Fundamental Problems of Artificial Intelligence, New York: Oxford University Press.
  • ––– (ed.), forthcoming-b, Oxford Handbook of the Philosophy of Artificial Intelligence, New York: Oxford University Press.
  • Müller, Vincent C. and Nick Bostrom, 2016, “Future Progress in Artificial Intelligence: A Survey of Expert Opinion”, in Fundamental Issues of Artificial Intelligence, Vincent C. Müller (ed.), Cham: Springer International Publishing, 555–572. doi:10.1007/978-3-319-26485-1_33
  • Newport, Cal, 2019, Digital Minimalism: On Living Better with Less Technology, London: Penguin.
  • Nørskov, Marco (ed.), 2017, Social Robots, London: Routledge.
  • Nyholm, Sven, 2018a, “Attributing Agency to Automated Systems: Reflections on Human–Robot Collaborations and Responsibility-Loci”, Science and Engineering Ethics, 24(4): 1201–1219. doi:10.1007/s11948-017-9943-x
  • –––, 2018b, “The Ethics of Crashes with Self-Driving Cars: A Roadmap, II”, Philosophy Compass, 13(7): e12506. doi:10.1111/phc3.12506
  • Nyholm, Sven, and Lily Frank, 2017, “From Sex Robots to Love Robots: Is Mutual Love with a Robot Possible?”, in Danaher and McArthur 2017: 219–243.
  • O’Connell, Mark, 2017, To Be a Machine: Adventures among Cyborgs, Utopians, Hackers, and the Futurists Solving the Modest Problem of Death, London: Granta.
  • O’Neil, Cathy, 2016, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, Largo, ML: Crown.
  • Omohundro, Steve, 2014, “Autonomous Technology and the Greater Human Good”, Journal of Experimental & Theoretical Artificial Intelligence, 26(3): 303–315. doi:10.1080/0952813X.2014.895111
  • Ord, Toby, 2020, The Precipice: Existential Risk and the Future of Humanity, London: Bloomsbury.
  • Powers, Thomas M. and Jean-Gabriel Ganascia, forthcoming, “The Ethics of the Ethics of AI”, in Oxford Handbook of Ethics of Artificial Intelligence, Markus D. Dubber, Frank Pasquale, and Sunit Das (eds.), New York: Oxford University Press.
  • Rawls, John, 1971, A Theory of Justice, Cambridge, MA: Belknap Press.
  • Rees, Martin, 2018, On the Future: Prospects for Humanity, Princeton: Princeton University Press.
  • Richardson, Kathleen, 2016, “Sex Robot Matters: Slavery, the Prostituted, and the Rights of Machines”, IEEE Technology and Society Magazine, 35(2): 46–53. doi:10.1109/MTS.2016.2554421
  • Roessler, Beate, 2017, “Privacy as a Human Right”, Proceedings of the Aristotelian Society, 117(2): 187–206. doi:10.1093/arisoc/aox008
  • Royakkers, Lambèr and Rinie van Est, 2016, Just Ordinary Robots: Automation from Love to War, Boca Raton, FL: CRC Press, Taylor & Francis. doi:10.1201/b18899
  • Russell, Stuart, 2019, Human Compatible: Artificial Intelligence and the Problem of Control, New York: Viking.
  • Russell, Stuart, Daniel Dewey, and Max Tegmark, 2015, “Research Priorities for Robust and Beneficial Artificial Intelligence”, AI Magazine, 36(4): 105–114. doi:10.1609/aimag.v36i4.2577
  • SAE International, 2018, “Taxonomy and Definitions for Terms Related to Driving Automation Systems for on-Road Motor Vehicles”, J3016_201806, 15 June 2018. [SAE International 2018 available online]
  • Sandberg, Anders, 2013, “Feasibility of Whole Brain Emulation”, in Philosophy and Theory of Artificial Intelligence, Vincent C. Müller (ed.), (Studies in Applied Philosophy, Epistemology and Rational Ethics, 5), Berlin, Heidelberg: Springer Berlin Heidelberg, 251–264. doi:10.1007/978-3-642-31674-6_19
  • –––, 2019, “There Is Plenty of Time at the Bottom: The Economics, Risk and Ethics of Time Compression”, Foresight, 21(1): 84–99. doi:10.1108/FS-04-2018-0044
  • Santoni de Sio, Filippo and Jeroen van den Hoven, 2018, “Meaningful Human Control over Autonomous Systems: A Philosophical Account”, Frontiers in Robotics and AI, 5(February): 15. doi:10.3389/frobt.2018.00015
  • Schneier, Bruce, 2015, Data and Goliath: The Hidden Battles to Collect Your Data and Control Your World, New York: W. W. Norton.
  • Searle, John R., 1980, “Minds, Brains, and Programs”, Behavioral and Brain Sciences, 3(3): 417–424. doi:10.1017/S0140525X00005756
  • Selbst, Andrew D., Danah Boyd, Sorelle A. Friedler, Suresh Venkatasubramanian, and Janet Vertesi, 2019, “Fairness and Abstraction in Sociotechnical Systems”, in Proceedings of the Conference on Fairness, Accountability, and Transparency—FAT* ’19, Atlanta, GA: ACM Press, 59–68. doi:10.1145/3287560.3287598
  • Sennett, Richard, 2018, Building and Dwelling: Ethics for the City, London: Allen Lane.
  • Shanahan, Murray, 2015, The Technological Singularity, Cambridge, MA: MIT Press.
  • Sharkey, Amanda, 2019, “Autonomous Weapons Systems, Killer Robots and Human Dignity”, Ethics and Information Technology, 21(2): 75–87. doi:10.1007/s10676-018-9494-0
  • Sharkey, Amanda and Noel Sharkey, 2011, “The Rights and Wrongs of Robot Care”, in Robot Ethics: The Ethical and Social Implications of Robotics, Patrick Lin, Keith Abney and George Bekey (eds.), Cambridge, MA: MIT Press, 267–282.
  • Shoham, Yoav, Raymond Perrault, Erik Brynjolfsson, Jack Clark, James Manyika, Juan Carlos Niebles, … Zoe Bauer, 2018, “The AI Index 2018 Annual Report”, 17 December 2018, Stanford, CA: AI Index Steering Committee, Human-Centered AI Initiative, Stanford University. [Shoham et al. 2018 available online]
  • Silver, David, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, Timothy Lillicrap, Karen Simonyan, and Demis Hassabis, 2018, “A General Reinforcement Learning Algorithm That Masters Chess, Shogi, and Go through Self-Play”, Science, 362(6419): 1140–1144. doi:10.1126/science.aar6404
  • Simon, Herbert A. and Allen Newell, 1958, “Heuristic Problem Solving: The Next Advance in Operations Research”, Operations Research, 6(1): 1–10. doi:10.1287/opre.6.1.1
  • Simpson, Thomas W. and Vincent C. Müller, 2016, “Just War and Robots’ Killings”, The Philosophical Quarterly, 66(263): 302–322. doi:10.1093/pq/pqv075
  • Smolan, Sandy (director), 2016, “The Human Face of Big Data”, PBS Documentary, 24 February 2016, 56 mins.
  • Sparrow, Robert, 2007, “Killer Robots”, Journal of Applied Philosophy, 24(1): 62–77. doi:10.1111/j.1468-5930.2007.00346.x
  • –––, 2016, “Robots in Aged Care: A Dystopian Future?”, AI & Society, 31(4): 445–454. doi:10.1007/s00146-015-0625-4
  • Stahl, Bernd Carsten, Job Timmermans, and Brent Daniel Mittelstadt, 2016, “The Ethics of Computing: A Survey of the Computing-Oriented Literature”, ACM Computing Surveys, 48(4): art. 55. doi:10.1145/2871196
  • Stahl, Bernd Carsten and David Wright, 2018, “Ethics and Privacy in AI and Big Data: Implementing Responsible Research and Innovation”, IEEE Security Privacy, 16(3): 26–33.
  • Stone, Christopher D., 1972, “Should Trees Have Standing - toward Legal Rights for Natural Objects”, Southern California Law Review, 45: 450–501.
  • Stone, Peter, Rodney Brooks, Erik Brynjolfsson, Ryan Calo, Oren Etzioni, Greg Hager, Julia Hirschberg, Shivaram Kalyanakrishnan, Ece Kamar, Sarit Kraus, Kevin Leyton-Brown, David Parkes, William Press, AnnaLee Saxenian, Julie Shah, Milind Tambe, and Astro Teller, 2016, “Artificial Intelligence and Life in 2030”, One Hundred Year Study on Artificial Intelligence: Report of the 2015–2016 Study Panel, Stanford University, Stanford, CA, September 2016. [Stone et al. 2016 available online]
  • Strawson, Galen, 1998, “Free Will”, in Routledge Encyclopedia of Philosophy, Taylor & Francis. doi:10.4324/9780415249126-V014-1
  • Sullins, John P., 2012, “Robots, Love, and Sex: The Ethics of Building a Love Machine”, IEEE Transactions on Affective Computing, 3(4): 398–409. doi:10.1109/T-AFFC.2012.31
  • Susser, Daniel, Beate Roessler, and Helen Nissenbaum, 2019, “Technology, Autonomy, and Manipulation”, Internet Policy Review, 8(2): 30 June 2019. [Susser, Roessler, and Nissenbaum 2019 available online]
  • Taddeo, Mariarosaria and Luciano Floridi, 2018, “How AI Can Be a Force for Good”, Science, 361(6404): 751–752. doi:10.1126/science.aat5991
  • Taylor, Linnet and Nadezhda Purtova, 2019, “What Is Responsible and Sustainable Data Science?”, Big Data & Society, 6(2): art. 205395171985811. doi:10.1177/2053951719858114
  • Taylor, Steve, et al., 2018, “Responsible AI – Key Themes, Concerns & Recommendations for European Research and Innovation: Summary of Consultation with Multidisciplinary Experts”, June. doi:10.5281/zenodo.1303252 [Taylor, et al. 2018 available online]
  • Tegmark, Max, 2017, Life 3.0: Being Human in the Age of Artificial Intelligence, New York: Knopf.
  • Thaler, Richard H. and Cass Sunstein, 2008, Nudge: Improving Decisions about Health, Wealth, and Happiness, New York: Penguin.
  • Thompson, Nicholas and Ian Bremmer, 2018, “The AI Cold War That Threatens Us All”, Wired, 23 November 2018. [Thompson and Bremmer 2018 available online]
  • Thomson, Judith Jarvis, 1976, “Killing, Letting Die, and the Trolley Problem”, Monist, 59(2): 204–217. doi:10.5840/monist197659224
  • Torrance, Steve, 2011, “Machine Ethics and the Idea of a More-Than-Human Moral World”, in Anderson and Anderson 2011: 115–137. doi:10.1017/CBO9780511978036.011
  • Trump, Donald J., 2019, “Executive Order on Maintaining American Leadership in Artificial Intelligence”, 11 February 2019. [Trump 2019 available online]
  • Turner, Jacob, 2019, Robot Rules: Regulating Artificial Intelligence, Berlin: Springer. doi:10.1007/978-3-319-96235-1
  • Tzafestas, Spyros G., 2016, Roboethics: A Navigating Overview, (Intelligent Systems, Control and Automation: Science and Engineering 79), Cham: Springer International Publishing. doi:10.1007/978-3-319-21714-7
  • Vallor, Shannon, 2017, Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting, Oxford: Oxford University Press. doi:10.1093/acprof:oso/9780190498511.001.0001
  • Van Lent, Michael, William Fisher, and Michael Mancuso, 2004, “An Explainable Artificial Intelligence System for Small-Unit Tactical Behavior”, in Proceedings of the 16th Conference on Innovative Applications of Artifical Intelligence, (IAAI’04), San Jose, CA: AAAI Press, 900–907.
  • van Wynsberghe, Aimee, 2016, Healthcare Robots: Ethics, Design and Implementation, London: Routledge. doi:10.4324/9781315586397
  • van Wynsberghe, Aimee and Scott Robbins, 2019, “Critiquing the Reasons for Making Artificial Moral Agents”, Science and Engineering Ethics, 25(3): 719–735. doi:10.1007/s11948-018-0030-8
  • Vanderelst, Dieter and Alan Winfield, 2018, “The Dark Side of Ethical Robots”, in Proceedings of the 2018 AAAI/ACM Conference on AI, Ethics, and Society, New Orleans, LA: ACM, 317–322. doi:10.1145/3278721.3278726
  • Veale, Michael and Reuben Binns, 2017, “Fairer Machine Learning in the Real World: Mitigating Discrimination without Collecting Sensitive Data”, Big Data & Society, 4(2): art. 205395171774353. doi:10.1177/2053951717743530
  • Véliz, Carissa, 2019, “Three Things Digital Ethics Can Learn from Medical Ethics”, Nature Electronics, 2(8): 316–318. doi:10.1038/s41928-019-0294-2
  • Verbeek, Peter-Paul, 2011, Moralizing Technology: Understanding and Designing the Morality of Things, Chicago: University of Chicago Press.
  • Wachter, Sandra and Brent Daniel Mittelstadt, 2019, “A Right to Reasonable Inferences: Re-Thinking Data Protection Law in the Age of Big Data and AI”, Columbia Business Law Review, 2019(2): 494–620.
  • Wachter, Sandra, Brent Mittelstadt, and Luciano Floridi, 2017, “Why a Right to Explanation of Automated Decision-Making Does Not Exist in the General Data Protection Regulation”, International Data Privacy Law, 7(2): 76–99. doi:10.1093/idpl/ipx005
  • Wachter, Sandra, Brent Mittelstadt, and Chris Russell, 2018, “Counterfactual Explanations Without Opening the Black Box: Automated Decisions and the GDPR”, Harvard Journal of Law & Technology, 31(2): 842–887. doi:10.2139/ssrn.3063289
  • Wallach, Wendell and Peter M. Asaro (eds.), 2017, Machine Ethics and Robot Ethics, London: Routledge.
  • Walsh, Toby, 2018, Machines That Think: The Future of Artificial Intelligence, Amherst, MA: Prometheus Books.
  • Westlake, Stian (ed.), 2014, Our Work Here Is Done: Visions of a Robot Economy, London: Nesta. [Westlake 2014 available online]
  • Whittaker, Meredith, Kate Crawford, Roel Dobbe, Genevieve Fried, Elizabeth Kaziunas, Varoon Mathur, … Jason Schultz, 2018, “AI Now Report 2018”, New York: AI Now Institute, New York University. [Whittaker et al. 2018 available online]
  • Whittlestone, Jess, Rune Nyrup, Anna Alexandrova, Kanta Dihal, and Stephen Cave, 2019, “Ethical and Societal Implications of Algorithms, Data, and Artificial Intelligence: A Roadmap for Research”, Cambridge: Nuffield Foundation, University of Cambridge. [Whittlestone 2019 available online]
  • Winfield, Alan, Katina Michael, Jeremy Pitt, and Vanessa Evers (eds.), 2019, Machine Ethics: The Design and Governance of Ethical AI and Autonomous Systems, special issue of Proceedings of the IEEE, 107(3): 501–632.
  • Woollard, Fiona and Frances Howard-Snyder, 2016, “Doing vs. Allowing Harm”, Stanford Encyclopedia of Philosophy (Winter 2016 edition), Edward N. Zalta (ed.), URL = <https://plato.stanford.edu/archives/win2016/entries/doing-allowing/>
  • Woolley, Samuel C. and Philip N. Howard (eds.), 2017, Computational Propaganda: Political Parties, Politicians, and Political Manipulation on Social Media, Oxford: Oxford University Press. doi:10.1093/oso/9780190931407.001.0001
  • Yampolskiy, Roman V. (ed.), 2018, Artificial Intelligence Safety and Security, Boca Raton, FL: Chapman and Hall/CRC. doi:10.1201/9781351251389
  • Yeung, Karen and Martin Lodge (eds.), 2019, Algorithmic Regulation, Oxford: Oxford University Press. doi:10.1093/oso/9780198838494.001.0001
  • Zayed, Yago and Philip Loft, 2019, “Agriculture: Historical Statistics”, House of Commons Briefing Paper, 3339 (25 June 2019): 1–19. [Zayed and Loft 2019 available online]
  • Zerilli, John, Alistair Knott, James Maclaurin, and Colin Gavaghan, 2019, “Transparency in Algorithmic and Human Decision-Making: Is There a Double Standard?”, Philosophy & Technology, 32(4): 661–683. doi:10.1007/s13347-018-0330-6
  • Zuboff, Shoshana, 2019, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, New York: Public Affairs.

Academic Tools

How to cite this entry.
Preview the PDF version of this entry at the Friends of the SEP Society.
Look up topics and thinkers related to this entry at the Internet Philosophy Ontology Project (InPhO).
Enhanced bibliography for this entry at PhilPapers, with links to its database.

Other Internet Resources

References

  • AI HLEG, 2019, “High-Level Expert Group on Artificial Intelligence: Ethics Guidelines for Trustworthy AI”, European Commission, accessed: 9 April 2019.
  • Amodei, Dario and Danny Hernandez, 2018, “AI and Compute”, OpenAI Blog, 16 July 2018.
  • Aneesh, A., 2002, Technological Modes of Governance: Beyond Private and Public Realms, paper in the Proceedings of the 4th International Summer Academy on Technology Studies, available at archive.org.
  • Brooks, Rodney, 2017, “The Seven Deadly Sins of Predicting the Future of AI”, on Rodney Brooks: Robots, AI, and Other Stuff, 7 September 2017.
  • Brundage, Miles, Shahar Avin, Jack Clark, Helen Toner, Peter Eckersley, Ben Garfinkel, Allan Dafoe, Paul Scharre, Thomas Zeitzoff, Bobby Filar, Hyrum Anderson, Heather Roff, Gregory C. Allen, Jacob Steinhardt, Carrick Flynn, Seán Ó hÉigeartaigh, Simon Beard, Haydn Belfield, Sebastian Farquhar, Clare Lyle, et al., 2018, “The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation”, unpublished manuscript, ArXiv:1802.07228 [Cs].
  • Costa, Elisabeth and David Halpern, 2019, “The Behavioural Science of Online Harm and Manipulation, and What to Do About It: An Exploratory Paper to Spark Ideas and Debate”, The Behavioural Insights Team Report, 1–82.
  • Gebru, Timnit, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daumé III, and Kate Crawford, 2018, “Datasheets for Datasets”, unpublished manuscript, arxiv:1803.09010, 23 March 2018.
  • Gunning, David, 2017, “Explainable Artificial Intelligence (XAI)”, Defense Advanced Research Projects Agency (DARPA) Program.
  • Harris, Tristan, 2016, “How Technology Is Hijacking Your Mind—from a Magician and Google Design Ethicist”, Thrive Global, 18 May 2016.
  • International Federation of Robotics (IFR), 2019, World Robotics 2019 Edition.
  • Jacobs, An, Lynn Tytgat, Michel Maus, Romain Meeusen, and Bram Vanderborght (eds.), 2019, Homo Roboticus: 30 Questions and Answers on Man, Technology, Science & Art, Brussels: ASP.
  • Marcus, Gary, 2018, “Deep Learning: A Critical Appraisal”, unpublished manuscript, 2 January 2018, arxiv:1801.00631.
  • McCarthy, John, Marvin Minsky, Nathaniel Rochester, and Claude E. Shannon, 1955, “A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence”, 31 August 1955.
  • Metcalf, Jacob, Emily F. Keller, and Danah Boyd, 2016, “Perspectives on Big Data, Ethics, and Society”, 23 May 2016, Council for Big Data, Ethics, and Society.
  • National Institute of Justice (NIJ), 2014, “Overview of Predictive Policing”, 9 June 2014.
  • Searle, John R., 2015, “Consciousness in Artificial Intelligence”, Google’s Singularity Network, Talks at Google (YouTube video).

Research Organizations

Conferences

Policy Documents

Other Relevant pages

Acknowledgments

Early drafts of this article were discussed with colleagues at the IDEA Centre of the University of Leeds, some friends, and my PhD students Michael Cannon, Zach Gudmunsen, Gabriela Arriagada-Bruneau and Charlotte Stix. Later drafts were made publicly available on the Internet and publicised via Twitter and e-mail to all (then) cited authors that I could locate. These later drafts were presented to audiences at the INBOTS Project Meeting (Reykjavik 2019), the Computer Science Department Colloquium (Leeds 2019), the European Robotics Forum (Bucharest 2019), the AI Lunch and the Philosophy & Ethics group (Eindhoven 2019)—many thanks for their comments.

I am grateful for detailed written comments by John Danaher, Martin Gibert, Elizabeth O’Neill, Sven Nyholm, Etienne B. Roesch, Emma Ruttkamp-Bloem, Tom Powers, Steve Taylor, and Alan Winfield. I am grateful for further useful comments by Colin Allen, Susan Anderson, Christof Wolf-Brenner, Rafael Capurro, Mark Coeckelbergh, Yazmin Morlet Corti, Erez Firt, Vasilis Galanos, Anne Gerdes, Olle Häggström, Geoff Keeling, Karabo Maiyane, Brent Mittelstadt, Britt Östlund, Steve Petersen, Brian Pickering, Zoë Porter, Amanda Sharkey, Melissa Terras, Stuart Russell, Jan F Veneman, Jeffrey White, and Xinyi Wu.

Parts of the work on this article have been supported by the European Commission under the INBOTS project (H2020 grant no. 780073).

Copyright © 2020 by

Vincent C. Müller <vincent.c.mueller@fau.de>

Open access to the SEP is made possible by a world-wide funding initiative.
