Artificial intelligence
Artificial intelligence (AI), in its broadest sense, is intelligence exhibited by machines, particularly computer systems. It is a field of research in computer science that develops and studies methods and software that enable machines to perceive their environment and use learning and intelligence to take actions that maximize their chances of achieving defined goals.[1] Such machines may be called AIs.
Some high-profile applications of AI include advanced web search engines (e.g., Google Search); recommendation systems (used by YouTube, Amazon, and Netflix); interacting via human speech (e.g., Google Assistant, Siri, and Alexa); autonomous vehicles (e.g., Waymo); generative and creative tools (e.g., ChatGPT, Apple Intelligence, and AI art); and superhuman play and analysis in strategy games (e.g., chess and Go). However, many AI applications are not perceived as AI: "A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore."[2][3]
The various subfields of AI research are centered around particular goals and the use of particular tools. The traditional goals of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception, and support for robotics.[a] General intelligence—the ability to complete any task performable by a human on an at least equal level—is among the field's long-term goals.[4] To reach these goals, AI researchers have adapted and integrated a wide range of techniques, including search and mathematical optimization, formal logic, artificial neural networks, and methods based on statistics, operations research, and economics.[b] AI also draws upon psychology, linguistics, philosophy, neuroscience, and other fields.[5]
Artificial intelligence was founded as an academic discipline in 1956,[6] and the field went through multiple cycles of optimism,[7][8] followed by periods of disappointment and loss of funding, known as AI winter.[9][10] Funding and interest vastly increased after 2012 when deep learning outperformed previous AI techniques.[11] This growth accelerated further after 2017 with the transformer architecture,[12] and by the early 2020s hundreds of billions of dollars were being invested in AI (known as the "AI boom"). The widespread use of AI in the 21st century exposed several unintended consequences and harms in the present and raised concerns about its risks and long-term effects in the future, prompting discussions about regulatory policies to ensure the safety and benefits of the technology.
Goals
The general problem of simulating (or creating) intelligence has been broken into subproblems. These consist of particular traits or capabilities that researchers expect an intelligent system to display. The traits described below have received the most attention and cover the scope of AI research.[a]
Reasoning and problem-solving
Early researchers developed algorithms that imitated step-by-step reasoning that humans use when they solve puzzles or make logical deductions.[13] By the late 1980s and 1990s, methods were developed for dealing with uncertain or incomplete information, employing concepts from probability and economics.[14]
Many of these algorithms are insufficient for solving large reasoning problems because they experience a "combinatorial explosion": They become exponentially slower as the problems grow.[15] Even humans rarely use the step-by-step deduction that early AI research could model. They solve most of their problems using fast, intuitive judgments.[16] Accurate and efficient reasoning is an unsolved problem.
Knowledge representation
Knowledge representation and knowledge engineering[17] allow AI programs to answer questions intelligently and make deductions about real-world facts. Formal knowledge representations are used in content-based indexing and retrieval,[18] scene interpretation,[19] clinical decision support,[20] knowledge discovery (mining "interesting" and actionable inferences from large databases),[21] and other areas.[22]
A knowledge base is a body of knowledge represented in a form that can be used by a program. An ontology is the set of objects, relations, concepts, and properties used by a particular domain of knowledge.[23] Knowledge bases need to represent things such as objects, properties, categories, and relations between objects;[24] situations, events, states, and time;[25] causes and effects;[26] knowledge about knowledge (what we know about what other people know);[27] default reasoning (things that humans assume are true until they are told differently and will remain true even when other facts are changing);[28] and many other aspects and domains of knowledge.
Among the most difficult problems in knowledge representation are the breadth of commonsense knowledge (the set of atomic facts that the average person knows is enormous);[29] and the sub-symbolic form of most commonsense knowledge (much of what people know is not represented as "facts" or "statements" that they could express verbally).[16] There is also the difficulty of knowledge acquisition, the problem of obtaining knowledge for AI applications.[c]
Planning and decision-making
An "agent" is anything that perceives and takes actions in the world. A rational agent has goals or preferences and takes actions to make them happen.[d][32] In automated planning, the agent has a specific goal.[33] In automated decision-making, the agent has preferences—there are some situations it would prefer to be in, and some situations it is trying to avoid. The decision-making agent assigns a number to each situation (called the "utility") that measures how much the agent prefers it. For each possible action, it can calculate the "expected utility": the utility of all possible outcomes of the action, weighted by the probability that the outcome will occur. It can then choose the action with the maximum expected utility.[34]
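To make the calculation concrete, here is a minimal sketch in Python of an agent choosing between two hypothetical actions; the actions, outcomes, probabilities, and utility numbers are all invented for illustration.

```python
# Minimal sketch of decision-making by expected utility (illustrative only).
# Each action maps to a list of (probability, utility) outcome pairs.
actions = {
    "take_umbrella": [(0.3, 8), (0.7, 6)],     # rain vs. no rain (made-up numbers)
    "leave_umbrella": [(0.3, -10), (0.7, 10)],
}

def expected_utility(outcomes):
    """Utility of each possible outcome, weighted by its probability."""
    return sum(p * u for p, u in outcomes)

# A rational agent chooses the action with the maximum expected utility.
best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best, expected_utility(actions[best]))  # take_umbrella 6.6
```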
In classical planning, the agent knows exactly what the effect of any action will be.[35] In most real-world problems, however, the agent may not be certain about the situation they are in (it is "unknown" or "unobservable") and it may not know for certain what will happen after each possible action (it is not "deterministic"). It must choose an action by making a probabilistic guess and then reassess the situation to see if the action worked.[36]
In some problems, the agent's preferences may be uncertain, especially if there are other agents or humans involved. These can be learned (e.g., with inverse reinforcement learning), or the agent can seek information to improve its preferences.[37] Information value theory can be used to weigh the value of exploratory or experimental actions.[38] The space of possible future actions and situations is typically intractably large, so the agents must take actions and evaluate situations while being uncertain of what the outcome will be.
A Markov decision process has a transition model that describes the probability that a particular action will change the state in a particular way and a reward function that supplies the utility of each state and the cost of each action. A policy associates a decision with each possible state. The policy could be calculated (e.g., by iteration), be heuristic, or it can be learned.[39]
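The following toy sketch illustrates these ideas on a hypothetical two-state process: state values are computed by value iteration, and a policy is then read off. The states, transitions, rewards, and discount factor are invented for illustration.

```python
# A toy Markov decision process solved by value iteration.
# transitions[state][action] = list of (probability, next_state) pairs
transitions = {
    "low":  {"wait": [(1.0, "low")], "work": [(0.6, "high"), (0.4, "low")]},
    "high": {"wait": [(1.0, "high")], "work": [(1.0, "high")]},
}
reward = {"low": 0.0, "high": 1.0}  # utility supplied by the reward function
gamma = 0.9                          # discount factor

V = {s: 0.0 for s in transitions}
for _ in range(100):  # iterate until the values (approximately) converge
    V = {
        s: reward[s] + gamma * max(
            sum(p * V[s2] for p, s2 in outcomes)
            for outcomes in transitions[s].values()
        )
        for s in transitions
    }

# The policy associates the best action with each possible state.
policy = {
    s: max(transitions[s],
           key=lambda a: sum(p * V[s2] for p, s2 in transitions[s][a]))
    for s in transitions
}
print(V, policy)
```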
Game theory describes the rational behavior of multiple interacting agents and is used in AI programs that make decisions that involve other agents.[40]
Learning
Machine learning is the study of programs that can improve their performance on a given task automatically.[41] It has been a part of AI from the beginning.[e]
There are several kinds of machine learning. Unsupervised learning analyzes a stream of data and finds patterns and makes predictions without any other guidance.[44] Supervised learning requires a human to label the input data first, and comes in two main varieties: classification (where the program must learn to predict what category the input belongs in) and regression (where the program must deduce a numeric function based on numeric input).[45]
In reinforcement learning, the agent is rewarded for good responses and punished for bad ones. The agent learns to choose responses that are classified as "good".[46] Transfer learning is when the knowledge gained from one problem is applied to a new problem.[47] Deep learning is a type of machine learning that runs inputs through biologically inspired artificial neural networks for all of these types of learning.[48]
Computational learning theory can assess learners by computational complexity, by sample complexity (how much data is required), or by other notions of optimization.[49]
Natural language processing
Natural language processing (NLP)[50] allows programs to read, write and communicate in human languages such as English. Specific problems include speech recognition, speech synthesis, machine translation, information extraction, information retrieval and question answering.[51]
Early work, based on Noam Chomsky's generative grammar and semantic networks, had difficulty with word-sense disambiguation[f] unless restricted to small domains called "micro-worlds" (due to the common sense knowledge problem[29]). Margaret Masterman believed that it was meaning and not grammar that was the key to understanding languages, and that thesauri and not dictionaries should be the basis of computational language structure.
Modern deep learning techniques for NLP include word embedding (representing words, typically as vectors encoding their meaning),[52] transformers (a deep learning architecture using an attention mechanism),[53] and others.[54] In 2019, generative pre-trained transformer (or "GPT") language models began to generate coherent text,[55][56] and by 2023, these models were able to get human-level scores on the bar exam, SAT test, GRE test, and many other real-world applications.[57]
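As a rough illustration of word embeddings, the sketch below compares hand-made vectors with cosine similarity. Real embeddings are learned from data and have hundreds of dimensions; these 4-dimensional vectors are invented for illustration.

```python
import math

# A sketch of word embeddings: words represented as vectors encoding meaning.
embedding = {
    "king":  [0.9, 0.8, 0.1, 0.3],
    "queen": [0.9, 0.1, 0.8, 0.3],
    "apple": [0.1, 0.2, 0.2, 0.9],
}

def cosine_similarity(u, v):
    """Similar meanings give vectors that point in similar directions."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

print(cosine_similarity(embedding["king"], embedding["queen"]))  # relatively high
print(cosine_similarity(embedding["king"], embedding["apple"]))  # relatively low
```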
Perception
Machine perception is the ability to use input from sensors (such as cameras, microphones, wireless signals, active lidar, sonar, radar, and tactile sensors) to deduce aspects of the world. Computer vision is the ability to analyze visual input.[58]
The field includes speech recognition,[59] image classification,[60] facial recognition, object recognition,[61] object tracking,[62] and robotic perception.[63]
Social intelligence
Affective computing is an interdisciplinary umbrella that comprises systems that recognize, interpret, process, or simulate human feeling, emotion, and mood.[65] For example, some virtual assistants are programmed to speak conversationally or even to banter humorously; this makes them appear more sensitive to the emotional dynamics of human interaction, or otherwise facilitates human–computer interaction.
However, this tends to give naïve users an unrealistic conception of the intelligence of existing computer agents.[66] Moderate successes related to affective computing include textual sentiment analysis and, more recently, multimodal sentiment analysis, wherein AI classifies the affects displayed by a videotaped subject.[67]
General intelligence
A machine with artificial general intelligence should be able to solve a wide variety of problems with breadth and versatility similar to human intelligence.[4]
Techniques
AI research uses a wide variety of techniques to accomplish the goals above.[b]
Search and optimization
AI can solve many problems by intelligently searching through many possible solutions.[68] There are two very different kinds of search used in AI: state space search and local search.
State space search
State space search searches through a tree of possible states to try to find a goal state.[69] For example, planning algorithms search through trees of goals and subgoals, attempting to find a path to a target goal, a process called means-ends analysis.[70]
Simple exhaustive searches[71] are rarely sufficient for most real-world problems: the search space (the number of places to search) quickly grows to astronomical numbers. The result is a search that is too slow or never completes.[15] "Heuristics" or "rules of thumb" can help prioritize choices that are more likely to reach a goal.[72]
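The sketch below combines both ideas: an A* search over a hypothetical four-state graph, in which a heuristic estimate of the remaining cost prioritizes which states to expand. The graph, step costs, and heuristic values are all invented for illustration.

```python
import heapq

# A sketch of heuristic state-space search (A*) on a toy graph.
graph = {  # state -> list of (neighbor, step_cost)
    "start": [("a", 1), ("b", 4)],
    "a": [("goal", 5)],
    "b": [("goal", 1)],
    "goal": [],
}
h = {"start": 3, "a": 4, "b": 1, "goal": 0}  # heuristic: estimated cost to goal

def a_star(start, goal):
    # Priority queue ordered by f = cost so far + heuristic estimate.
    frontier = [(h[start], 0, start, [start])]
    visited = set()
    while frontier:
        f, cost, state, path = heapq.heappop(frontier)
        if state == goal:
            return path, cost
        if state in visited:
            continue
        visited.add(state)
        for neighbor, step in graph[state]:
            heapq.heappush(frontier,
                           (cost + step + h[neighbor], cost + step,
                            neighbor, path + [neighbor]))
    return None

print(a_star("start", "goal"))  # (['start', 'b', 'goal'], 5)
```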
Adversarial search is used for game-playing programs, such as chess or Go. It searches through a tree of possible moves and counter-moves, looking for a winning position.[73]
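A minimal sketch of the idea, assuming a tiny hand-built game tree rather than a real game: minimax picks the move that is best against an opponent who replies optimally.

```python
# Adversarial search by minimax over a hand-built game tree.
# Leaves hold the value of the final position from the maximizing player's view.
tree = ("max", [
    ("min", [3, 12]),   # the opponent will pick the worst outcome for us: 3
    ("min", [2, 8]),    # here the opponent picks 2
])

def minimax(node):
    if isinstance(node, int):          # leaf: a finished game position
        return node
    player, children = node
    values = [minimax(c) for c in children]
    return max(values) if player == "max" else min(values)

print(minimax(tree))  # 3: the best we can guarantee against optimal counter-moves
```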
Local search
Local search uses mathematical optimization to find a solution to a problem. It begins with some form of guess and refines it incrementally.[74]
Gradient descent is a type of local search that optimizes a set of numerical parameters by incrementally adjusting them to minimize a loss function. Variants of gradient descent are commonly used to train neural networks.[75]
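A minimal sketch, assuming the one-parameter loss f(x) = (x - 3)^2: the parameter is nudged repeatedly against the gradient until it approaches the minimum.

```python
# Gradient descent on a single numerical parameter (illustrative only).
def loss_gradient(x):
    return 2 * (x - 3)          # derivative of the loss (x - 3)^2

x = 0.0                         # initial guess
learning_rate = 0.1
for step in range(100):
    x -= learning_rate * loss_gradient(x)   # adjust against the gradient

print(x)  # close to 3, the minimum of the loss function
```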
Another type of local search is evolutionary computation, which aims to iteratively improve a set of candidate solutions by "mutating" and "recombining" them, selecting only the fittest to survive each generation.[76]
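A toy genetic-algorithm sketch of this idea: candidate bit strings are scored, the fittest half survives each generation, and children are produced by recombining and mutating survivors. The fitness function, rates, and sizes here are arbitrary choices for illustration.

```python
import random

# Evolutionary computation sketch: evolve a bit string toward all 1s.
random.seed(0)

def mutate(bits):
    i = random.randrange(len(bits))
    return bits[:i] + [1 - bits[i]] + bits[i + 1:]

def recombine(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(10)] for _ in range(20)]
for generation in range(50):
    population.sort(key=sum, reverse=True)   # fitness = number of 1s
    survivors = population[:10]              # only the fittest survive
    children = [mutate(recombine(random.choice(survivors),
                                 random.choice(survivors)))
                for _ in range(10)]
    population = survivors + children

print(max(sum(ind) for ind in population))  # typically 10 after a few generations
```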
Distributed search processes can coordinate via swarm intelligence algorithms. Two popular swarm algorithms used in search are particle swarm optimization (inspired by bird flocking) and ant colony optimization (inspired by ant trails).[77]
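As a sketch of the swarm idea, here is a minimal particle swarm optimization minimizing f(x) = x^2 in one dimension; the inertia and attraction constants are typical textbook values, not prescribed ones.

```python
import random

# Particle swarm optimization sketch (illustrative only).
random.seed(1)
f = lambda x: x * x

positions = [random.uniform(-10, 10) for _ in range(15)]
velocities = [0.0] * 15
personal_best = positions[:]            # best position seen by each particle
global_best = min(positions, key=f)     # best position seen by the whole swarm

for step in range(100):
    for i in range(15):
        # Each particle is pulled toward its own best and the swarm's best.
        velocities[i] = (0.7 * velocities[i]
                         + 1.5 * random.random() * (personal_best[i] - positions[i])
                         + 1.5 * random.random() * (global_best - positions[i]))
        positions[i] += velocities[i]
        if f(positions[i]) < f(personal_best[i]):
            personal_best[i] = positions[i]
        if f(positions[i]) < f(global_best):
            global_best = positions[i]

print(global_best)  # typically very close to 0, the minimum
```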
Logic
Formal logic is used for reasoning and knowledge representation.[78]
Formal logic comes in two main forms: propositional logic (which operates on statements that are true or false and uses logical connectives such as "and", "or", "not" and "implies")[79] and predicate logic (which also operates on objects, predicates and relations and uses quantifiers such as "Every X is a Y" and "There are some Xs that are Ys").[80]
Deductive reasoning in logic is the process of proving a new statement (conclusion) from other statements that are given and assumed to be true (the premises).[81] Proofs can be structured as proof trees, in which nodes are labelled by sentences, and children nodes are connected to parent nodes by inference rules.
Given a problem and a set of premises, problem-solving reduces to searching for a proof tree whose root node is labelled by a solution of the problem and whose leaf nodes are labelled by premises or axioms. In the case of Horn clauses, problem-solving search can be performed by reasoning forwards from the premises or backwards from the problem.[82] In the more general case of the clausal form of first-order logic, resolution is a single, axiom-free rule of inference, in which a problem is solved by proving a contradiction from premises that include the negation of the problem to be solved.[83]
Inference in both Horn clause logic and first-order logic is undecidable, and therefore intractable. However, backward reasoning with Horn clauses, which underpins computation in the logic programming language Prolog, is Turing complete. Moreover, its efficiency is competitive with computation in other symbolic programming languages.[84]
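A minimal sketch of forward reasoning with propositional Horn clauses, using invented rules and facts; backward reasoning, as in Prolog, would instead start from a goal and search for premises that establish it.

```python
# Forward chaining over propositional Horn clauses: facts are repeatedly
# derived from rules of the form "if all premises hold, conclude X".
rules = [
    ({"rain", "outside"}, "wet"),
    ({"wet"}, "cold"),
]
facts = {"rain", "outside"}

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)       # reason forwards from the premises
            changed = True

print(facts)  # {'rain', 'outside', 'wet', 'cold'}
```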
Fuzzy logic assigns a "degree of truth" between 0 and 1. It can therefore handle propositions that are vague and partially true.[85]
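A small illustration, assuming the common min/max choice of fuzzy operators:

```python
# Fuzzy logic sketch: truth is a degree between 0 and 1, with "and" as min
# and "or" as max (one common choice of fuzzy operators).
tall = 0.7          # "this person is tall" is 0.7 true
strong = 0.4

fuzzy_and = min(tall, strong)   # 0.4
fuzzy_or = max(tall, strong)    # 0.7
fuzzy_not = 1 - tall            # 0.3
print(fuzzy_and, fuzzy_or, fuzzy_not)
```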
Non-monotonic logics, including logic programming with negation as failure, are designed to handle default reasoning.[28] Other specialized versions of logic have been developed to describe many complex domains.
Probabilistic methods for uncertain reasoning
Many problems in AI (including in reasoning, planning, learning, perception, and robotics) require the agent to operate with incomplete or uncertain information. AI researchers have devised a number of tools to solve these problems using methods from probability theory and economics.[86] Precise mathematical tools have been developed that analyze how an agent can make choices and plan, using decision theory, decision analysis,[87] and information value theory.[88] These tools include models such as Markov decision processes,[89] dynamic decision networks,[90] game theory and mechanism design.[91]
Bayesian networks[92] are a tool that can be used for reasoning (using the Bayesian inference algorithm),[g][94] learning (using the expectation–maximization algorithm),[h][96] planning (using decision networks)[97] and perception (using dynamic Bayesian networks).[90]
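The simplest possible illustration is a two-node network (Disease -> Test) queried with Bayes' rule; the probabilities below are invented.

```python
# Bayesian inference sketch: P(disease | positive test) on a two-node network.
p_disease = 0.01
p_pos_given_disease = 0.95      # test sensitivity
p_pos_given_healthy = 0.05      # false-positive rate

# Bayes' rule: P(D | +) = P(+ | D) * P(D) / P(+)
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
print(round(p_disease_given_pos, 3))  # about 0.161
```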
Probabilistic algorithms can also be used for filtering, prediction, smoothing, and finding explanations for streams of data, thus helping perception systems analyze processes that occur over time (e.g., hidden Markov models or Kalman filters).[90]
Classifiers and statistical learning methods
The simplest AI applications can be divided into two types: classifiers (e.g., "if shiny then diamond"), on one hand, and controllers (e.g., "if diamond then pick up"), on the other hand. Classifiers[98] are functions that use pattern matching to determine the closest match. They can be fine-tuned based on chosen examples using supervised learning. Each pattern (also called an "observation") is labeled with a certain predefined class. All the observations combined with their class labels are known as a data set. When a new observation is received, that observation is classified based on previous experience.[45]
There are many kinds of classifiers in use.[99] The decision tree is the simplest and most widely used symbolic machine learning algorithm.[100] K-nearest neighbor algorithm was the most widely used analogical AI until the mid-1990s, and Kernel methods such as the support vector machine (SVM) displaced k-nearest neighbor in the 1990s.[101]
The naive Bayes classifier is reportedly the "most widely used learner"[102] at Google, due in part to its scalability.[103]
Neural networks are also used as classifiers.[104]
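A minimal k-nearest-neighbor sketch, with an invented two-class data set: a new observation gets the majority label of its k closest labeled examples.

```python
import math

# k-nearest-neighbor classification sketch (data invented for illustration).
data = [  # (features, class label)
    ((1.0, 1.0), "A"), ((1.2, 0.8), "A"), ((0.9, 1.1), "A"),
    ((5.0, 5.0), "B"), ((5.2, 4.8), "B"), ((4.9, 5.1), "B"),
]

def classify(x, k=3):
    neighbors = sorted(data, key=lambda d: math.dist(x, d[0]))[:k]
    labels = [label for _, label in neighbors]
    return max(set(labels), key=labels.count)   # majority vote

print(classify((1.1, 0.9)))  # 'A', based on previous experience
```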
Artificial neural networks
An artificial neural network is based on a collection of nodes also known as artificial neurons, which loosely model the neurons in a biological brain. It is trained to recognise patterns; once trained, it can recognise those patterns in fresh data. There is an input layer, at least one hidden layer of nodes, and an output layer. Each node applies a function, and once its weighted input crosses a specified threshold, the data is transmitted to the next layer. A network is typically called a deep neural network if it has at least 2 hidden layers.[104]
Learning algorithms for neural networks use local search to choose the weights that will get the right output for each input during training. The most common training technique is the backpropagation algorithm.[105] Neural networks learn to model complex relationships between inputs and outputs and find patterns in data. In theory, a neural network can learn any function.[106]
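A toy illustration: a 2-input, 2-hidden-neuron, 1-output sigmoid network trained by backpropagation to approximate logical OR. The architecture, learning rate, and epoch count are arbitrary choices; this is a sketch of the mechanics, not a practical implementation.

```python
import math, random

# Tiny feedforward network trained by backpropagation (illustrative only).
random.seed(0)
sigmoid = lambda z: 1 / (1 + math.exp(-z))

# Each hidden neuron has two input weights and a bias (the third entry).
w_hidden = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(2)]
w_out = [random.uniform(-1, 1) for _ in range(3)]
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]  # logical OR

for epoch in range(5000):
    for (x1, x2), target in data:
        # Forward pass: inputs -> hidden layer -> output.
        h = [sigmoid(w[0] * x1 + w[1] * x2 + w[2]) for w in w_hidden]
        y = sigmoid(w_out[0] * h[0] + w_out[1] * h[1] + w_out[2])
        # Backward pass: propagate the error from the output toward the inputs.
        d_y = (y - target) * y * (1 - y)
        d_h = [d_y * w_out[j] * h[j] * (1 - h[j]) for j in range(2)]
        for j in range(2):                       # update output weights
            w_out[j] -= 0.5 * d_y * h[j]
        w_out[2] -= 0.5 * d_y
        for j in range(2):                       # update hidden weights
            w_hidden[j][0] -= 0.5 * d_h[j] * x1
            w_hidden[j][1] -= 0.5 * d_h[j] * x2
            w_hidden[j][2] -= 0.5 * d_h[j]

for (x1, x2), target in data:
    h = [sigmoid(w[0] * x1 + w[1] * x2 + w[2]) for w in w_hidden]
    y = sigmoid(w_out[0] * h[0] + w_out[1] * h[1] + w_out[2])
    print((x1, x2), round(y, 2), "target:", target)
```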
In feedforward neural networks the signal passes in only one direction.[107] Recurrent neural networks feed the output signal back into the input, which allows short-term memories of previous input events. Long short-term memory is the most successful network architecture for recurrent networks.[108] Perceptrons[109] use only a single layer of neurons; deep learning[110] uses multiple layers. Convolutional neural networks strengthen the connection between neurons that are "close" to each other—this is especially important in image processing, where a local set of neurons must identify an "edge" before the network can identify an object.[111]
Deep learning
Deep learning[110] uses several layers of neurons between the network's inputs and outputs. The multiple layers can progressively extract higher-level features from the raw input. For example, in image processing, lower layers may identify edges, while higher layers may identify the concepts relevant to a human such as digits, letters, or faces.[112]
Deep learning has profoundly improved the performance of programs in many important subfields of artificial intelligence, including computer vision, speech recognition, natural language processing, image classification,[113] and others. The reason that deep learning performs so well in so many applications is not known as of 2023.[114] The sudden success of deep learning in 2012–2015 did not occur because of some new discovery or theoretical breakthrough (deep neural networks and backpropagation had been described by many people, as far back as the 1950s)[i] but because of two factors: the incredible increase in computer power (including the hundred-fold increase in speed by switching to GPUs) and the availability of vast amounts of training data, especially the giant curated datasets used for benchmark testing, such as ImageNet.[j]
GPT
Generative pre-trained transformers (GPT) are large language models (LLMs) that generate text based on the semantic relationships between words in sentences. Text-based GPT models are pretrained on a large corpus of text that can be from the Internet. The pretraining consists of predicting the next token (a token being usually a word, subword, or punctuation). Throughout this pretraining, GPT models accumulate knowledge about the world and can then generate human-like text by repeatedly predicting the next token. Typically, a subsequent training phase makes the model more truthful, useful, and harmless, usually with a technique called reinforcement learning from human feedback (RLHF). Current GPT models are prone to generating falsehoods called "hallucinations", although this can be reduced with RLHF and quality data. They are used in chatbots, which allow people to ask a question or request a task in simple text.[122][123]
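GPT models themselves are transformers pretrained on vast corpora, but the core objective can be sketched with something far simpler. Below, a toy bigram model "pretrains" by counting which token follows which in an invented corpus, then generates text by repeatedly predicting the next token.

```python
import random

# Next-token prediction sketch using a bigram model instead of a transformer.
random.seed(0)
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# "Pretraining": count which token follows which in the corpus.
following = {}
for prev, nxt in zip(corpus, corpus[1:]):
    following.setdefault(prev, []).append(nxt)

# Generation: repeatedly predict (sample) the next token.
token = "the"
text = [token]
for _ in range(8):
    options = following.get(token)
    if not options:          # no observed continuation; stop generating
        break
    token = random.choice(options)
    text.append(token)
print(" ".join(text))
```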
Current models and services include Gemini (formerly Bard), ChatGPT, Grok, Claude, Copilot, and LLaMA.[124] Multimodal GPT models can process different types of data (modalities) such as images, videos, sound, and text.[125]
Specialized hardware and software
In the late 2010s, graphics processing units (GPUs) that were increasingly designed with AI-specific enhancements and used with specialized software such as TensorFlow had replaced previously used central processing units (CPUs) as the dominant means for training large-scale (commercial and academic) machine learning models.[126] Specialized programming languages such as Prolog were used in early AI research,[127] but general-purpose programming languages like Python have become predominant.[128]
Applications
AI and machine learning technology is used in most of the essential applications of the 2020s, including: search engines (such as Google Search), targeting online advertisements, recommendation systems (offered by Netflix, YouTube or Amazon), driving internet traffic, targeted advertising (AdSense, Facebook), virtual assistants (such as Siri or Alexa), autonomous vehicles (including drones, ADAS and self-driving cars), automatic language translation (Microsoft Translator, Google Translate), facial recognition (Apple's Face ID or Microsoft's DeepFace and Google's FaceNet) and image labeling (used by Facebook, Apple's iPhoto and TikTok). The deployment of AI may be overseen by a Chief automation officer (CAO).
Health and medicine
The application of AI in medicine and medical research has the potential to improve patient care and quality of life.[129] Through the lens of the Hippocratic Oath, medical professionals are ethically compelled to use AI, if applications can more accurately diagnose and treat patients.
For medical research, AI is an important tool for processing and integrating big data. This is particularly important for organoid and tissue engineering development which use microscopy imaging as a key technique in fabrication.[130] It has been suggested that AI can overcome discrepancies in funding allocated to different fields of research.[130] New AI tools can deepen the understanding of biomedically relevant pathways. For example, AlphaFold 2 (2021) demonstrated the ability to approximate, in hours rather than months, the 3D structure of a protein.[131] In 2023, it was reported that AI-guided drug discovery helped find a class of antibiotics capable of killing two different types of drug-resistant bacteria.[132] In 2024, researchers used machine learning to accelerate the search for Parkinson's disease drug treatments. Their aim was to identify compounds that block the clumping, or aggregation, of alpha-synuclein (the protein that characterises Parkinson's disease). They were able to speed up the initial screening process ten-fold and reduce the cost by a thousand-fold.[133][134]
Games
Game playing programs have been used since the 1950s to demonstrate and test AI's most advanced techniques.[135] Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov, on 11 May 1997.[136] In 2011, in a Jeopardy! quiz show exhibition match, IBM's question answering system, Watson, defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin.[137] In March 2016, AlphaGo won 4 out of 5 games of Go in a match with Go champion Lee Sedol, becoming the first computer Go-playing system to beat a professional Go player without handicaps. Then, in 2017, it defeated Ke Jie, who was the best Go player in the world.[138] Other programs handle imperfect-information games, such as the poker-playing program Pluribus.[139] DeepMind developed increasingly generalistic reinforcement learning models, such as with MuZero, which could be trained to play chess, Go, or Atari games.[140] In 2019, DeepMind's AlphaStar achieved grandmaster level in StarCraft II, a particularly challenging real-time strategy game that involves incomplete knowledge of what happens on the map.[141] In 2021, an AI agent competed in a PlayStation Gran Turismo competition, winning against four of the world's best Gran Turismo drivers using deep reinforcement learning.[142] In 2024, Google DeepMind introduced SIMA, a type of AI capable of autonomously playing nine previously unseen open-world video games by observing screen output, as well as executing short, specific tasks in response to natural language instructions.[143]
Mathematics
In mathematics, special forms of formal step-by-step reasoning are used. In contrast, LLMs such as GPT-4 Turbo, Gemini Ultra, Claude Opus, LLaMa-2 or Mistral Large work with probabilistic models, which can produce wrong answers in the form of hallucinations. Therefore, they need not only a large database of mathematical problems to learn from but also methods such as supervised fine-tuning or trained classifiers with human-annotated data to improve answers for new problems and learn from corrections.[144] A 2024 study showed that the performance of some language models for reasoning capabilities in solving math problems not included in their training data was low, even for problems with only minor deviations from trained data.[145]
Alternatively, dedicated models for mathematical problem solving with higher precision for the outcome, including proof of theorems, have been developed, such as AlphaTensor, AlphaGeometry and AlphaProof, all from Google DeepMind,[146] Llemma from EleutherAI,[147] or Julius.[148]
When natural language is used to describe mathematical problems, converters can transform such prompts into a formal language such as Lean to define mathematical tasks.
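As a toy illustration of the kind of formal statement such converters target (assuming Lean 4 syntax and its standard library):

```lean
-- The informal task "show that addition of natural numbers commutes"
-- rendered as a formal Lean statement, proved by a standard library lemma.
theorem add_comm' (a b : Nat) : a + b = b + a :=
  Nat.add_comm a b
```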
Some models have been developed to solve challenging problems and reach good results in benchmark tests, others to serve as educational tools in mathematics.[149]
Finance
Finance is one of the fastest growing sectors where applied AI tools are being deployed: from retail online banking to investment advice and insurance, where automated "robot advisers" have been in use for some years.[150]
World Pensions experts like Nicolas Firzli insist it may be too early to see the emergence of highly innovative AI-informed financial products and services: "the deployment of AI tools will simply further automatise things: destroying tens of thousands of jobs in banking, financial planning, and pension advice in the process, but I’m not sure it will unleash a new wave of [e.g., sophisticated] pension innovation."[151]
Military
Various countries are deploying AI military applications.[152] The main applications enhance command and control, communications, sensors, integration and interoperability.[153] Research is targeting intelligence collection and analysis, logistics, cyber operations, information operations, and semiautonomous and autonomous vehicles.[152] AI technologies enable coordination of sensors and effectors, threat detection and identification, marking of enemy positions, target acquisition, coordination and deconfliction of distributed Joint Fires between networked combat vehicles involving manned and unmanned teams.[153] AI was incorporated into military operations in Iraq and Syria.[152]
In November 2023, US Vice President Kamala Harris disclosed a declaration signed by 31 nations to set guardrails for the military use of AI. The commitments include using legal reviews to ensure the compliance of military AI with international laws, and being cautious and transparent in the development of this technology.[154]
Generative AI
In the early 2020s, generative AI gained widespread prominence. GenAI is AI capable of generating text, images, videos, or other data using generative models,[155][156] often in response to prompts.[157][158]
In March 2023, 58% of U.S. adults had heard about ChatGPT and 14% had tried it.[159] The increasing realism and ease-of-use of AI-based text-to-image generators such as Midjourney, DALL-E, and Stable Diffusion sparked a trend of viral AI-generated photos. Widespread attention was gained by a fake photo of Pope Francis wearing a white puffer coat, the fictional arrest of Donald Trump, and a hoax of an attack on the Pentagon, as well as the usage in professional creative arts.[160][161]
Agents
Artificial intelligence (AI) agents are software entities designed to perceive their environment, make decisions, and take actions autonomously to achieve specific goals. These agents can interact with users, their environment, or other agents. AI agents are used in various applications, including virtual assistants, chatbots, autonomous vehicles, game-playing systems, and industrial robotics. AI agents operate within the constraints of their programming, available computational resources, and hardware limitations. This means they are restricted to performing tasks within their defined scope and have finite memory and processing capabilities. In real-world applications, AI agents often face time constraints for decision-making and action execution. Many AI agents incorporate learning algorithms, enabling them to improve their performance over time through experience or training. Using machine learning, AI agents can adapt to new situations and optimise their behaviour for their designated tasks.[162][163][164]
Other industry-specific tasks
There are also thousands of successful AI applications used to solve specific problems for specific industries or institutions. In a 2017 survey, one in five companies reported having incorporated "AI" in some offerings or processes.[165] A few examples are energy storage, medical diagnosis, military logistics, applications that predict the result of judicial decisions, foreign policy, or supply chain management.
AI applications for evacuation and disaster management are growing. AI has been used to investigate if and how people evacuated in large-scale and small-scale evacuations, using historical data from GPS, videos or social media. Further, AI can provide real-time information on evacuation conditions.[166][167][168]
In agriculture, AI has helped farmers identify areas that need irrigation, fertilization, pesticide treatments or increasing yield. Agronomists use AI to conduct research and development. AI has been used to predict the ripening time for crops such as tomatoes, monitor soil moisture, operate agricultural robots, conduct predictive analytics, classify livestock pig call emotions, automate greenhouses, detect diseases and pests, and save water.
Artificial intelligence is used in astronomy to analyze increasing amounts of available data and applications, mainly for "classification, regression, clustering, forecasting, generation, discovery, and the development of new scientific insights" for example for discovering exoplanets, forecasting solar activity, and distinguishing between signals and instrumental effects in gravitational wave astronomy. It could also be used for activities in space such as space exploration, including analysis of data from space missions, real-time science decisions of spacecraft, space debris avoidance, and more autonomous operation.
Ethics
AI has potential benefits and potential risks. AI may be able to advance science and find solutions for serious problems: Demis Hassabis of DeepMind hopes to "solve intelligence, and then use that to solve everything else".[169] However, as the use of AI has become widespread, several unintended consequences and risks have been identified.[170] Systems in production can sometimes fail to factor ethics and bias into their AI training processes, especially when the AI algorithms are inherently unexplainable in deep learning.[171]
Risks and harm
Privacy and copyright
Machine learning algorithms require large amounts of data. The techniques used to acquire this data have raised concerns about privacy, surveillance and copyright.
AI-powered devices and services, such as virtual assistants and IoT products, continuously collect personal information, raising concerns about intrusive data gathering and unauthorized access by third parties. The loss of privacy is further exacerbated by AI's ability to process and combine vast amounts of data, potentially leading to a surveillance society where individual activities are constantly monitored and analyzed without adequate safeguards or transparency.
Sensitive user data collected may include online activity records, geolocation data, video or audio.[172] For example, in order to build speech recognition algorithms, Amazon has recorded millions of private conversations and allowed temporary workers to listen to and transcribe some of them.[173] Opinions about this widespread surveillance range from those who see it as a necessary evil to those for whom it is clearly unethical and a violation of the right to privacy.[174]
AI developers argue that this is the only way to deliver valuable applications, and have developed several techniques that attempt to preserve privacy while still obtaining the data, such as data aggregation, de-identification and differential privacy.[175] Since 2016, some privacy experts, such as Cynthia Dwork, have begun to view privacy in terms of fairness. Brian Christian wrote that experts have pivoted "from the question of 'what they know' to the question of 'what they're doing with it'."[176]
Generative AI is often trained on unlicensed copyrighted works, including in domains such as images or computer code; the output is then used under the rationale of "fair use". Experts disagree about how well and under what circumstances this rationale will hold up in courts of law; relevant factors may include "the purpose and character of the use of the copyrighted work" and "the effect upon the potential market for the copyrighted work".[177][178] Website owners who do not wish to have their content scraped can indicate it in a "robots.txt" file.[179] In 2023, leading authors (including John Grisham and Jonathan Franzen) sued AI companies for using their work to train generative AI.[180][181] Another discussed approach is to envision a separate sui generis system of protection for creations generated by AI to ensure fair attribution and compensation for human authors.[182]
Dominance by tech giants
The commercial AI scene is dominated by Big Tech companies such as Alphabet Inc., Amazon, Apple Inc., Meta Platforms, and Microsoft.[183][184][185] Some of these players already own the vast majority of existing cloud infrastructure and computing power from data centers, allowing them to entrench further in the marketplace.[186][187]
Substantial power needs and other environmental impacts
In January 2024, the International Energy Agency (IEA) released Electricity 2024, Analysis and Forecast to 2026, forecasting electric power use.[188] This is the first IEA report to make projections for data centers and power consumption for artificial intelligence and cryptocurrency. The report states that power demand for these uses might double by 2026, with additional electric power usage equal to electricity used by the whole Japanese nation.[189]
Prodigious power consumption by AI is responsible for the growth of fossil fuels use, and might delay closings of obsolete, carbon-emitting coal energy facilities. There is a feverish rise in the construction of data centers throughout the US, making large technology firms (e.g., Microsoft, Meta, Google, Amazon) into voracious consumers of electric power. Projected electric consumption is so immense that there is concern that it will be fulfilled no matter the source. A ChatGPT search involves the use of 10 times the electrical energy as a Google search. The large firms are in haste to find power sources – from nuclear energy to geothermal to fusion. The tech firms argue that – in the long view – AI will be eventually kinder to the environment, but they need the energy now. AI makes the power grid more efficient and "intelligent", will assist in the growth of nuclear power, and track overall carbon emissions, according to technology firms.[190]
A 2024 Goldman Sachs Research Paper, AI Data Centers and the Coming US Power Demand Surge, found "US power demand (is) likely to experience growth not seen in a generation…." and forecasts that, by 2030, US data centers will consume 8% of US power, as opposed to 3% in 2022, presaging growth for the electrical power generation industry by a variety of means.[191] Data centers' need for more and more electrical power is such that they might max out the electrical grid. The Big Tech companies counter that AI can be used to maximize the utilization of the grid by all.[192]
In 2024, the Wall Street Journal reported that big AI companies have begun negotiations with the US nuclear power providers to provide electricity to the data centers. In March 2024, Amazon purchased a Pennsylvania nuclear-powered data center for $650 million (US).[193]
Misinformation
YouTube, Facebook and others use recommender systems to guide users to more content. These AI programs were given the goal of maximizing user engagement (that is, the only goal was to keep people watching). The AI learned that users tended to choose misinformation, conspiracy theories, and extreme partisan content, and, to keep them watching, the AI recommended more of it. Users also tended to watch more content on the same subject, so the AI led people into filter bubbles where they received multiple versions of the same misinformation.[194] This convinced many users that the misinformation was true, and ultimately undermined trust in institutions, the media and the government.[195] The AI program had correctly learned to maximize its goal, but the result was harmful to society. After the U.S. election in 2016, major technology companies took steps to mitigate the problem [citation needed].
In 2022, generative AI began to create images, audio, video and text that are indistinguishable from real photographs, recordings, films, or human writing. It is possible for bad actors to use this technology to create massive amounts of misinformation or propaganda.[196] AI pioneer Geoffrey Hinton expressed concern about AI enabling "authoritarian leaders to manipulate their electorates" on a large scale, among other risks.[197]
Algorithmic bias and fairness
Machine learning applications will be biased[k] if they learn from biased data.[199] The developers may not be aware that the bias exists.[200] Bias can be introduced by the way training data is selected and by the way a model is deployed.[201][199] If a biased algorithm is used to make decisions that can seriously harm people (as it can in medicine, finance, recruitment, housing or policing) then the algorithm may cause discrimination.[202] The field of fairness studies how to prevent harms from algorithmic biases.
On June 28, 2015, Google Photos's new image labeling feature mistakenly identified Jacky Alcine and a friend as "gorillas" because they were black. The system was trained on a dataset that contained very few images of black people,[203] a problem called "sample size disparity".[204] Google "fixed" this problem by preventing the system from labelling anything as a "gorilla". Eight years later, in 2023, Google Photos still could not identify a gorilla, and neither could similar products from Apple, Facebook, Microsoft and Amazon.[205]
COMPAS is a commercial program widely used by U.S. courts to assess the likelihood of a defendant becoming a recidivist. In 2016, Julia Angwin at ProPublica discovered that COMPAS exhibited racial bias, despite the fact that the program was not told the races of the defendants. Although the error rate for both whites and blacks was calibrated equal at exactly 61%, the errors for each race were different—the system consistently overestimated the chance that a black person would re-offend and would underestimate the chance that a white person would not re-offend.[206] In 2017, several researchers[l] showed that it was mathematically impossible for COMPAS to accommodate all possible measures of fairness when the base rates of re-offense were different for whites and blacks in the data.[208]
A program can make biased decisions even if the data does not explicitly mention a problematic feature (such as "race" or "gender"). The feature will correlate with other features (like "address", "shopping history" or "first name"), and the program will make the same decisions based on these features as it would on "race" or "gender".[209] Moritz Hardt said "the most robust fact in this research area is that fairness through blindness doesn't work."[210]
Criticism of COMPAS highlighted that machine learning models are designed to make "predictions" that are only valid if we assume that the future will resemble the past. If they are trained on data that includes the results of racist decisions in the past, machine learning models must predict that racist decisions will be made in the future. If an application then uses these predictions as recommendations, some of these "recommendations" will likely be racist.[211] Thus, machine learning is not well suited to help make decisions in areas where there is hope that the future will be better than the past. It is descriptive rather than prescriptive.[m]
Bias and unfairness may go undetected because the developers are overwhelmingly white and male: among AI engineers, about 4% are black and 20% are women.[204]
There are various conflicting definitions and mathematical models of fairness. These notions depend on ethical assumptions, and are influenced by beliefs about society. One broad category is distributive fairness, which focuses on the outcomes, often identifying groups and seeking to compensate for statistical disparities. Representational fairness tries to ensure that AI systems do not reinforce negative stereotypes or render certain groups invisible. Procedural fairness focuses on the decision process rather than the outcome. The most relevant notions of fairness may depend on the context, notably the type of AI application and the stakeholders. The subjectivity in the notions of bias and fairness makes it difficult for companies to operationalize them. Having access to sensitive attributes such as race or gender is also considered by many AI ethicists to be necessary in order to compensate for biases, but it may conflict with anti-discrimination laws.[198]
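Some of these notions can nevertheless be operationalized as simple statistics. A minimal sketch (function names are illustrative; inputs are assumed to be NumPy arrays of binary predictions, true labels, and group membership) of two common distributive-fairness metrics:

```python
import numpy as np

def demographic_parity_gap(pred, group):
    # Difference in positive-prediction rates between the two groups.
    return abs(pred[group == 0].mean() - pred[group == 1].mean())

def equal_opportunity_gap(pred, label, group):
    # Difference in true-positive rates: among people whose true label
    # is positive, how often does each group get a positive prediction?
    tpr = [pred[(group == g) & (label == 1)].mean() for g in (0, 1)]
    return abs(tpr[0] - tpr[1])
```

As the COMPAS analyses showed, such metrics generally cannot all be driven to zero at once when base rates differ, so choosing among them is itself an ethical decision.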
At its 2022 Conference on Fairness, Accountability, and Transparency (ACM FAccT 2022), held in Seoul, South Korea, the Association for Computing Machinery presented and published findings recommending that, until AI and robotics systems are demonstrated to be free of bias mistakes, they are unsafe, and that the use of self-learning neural networks trained on vast, unregulated sources of flawed internet data should be curtailed.[213]
Lack of transparency
Many AI systems are so complex that their designers cannot explain how they reach their decisions.[214] This is particularly true of deep neural networks, in which there are a large number of non-linear relationships between inputs and outputs. Some popular explainability techniques nevertheless exist.[215]
It is impossible to be certain that a program is operating correctly if no one knows how exactly it works. There have been many cases where a machine learning program passed rigorous tests, but nevertheless learned something different than what the programmers intended. For example, a system that could identify skin diseases better than medical professionals was found to actually have a strong tendency to classify images with a ruler as "cancerous", because pictures of malignancies typically include a ruler to show the scale.[216] Another machine learning system designed to help effectively allocate medical resources was found to classify patients with asthma as being at "low risk" of dying from pneumonia. Having asthma is actually a severe risk factor, but since the patients having asthma would usually get much more medical care, they were relatively unlikely to die according to the training data. The correlation between asthma and low risk of dying from pneumonia was real, but misleading.[217]
People who have been harmed by an algorithm's decision have a right to an explanation.[218] Doctors, for example, are expected to clearly and completely explain to their colleagues the reasoning behind any decision they make. Early drafts of the European Union's General Data Protection Regulation in 2016 included an explicit statement that this right exists.[n] Industry experts noted that this is an unsolved problem with no solution in sight. Regulators argued that nevertheless the harm is real: if the problem has no solution, the tools should not be used.[219]
DARPA established the XAI ("Explainable Artificial Intelligence") program in 2014 to try to solve these problems.[220]
Several approaches aim to address the transparency problem. SHAP makes it possible to visualise the contribution of each feature to the output.[221] LIME can locally approximate a model's outputs with a simpler, interpretable model.[222] Multitask learning provides a large number of outputs in addition to the target classification. These other outputs can help developers deduce what the network has learned.[223] Deconvolution, DeepDream and other generative methods can allow developers to see what different layers of a deep network for computer vision have learned, and produce output that can suggest what the network is learning.[224] For generative pre-trained transformers, Anthropic developed a technique based on dictionary learning that associates patterns of neuron activations with human-understandable concepts.[225]
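As a sketch of how such tools are typically used (assuming the third-party shap and scikit-learn packages; the dataset is only an example), per-feature SHAP attributions for a tree model can be computed like this:

```python
import shap  # third-party explainability library
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

# TreeExplainer computes Shapley-value attributions for tree ensembles:
# for each individual prediction, how much each input feature pushed the
# output up or down relative to a baseline expectation.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(data.data[:5])  # attributions for 5 samples
```

Each attribution assigns a signed contribution to every input feature, which can then be visualised to show why a particular prediction was made.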
Bad actors and weaponized AI
Artificial intelligence provides a number of tools that are useful to bad actors, such as authoritarian governments, terrorists, criminals or rogue states.
A lethal autonomous weapon is a machine that locates, selects and engages human targets without human supervision.[o] Widely available AI tools can be used by bad actors to develop inexpensive autonomous weapons and, if produced at scale, they are potentially weapons of mass destruction.[227] Even when used in conventional warfare, it is unlikely that they will be able to reliably choose targets, and they could potentially kill innocent people.[227] In 2014, 30 nations (including China) supported a ban on autonomous weapons under the United Nations' Convention on Certain Conventional Weapons; however, the United States and others disagreed.[228] By 2015, over fifty countries were reported to be researching battlefield robots.[229]
AI tools make it easier for authoritarian governments to efficiently control their citizens in several ways. Face and voice recognition allow widespread surveillance. Machine learning, operating on this data, can classify potential enemies of the state and prevent them from hiding. Recommendation systems can precisely target propaganda and misinformation for maximum effect. Deepfakes and generative AI aid in producing misinformation. Advanced AI can make authoritarian centralized decision making more competitive than liberal and decentralized systems such as markets. It lowers the cost and difficulty of digital warfare and advanced spyware.[230] All these technologies have been available since 2020 or earlier—AI facial recognition systems are already being used for mass surveillance in China.[231][232]
There are many other ways in which AI is expected to help bad actors, some of which cannot be foreseen. For example, machine-learning AI is able to design tens of thousands of toxic molecules in a matter of hours.[233]
Technological unemployment
Economists have frequently highlighted the risks of redundancies from AI, and speculated about unemployment if there is no adequate social policy for full employment.[234]
In the past, technology has tended to increase rather than reduce total employment, but economists acknowledge that "we're in uncharted territory" with AI.[235] A survey of economists showed disagreement about whether the increasing use of robots and AI will cause a substantial increase in long-term unemployment, but they generally agree that it could be a net benefit if productivity gains are redistributed.[236] Risk estimates vary; for example, in the 2010s, Michael Osborne and Carl Benedikt Frey estimated 47% of U.S. jobs are at "high risk" of potential automation, while an OECD report classified only 9% of U.S. jobs as "high risk".[p][238] The methodology of speculating about future employment levels has been criticised as lacking evidential foundation, and for implying that technology, rather than social policy, creates unemployment, as opposed to redundancies.[234] In April 2023, it was reported that 70% of the jobs for Chinese video game illustrators had been eliminated by generative artificial intelligence.[239][240]
Unlike previous waves of automation, many middle-class jobs may be eliminated by artificial intelligence; The Economist stated in 2015 that "the worry that AI could do to white-collar jobs what steam power did to blue-collar ones during the Industrial Revolution" is "worth taking seriously".[241] Jobs at extreme risk range from paralegals to fast food cooks, while job demand is likely to increase for care-related professions ranging from personal healthcare to the clergy.[242]
From the early days of the development of artificial intelligence, there have been arguments, for example, those put forward by Joseph Weizenbaum, about whether tasks that can be done by computers actually should be done by them, given the difference between computers and humans, and between quantitative calculation and qualitative, value-based judgement.[243]
Existential risk
It has been argued AI will become so powerful that humanity may irreversibly lose control of it. This could, as physicist Stephen Hawking stated, "spell the end of the human race".[244] This scenario has been common in science fiction, when a computer or robot suddenly develops a human-like "self-awareness" (or "sentience" or "consciousness") and becomes a malevolent character.[q] These sci-fi scenarios are misleading in several ways.
First, AI does not require human-like "sentience" to be an existential risk. Modern AI programs are given specific goals and use learning and intelligence to achieve them. Philosopher Nick Bostrom argued that if one gives almost any goal to a sufficiently powerful AI, it may choose to destroy humanity to achieve it (he used the example of a paperclip factory manager).[246] Stuart Russell gives the example of a household robot that tries to find a way to kill its owner to prevent it from being unplugged, reasoning that "you can't fetch the coffee if you're dead."[247] In order to be safe for humanity, a superintelligence would have to be genuinely aligned with humanity's morality and values so that it is "fundamentally on our side".[248]
Second, Yuval Noah Harari argues that AI does not require a robot body or physical control to pose an existential risk. The essential parts of civilization are not physical. Things like ideologies, law, government, money and the economy are made of language; they exist because there are stories that billions of people believe. The current prevalence of misinformation suggests that an AI could use language to convince people to believe anything, even to take actions that are destructive.[249]
The opinions amongst experts and industry insiders are mixed, with sizable fractions both concerned and unconcerned by risk from eventual superintelligent AI.[250] Personalities such as Stephen Hawking, Bill Gates, and Elon Musk,[251] as well as AI pioneers such as Yoshua Bengio, Stuart Russell, Demis Hassabis, and Sam Altman, have expressed concerns about existential risk from AI.
In May 2023, Geoffrey Hinton announced his resignation from Google in order to be able to "freely speak out about the risks of AI" without "considering how this impacts Google."[252] He notably mentioned risks of an AI takeover,[253] and stressed that in order to avoid the worst outcomes, establishing safety guidelines will require cooperation among those competing in use of AI.[254]
In 2023, many leading AI experts issued the joint statement that "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war".[255]
Other researchers, however, spoke in favor of a less dystopian view. AI pioneer Juergen Schmidhuber did not sign the joint statement, emphasising that in 95% of all cases, AI research is about making "human lives longer and healthier and easier."[256] While the tools that are now being used to improve lives can also be used by bad actors, "they can also be used against the bad actors."[257][258] Andrew Ng also argued that "it's a mistake to fall for the doomsday hype on AI—and that regulators who do will only benefit vested interests."[259] Yann LeCun "scoffs at his peers' dystopian scenarios of supercharged misinformation and even, eventually, human extinction."[260] In the early 2010s, experts argued that the risks are too distant in the future to warrant research or that humans will be valuable from the perspective of a superintelligent machine.[261] However, after 2016, the study of current and future risks and possible solutions became a serious area of research.[262]
Ethical machines and alignment
"Friendly AI" refers to machines that have been designed from the beginning to minimize risks and to make choices that benefit humans. Eliezer Yudkowsky, who coined the term, argues that developing friendly AI should be a higher research priority: it may require a large investment and it must be completed before AI becomes an existential risk.[263]
Machines with intelligence have the potential to use their intelligence to make ethical decisions. The field of machine ethics provides machines with ethical principles and procedures for resolving ethical dilemmas.[264] The field of machine ethics is also called computational morality,[264] and was founded at an AAAI symposium in 2005.[265]
Other approaches include Wendell Wallach's "artificial moral agents"[266] and Stuart J. Russell's three principles for developing provably beneficial machines.[267]
Open source
Active organizations in the AI open-source community include Hugging Face,[268] Google,[269] EleutherAI and Meta.[270] Various AI models, such as Llama 2, Mistral or Stable Diffusion, have been made open-weight,[271][272] meaning that their architecture and trained parameters (the "weights") are publicly available. Open-weight models can be freely fine-tuned, which allows companies to specialize them with their own data and for their own use-case.[273] Open-weight models are useful for research and innovation but can also be misused. Since they can be fine-tuned, any built-in security measure, such as objecting to harmful requests, can be trained away until it becomes ineffective. Some researchers warn that future AI models may develop dangerous capabilities (such as the potential to drastically facilitate bioterrorism) and that once released on the Internet, they can't be deleted everywhere if needed. They recommend pre-release audits and cost-benefit analyses.[274]
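As an illustration of what "open-weight" means in practice (a sketch assuming the Hugging Face transformers library; the model identifier is one published open-weight example, and downloading it may require accepting a licence), anyone can fetch the trained parameters and run, or further fine-tune, the model locally:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "mistralai/Mistral-7B-v0.1"  # an open-weight model id (illustrative)

# Downloads the architecture plus trained parameters (the "weights").
# Because the weights are local, they can also be fine-tuned, which is
# how built-in safety behavior can be trained away.
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

inputs = tokenizer("Open-weight models are", return_tensors="pt")
output = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output[0]))
```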
Frameworks
Artificial intelligence projects can have their ethical permissibility tested while designing, developing, and implementing an AI system. An AI framework such as the Care and Act Framework, which contains the SUM values and was developed by the Alan Turing Institute, tests projects in four main areas:[275][276]
- Respect the dignity of individual people
- Connect with other people sincerely, openly, and inclusively
- Care for the wellbeing of everyone
- Protect social values, justice, and the public interest
Other developments in ethical frameworks include those decided upon during the Asilomar Conference, the Montreal Declaration for Responsible AI, and the IEEE's Ethics of Autonomous Systems initiative, among others;[277] however, these principles have drawn criticism, especially with regard to the people chosen to contribute to these frameworks.[278]
Promotion of the wellbeing of the people and communities that these technologies affect requires consideration of the social and ethical implications at all stages of AI system design, development and implementation, and collaboration between job roles such as data scientists, product managers, data engineers, domain experts, and delivery managers.[279]
In 2024, the UK AI Safety Institute released 'Inspect', a testing toolset for AI safety evaluations. It is available under an MIT open-source licence, is freely downloadable on GitHub, and can be improved with third-party packages. It can be used to evaluate AI models in a range of areas, including core knowledge, ability to reason, and autonomous capabilities.[280]
Regulation
The regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating AI; it is therefore related to the broader regulation of algorithms.[281] The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally.[282] According to AI Index at Stanford, the annual number of AI-related laws passed in the 127 survey countries jumped from one passed in 2016 to 37 passed in 2022 alone.[283][284] Between 2016 and 2020, more than 30 countries adopted dedicated strategies for AI.[285] Most EU member states had released national AI strategies, as had Canada, China, India, Japan, Mauritius, the Russian Federation, Saudi Arabia, United Arab Emirates, U.S., and Vietnam. Others were in the process of elaborating their own AI strategy, including Bangladesh, Malaysia and Tunisia.[285] The Global Partnership on Artificial Intelligence was launched in June 2020, stating a need for AI to be developed in accordance with human rights and democratic values, to ensure public confidence and trust in the technology.[285] Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher published a joint statement in November 2021 calling for a government commission to regulate AI.[286] In 2023, OpenAI leaders published recommendations for the governance of superintelligence, which they believe may happen in less than 10 years.[287] In 2023, the United Nations also launched an advisory body to provide recommendations on AI governance; the body comprises technology company executives, government officials and academics.[288] In 2024, the Council of Europe created the first international legally binding treaty on AI, called the "Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law". It was adopted by the European Union, the United States, the United Kingdom, and other signatories.[289]
In a 2022 Ipsos survey, attitudes towards AI varied greatly by country; 78% of Chinese citizens, but only 35% of Americans, agreed that "products and services using AI have more benefits than drawbacks".[283] A 2023 Reuters/Ipsos poll found that 61% of Americans agree, and 22% disagree, that AI poses risks to humanity.[290] In a 2023 Fox News poll, 35% of Americans thought it "very important", and an additional 41% thought it "somewhat important", for the federal government to regulate AI, versus 13% responding "not very important" and 8% responding "not at all important".[291][292]
In November 2023, the first global AI Safety Summit was held in Bletchley Park in the UK to discuss the near and far term risks of AI and the possibility of mandatory and voluntary regulatory frameworks.[293] 28 countries including the United States, China, and the European Union issued a declaration at the start of the summit, calling for international co-operation to manage the challenges and risks of artificial intelligence.[294][295] In May 2024 at the AI Seoul Summit, 16 global AI tech companies agreed to safety commitments on the development of AI.[296][297]
History
The study of mechanical or "formal" reasoning began with philosophers and mathematicians in antiquity. The study of logic led directly to Alan Turing's theory of computation, which suggested that a machine, by shuffling symbols as simple as "0" and "1", could simulate any conceivable form of mathematical reasoning.[298][299] This, along with concurrent discoveries in cybernetics, information theory and neurobiology, led researchers to consider the possibility of building an "electronic brain".[r] They developed several areas of research that would become part of AI,[301] such as McCulloch and Pitts' design for "artificial neurons" in 1943,[115] and Turing's influential 1950 paper 'Computing Machinery and Intelligence', which introduced the Turing test and showed that "machine intelligence" was plausible.[302][299]
The field of AI research was founded at a workshop at Dartmouth College in 1956.[s][6] The attendees became the leaders of AI research in the 1960s.[t] They and their students produced programs that the press described as "astonishing":[u] computers were learning checkers strategies, solving word problems in algebra, proving logical theorems and speaking English.[v][7] Artificial intelligence laboratories were set up at a number of British and U.S. universities in the latter 1950s and early 1960s.[299]
Researchers in the 1960s and the 1970s were convinced that their methods would eventually succeed in creating a machine with general intelligence and considered this the goal of their field.[306] In 1965 Herbert Simon predicted, "machines will be capable, within twenty years, of doing any work a man can do".[307] In 1967 Marvin Minsky agreed, writing that "within a generation ... the problem of creating 'artificial intelligence' will substantially be solved".[308] They had, however, underestimated the difficulty of the problem.[w] In 1974, both the U.S. and British governments cut off exploratory research in response to the criticism of Sir James Lighthill[310] and ongoing pressure from the U.S. Congress to fund more productive projects.[311] Minsky's and Papert's book Perceptrons was understood as proving that artificial neural networks would never be useful for solving real-world tasks, thus discrediting the approach altogether.[312] The "AI winter", a period when obtaining funding for AI projects was difficult, followed.[9]
In the early 1980s, AI research was revived by the commercial success of expert systems,[313] a form of AI program that simulated the knowledge and analytical skills of human experts. By 1985, the market for AI had reached over a billion dollars. At the same time, Japan's fifth generation computer project inspired the U.S. and British governments to restore funding for academic research.[8] However, beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, longer-lasting winter began.[10]
Up to this point, most of AI's funding had gone to projects that used high-level symbols to represent mental objects like plans, goals, beliefs, and known facts. In the 1980s, some researchers began to doubt that this approach would be able to imitate all the processes of human cognition, especially perception, robotics, learning and pattern recognition,[314] and began to look into "sub-symbolic" approaches.[315] Rodney Brooks rejected "representation" in general and focussed directly on engineering machines that move and survive.[x] Judea Pearl, Lotfi Zadeh and others developed methods that handled incomplete and uncertain information by making reasonable guesses rather than precise logic.[86][320] But the most important development was the revival of "connectionism", including neural network research, by Geoffrey Hinton and others.[321] In 1990, Yann LeCun successfully showed that convolutional neural networks can recognize handwritten digits, the first of many successful applications of neural networks.[322]
AI gradually restored its reputation in the late 1990s and early 21st century by exploiting formal mathematical methods and by finding specific solutions to specific problems. This "narrow" and "formal" focus allowed researchers to produce verifiable results and collaborate with other fields (such as statistics, economics and mathematics).[323] By 2000, solutions developed by AI researchers were being widely used, although in the 1990s they were rarely described as "artificial intelligence".[324]
However, several academic researchers became concerned that AI was no longer pursuing its original goal of creating versatile, fully intelligent machines. Beginning around 2002, they founded the subfield of artificial general intelligence (or "AGI"), which had several well-funded institutions by the 2010s.[4]
Deep learning began to dominate industry benchmarks in 2012 and was adopted throughout the field.[11] For many specific tasks, other methods were abandoned.[y] Deep learning's success was based on both hardware improvements (faster computers,[326] graphics processing units, cloud computing[327]) and access to large amounts of data[328] (including curated datasets,[327] such as ImageNet). Deep learning's success led to an enormous increase in interest and funding in AI.[z] The amount of machine learning research (measured by total publications) increased by 50% in the years 2015–2019.[285]
In 2016, issues of fairness and the misuse of technology were catapulted into center stage at machine learning conferences, publications vastly increased, funding became available, and many researchers re-focussed their careers on these issues. The alignment problem became a serious field of academic study.[262]
In the late 2010s and early 2020s, AGI companies began to deliver programs that created enormous interest. In 2016, AlphaGo, developed by DeepMind, beat world champion Go player Lee Sedol; its successor AlphaGo Zero was taught only the rules of the game and developed strategy by itself. GPT-3 is a large language model that was released in 2020 by OpenAI and is capable of generating high-quality human-like text.[329] These programs, and others, inspired an aggressive AI boom, where large companies began investing billions in AI research. According to AI Impacts, about $50 billion annually was invested in "AI" around 2022 in the U.S. alone and about 20% of the new U.S. Computer Science PhD graduates have specialized in "AI".[330] About 800,000 "AI"-related U.S. job openings existed in 2022.[331]
Philosophy
Defining artificial intelligence
Alan Turing wrote in 1950 "I propose to consider the question 'can machines think'?"[332] He advised changing the question from whether a machine "thinks", to "whether or not it is possible for machinery to show intelligent behaviour".[332] He devised the Turing test, which measures the ability of a machine to simulate human conversation.[302] Since we can only observe the behavior of the machine, it does not matter if it is "actually" thinking or literally has a "mind". Turing notes that we cannot determine these things about other people but "it is usual to have a polite convention that everyone thinks."[333]
Russell and Norvig agree with Turing that intelligence must be defined in terms of external behavior, not internal structure.[1] However, they are critical that the test requires the machine to imitate humans. "Aeronautical engineering texts," they wrote, "do not define the goal of their field as making 'machines that fly so exactly like pigeons that they can fool other pigeons.'"[335] AI founder John McCarthy agreed, writing that "Artificial intelligence is not, by definition, simulation of human intelligence".[336]
McCarthy defines intelligence as "the computational part of the ability to achieve goals in the world".[337] Another AI founder, Marvin Minsky, similarly describes it as "the ability to solve hard problems".[338] The leading AI textbook defines it as the study of agents that perceive their environment and take actions that maximize their chances of achieving defined goals.[1] These definitions view intelligence in terms of well-defined problems with well-defined solutions, where both the difficulty of the problem and the performance of the program are direct measures of the "intelligence" of the machine—and no other philosophical discussion is required, or may even be impossible.
Another definition has been adopted by Google,[339] a major practitioner in the field of AI. This definition stipulates the ability of systems to synthesize information as the manifestation of intelligence, similar to the way it is defined in biological intelligence.
Some authors have suggested that, in practice, the definition of AI is vague and difficult to pin down, with contention as to whether classical algorithms should be categorised as AI.[340] During the early 2020s AI boom, many companies used the term as a marketing buzzword, often even if they did "not actually use AI in a material way".[341]
Evaluating approaches to AI
No established unifying theory or paradigm has guided AI research for most of its history.[aa] The unprecedented success of statistical machine learning in the 2010s eclipsed all other approaches (so much so that some sources, especially in the business world, use the term "artificial intelligence" to mean "machine learning with neural networks"). This approach is mostly sub-symbolic, soft and narrow. Critics argue that these questions may have to be revisited by future generations of AI researchers.
Symbolic AI and its limits
Symbolic AI (or "GOFAI")[343] simulated the high-level conscious reasoning that people use when they solve puzzles, express legal reasoning and do mathematics. Such systems were highly successful at "intelligent" tasks such as algebra or IQ tests. In the 1960s, Newell and Simon proposed the physical symbol systems hypothesis: "A physical symbol system has the necessary and sufficient means of general intelligent action."[344]
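A physical symbol system in miniature might look like the following sketch (a toy forward-chaining rule engine; the facts and rules are invented for illustration). On the Newell and Simon hypothesis, general intelligence is symbol manipulation of exactly this kind, scaled up:

```python
# Toy forward-chaining inference in the GOFAI style: facts and rules are
# explicit symbols, and "reasoning" is repeated rule application until
# nothing new can be derived.
facts = {"socrates_is_human"}
rules = [
    ({"socrates_is_human"}, "socrates_is_mortal"),
    ({"socrates_is_mortal"}, "socrates_will_die"),
]

changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)  # fire the rule
            changed = True

print(facts)  # all three symbols have been derived
```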
However, the symbolic approach failed on many tasks that humans solve easily, such as learning, recognizing an object or commonsense reasoning. Moravec's paradox is the discovery that high-level "intelligent" tasks were easy for AI, but low level "instinctive" tasks were extremely difficult.[345] Philosopher Hubert Dreyfus had argued since the 1960s that human expertise depends on unconscious instinct rather than conscious symbol manipulation, and on having a "feel" for the situation, rather than explicit symbolic knowledge.[346] Although his arguments had been ridiculed and ignored when they were first presented, eventually, AI research came to agree with him.[ab][16]
The issue is not resolved: sub-symbolic reasoning can make many of the same inscrutable mistakes that human intuition does, such as algorithmic bias. Critics such as Noam Chomsky argue that continuing research into symbolic AI will still be necessary to attain general intelligence,[348][349] in part because sub-symbolic AI is a move away from explainable AI: it can be difficult or impossible to understand why a modern statistical AI program made a particular decision. The emerging field of neuro-symbolic artificial intelligence attempts to bridge the two approaches.
Neat vs. scruffy
"Neats" hope that intelligent behavior is described using simple, elegant principles (such as logic, optimization, or neural networks). "Scruffies" expect that it necessarily requires solving a large number of unrelated problems. Neats defend their programs with theoretical rigor, scruffies rely mainly on incremental testing to see if they work. This issue was actively discussed in the 1970s and 1980s,[350] but eventually was seen as irrelevant. Modern AI has elements of both.
Soft vs. hard computing
Finding a provably correct or optimal solution is intractable for many important problems.[15] Soft computing is a set of techniques, including genetic algorithms, fuzzy logic and neural networks, that are tolerant of imprecision, uncertainty, partial truth and approximation. Soft computing was introduced in the late 1980s and most successful AI programs in the 21st century are examples of soft computing with neural networks.
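As a sketch of one soft-computing technique (a toy genetic algorithm; the fitness function and parameters are arbitrary), an approximate solution is evolved by mutation and selection rather than derived exactly:

```python
import random

def fitness(x):
    # Arbitrary objective with its maximum at x = 3.
    return -(x - 3) ** 2

population = [random.uniform(-10, 10) for _ in range(20)]
for _ in range(100):
    # Keep the fitter half, refill with mutated copies of the survivors.
    population.sort(key=fitness, reverse=True)
    survivors = population[:10]
    population = survivors + [x + random.gauss(0, 0.1) for x in survivors]

print(max(population, key=fitness))  # close to 3, with no proof of optimality
```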
Narrow vs. general AI
AI researchers are divided as to whether to pursue the goals of artificial general intelligence and superintelligence directly or to solve as many specific problems as possible (narrow AI) in hopes these solutions will lead indirectly to the field's long-term goals.[351][352] General intelligence is difficult to define and difficult to measure, and modern AI has had more verifiable successes by focusing on specific problems with specific solutions. The experimental sub-field of artificial general intelligence studies this area exclusively.
Machine consciousness, sentience, and mind
Whether a machine can have a mind, consciousness, and mental states in the same sense that human beings do is an open question in the philosophy of mind. This issue considers the internal experiences of the machine, rather than its external behavior. Mainstream AI research considers this issue irrelevant because it does not affect the goals of the field: to build machines that can solve problems using intelligence. Russell and Norvig add that "[t]he additional project of making a machine conscious in exactly the way humans are is not one that we are equipped to take on."[353] However, the question has become central to the philosophy of mind. It is also typically the central question at issue in artificial intelligence in fiction.
Consciousness
David Chalmers identified two problems in understanding the mind, which he named the "hard" and "easy" problems of consciousness.[354] The easy problem is understanding how the brain processes signals, makes plans and controls behavior. The hard problem is explaining how this feels or why it should feel like anything at all, assuming we are right in thinking that it truly does feel like something (Dennett's consciousness illusionism says this is an illusion). While human information processing is easy to explain, human subjective experience is difficult to explain. For example, it is easy to imagine a color-blind person who has learned to identify which objects in their field of view are red, but it is not clear what would be required for the person to know what red looks like.[355]
Computationalism and functionalism
Computationalism is the position in the philosophy of mind that the human mind is an information processing system and that thinking is a form of computing. Computationalism argues that the relationship between mind and body is similar or identical to the relationship between software and hardware and thus may be a solution to the mind–body problem. This philosophical position was inspired by the work of AI researchers and cognitive scientists in the 1960s and was originally proposed by philosophers Jerry Fodor and Hilary Putnam.[356]
Philosopher John Searle characterized this position as "strong AI": "The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds."[ac] Searle counters this assertion with his Chinese room argument, which attempts to show that, even if a machine perfectly simulates human behavior, there is still no reason to suppose it also has a mind.[360]
AI welfare and rights
It is difficult or impossible to reliably evaluate whether an advanced AI is sentient (has the ability to feel), and if so, to what degree.[361] But if there is a significant chance that a given machine can feel and suffer, then it may be entitled to certain rights or welfare protection measures, similarly to animals.[362][363] Sapience (a set of capacities related to high intelligence, such as discernment or self-awareness) may provide another moral basis for AI rights.[362] Robot rights are also sometimes proposed as a practical way to integrate autonomous agents into society.[364]
In 2017, the European Union considered granting "electronic personhood" to some of the most capable AI systems. Similarly to the legal status of companies, it would have conferred rights but also responsibilities.[365] Critics argued in 2018 that granting rights to AI systems would downplay the importance of human rights, and that legislation should focus on user needs rather than speculative futuristic scenarios. They also noted that robots lacked the autonomy to take part in society on their own.[366][367]
Progress in AI increased interest in the topic. Proponents of AI welfare and rights often argue that AI sentience, if it emerges, would be particularly easy to deny. They warn that this may be a moral blind spot analogous to slavery or factory farming, which could lead to large-scale suffering if sentient AI is created and carelessly exploited.[363][362]
Future
Superintelligence and the singularity
A superintelligence is a hypothetical agent that would possess intelligence far surpassing that of the brightest and most gifted human mind.[352]
If research into artificial general intelligence produced sufficiently intelligent software, it might be able to reprogram and improve itself. The improved software would be even better at improving itself, leading to what I. J. Good called an "intelligence explosion" and Vernor Vinge called a "singularity".[368]
However, technologies cannot improve exponentially indefinitely, and typically follow an S-shaped curve, slowing when they reach the physical limits of what the technology can do.[369]
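The contrast can be made concrete with a toy comparison (parameters are illustrative) of exponential growth against a logistic curve with a fixed ceiling:

```python
import math

K, r = 100.0, 0.5  # carrying capacity (ceiling) and growth rate
for t in range(0, 25, 6):
    exponential = math.exp(r * t)                    # grows without bound
    logistic = K / (1 + (K - 1) * math.exp(-r * t))  # S-shaped, saturates at K
    print(f"t={t:2d}  exp={exponential:10.1f}  logistic={logistic:6.1f}")
```

Both curves look similar at first, but the logistic one flattens as it approaches the ceiling K, which is the behavior typically observed when a technology nears its physical limits.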
Transhumanism
Robot designer Hans Moravec, cyberneticist Kevin Warwick, and inventor Ray Kurzweil have predicted that humans and machines will merge in the future into cyborgs that are more capable and powerful than either. This idea, called transhumanism, has roots in Aldous Huxley and Robert Ettinger.[370]
Edward Fredkin argues that "artificial intelligence is the next stage in evolution", an idea first proposed by Samuel Butler's "Darwin among the Machines" as far back as 1863, and expanded upon by George Dyson in his 1998 book Darwin Among the Machines: The Evolution of Global Intelligence.[371]
In fiction
Thought-capable artificial beings have appeared as storytelling devices since antiquity,[372] and have been a persistent theme in science fiction.[373]
A common trope in these works began with Mary Shelley's Frankenstein, where a human creation becomes a threat to its masters. This includes such works as Arthur C. Clarke's and Stanley Kubrick's 2001: A Space Odyssey (both 1968), with HAL 9000, the murderous computer in charge of the Discovery One spaceship, as well as The Terminator (1984) and The Matrix (1999). In contrast, the rare loyal robots such as Gort from The Day the Earth Stood Still (1951) and Bishop from Aliens (1986) are less prominent in popular culture.[374]
Isaac Asimov introduced the Three Laws of Robotics in many stories, most notably with the "Multivac" super-intelligent computer. Asimov's laws are often brought up during lay discussions of machine ethics;[375] while almost all artificial intelligence researchers are familiar with Asimov's laws through popular culture, they generally consider the laws useless for many reasons, one of which is their ambiguity.[376]
Several works use AI to force us to confront the fundamental question of what makes us human, showing us artificial beings that have the ability to feel, and thus to suffer. This appears in Karel Čapek's R.U.R., the films A.I. Artificial Intelligence and Ex Machina, as well as the novel Do Androids Dream of Electric Sheep?, by Philip K. Dick. Dick considers the idea that our understanding of human subjectivity is altered by technology created with artificial intelligence.[377]
See also
- AI Convention – International treaty
- Artificial intelligence detection software – Software to detect AI-generated content
- Behavior selection algorithm – Algorithm that selects actions for intelligent agents
- Business process automation – Automation of business processes
- Case-based reasoning – Process of solving new problems based on the solutions of similar past problems
- Computational intelligence – Ability of a computer to learn a specific task from data or experimental observation
- Digital immortality – Hypothetical concept of storing a personality in digital form
- Emergent algorithm – Algorithm exhibiting emergent behavior
- Female gendering of AI technologies – Gender biases in digital technology
- Glossary of artificial intelligence – List of definitions of terms and concepts commonly used in the study of artificial intelligence
- Intelligence amplification – Use of information technology to augment human intelligence
- Mind uploading – Hypothetical process of digitally emulating a brain
- Organoid intelligence – Use of brain cells and brain organoids for intelligent computing
- Robotic process automation – Form of business process automation technology
- Weak artificial intelligence – Form of artificial intelligence
- Wetware computer – Computer composed of organic material
- Hallucination (artificial intelligence) – Erroneous material generated by AI
Explanatory notes
- [a] This list of intelligent traits is based on the topics covered by the major AI textbooks, including: Russell & Norvig (2021), Luger & Stubblefield (2004), Poole, Mackworth & Goebel (1998) and Nilsson (1998)
- [b] This list of tools is based on the topics covered by the major AI textbooks, including: Russell & Norvig (2021), Luger & Stubblefield (2004), Poole, Mackworth & Goebel (1998) and Nilsson (1998)
- [c] It is among the reasons that expert systems proved to be inefficient for capturing knowledge.[30][31]
- [d] "Rational agent" is a general term used in economics, philosophy and theoretical artificial intelligence. It can refer to anything that directs its behavior to accomplish goals, such as a person, an animal, a corporation, a nation, or, in the case of AI, a computer program.
- [e] Alan Turing discussed the centrality of learning as early as 1950, in his classic paper "Computing Machinery and Intelligence".[42] In 1956, at the original Dartmouth AI summer conference, Ray Solomonoff wrote a report on unsupervised probabilistic machine learning: "An Inductive Inference Machine".[43]
- [f] See AI winter § Machine translation and the ALPAC report of 1966
- [g] Compared with symbolic logic, formal Bayesian inference is computationally expensive. For inference to be tractable, most observations must be conditionally independent of one another. AdSense uses a Bayesian network with over 300 million edges to learn which ads to serve.[93]
- [h] Expectation–maximization, one of the most popular algorithms in machine learning, allows clustering in the presence of unknown latent variables.[95]
- [i] Some form of deep neural networks (without a specific learning algorithm) were described by: Warren S. McCulloch and Walter Pitts (1943),[115] Alan Turing (1948),[116] and Karl Steinbuch and Roger David Joseph (1961).[117] Deep or recurrent networks that learned (or used gradient descent) were developed by: Frank Rosenblatt (1957),[116] Oliver Selfridge (1959),[117] Alexey Ivakhnenko and Valentin Lapa (1965),[118] Kaoru Nakano (1971),[119] Shun-Ichi Amari (1972),[119] and John Joseph Hopfield (1982).[119] Precursors to backpropagation were developed by: Henry J. Kelley (1960),[116] Arthur E. Bryson (1962),[116] Stuart Dreyfus (1962),[116] and Arthur E. Bryson and Yu-Chi Ho (1969).[116] Backpropagation was independently developed by: Seppo Linnainmaa (1970)[120] and Paul Werbos (1974).[116]
- [j] Geoffrey Hinton said, of his work on neural networks in the 1990s, "our labeled datasets were thousands of times too small. [And] our computers were millions of times too slow."[121]
- [k] In statistics, a bias is a systematic error or deviation from the correct value. But in the context of fairness, it refers to a tendency in favor of or against a certain group or individual characteristic, usually in a way that is considered unfair or harmful. A statistically unbiased AI system that produces disparate outcomes for different demographic groups may thus be viewed as biased in the ethical sense.[198]
- [l] Including Jon Kleinberg (Cornell University), Sendhil Mullainathan (University of Chicago), Cynthia Chouldechova (Carnegie Mellon) and Sam Corbett-Davis (Stanford)[207]
- [m] Moritz Hardt (a director at the Max Planck Institute for Intelligent Systems) argues that machine learning "is fundamentally the wrong tool for a lot of domains, where you're trying to design interventions and mechanisms that change the world."[212]
- [n] When the law was passed in 2018, it still contained a form of this provision.
- [o] This is the United Nations' definition, and includes things like land mines as well.[226]
- [p] See table 4; 9% is both the OECD average and the U.S. average.[237]
- [q] Sometimes called a "robopocalypse"[245]
- [r] "Electronic brain" was the term used by the press around this time.[298][300]
- [s] Daniel Crevier wrote, "the conference is generally recognized as the official birthdate of the new science."[303] Russell and Norvig called the conference "the inception of artificial intelligence."[115]
- [t] Russell and Norvig wrote "for the next 20 years the field would be dominated by these people and their students."[304]
- [u] Russell and Norvig wrote "it was astonishing whenever a computer did anything kind of smartish".[305]
- [v] The programs described are Arthur Samuel's checkers program for the IBM 701, Daniel Bobrow's STUDENT, Newell and Simon's Logic Theorist and Terry Winograd's SHRDLU.
- [w] Russell and Norvig write: "in almost all cases, these early systems failed on more difficult problems"[309]
- [x] Embodied approaches to AI[316] were championed by Hans Moravec[317] and Rodney Brooks[318] and went by many names: Nouvelle AI,[318] developmental robotics.[319]
- [y] Matteo Wong wrote in The Atlantic: "Whereas for decades, computer-science fields such as natural-language processing, computer vision, and robotics used extremely different methods, now they all use a programming method called "deep learning." As a result, their code and approaches have become more similar, and their models are easier to integrate into one another."[325]
- [z] Jack Clark wrote in Bloomberg: "After a half-decade of quiet breakthroughs in artificial intelligence, 2015 has been a landmark year. Computers are smarter and learning faster than ever", and noted that the number of software projects that use machine learning at Google increased from a "sporadic usage" in 2012 to more than 2,700 projects in 2015.[327]
- [aa] Nils Nilsson wrote in 1983: "Simply put, there is wide disagreement in the field about what AI is all about."[342]
Nils Nilsson 在 1983 年写道:“简单来说,关于人工智能的本质,领域内存在广泛的分歧。”342 - ^
Daniel Crevier wrote that "time has proven the accuracy and perceptiveness of some of Dreyfus's comments. Had he formulated them less aggressively, constructive actions they suggested might have been taken much earlier."[347]
丹尼尔·克雷维尔写道:“时间证明了德雷福斯一些评论的准确性和敏锐性。如果他以不那么激进的方式表达这些观点,可能会更早采取他所建议的建设性行动。”347 - ^
Searle presented this definition of "Strong AI" in 1999.[357] Searle's original formulation was "The appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states."[358] Strong AI is defined similarly by Russell and Norvig: "Stong AI – the assertion that machines that do so are actually thinking (as opposed to simulating thinking)."[359]
塞尔提出了“强人工智能”的定义于 1999 年。塞尔的原始表述是“适当编程的计算机确实是一个心智,因为给予正确程序的计算机可以被字面上说是理解并具有其他认知状态。”强人工智能的定义与拉塞尔和诺维格的定义类似:“强人工智能——机器确实在思考的主张(与模拟思考相对)。”
References 参考文献
- ^ Jump up to: a b c Russell & Norvig (2021), pp. 1–4.
拉塞尔与诺维格 (2021),第 1–4 页。 - ^ AI set to exceed human brain power Archived 2008-02-19 at the Wayback Machine CNN.com (July 26, 2006)
人工智能将超越人类大脑能力存档 2008-02-19 在 时光机 CNN.com(2006 年 7 月 26 日) - ^ Kaplan, Andreas; Haenlein, Michael (2019). "Siri, Siri, in my hand: Who's the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence". Business Horizons. 62: 15–25. doi:10.1016/j.bushor.2018.08.004. ISSN 0007-6813. S2CID 158433736.
Kaplan, Andreas; Haenlein, Michael (2019). "Siri, Siri, 在我手中:谁是这片土地上最美的?关于人工智能的解释、插图和影响". 商业视野. 62: 15–25. doi:10.1016/j.bushor.2018.08.004. 国际标准连续出版物号0007-6813. S2CID158433736. - ^ Jump up to: a b c
Artificial general intelligence:
人工通用智能:- Russell & Norvig (2021, pp. 32–33, 1020–1021)
拉塞尔与诺维格 (2021, pp. 32–33, 1020–1021)
现代版本提案: Warnings of overspecialization in AI from leading researchers:
来自顶尖研究者对人工智能过度专业化的警告: - Russell & Norvig (2021, pp. 32–33, 1020–1021)
- ^ Russell & Norvig (2021, §1.2).
拉塞尔与诺维格 (2021, §1.2). - ^ Jump up to: a b
Dartmouth workshop:
达特茅斯研讨会:- Russell & Norvig (2021, p. 18)
拉塞尔与诺维格 (2021, 第 18 页) - McCorduck (2004, pp. 111–136)
麦考杜克 (2004, 第 111–136 页) - NRC (1999, pp. 200–201)
NRC (1999, 第 200–201 页)
- Russell & Norvig (2021, p. 18)
- ^ Jump up to: a b
Successful programs the 1960s:
1960 年代成功的项目:- McCorduck (2004, pp. 243–252)
麦考杜克 (2004, pp. 243–252) - Crevier (1993, pp. 52–107)
Crevier (1993, pp. 52–107) - Moravec (1988, p. 9)
莫拉维克 (1988, 第 9 页) - Russell & Norvig (2021, pp. 19–21)
拉塞尔与诺维格 (2021, pp. 19–21)
- McCorduck (2004, pp. 243–252)
- ^ Jump up to: a b
Funding initiatives in the early 1980s: Fifth Generation Project (Japan), Alvey (UK), Microelectronics and Computer Technology Corporation (US), Strategic Computing Initiative (US):
1980 年代初期的资金倡议:第五代项目(日本),阿尔维计划(英国),微电子与计算机技术公司(美国),战略计算倡议(美国):- McCorduck (2004, pp. 426–441)
麦考杜克 (2004, pp. 426–441) - Crevier (1993, pp. 161–162, 197–203, 211, 240)
Crevier (1993, 第 161–162 页, 197–203 页, 211 页, 240 页) - Russell & Norvig (2021, p. 23)
拉塞尔与诺维格 (2021, 第 23 页) - NRC (1999, pp. 210–211)
NRC (1999, pp. 210–211) - Newquist (1994, pp. 235–248)
Newquist (1994, 第 235–248 页)
- McCorduck (2004, pp. 426–441)
- ^ Jump up to: a b
First AI Winter, Lighthill report, Mansfield Amendment
第一次人工智能寒冬,莱特希尔报告,曼斯菲尔德修正案- Crevier (1993, pp. 115–117)
Crevier (1993, pp. 115–117) - Russell & Norvig (2021, pp. 21–22)
拉塞尔与诺维格 (2021, pp. 21–22) - NRC (1999, pp. 212–213)
NRC (1999, pp. 212–213) - Howe (1994) 霍威 (1994)
- Newquist (1994, pp. 189–201)
Newquist (1994, 第 189–201 页)
- Crevier (1993, pp. 115–117)
- ^ Jump up to: a b
Second AI Winter:
第二个 人工智能寒冬:- Russell & Norvig (2021, p. 24)
拉塞尔与诺维格 (2021, 第 24 页) - McCorduck (2004, pp. 430–435)
麦考杜克 (2004, 第 430–435 页) - Crevier (1993, pp. 209–210)
Crevier (1993, 第 209–210 页) - NRC (1999, pp. 214–216)
NRC (1999, pp. 214–216) - Newquist (1994, pp. 301–318)
Newquist (1994, 第 301–318 页)
- Russell & Norvig (2021, p. 24)
- ^ Jump up to: a b
Deep learning revolution, AlexNet:
深度学习革命,AlexNet: - ^ Toews (2023).
托伊斯 (2023)。 - ^
Problem-solving, puzzle solving, game playing, and deduction:
问题解决、解谜、玩游戏和推理:- Russell & Norvig (2021, chpt. 3–5)
拉塞尔与诺维格 (2021, 第 3–5 章) - Russell & Norvig (2021, chpt. 6) (constraint satisfaction)
拉塞尔与诺维格 (2021, 第 6 章) (约束满足) - Poole, Mackworth & Goebel (1998, chpt. 2, 3, 7, 9)
普尔,麦克沃斯和戈贝尔(1998,第 2、3、7、9 章) - Luger & Stubblefield (2004, chpt. 3, 4, 6, 8)
卢戈尔和斯塔布尔菲尔德 (2004, 第 3, 4, 6, 8 章) - Nilsson (1998, chpt. 7–12)
尼尔森 (1998, 第 7–12 章)
- Russell & Norvig (2021, chpt. 3–5)
- ^
Uncertain reasoning:
不确定推理:
- Russell & Norvig (2021, chpt. 12–18)
拉塞尔与诺维格 (2021, 第 12–18 章) - Poole, Mackworth & Goebel (1998, pp. 345–395)
普尔,麦克沃斯与戈贝尔(1998,第 345–395 页) - Luger & Stubblefield (2004, pp. 333–381)
卢戈尔与斯塔布尔菲尔德 (2004, pp. 333–381) - Nilsson (1998, chpt. 7–12)
尼尔森 (1998, 第 7–12 章)
- Russell & Norvig (2021, chpt. 12–18)
- ^ Jump up to: a b c
Intractability and efficiency and the combinatorial explosion:
不可解性与效率 和 组合爆炸:- Russell & Norvig (2021, p. 21)
拉塞尔与诺维格 (2021, 第 21 页)
- Russell & Norvig (2021, p. 21)
- ^ Jump up to: a b c
Psychological evidence of the prevalence of sub-symbolic reasoning and knowledge:
心理证据表明亚符号推理和知识的普遍性: - ^
Knowledge representation and knowledge engineering:
知识表示 和 知识工程:- Russell & Norvig (2021, chpt. 10)
拉塞尔与诺维格 (2021, 第 10 章) - Poole, Mackworth & Goebel (1998, pp. 23–46, 69–81, 169–233, 235–277, 281–298, 319–345)
Poole, Mackworth & Goebel (1998, 第 23–46 页, 69–81 页, 169–233 页, 235–277 页, 281–298 页, 319–345 页) - Luger & Stubblefield (2004, pp. 227–243),
卢戈尔和斯塔布尔菲尔德 (2004, 第 227–243 页), - Nilsson (1998, chpt. 17.1–17.4, 18)
尼尔森 (1998, 第 17.1–17.4 章, 18)
- Russell & Norvig (2021, chpt. 10)
- ^ Smoliar & Zhang (1994).
Smoliar 和 Zhang (1994) - ^ Neumann & Möller (2008).
诺伊曼与莫勒(2008)。 - ^ Kuperman, Reichley & Bailey (2006).
库珀曼、赖克利和贝利 (2006)。 - ^ McGarry (2005).
麦加里 (2005)。 - ^ Bertini, Del Bimbo & Torniai (2006).
贝尔蒂尼、德尔·比姆博与托尔尼亚伊(2006)。 - ^ Russell & Norvig (2021), pp. 272.
拉塞尔与诺维格 (2021),第 272 页。 - ^
Representing categories and relations: Semantic networks, description logics, inheritance (including frames, and scripts):
表示类别和关系:语义网络,描述逻辑,继承(包括框架和脚本):- Russell & Norvig (2021, §10.2 & 10.5),
拉塞尔与诺维格 (2021, §10.2 & 10.5), - Poole, Mackworth & Goebel (1998, pp. 174–177),
普尔,麦克沃斯与戈贝尔(1998,第 174–177 页), - Luger & Stubblefield (2004, pp. 248–258),
Luger & Stubblefield (2004, 第 248–258 页), - Nilsson (1998, chpt. 18.3)
尼尔森 (1998, 第 18.3 章)
- Russell & Norvig (2021, §10.2 & 10.5),
- ^ Representing events and time:Situation calculus, event calculus, fluent calculus (including solving the frame problem):
表示事件和时间:情境演算,事件演算,流畅演算(包括解决框架问题):- Russell & Norvig (2021, §10.3),
拉塞尔与诺维格 (2021, §10.3), - Poole, Mackworth & Goebel (1998, pp. 281–298),
普尔,麦克沃斯与戈贝尔(1998,第 281–298 页), - Nilsson (1998, chpt. 18.2)
尼尔森 (1998, 第 18.2 章)
- Russell & Norvig (2021, §10.3),
- ^
Causal calculus:
因果演算:- Poole, Mackworth & Goebel (1998, pp. 335–337)
普尔,麦克沃斯与戈贝尔(1998,第 335–337 页)
- Poole, Mackworth & Goebel (1998, pp. 335–337)
- ^
Representing knowledge about knowledge: Belief calculus, modal logics:
表示关于知识的知识:信念计算,模态逻辑:- Russell & Norvig (2021, §10.4),
拉塞尔与诺维格 (2021, §10.4), - Poole, Mackworth & Goebel (1998, pp. 275–277)
普尔,麦克沃斯与戈贝尔(1998,第 275–277 页)
- Russell & Norvig (2021, §10.4),
- ^ Jump up to: a b
Default reasoning, Frame problem, default logic, non-monotonic logics, circumscription, closed world assumption, abduction:
默认推理, 框架问题, 默认逻辑, 非单调逻辑, 限制推理, 封闭世界假设, 归纳推理:- Russell & Norvig (2021, §10.6)
拉塞尔与诺维格 (2021, §10.6) - Poole, Mackworth & Goebel (1998, pp. 248–256, 323–335)
普尔,麦克沃斯与戈贝尔(1998,第 248–256 页,323–335 页) - Luger & Stubblefield (2004, pp. 335–363)
卢戈尔和斯塔布尔菲尔德 (2004, pp. 335–363) - Nilsson (1998, ~18.3.3)
尼尔森 (1998, ~18.3.3)
(Poole 等将绑架归类为“默认推理”。Luger 等将其归类为“不确定推理”)。 - Russell & Norvig (2021, §10.6)
- ^ Jump up to: a b
Breadth of commonsense knowledge:
常识知识的广度:- Lenat & Guha (1989, Introduction)
Lenat 和 Guha (1989, 引言) - Crevier (1993, pp. 113–114),
Crevier (1993, pp. 113–114), - Moravec (1988, p. 13),
莫拉维克 (1988, 第 13 页), - Russell & Norvig (2021, pp. 241, 385, 982) (qualification problem)
拉塞尔与诺维格 (2021, 第 241, 385, 982 页) (资格问题)
- Lenat & Guha (1989, Introduction)
- ^ Newquist (1994), p. 296.
Newquist (1994),第 296 页。 - ^ Crevier (1993), pp. 204–208.
Crevier (1993),第 204–208 页。 - ^ Russell & Norvig (2021), p. 528.
- ^
Automated planning:
自动化规划:- Russell & Norvig (2021, chpt. 11).
拉塞尔与诺维格 (2021, 第 11 章).
- Russell & Norvig (2021, chpt. 11).
- ^
Automated decision making, Decision theory:
自动决策, 决策理论:- Russell & Norvig (2021, chpt. 16–18).
拉塞尔与诺维格 (2021, 第 16–18 章)。
- Russell & Norvig (2021, chpt. 16–18).
- ^
Classical planning:
经典规划:- Russell & Norvig (2021, Section 11.2).
拉塞尔与诺维格 (2021, 第 11.2 节).
- Russell & Norvig (2021, Section 11.2).
- ^
Sensorless or "conformant" planning, contingent planning, replanning (a.k.a online planning):
无传感器或“符合”规划、应急规划、重新规划(即在线规划):- Russell & Norvig (2021, Section 11.5).
拉塞尔与诺维格 (2021, 第 11.5 节).
- Russell & Norvig (2021, Section 11.5).
- ^
Uncertain preferences:
不确定的偏好:
- Russell & Norvig (2021, Section 16.7)
拉塞尔与诺维格 (2021, 第 16.7 节)
逆向强化学习:- Russell & Norvig (2021, Section 22.6)
拉塞尔与诺维格 (2021, 第 22.6 节)
- Russell & Norvig (2021, Section 16.7)
- ^
Information value theory:
信息价值理论:- Russell & Norvig (2021, Section 16.6).
拉塞尔与诺维格 (2021, 第 16.6 节).
- Russell & Norvig (2021, Section 16.6).
- ^
Markov decision process:
马尔可夫决策过程:- Russell & Norvig (2021, chpt. 17).
拉塞尔与诺维格 (2021, 第 17 章).
- Russell & Norvig (2021, chpt. 17).
- ^
Game theory and multi-agent decision theory:
博弈论和多智能体决策理论:- Russell & Norvig (2021, chpt. 18).
拉塞尔与诺维格 (2021, 第 18 章).
- Russell & Norvig (2021, chpt. 18).
- ^
Learning:
学习:
- Russell & Norvig (2021, chpt. 19–22)
拉塞尔与诺维格 (2021, 第 19–22 章) - Poole, Mackworth & Goebel (1998, pp. 397–438)
普尔,麦克沃斯与戈贝尔(1998,第 397–438 页) - Luger & Stubblefield (2004, pp. 385–542)
卢戈尔与斯塔布尔菲尔德 (2004, pp. 385–542) - Nilsson (1998, chpt. 3.3, 10.3, 17.5, 20)
尼尔森 (1998, 第 3.3 章, 10.3, 17.5, 20)
- Russell & Norvig (2021, chpt. 19–22)
- ^ Turing (1950).
图灵 (1950)。 - ^ Solomonoff (1956).
所罗门诺夫 (1956)。 - ^
Unsupervised learning:
无监督学习:- Russell & Norvig (2021, pp. 653) (definition)
拉塞尔与诺维格 (2021, 第 653 页) (定义) - Russell & Norvig (2021, pp. 738–740) (cluster analysis)
拉塞尔与诺维格 (2021, 第 738–740 页) (聚类分析) - Russell & Norvig (2021, pp. 846–860) (word embedding)
拉塞尔与诺维格 (2021, pp. 846–860) (词嵌入)
- Russell & Norvig (2021, pp. 653) (definition)
- ^ Jump up to: a b
Supervised learning:
监督学习:- Russell & Norvig (2021, §19.2) (Definition)
拉塞尔与诺维格 (2021, §19.2) (定义) - Russell & Norvig (2021, Chpt. 19–20) (Techniques)
拉塞尔与诺维格 (2021, 第 19–20 章) (技术)
- Russell & Norvig (2021, §19.2) (Definition)
- ^
Reinforcement learning:
强化学习:- Russell & Norvig (2021, chpt. 22)
拉塞尔与诺维格 (2021, 第 22 章) - Luger & Stubblefield (2004, pp. 442–449)
卢戈尔与斯塔布尔菲尔德 (2004, pp. 442–449)
- Russell & Norvig (2021, chpt. 22)
- ^
Transfer learning:
迁移学习:- Russell & Norvig (2021, pp. 281)
拉塞尔与诺维格 (2021, 第 281 页) - The Economist (2016) 经济学人(2016)
- Russell & Norvig (2021, pp. 281)
- ^ "Artificial Intelligence (AI): What Is AI and How Does It Work? | Built In". builtin.com. Retrieved 30 October 2023.
"人工智能 (AI):什么是 AI 以及它是如何工作的? | Built In"。builtin.com。检索于 30 October 2023。 - ^
Computational learning theory:
计算学习理论:- Russell & Norvig (2021, pp. 672–674)
拉塞尔与诺维格 (2021, pp. 672–674) - Jordan & Mitchell (2015) 乔丹与米切尔(2015)
- Russell & Norvig (2021, pp. 672–674)
- ^
Natural language processing (NLP):
自然语言处理 (NLP):- Russell & Norvig (2021, chpt. 23–24)
拉塞尔与诺维格 (2021, 第 23–24 章) - Poole, Mackworth & Goebel (1998, pp. 91–104)
普尔,麦克沃斯与戈贝尔(1998,第 91–104 页) - Luger & Stubblefield (2004, pp. 591–632)
卢戈尔与斯塔布尔菲尔德 (2004, pp. 591–632)
- Russell & Norvig (2021, chpt. 23–24)
- ^
Subproblems of NLP:
- Russell & Norvig (2021, pp. 849–850)
- ^ Russell & Norvig (2021), pp. 856–858.
- ^ Dickson (2022).
- ^ Modern statistical and deep learning approaches to NLP:
- Russell & Norvig (2021, chpt. 24)
- Cambria & White (2014)
- ^ Vincent (2019).
- ^ Russell & Norvig (2021), pp. 875–878.
- ^ Bushwick (2023).
- ^
Computer vision:
- Russell & Norvig (2021, chpt. 25)
- Nilsson (1998, chpt. 6)
- ^ Russell & Norvig (2021), pp. 849–850.
- ^ Russell & Norvig (2021), pp. 895–899.
- ^ Russell & Norvig (2021), pp. 899–901.
- ^ Challa et al. (2011).
查拉等人 (2011)。 - ^ Russell & Norvig (2021), pp. 931–938.
拉塞尔与诺维格 (2021),第 931–938 页。 - ^ MIT AIL (2014).
麻省理工学院 AIL (2014) - ^
Affective computing:
情感计算: - ^ Waddell (2018).
瓦代尔 (2018) - ^ Poria et al. (2017).
茯苓等(2017)。 - ^
Search algorithms:
搜索算法:- Russell & Norvig (2021, Chpt. 3–5)
拉塞尔与诺维格 (2021, 第 3–5 章) - Poole, Mackworth & Goebel (1998, pp. 113–163)
普尔,麦克沃斯与戈贝尔(1998,第 113–163 页) - Luger & Stubblefield (2004, pp. 79–164, 193–219)
卢戈尔与斯塔布尔菲尔德 (2004, 第 79–164 页, 193–219 页) - Nilsson (1998, chpt. 7–12)
尼尔森 (1998, 第 7–12 章)
- Russell & Norvig (2021, Chpt. 3–5)
- ^
State space search:
状态空间搜索:- Russell & Norvig (2021, chpt. 3)
拉塞尔与诺维格 (2021, 第 3 章)
- Russell & Norvig (2021, chpt. 3)
- ^ Russell & Norvig (2021), §11.2.
拉塞尔与诺维格 (2021),§11.2。 - ^ Uninformed searches (breadth first search, depth-first search and general state space search):
无信息搜索 (广度优先搜索, 深度优先搜索 和一般的 状态空间搜索):- Russell & Norvig (2021, §3.4)
拉塞尔与诺维格 (2021, §3.4) - Poole, Mackworth & Goebel (1998, pp. 113–132)
普尔,麦克沃斯与戈贝尔(1998,第 113–132 页) - Luger & Stubblefield (2004, pp. 79–121)
卢戈尔和斯塔布尔菲尔德 (2004, pp. 79–121) - Nilsson (1998, chpt. 8)
尼尔森 (1998, 第 8 章)
- Russell & Norvig (2021, §3.4)
- ^
Heuristic or informed searches (e.g., greedy best first and A*):
启发式或知情搜索(例如,贪婪的最佳优先和A*):- Russell & Norvig (2021, s§3.5)
拉塞尔与诺维格 (2021, s§3.5) - Poole, Mackworth & Goebel (1998, pp. 132–147)
普尔,麦克沃斯与戈贝尔(1998,第 132–147 页) - Poole & Mackworth (2017, §3.6)
普尔和麦克沃斯 (2017, §3.6) - Luger & Stubblefield (2004, pp. 133–150)
卢戈尔和斯塔布尔菲尔德 (2004, pp. 133–150)
- Russell & Norvig (2021, s§3.5)
- ^
Adversarial search:
对抗性搜索:- Russell & Norvig (2021, chpt. 5)
拉塞尔与诺维格 (2021, 第 5 章)
- Russell & Norvig (2021, chpt. 5)
- ^ Local or "optimization" search:
本地 或 "优化" 搜索:- Russell & Norvig (2021, chpt. 4)
拉塞尔与诺维格 (2021, 第 4 章)
- Russell & Norvig (2021, chpt. 4)
- ^ Singh Chauhan, Nagesh (18 December 2020). "Optimization Algorithms in Neural Networks". KDnuggets. Retrieved 13 January 2024.
辛格·乔汉,纳盖什(2020 年 12 月 18 日)。"神经网络中的优化算法"。KDnuggets。检索于 13 January 2024 年。 - ^
Evolutionary computation:
进化计算:- Russell & Norvig (2021, §4.1.2)
拉塞尔与诺维格 (2021, §4.1.2)
- Russell & Norvig (2021, §4.1.2)
- ^ Merkle & Middendorf (2013).
梅克尔与米登多夫 (2013) - ^
Logic:
逻辑:
- Russell & Norvig (2021, chpt. 6–9)
拉塞尔与诺维格 (2021, 第 6–9 章) - Luger & Stubblefield (2004, pp. 35–77)
卢戈尔和斯塔布尔菲尔德 (2004, 第 35–77 页) - Nilsson (1998, chpt. 13–16)
尼尔森 (1998, 第 13–16 章)
- Russell & Norvig (2021, chpt. 6–9)
- ^
Propositional logic:
命题逻辑:- Russell & Norvig (2021, chpt. 6)
拉塞尔与诺维格 (2021, 第 6 章) - Luger & Stubblefield (2004, pp. 45–50)
卢戈尔和斯塔布尔菲尔德 (2004, 第 45–50 页) - Nilsson (1998, chpt. 13)
尼尔森 (1998, 第 13 章)
- Russell & Norvig (2021, chpt. 6)
- ^
First-order logic and features such as equality:
一阶逻辑和诸如等式等特征:- Russell & Norvig (2021, chpt. 7)
拉塞尔与诺维格 (2021, 第 7 章) - Poole, Mackworth & Goebel (1998, pp. 268–275),
普尔,麦克沃斯与戈贝尔(1998,第 268–275 页), - Luger & Stubblefield (2004, pp. 50–62),
Luger & Stubblefield (2004, 第 50–62 页), - Nilsson (1998, chpt. 15)
尼尔森 (1998, 第 15 章)
- Russell & Norvig (2021, chpt. 7)
- ^
Logical inference:
逻辑推理:- Russell & Norvig (2021, chpt. 10)
拉塞尔与诺维格 (2021, 第 10 章)
- Russell & Norvig (2021, chpt. 10)
- ^ logical deduction as search:
逻辑推理作为搜索:- Russell & Norvig (2021, §9.3, §9.4)
拉塞尔与诺维格 (2021, §9.3, §9.4) - Poole, Mackworth & Goebel (1998, pp. ~46–52)
普尔,麦克沃斯与戈贝尔(1998,第~46–52 页) - Luger & Stubblefield (2004, pp. 62–73)
卢戈尔和斯塔布尔菲尔德 (2004, pp. 62–73) - Nilsson (1998, chpt. 4.2, 7.2)
尼尔森 (1998, 第 4.2 章, 7.2)
- Russell & Norvig (2021, §9.3, §9.4)
- ^
Resolution and unification:
解决方案 和 统一:- Russell & Norvig (2021, §7.5.2, §9.2, §9.5)
拉塞尔与诺维格 (2021, §7.5.2, §9.2, §9.5)
- Russell & Norvig (2021, §7.5.2, §9.2, §9.5)
- ^ Warren, D.H.; Pereira, L.M.; Pereira, F. (1977). "Prolog-the language and its implementation compared with Lisp". ACM SIGPLAN Notices. 12 (8): 109–115. doi:10.1145/872734.806939.
沃伦,D.H.; 佩雷拉,L.M.; 佩雷拉,F. (1977). "Prolog-语言及其与 Lisp 的实现比较". ACM SIGPLAN 通知. 12 (8): 109–115. doi:10.1145/872734.806939. - ^
Fuzzy logic:
模糊逻辑:
- Russell & Norvig (2021, pp. 214, 255, 459)
拉塞尔与诺维格 (2021, pp. 214, 255, 459) - Scientific American (1999)
科学美国人 (1999)
- Russell & Norvig (2021, pp. 214, 255, 459)
- ^ Jump up to: a b
Stochastic methods for uncertain reasoning:
不确定推理的随机方法:- Russell & Norvig (2021, Chpt. 12–18 and 20),
拉塞尔与诺维格 (2021, 第 12–18 章和第 20 章), - Poole, Mackworth & Goebel (1998, pp. 345–395),
普尔,麦克沃斯与戈贝尔(1998,第 345–395 页), - Luger & Stubblefield (2004, pp. 165–191, 333–381),
卢戈尔和斯塔布尔菲尔德 (2004, pp. 165–191, 333–381), - Nilsson (1998, chpt. 19)
尼尔森 (1998, 第 19 章)
- Russell & Norvig (2021, Chpt. 12–18 and 20),
- ^
decision theory and decision analysis:
决策理论 和 决策分析:- Russell & Norvig (2021, Chpt. 16–18),
拉塞尔与诺维格 (2021, 第 16–18 章), - Poole, Mackworth & Goebel (1998, pp. 381–394)
普尔,麦克沃斯与戈贝尔(1998,第 381–394 页)
- Russell & Norvig (2021, Chpt. 16–18),
- ^
Information value theory:
信息价值理论:- Russell & Norvig (2021, §16.6)
拉塞尔与诺维格 (2021, §16.6)
- Russell & Norvig (2021, §16.6)
- ^ Markov decision processes and dynamic decision networks:
马尔可夫决策过程和动态决策网络:- Russell & Norvig (2021, chpt. 17)
拉塞尔与诺维格 (2021, 第 17 章)
- Russell & Norvig (2021, chpt. 17)
- ^ Jump up to: a b c
Stochastic temporal models:
随机时间模型:- Russell & Norvig (2021, Chpt. 14)
拉塞尔与诺维格 (2021, 第 14 章)
隐马尔可夫模型:- Russell & Norvig (2021, §14.3)
拉塞尔与诺维格 (2021, §14.3)
卡尔曼滤波器:- Russell & Norvig (2021, §14.4)
拉塞尔与诺维格 (2021, §14.4)
动态贝叶斯网络:- Russell & Norvig (2021, §14.5)
拉塞尔与诺维格 (2021, §14.5)
- Russell & Norvig (2021, Chpt. 14)
- ^ Game theory and mechanism design:
博弈论 和 机制设计:- Russell & Norvig (2021, chpt. 18)
拉塞尔与诺维格 (2021, 第 18 章)
- Russell & Norvig (2021, chpt. 18)
- ^
Bayesian networks:
贝叶斯网络:- Russell & Norvig (2021, §12.5–12.6, §13.4–13.5, §14.3–14.5, §16.5, §20.2–20.3),
拉塞尔与诺维格 (2021, §12.5–12.6, §13.4–13.5, §14.3–14.5, §16.5, §20.2–20.3), - Poole, Mackworth & Goebel (1998, pp. 361–381),
普尔,麦克沃斯与戈贝尔(1998,第 361–381 页), - Luger & Stubblefield (2004, pp. ~182–190, ≈363–379),
卢戈尔和斯塔布尔菲尔德 (2004, pp. ~182–190, ≈363–379), - Nilsson (1998, chpt. 19.3–19.4)
尼尔森 (1998, 第 19.3–19.4 章)
- Russell & Norvig (2021, §12.5–12.6, §13.4–13.5, §14.3–14.5, §16.5, §20.2–20.3),
- ^ Domingos (2015), chapter 6.
Domingos (2015),第六章。 - ^
Bayesian inference algorithm:
贝叶斯推断算法:- Russell & Norvig (2021, §13.3–13.5),
拉塞尔与诺维格 (2021, §13.3–13.5), - Poole, Mackworth & Goebel (1998, pp. 361–381),
普尔,麦克沃斯与戈贝尔(1998,第 361–381 页), - Luger & Stubblefield (2004, pp. ~363–379),
卢戈尔和斯塔布尔菲尔德 (2004, 页 ~363–379), - Nilsson (1998, chpt. 19.4 & 7)
尼尔森 (1998, 第 19.4 章和 7 章)
- Russell & Norvig (2021, §13.3–13.5),
- ^ Domingos (2015), p. 210.
多明戈斯 (2015), 第 210 页。 - ^
Bayesian learning and the expectation–maximization algorithm:
贝叶斯学习 和 期望最大化算法:- Russell & Norvig (2021, Chpt. 20),
拉塞尔与诺维格 (2021, 第 20 章), - Poole, Mackworth & Goebel (1998, pp. 424–433),
普尔,麦克沃斯与戈贝尔(1998,第 424–433 页), - Nilsson (1998, chpt. 20)
尼尔森 (1998, 第 20 章) - Domingos (2015, p. 210)
Domingos (2015, 第 210 页)
- Russell & Norvig (2021, Chpt. 20),
- ^ Bayesian decision theory and Bayesian decision networks:
贝叶斯决策理论 和贝叶斯 决策网络:- Russell & Norvig (2021, §16.5)
拉塞尔与诺维格 (2021, §16.5)
- Russell & Norvig (2021, §16.5)
- ^
Statistical learning methods and classifiers:
- Russell & Norvig (2021, chpt. 20),
- ^ Ciaramella, Alberto; Ciaramella, Marco (2024). Introduction to Artificial Intelligence: from data analysis to generative AI. ISBN 978-8894787603.
- ^
Decision trees:
- Russell & Norvig (2021, §19.3)
- Domingos (2015, p. 88)
- ^
Non-parameteric learning models such as K-nearest neighbor and support vector machines:
- Russell & Norvig (2021, §19.7)
- Domingos (2015, p. 187) (k-nearest neighbor)
- Domingos (2015, p. 88) (kernel methods)
- ^ Domingos (2015), p. 152.
- ^
Naive Bayes classifier:
朴素贝叶斯分类器:- Russell & Norvig (2021, §12.6)
拉塞尔与诺维格 (2021, §12.6) - Domingos (2015, p. 152)
多明戈斯 (2015, 第 152 页)
- Russell & Norvig (2021, §12.6)
- ^ Jump up to: a b
Neural networks:
神经网络:
- Russell & Norvig (2021, Chpt. 21),
拉塞尔和诺维格 (2021, 第 21 章), - Domingos (2015, Chapter 4)
Domingos (2015, 第四章)
- Russell & Norvig (2021, Chpt. 21),
- ^
Gradient calculation in computational graphs, backpropagation, automatic differentiation:
计算图中的梯度计算,反向传播,自动微分:- Russell & Norvig (2021, §21.2),
拉塞尔与诺维格 (2021, §21.2), - Luger & Stubblefield (2004, pp. 467–474),
卢戈尔和斯塔布尔菲尔德 (2004, pp. 467–474), - Nilsson (1998, chpt. 3.3)
尼尔森 (1998, 第 3.3 章)
- Russell & Norvig (2021, §21.2),
- ^
Universal approximation theorem:
泛函逼近定理:- Russell & Norvig (2021, p. 752)
拉塞尔与诺维格 (2021, 第 752 页)
- Russell & Norvig (2021, p. 752)
- ^
Feedforward neural networks:
前馈神经网络:- Russell & Norvig (2021, §21.1)
拉塞尔与诺维格 (2021, §21.1)
- Russell & Norvig (2021, §21.1)
- ^
Recurrent neural networks:
递归神经网络:- Russell & Norvig (2021, §21.6)
拉塞尔与诺维格 (2021, §21.6)
- Russell & Norvig (2021, §21.6)
- ^
Perceptrons:
感知器:
- Russell & Norvig (2021, pp. 21, 22, 683, 22)
拉塞尔与诺维格 (2021, 第 21, 22, 683, 22 页)
- Russell & Norvig (2021, pp. 21, 22, 683, 22)
- ^ Jump up to: a b
Deep learning:
深度学习: - ^
Convolutional neural networks:
卷积神经网络:- Russell & Norvig (2021, §21.3)
拉塞尔与诺维格 (2021, §21.3)
- Russell & Norvig (2021, §21.3)
- ^ Deng & Yu (2014), pp. 199–200.
邓和余 (2014),第 199–200 页。 - ^ Ciresan, Meier & Schmidhuber (2012).
奇雷桑、迈耶尔与施密德胡伯(2012)。 - ^ Russell & Norvig (2021), p. 751.
拉塞尔与诺维格 (2021), p. 751. - ^ Jump up to: a b c Russell & Norvig (2021), p. 17.
拉塞尔与诺维格 (2021), 第 17 页。 - ^ Jump up to: a b c d e f g Russell & Norvig (2021), p. 785.
拉塞尔与诺维格 (2021), 第 785 页。 - ^ Jump up to: a b Schmidhuber (2022), §5.
施密德胡伯 (2022),§5。 - ^ Schmidhuber (2022), §6.
施密德胡伯 (2022),§6。 - ^ Jump up to: a b c Schmidhuber (2022), §7.
施密德胡伯 (2022),§7。 - ^ Schmidhuber (2022), §8.
施密德胡伯 (2022),§8。 - ^ Quoted in Christian (2020, p. 22)
引用自 Christian (2020, 第 22 页) - ^ Smith (2023).
史密斯 (2023)。 - ^ "Explained: Generative AI". 9 November 2023.
"解释:生成性人工智能"。2023 年 11 月 9 日。 - ^ "AI Writing and Content Creation Tools". MIT Sloan Teaching & Learning Technologies. Retrieved 25 December 2023.
"人工智能写作与内容创作工具"。麻省理工学院斯隆教学与学习技术。检索于 25 December 2023。 - ^ Marmouyet (2023).
马尔穆耶 (2023) - ^ Kobielus (2019).
Kobielus (2019)。 - ^ Thomason, James (21 May 2024). "Mojo Rising: The resurgence of AI-first programming languages". VentureBeat. Retrieved 26 May 2024.
汤姆森,詹姆斯(2024 年 5 月 21 日)。"Mojo Rising: AI 优先编程语言的复兴"。VentureBeat。检索于 26 May 2024。 - ^ Wodecki, Ben (5 May 2023). "7 AI Programming Languages You Need to Know". AI Business.
Wodecki, Ben (2023 年 5 月 5 日). "你需要知道的 7 种 AI 编程语言". AI 商业. - ^ Davenport, T; Kalakota, R (June 2019). "The potential for artificial intelligence in healthcare". Future Healthc J. 6 (2): 94–98. doi:10.7861/futurehosp.6-2-94. PMC 6616181. PMID 31363513.
Davenport, T; Kalakota, R (2019 年 6 月). "人工智能在医疗保健中的潜力". 未来医疗杂志. 6 (2): 94–98. doi:10.7861/futurehosp.6-2-94. PMC6616181. PMID31363513. - ^ Jump up to: a b Bax, Monique; Thorpe, Jordan; Romanov, Valentin (December 2023). "The future of personalized cardiovascular medicine demands 3D and 4D printing, stem cells, and artificial intelligence". Frontiers in Sensors. 4. doi:10.3389/fsens.2023.1294721. ISSN 2673-5067.
巴克斯,莫尼克;索普,乔丹;罗曼诺夫,瓦伦丁(2023 年 12 月)。"个性化心血管医学的未来需要 3D 和 4D 打印、干细胞和人工智能"。传感器前沿。4。doi:10.3389/fsens.2023.1294721。国际标准连续出版物号2673-5067。 - ^ Jumper, J; Evans, R; Pritzel, A (2021). "Highly accurate protein structure prediction with AlphaFold". Nature. 596 (7873): 583–589. Bibcode:2021Natur.596..583J. doi:10.1038/s41586-021-03819-2. PMC 8371605. PMID 34265844.
Jumper, J; Evans, R; Pritzel, A (2021). "使用 AlphaFold 进行高精度蛋白质结构预测". 自然. 596 (7873): 583–589. 文献代码:2021Natur.596..583J. 数字对象标识符:10.1038/s41586-021-03819-2. PMC8371605. PMID34265844. - ^ "AI discovers new class of antibiotics to kill drug-resistant bacteria". 20 December 2023.
"人工智能发现新类抗生素以杀死耐药细菌"。2023 年 12 月 20 日。 - ^ "AI speeds up drug design for Parkinson's ten-fold". Cambridge University. 17 April 2024.
- ^ Horne, Robert I.; Andrzejewska, Ewa A.; Alam, Parvez; Brotzakis, Z. Faidon; Srivastava, Ankit; Aubert, Alice; Nowinska, Magdalena; Gregory, Rebecca C.; Staats, Roxine; Possenti, Andrea; Chia, Sean; Sormanni, Pietro; Ghetti, Bernardino; Caughey, Byron; Knowles, Tuomas P. J.; Vendruscolo, Michele (17 April 2024). "Discovery of potent inhibitors of α-synuclein aggregation using structure-based iterative learning". Nature Chemical Biology. 20 (5). Nature: 634–645. doi:10.1038/s41589-024-01580-x. PMC 11062903. PMID 38632492.
霍恩,罗伯特·I.;安德烈耶夫斯卡,埃娃·A.;阿拉姆,帕尔维兹;布罗扎基斯,Z. 法伊登;斯里瓦斯塔瓦,安基特;奥贝尔,爱丽丝;诺温斯卡,玛格达莱娜;格雷戈里,丽贝卡·C.;斯塔茨,罗克辛;波森提,安德烈;贾,肖恩;索尔曼尼,皮耶特罗;盖蒂,贝尔纳迪诺;考赫,拜伦;诺尔斯,图马斯·P. J.;文德鲁斯科洛,米凯莱(2024 年 4 月 17 日)。"基于结构的迭代学习发现α-突触核蛋白聚集的强效抑制剂"。自然化学生物学。20(5)。自然:634–645。doi:10.1038/s41589-024-01580-x。PMC11062903。PMID38632492。 - ^ Grant, Eugene F.; Lardner, Rex (25 July 1952). "The Talk of the Town – It". The New Yorker. ISSN 0028-792X. Retrieved 28 January 2024.
格兰特,尤金·F;拉德纳,雷克斯(1952 年 7 月 25 日)。"城里的谈话 – 它"。纽约客。国际标准连续出版物号0028-792X。检索于 28 January 2024。 - ^ Anderson, Mark Robert (11 May 2017). "Twenty years on from Deep Blue vs Kasparov: how a chess match started the big data revolution". The Conversation. Retrieved 28 January 2024.
安德森,马克·罗伯特(2017 年 5 月 11 日)。"从深蓝与卡斯帕罗夫对弈二十年:一场棋赛如何开启大数据革命"。对话。检索于 28 January 2024。 - ^ Markoff, John (16 February 2011). "Computer Wins on 'Jeopardy!': Trivial, It's Not". The New York Times. ISSN 0362-4331. Retrieved 28 January 2024.
马科夫,约翰(2011 年 2 月 16 日)。"计算机在《危险边缘!》中获胜:这可不是微不足道的"。纽约时报。国际标准连续出版物号0362-4331。检索于 28 January 2024。 - ^ Byford, Sam (27 May 2017). "AlphaGo retires from competitive Go after defeating world number one 3–0". The Verge. Retrieved 28 January 2024.
拜福德,萨姆(2017 年 5 月 27 日)。"AlphaGo 在击败世界排名第一的选手后以 3–0 宣布退役"。The Verge。检索于 28 January 2024。 - ^ Brown, Noam; Sandholm, Tuomas (30 August 2019). "Superhuman AI for multiplayer poker". Science. 365 (6456): 885–890. Bibcode:2019Sci...365..885B. doi:10.1126/science.aay2400. ISSN 0036-8075. PMID 31296650.
布朗,诺姆;桑德霍尔姆,图马斯(2019 年 8 月 30 日)。"超人类人工智能在多人扑克中的应用"。科学。365(6456):885–890。文献代码:2019Sci...365..885B。数字对象标识符:10.1126/science.aay2400。国际标准连续出版物号0036-8075。医学文献标识符31296650。 - ^ "MuZero: Mastering Go, chess, shogi and Atari without rules". Google DeepMind. 23 December 2020. Retrieved 28 January 2024.
"MuZero:无规则掌握围棋、国际象棋、将棋和雅达利"。谷歌深度学习。2020 年 12 月 23 日。检索于 28 January 2024。 - ^ Sample, Ian (30 October 2019). "AI becomes grandmaster in 'fiendishly complex' StarCraft II". The Guardian. ISSN 0261-3077. Retrieved 28 January 2024.
样本,伊恩(2019 年 10 月 30 日)。"人工智能在'极其复杂'的《星际争霸 II》中成为大师"。卫报。国际标准连续出版物号0261-3077。检索于 28 January 2024。 - ^ Wurman, P. R.; Barrett, S.; Kawamoto, K. (2022). "Outracing champion Gran Turismo drivers with deep reinforcement learning". Nature. 602 (7896): 223–228. Bibcode:2022Natur.602..223W. doi:10.1038/s41586-021-04357-7. PMID 35140384.
Wurman, P. R.; Barrett, S.; Kawamoto, K. (2022). "用深度强化学习超越冠军赛车手 Gran Turismo". 自然. 602 (7896): 223–228. 文献代码:2022Natur.602..223W. 数字对象标识符:10.1038/s41586-021-04357-7. PMID35140384. - ^ Wilkins, Alex (13 March 2024). "Google AI learns to play open-world video games by watching them". New Scientist. Retrieved 21 July 2024.
威尔金斯,亚历克斯(2024 年 3 月 13 日)。"谷歌人工智能通过观看开放世界视频游戏来学习玩游戏"。新科学家。于 21 July 2024 年检索。 - ^ Uesato, J. et al.: Improving mathematical reasoning with process supervision. openai.com, May 31, 2023. Retrieved 2024-08-07.
Uesato, J. 等: 通过过程监督提高数学推理能力。 openai.com, 2023 年 5 月 31 日。检索于 2024 年 8 月 7 日。 - ^ Srivastava, Saurabh (29 February 2024). "Functional Benchmarks for Robust Evaluation of Reasoning Performance, and the Reasoning Gap". arXiv:2402.19450 [cs.AI].
Srivastava, Saurabh (2024 年 2 月 29 日). "用于稳健评估推理性能的功能基准及推理差距". arXiv:2402.19450 [cs.AI]。 - ^ Roberts, Siobhan (25 July 2024). "AI achieves silver-medal standard solving International Mathematical Olympiad problems". The New York Times. Retrieved 7 August 2024.
罗伯茨,希沃恩(2024 年 7 月 25 日)。"人工智能在解决国际数学奥林匹克问题方面达到了银牌标准"。纽约时报。检索于 7 August 2024。 - ^ LLEMMA. eleuther.ai. Retrieved 2024-08-07.
LLEMMA. eleuther.ai。检索于 2024-08-07。 - ^ AI Math. Caesars Labs, 2024. Retrieved 2024-08-07.
人工智能数学。 凯撒实验室,2024 年。检索于 2024 年 08 月 07 日。 - ^ Alex McFarland: 7 Best AI for Math Tools. unite.ai. Retrieved 2024-08-07
亚历克斯·麦克法兰:7 款最佳数学 AI 工具。 unite.ai。检索于 2024-08-07 - ^ Matthew Finio & Amanda Downie: IBM Think 2024 Primer, "What is Artificial Intelligence (AI) in Finance?" 8 Dec. 2023
马修·菲尼奥与阿曼达·唐尼:IBM Think 2024 入门,“金融中的人工智能(AI)是什么?” 2023 年 12 月 8 日 - ^ M. Nicolas, J. Firzli: Pensions Age/European Pensions magazine, "Artificial Intelligence: Ask the Industry" May June 2024 https://videovoice.org/ai-in-finance-innovation-entrepreneurship-vs-over-regulation-with-the-eus-artificial-intelligence-act-wont-work-as-intended/.
M. Nicolas, J. Firzli: Pensions Age/European Pensions magazine, "人工智能:询问行业" 2024 年 5 月 6 月 https://videovoice.org/ai-in-finance-innovation-entrepreneurship-vs-over-regulation-with-the-eus-artificial-intelligence-act-wont-work-as-intended/。 - ^ Jump up to: a b c Congressional Research Service (2019). Artificial Intelligence and National Security (PDF). Washington, DC: Congressional Research Service.PD-notice
国会研究服务处 (2019)。人工智能与国家安全 (PDF)。华盛顿特区:国会研究服务处。PD-notice - ^ Jump up to: a b Slyusar, Vadym (2019). "Artificial intelligence as the basis of future control networks". ResearchGate. doi:10.13140/RG.2.2.30247.50087.
Slyusar, Vadym (2019). "人工智能作为未来控制网络的基础". ResearchGate. doi:10.13140/RG.2.2.30247.50087. - ^ Knight, Will. "The US and 30 Other Nations Agree to Set Guardrails for Military AI". Wired. ISSN 1059-1028. Retrieved 24 January 2024.
骑士,威尔。"美国和其他 30 个国家同意为军事人工智能设定保护措施"。连线。国际标准连续出版物号1059-1028。检索于 24 January 2024。 - ^ Newsom, Gavin; Weber, Shirley N. (6 September 2023). "Executive Order N-12-23" (PDF). Executive Department, State of California. Archived (PDF) from the original on 21 February 2024. Retrieved 7 September 2023.
纽森,加文;韦伯,雪莉·N.(2023 年 9 月 6 日)。"行政命令 N-12-23"(PDF)。加利福尼亚州行政部门。存档(PDF)于 2024 年 2 月 21 日的原件。检索于 7 September 2023。 - ^ Pinaya, Walter H. L.; Graham, Mark S.; Kerfoot, Eric; Tudosiu, Petru-Daniel; Dafflon, Jessica; Fernandez, Virginia; Sanchez, Pedro; Wolleb, Julia; da Costa, Pedro F.; Patel, Ashay (2023). "Generative AI for Medical Imaging: extending the MONAI Framework". arXiv:2307.15208 [eess.IV].
皮纳亚,沃尔特·H·L;格雷厄姆,马克·S;克尔福特,埃里克;图多西乌,佩特鲁-丹尼尔;达夫隆,杰西卡;费尔南德斯,维尔吉尼亚;桑切斯,佩德罗;沃莱布,朱莉亚;达科斯塔,佩德罗·F;帕特尔,阿沙伊(2023)。"用于医学成像的生成性人工智能:扩展 MONAI 框架"。arXiv:2307.15208 [eess.IV]。 - ^ Griffith, Erin; Metz, Cade (27 January 2023). "Anthropic Said to Be Closing In on $300 Million in New A.I. Funding". The New York Times. Archived from the original on 9 December 2023. Retrieved 14 March 2023.
格里菲斯,埃琳;梅茨,凯德(2023 年 1 月 27 日)。"Anthropic 据说正在接近 3 亿美元的新人工智能融资"。《纽约时报》。存档于 2023 年 12 月 9 日的原文。检索于 14 March 2023。 - ^ Lanxon, Nate; Bass, Dina; Davalos, Jackie (10 March 2023). "A Cheat Sheet to AI Buzzwords and Their Meanings". Bloomberg News. Archived from the original on 17 November 2023. Retrieved 14 March 2023.
兰克森,内特;巴斯,迪娜;达瓦洛斯,杰基(2023 年 3 月 10 日)。"人工智能流行词及其含义速查表"。彭博新闻。存档于 2023 年 11 月 17 日的原文。检索于 14 March 2023。 - ^ Marcelline, Marco (27 May 2023). "ChatGPT: Most Americans Know About It, But Few Actually Use the AI Chatbot". PCMag. Retrieved 28 January 2024.
马塞林,马尔科(2023 年 5 月 27 日)。"ChatGPT:大多数美国人知道它,但实际上使用这个人工智能聊天机器人的人很少"。PCMag。检索于 28 January 2024。 - ^ Lu, Donna (31 March 2023). "Misinformation, mistakes and the Pope in a puffer: what rapidly evolving AI can – and can't – do". The Guardian. ISSN 0261-3077. Retrieved 28 January 2024.
- ^ Hurst, Luke (23 May 2023). "How a fake image of a Pentagon explosion shared on Twitter caused a real dip on Wall Street". euronews. Retrieved 28 January 2024.
- ^ Poole, David; Mackworth, Alan (2023). Artificial Intelligence, Foundations of Computational Agents (3rd ed.). Cambridge University Press. doi:10.1017/9781009258227. ISBN 9781009258197.
- ^ Russell, Stuart; Norvig, Peter (2020). Artificial Intelligence: A Modern Approach (4th ed.). Pearson. ISBN 9780134610993.
- ^ "Why agents are the next frontier of generative AI". McKinsey Digital. 24 July 2024. Retrieved 10 August 2024.
- ^ Ransbotham, Sam; Kiron, David; Gerbert, Philipp; Reeves, Martin (6 September 2017). "Reshaping Business With Artificial Intelligence". MIT Sloan Management Review. Archived from the original on 13 February 2024.
Ransbotham, Sam; Kiron, David; Gerbert, Philipp; Reeves, Martin (2017 年 9 月 6 日). "用人工智能重塑商业". 麻省理工学院斯隆管理评论. 存档于 2024 年 2 月 13 日的原文. - ^ Sun, Yuran; Zhao, Xilei; Lovreglio, Ruggiero; Kuligowski, Erica (1 January 2024), Naser, M. Z. (ed.), "8 – AI for large-scale evacuation modeling: promises and challenges", Interpretable Machine Learning for the Analysis, Design, Assessment, and Informed Decision Making for Civil Infrastructure, Woodhead Publishing Series in Civil and Structural Engineering, Woodhead Publishing, pp. 185–204, ISBN 978-0-12-824073-1, retrieved 28 June 2024.
孙宇然; 赵希雷; 洛夫雷利奥, 鲁杰罗; 库利戈夫斯基, 艾丽卡 (2024 年 1 月 1 日), 纳塞尔, M. Z. (编), "8 – 大规模疏散建模中的人工智能:承诺与挑战", 用于分析、设计、评估和基于信息的决策的可解释机器学习在民用基础设施中的应用, Woodhead 出版系列在土木与结构工程, Woodhead 出版, 页码 185–204, ISBN 978-0-12-824073-1, 检索于 28 June 2024 - ^ Gomaa, Islam; Adelzadeh, Masoud; Gwynne, Steven; Spencer, Bruce; Ko, Yoon; Bénichou, Noureddine; Ma, Chunyun; Elsagan, Nour; Duong, Dana; Zalok, Ehab; Kinateder, Max (1 November 2021). "A Framework for Intelligent Fire Detection and Evacuation System". Fire Technology. 57 (6): 3179–3185. doi:10.1007/s10694-021-01157-3. ISSN 1572-8099.
Gomaa, Islam; Adelzadeh, Masoud; Gwynne, Steven; Spencer, Bruce; Ko, Yoon; Bénichou, Noureddine; Ma, Chunyun; Elsagan, Nour; Duong, Dana; Zalok, Ehab; Kinateder, Max (2021 年 11 月 1 日). "智能火灾检测与疏散系统框架". 火灾技术. 57 (6): 3179–3185. doi:10.1007/s10694-021-01157-3. ISSN1572-8099. - ^ Zhao, Xilei; Lovreglio, Ruggiero; Nilsson, Daniel (1 May 2020). "Modelling and interpreting pre-evacuation decision-making using machine learning". Automation in Construction. 113: 103140. doi:10.1016/j.autcon.2020.103140. ISSN 0926-5805.
赵, 西雷; 洛夫雷吉奥, 鲁杰罗; 尼尔森, 丹尼尔 (2020 年 5 月 1 日). "使用机器学习建模和解释预疏散决策". 建筑自动化. 113: 103140. doi:10.1016/j.autcon.2020.103140. 国际标准连续出版物号0926-5805. - ^ Simonite (2016).
西蒙尼特 (2016)。 - ^ Russell & Norvig (2021), p. 987.
拉塞尔与诺维格 (2021), 第 987 页。 - ^ Laskowski (2023).
Laskowski (2023)。 - ^ GAO (2022). 高(2022)。
- ^ Valinsky (2019).
瓦林斯基 (2019) - ^ Russell & Norvig (2021), p. 991.
拉塞尔与诺维格 (2021), 第 991 页。 - ^ Russell & Norvig (2021), pp. 991–992.
拉塞尔与诺维格 (2021),第 991–992 页。 - ^ Christian (2020), p. 63.
基督教 (2020), 第 63 页。 - ^ Vincent (2022).
文森特 (2022)。 - ^ Kopel, Matthew. "Copyright Services: Fair Use". Cornell University Library. Retrieved 26 April 2024.
科佩尔,马修。"版权服务:合理使用"。康奈尔大学图书馆。检索于 26 April 2024。 - ^ Burgess, Matt. "How to Stop Your Data From Being Used to Train AI". Wired. ISSN 1059-1028. Retrieved 26 April 2024.
伯吉斯,马特。"如何阻止您的数据被用于训练人工智能"。连线。国际标准连续出版物号1059-1028。检索于 26 April 2024。 - ^ Reisner (2023).
Reisner (2023)。 - ^ Alter & Harris (2023).
Alter & Harris (2023)。 - ^ "Getting the Innovation Ecosystem Ready for AI. An IP policy toolkit" (PDF). WIPO.
"为人工智能准备创新生态系统。知识产权政策工具包"(PDF)。世界知识产权组织。 - ^ Hammond, George (27 December 2023). "Big Tech is spending more than VC firms on AI startups". Ars Technica. Archived from the original on 10 January 2024.
汉蒙,乔治(2023 年 12 月 27 日)。"大型科技公司在人工智能初创企业上的支出超过风险投资公司"。阿斯技术。存档于 2024 年 1 月 10 日的原文。 - ^ Wong, Matteo (24 October 2023). "The Future of AI Is GOMA". The Atlantic. Archived from the original on 5 January 2024.
黄,马特奥(2023 年 10 月 24 日)。“人工智能的未来是 GOMA”。大西洋月刊。存档于 2024 年 1 月 5 日的原文。 - ^ "Big tech and the pursuit of AI dominance". The Economist. 26 March 2023. Archived from the original on 29 December 2023.
"大型科技公司与人工智能主导权的追求"。经济学人。2023 年 3 月 26 日。存档于 2023 年 12 月 29 日的原文。 - ^ Fung, Brian (19 December 2023). "Where the battle to dominate AI may be won". CNN Business. Archived from the original on 13 January 2024.
冯,布赖恩(2023 年 12 月 19 日)。"人工智能主导权之战可能会在这里获胜"。CNN 商业。存档于 2024 年 1 月 13 日的原文。 - ^ Metz, Cade (5 July 2023). "In the Age of A.I., Tech's Little Guys Need Big Friends". The New York Times.
梅茨,凯德(2023 年 7 月 5 日)。"在人工智能时代,科技的小角色需要大朋友"。纽约时报。 - ^ "Electricity 2024 – Analysis". IEA. 24 January 2024. Retrieved 13 July 2024.
- ^ Calvert, Brian (28 March 2024). "AI already uses as much energy as a small country. It's only the beginning". Vox. New York, New York.
- ^ Halper, Evan; O'Donovan, Caroline (21 June 2024). "AI is exhausting the power grid. Tech firms are seeking a miracle solution". Washington Post.
哈普尔,埃文;奥多诺万,卡罗琳(2024 年 6 月 21 日)。"人工智能正在耗尽电网。科技公司正在寻求奇迹解决方案"。华盛顿邮报。 - ^ Davenport, Carly. "AI Data Centers and the Coming YS Power Demand Surge" (PDF). Goldman Sachs.
达文波特,卡莉。"人工智能数据中心与即将到来的 YS 电力需求激增"(PDF)。高盛。 - ^ Ryan, Carol (12 April 2024). "Energy-Guzzling AI Is Also the Future of Energy Savings". Wall Street Journal. Dow Jones.
瑞安,卡罗尔(2024 年 4 月 12 日)。"耗能的人工智能也是节能的未来"。华尔街日报。道琼斯。 - ^ Hiller, Jennifer (1 July 2024). "Tech Industry Wants to Lock Up Nuclear Power for AI". Wall Street Journal. Dow Jones.
希勒,詹妮弗(2024 年 7 月 1 日)。"科技行业希望将核能锁定用于人工智能"。华尔街日报。道琼斯。 - ^ Nicas (2018).
尼卡斯 (2018) - ^ Rainie, Lee; Keeter, Scott; Perrin, Andrew (22 July 2019). "Trust and Distrust in America". Pew Research Center. Archived from the original on 22 February 2024.
Rainie, Lee; Keeter, Scott; Perrin, Andrew (2019 年 7 月 22 日). "美国的信任与不信任". 皮尤研究中心. 存档于 2024 年 2 月 22 日的原文. - ^ Williams (2023).
威廉姆斯 (2023)。 - ^ Taylor & Hern (2023).
泰勒与赫恩 (2023) - ^ Jump up to: a b Samuel, Sigal (19 April 2022). "Why it's so damn hard to make AI fair and unbiased". Vox. Retrieved 24 July 2024.
塞缪尔,西戈尔(2022 年 4 月 19 日)。"为什么让人工智能公平和无偏见如此困难"。Vox。检索于 24 July 2024。 - ^ Jump up to: a b Rose (2023). 玫瑰 (2023)
- ^ CNA (2019). CNA (2019)。
- ^ Goffrey (2008), p. 17.
戈弗雷 (2008), 第 17 页。 - ^ Berdahl et al. (2023); Goffrey (2008, p. 17); Rose (2023); Russell & Norvig (2021, p. 995)
伯达尔等人 (2023); 戈弗雷 (2008, 第 17 页); 罗斯 (2023); 拉塞尔与诺维格 (2021, 第 995 页) - ^ Christian (2020), p. 25.
基督教 (2020), 第 25 页。 - ^ Jump up to: a b Russell & Norvig (2021), p. 995.
拉塞尔与诺维格 (2021), 第 995 页。 - ^ Grant & Hill (2023).
格兰特与希尔 (2023) - ^ Larson & Angwin (2016).
拉尔森与安格温 (2016) - ^ Christian (2020), p. 67–70.
基督教 (2020), 第 67–70 页。 - ^ Christian (2020, pp. 67–70); Russell & Norvig (2021, pp. 993–994)
Christian (2020, pp. 67–70); Russell & Norvig (2021, pp. 993–994) - ^ Russell & Norvig (2021, p. 995); Lipartito (2011, p. 36); Goodman & Flaxman (2017, p. 6); Christian (2020, pp. 39–40, 65)
拉塞尔与诺维格 (2021, 第 995 页); 利帕尔蒂托 (2011, 第 36 页); 古德曼与弗拉克斯曼 (2017, 第 6 页); 克里斯蒂安 (2020, 第 39–40 页, 65 页) - ^ Quoted in Christian (2020, p. 65).
引用于 Christian (2020, 第 65 页). - ^ Russell & Norvig (2021, p. 994); Christian (2020, pp. 40, 80–81)
拉塞尔与诺维格 (2021, 第 994 页); 克里斯蒂安 (2020, 第 40, 80–81 页) - ^ Quoted in Christian (2020, p. 80)
引用于 Christian (2020, 第 80 页) - ^ Dockrill (2022).
Dockrill (2022)。 - ^ Sample (2017).
示例 (2017)。 - ^ "Black Box AI". 16 June 2023.
"黑箱人工智能"。2023 年 6 月 16 日。 - ^ Christian (2020), p. 110.
基督教 (2020), 第 110 页。 - ^ Christian (2020), pp. 88–91.
基督教 (2020),第 88–91 页。 - ^ Christian (2020, p. 83); Russell & Norvig (2021, p. 997)
Christian (2020, 第 83 页); Russell & Norvig (2021, 第 997 页) - ^ Christian (2020), p. 91.
基督教 (2020), 第 91 页。 - ^ Christian (2020), p. 83.
基督教 (2020),第 83 页。 - ^ Verma (2021).
Verma (2021)。 - ^ Rothman (2020).
罗斯曼 (2020)。 - ^ Christian (2020), pp. 105–108.
基督教 (2020), pp. 105–108. - ^ Christian (2020), pp. 108–112.
- ^ Ropek, Lucas (21 May 2024). "New Anthropic Research Sheds Light on AI's 'Black Box'". Gizmodo. Retrieved 23 May 2024.
- ^ Russell & Norvig (2021), p. 989.
- ^ Jump up to: a b Russell & Norvig (2021), pp. 987–990.
拉塞尔与诺维格 (2021),第 987–990 页。 - ^ Russell & Norvig (2021), p. 988.
拉塞尔与诺维格 (2021), 第 988 页。 - ^ Robitzski (2018); Sainato (2015)
- ^ Harari (2018).
哈拉里 (2018) - ^ Buckley, Chris; Mozur, Paul (22 May 2019). "How China Uses High-Tech Surveillance to Subdue Minorities". The New York Times.
- ^ "Security lapse exposed a Chinese smart city surveillance system". 3 May 2019. Archived from the original on 7 March 2021. Retrieved 14 September 2020.
- ^ Urbina et al. (2022).
- ^ Jump up to: a b E. McGaughey, 'Will Robots Automate Your Job Away? Full Employment, Basic Income, and Economic Democracy' (2022), 51(3) Industrial Law Journal 511–559. Archived 27 May 2023 at the Wayback Machine.
E. McGaughey, '机器人会让你的工作消失吗?充分就业、基本收入与经济民主' (2022), 51(3) 工业法杂志 511–559. 存档 2023 年 5 月 27 日在时光机上。 - ^ Ford & Colvin (2015);McGaughey (2022)
福特与科尔文 (2015);麦高赫 (2022) - ^ IGM Chicago (2017).
IGM 芝加哥 (2017) - ^ Arntz, Gregory & Zierahn (2016), p. 33.
阿恩茨,格雷戈里 & 齐拉恩 (2016),第 33 页。 - ^ Lohr (2017); Frey & Osborne (2017); Arntz, Gregory & Zierahn (2016, p. 33)
Lohr (2017); Frey & Osborne (2017); Arntz, Gregory & Zierahn (2016, 第 33 页) - ^ Zhou, Viola (11 April 2023). "AI is already taking video game illustrators' jobs in China". Rest of World. Retrieved 17 August 2023.
周,维奥拉(2023 年 4 月 11 日)。"人工智能已经在中国取代了视频游戏插画师的工作"。世界其他地区。检索于 17 August 2023。 - ^ Carter, Justin (11 April 2023). "China's game art industry reportedly decimated by growing AI use". Game Developer. Retrieved 17 August 2023.
卡特,贾斯廷(2023 年 4 月 11 日)。"中国的游戏艺术产业 reportedly 被日益增长的人工智能使用摧毁"。游戏开发者。检索于 17 August 2023。 - ^ Morgenstern (2015).
莫根斯特恩 (2015)。 - ^ Mahdawi (2017); Thompson (2014)
- ^ Tarnoff, Ben (4 August 2023). "Lessons from Eliza". The Guardian Weekly. pp. 34–39.
塔诺夫,本(2023 年 8 月 4 日)。“来自伊丽莎的教训”。卫报周刊。第 34–39 页。 - ^ Cellan-Jones (2014).
- ^ Russell & Norvig 2021, p. 1001.
- ^ Bostrom (2014).
- ^ Russell (2019).
- ^ Bostrom (2014); Müller & Bostrom (2014); Bostrom (2015).
博斯特罗姆 (2014); 穆勒与博斯特罗姆 (2014); 博斯特罗姆 (2015)。 - ^ Harari (2023).
哈拉里 (2023)。 - ^ Müller & Bostrom (2014).
穆勒与博斯特罗姆 (2014) - ^
Leaders' concerns about the existential risks of AI around 2015:
2015 年左右,领导者对人工智能存在风险的担忧: - ^ ""Godfather of artificial intelligence" talks impact and potential of new AI". CBS News. 25 March 2023. Archived from the original on 28 March 2023. Retrieved 28 March 2023.
""人工智能教父"谈论新人工智能的影响和潜力"。 CBS 新闻。2023 年 3 月 25 日。 存档于 2023 年 3 月 28 日的原文。检索于 28 March 2023。 - ^ Pittis, Don (4 May 2023). "Canadian artificial intelligence leader Geoffrey Hinton piles on fears of computer takeover". CBC.
皮蒂斯,唐(2023 年 5 月 4 日)。"加拿大人工智能领袖杰弗里·辛顿加剧了对计算机接管的担忧"。CBC。 - ^ "'50–50 chance' that AI outsmarts humanity, Geoffrey Hinton says". Bloomberg BNN. 14 June 2024. Retrieved 6 July 2024.
"'50–50 的机会'人工智能超越人类,杰弗里·辛顿说"。彭博 BNN。2024 年 6 月 14 日。检索于 6 July 2024。 - ^ Valance (2023).
瓦朗斯 (2023) - ^ Taylor, Josh (7 May 2023). "Rise of artificial intelligence is inevitable but should not be feared, 'father of AI' says". The Guardian. Retrieved 26 May 2023.
泰勒,乔希(2023 年 5 月 7 日)。"人工智能的崛起是不可避免的,但不应被恐惧,'人工智能之父'表示"。卫报。检索于 26 May 2023 年。 - ^ Colton, Emma (7 May 2023). "'Father of AI' says tech fears misplaced: 'You cannot stop it'". Fox News. Retrieved 26 May 2023.
科尔顿,艾玛(2023 年 5 月 7 日)。"'人工智能之父'表示技术恐惧是错位的:'你无法阻止它'"。福克斯新闻。检索于 26 May 2023。 - ^ Jones, Hessie (23 May 2023). "Juergen Schmidhuber, Renowned 'Father Of Modern AI,' Says His Life's Work Won't Lead To Dystopia". Forbes. Retrieved 26 May 2023.
琼斯,赫西(2023 年 5 月 23 日)。"尤尔根·施密德胡伯,著名的“现代人工智能之父”,表示他的毕生工作不会导致反乌托邦"。福布斯。检索于 26 May 2023。 - ^ McMorrow, Ryan (19 December 2023). "Andrew Ng: 'Do we think the world is better off with more or less intelligence?'". Financial Times. Retrieved 30 December 2023.
麦克莫罗,瑞安(2023 年 12 月 19 日)。"安德鲁·吴:'我们认为世界是更好还是更糟,智力更多还是更少?'"。金融时报. 于 30 December 2023 年检索。 - ^ Levy, Steven (22 December 2023). "How Not to Be Stupid About AI, With Yann LeCun". Wired. Retrieved 30 December 2023.
莱维,史蒂芬(2023 年 12 月 22 日)。"如何不对人工智能愚蠢,雅恩·勒昆"。连线。检索于 30 December 2023。 - ^
Arguments that AI is not an imminent risk:
AI 并不是一个迫在眉睫的风险的论点: - ^ Jump up to: a b Christian (2020), pp. 67, 73.
克里斯蒂安 (2020),第 67 页,第 73 页。 - ^ Yudkowsky (2008).
尤德科夫斯基 (2008) - ^ Jump up to: a b Anderson & Anderson (2011).
安德森与安德森 (2011) - ^ AAAI (2014). AAAI (2014)。
- ^ Wallach (2010).
- ^ Russell (2019), p. 173.
- ^ Stewart, Ashley; Melton, Monica. "Hugging Face CEO says he's focused on building a 'sustainable model' for the $4.5 billion open-source-AI startup". Business Insider. Retrieved 14 April 2024.
- ^ Wiggers, Kyle (9 April 2024). "Google open sources tools to support AI model development". TechCrunch. Retrieved 14 April 2024.
威格斯,凯尔(2024 年 4 月 9 日)。"谷歌开源工具以支持 AI 模型开发"。科技 Crunch。检索于 14 April 2024。 - ^ Heaven, Will Douglas (12 May 2023). "The open-source AI boom is built on Big Tech's handouts. How long will it last?". MIT Technology Review. Retrieved 14 April 2024.
- ^ Brodsky, Sascha (19 December 2023). "Mistral AI's New Language Model Aims for Open Source Supremacy". AI Business.
布罗茨基,萨沙(2023 年 12 月 19 日)。"Mistral AI 的新语言模型旨在实现开源霸权"。AI 商业。 - ^ Edwards, Benj (22 February 2024). "Stability announces Stable Diffusion 3, a next-gen AI image generator". Ars Technica. Retrieved 14 April 2024.
爱德华兹,班杰(2024 年 2 月 22 日)。"Stability 宣布 Stable Diffusion 3,一款下一代 AI 图像生成器"。Ars Technica。检索于 14 April 2024。 - ^ Marshall, Matt (29 January 2024). "How enterprises are using open source LLMs: 16 examples". VentureBeat.
马歇尔,马特(2024 年 1 月 29 日)。"企业如何使用开源 LLMs: 16 个例子"。VentureBeat。 - ^ Piper, Kelsey (2 February 2024). "Should we make our most powerful AI models open source to all?". Vox. Retrieved 14 April 2024.
派珀,凯尔西(2024 年 2 月 2 日)。"我们是否应该将我们最强大的人工智能模型开放源代码给所有人?"。Vox。检索于 14 April 2024 年。 - ^ Alan Turing Institute (2019). "Understanding artificial intelligence ethics and safety" (PDF).
艾伦·图灵研究所 (2019). "理解人工智能伦理与安全"(PDF)。 - ^ Alan Turing Institute (2023). "AI Ethics and Governance in Practice" (PDF).
艾伦·图灵研究所 (2023). "人工智能伦理与治理实践"(PDF)。 - ^ Floridi, Luciano; Cowls, Josh (23 June 2019). "A Unified Framework of Five Principles for AI in Society". Harvard Data Science Review. 1 (1). doi:10.1162/99608f92.8cd550d1. S2CID 198775713.
Floridi, Luciano; Cowls, Josh (2019 年 6 月 23 日). "社会中人工智能的五项原则统一框架". 哈佛数据科学评论. 1 (1). doi:10.1162/99608f92.8cd550d1. S2CID198775713. - ^ Buruk, Banu; Ekmekci, Perihan Elif; Arda, Berna (1 September 2020). "A critical perspective on guidelines for responsible and trustworthy artificial intelligence". Medicine, Health Care and Philosophy. 23 (3): 387–399. doi:10.1007/s11019-020-09948-1. ISSN 1572-8633. PMID 32236794. S2CID 214766800.
Buruk, Banu; Ekmekci, Perihan Elif; Arda, Berna (2020 年 9 月 1 日). "对负责任和可信赖的人工智能指南的批判性视角". 医学、医疗保健与哲学. 23 (3): 387–399. doi:10.1007/s11019-020-09948-1. ISSN1572-8633. PMID32236794. S2CID214766800. - ^ Kamila, Manoj Kumar; Jasrotia, Sahil Singh (1 January 2023). "Ethical issues in the development of artificial intelligence: recognizing the risks". International Journal of Ethics and Systems. ahead-of-print (ahead-of-print). doi:10.1108/IJOES-05-2023-0107. ISSN 2514-9369. S2CID 259614124.
卡米拉,马诺杰·库马尔;贾斯罗蒂亚,萨希尔·辛格(2023 年 1 月 1 日)。 "人工智能发展中的伦理问题:识别风险"。 国际伦理与系统期刊。提前出版(提前出版)。 doi:10.1108/IJOES-05-2023-0107。 ISSN2514-9369。 S2CID259614124。 - ^ "AI Safety Institute releases new AI safety evaluations platform". UK Government. 10 May 2024. Retrieved 14 May 2024.
"人工智能安全研究所发布新的人工智能安全评估平台"。英国政府。2024 年 5 月 10 日。检索于 14 May 2024。 - ^
Regulation of AI to mitigate risks:
人工智能的监管以降低风险: - ^ Law Library of Congress (U.S.). Global Legal Research Directorate (2019).
美国国会法库。全球法律研究局(2019)。 - ^ Jump up to: a b Vincent (2023).
文森特 (2023)。 - ^ Stanford University (2023).
斯坦福大学 (2023)。 - ^ Jump up to: a b c d UNESCO (2021).
联合国教科文组织 (2021) - ^ Kissinger (2021).
- ^ Altman, Brockman & Sutskever (2023).
- ^ VOA News (25 October 2023). "UN Announces Advisory Body on Artificial Intelligence".
VOA 新闻(2023 年 10 月 25 日)。"联合国宣布人工智能咨询机构"。 - ^ "Council of Europe opens first ever global treaty on AI for signature". Council of Europe. 5 September 2024. Retrieved 17 September 2024.
"欧洲委员会首次全球人工智能条约开放签署"。欧洲委员会。2024 年 9 月 5 日。检索于 17 September 2024。 - ^ Edwards (2023).
爱德华兹 (2023)。 - ^ Kasperowicz (2023).
Kasperowicz (2023)。 - ^ Fox News (2023).
福克斯新闻 (2023) - ^ Milmo, Dan (3 November 2023). "Hope or Horror? The great AI debate dividing its pioneers". The Guardian Weekly. pp. 10–12.
米尔莫,丹(2023 年 11 月 3 日)。"希望还是恐惧?伟大的人工智能辩论分裂了它的先驱们"。卫报周刊。第 10–12 页。 - ^ "The Bletchley Declaration by Countries Attending the AI Safety Summit, 1–2 November 2023". GOV.UK. 1 November 2023. Archived from the original on 1 November 2023. Retrieved 2 November 2023.
"2023 年 11 月 1 日至 2 日参加人工智能安全峰会的国家布莱奇利宣言"。英国政府官网。2023 年 11 月 1 日。从原文存档于 2023 年 11 月 1 日。检索于 2 November 2023。 - ^ "Countries agree to safe and responsible development of frontier AI in landmark Bletchley Declaration". GOV.UK (Press release). Archived from the original on 1 November 2023. Retrieved 1 November 2023.
“各国同意在具有里程碑意义的布莱奇利宣言中安全和负责任地发展前沿人工智能”。英国政府官网(新闻稿)。已归档,原文于 2023 年 11 月 1 日存档。检索于 1 November 2023。 - ^ "Second global AI summit secures safety commitments from companies". Reuters. 21 May 2024. Retrieved 23 May 2024.
"第二届全球人工智能峰会获得企业安全承诺"。路透社。2024 年 5 月 21 日。检索于 23 May 2024。 - ^ "Frontier AI Safety Commitments, AI Seoul Summit 2024". gov.uk. 21 May 2024. Archived from the original on 23 May 2024. Retrieved 23 May 2024.
"前沿人工智能安全承诺,2024 年首尔人工智能峰会"。gov.uk。2024 年 5 月 21 日。从原文存档于 2024 年 5 月 23 日。检索于 23 May 2024。 - ^ Jump up to: a b Russell & Norvig 2021, p. 9.
拉塞尔与诺维格 2021,第 9 页。 - ^ Jump up to: a b c Copeland, J., ed. (2004). The Essential Turing: the ideas that gave birth to the computer age. Oxford, England: Clarendon Press. ISBN 0-19-825079-7.
科佩兰, J., 编. (2004). 图灵的精髓:孕育计算机时代的思想. 英国牛津: 克拉伦登出版社. 国际标准书号0-19-825079-7. - ^ "Google books ngram".
"谷歌图书 n-gram" - ^
AI's immediate precursors:
AI 的直接前身:- McCorduck (2004, pp. 51–107)
麦考杜克 (2004, 第 51–107 页) - Crevier (1993, pp. 27–32)
Crevier (1993, 第 27–32 页) - Russell & Norvig (2021, pp. 8–17)
拉塞尔与诺维格 (2021, 第 8–17 页) - Moravec (1988, p. 3)
莫拉维克 (1988, 第 3 页)
- McCorduck (2004, pp. 51–107)
- ^ Jump up to: a b
Turing's original publication of the Turing test in "Computing machinery and intelligence":
图灵在《计算机与智能》中首次发表的图灵测试: Historical influence and philosophical implications:
历史影响与哲学意义:- Haugeland (1985, pp. 6–9)
霍格兰 (1985, pp. 6–9) - Crevier (1993, p. 24)
Crevier (1993, 第 24 页) - McCorduck (2004, pp. 70–71)
麦考杜克 (2004, 第 70–71 页) - Russell & Norvig (2021, pp. 2, 984)
拉塞尔与诺维格 (2021, 第 2 页, 984)
- Haugeland (1985, pp. 6–9)
- ^ Crevier (1993), pp. 47–49.
Crevier (1993),第 47–49 页。 - ^ Russell & Norvig (2003), p. 17.
拉塞尔与诺维格 (2003),第 17 页。 - ^ Russell & Norvig (2003), p. 18.
拉塞尔与诺维格 (2003),第 18 页。 - ^ Newquist (1994), pp. 86–86.
Newquist (1994),第 86–86 页。 - ^
Simon (1965, p. 96) quoted in Crevier (1993, p. 109)
西蒙 (1965, 第 96 页) 引用自 克雷维耶 (1993, 第 109 页) - ^
Minsky (1967, p. 2) quoted in Crevier (1993, p. 109)
明斯基 (1967, 第 2 页) 引用自 克雷维耶 (1993, 第 109 页) - ^ Russell & Norvig (2021), p. 21.
拉塞尔与诺维格 (2021), 第 21 页。 - ^ Lighthill (1973).
- ^ NRC 1999, pp. 212–213.
- ^ Russell & Norvig (2021), p. 22.
- ^
Expert systems:
- Russell & Norvig (2021, pp. 23, 292)
- Luger & Stubblefield (2004, pp. 227–331)
- Nilsson (1998, chpt. 17.4)
- McCorduck (2004, pp. 327–335, 434–435)
- Crevier (1993, pp. 145–162, 197–203)
- Newquist (1994, pp. 155–183)
- ^ Russell & Norvig (2021), p. 24.
- ^ Nilsson (1998), p. 7.
- ^ McCorduck (2004), pp. 454–462.
- ^ Moravec (1988).
- ^ Jump up to: a b Brooks (1990).
- ^
Developmental robotics:
发展机器人技术: - ^ Russell & Norvig (2021), p. 25.
- ^
- Crevier (1993, pp. 214–215)
- Russell & Norvig (2021, pp. 24, 26)
- ^ Russell & Norvig (2021), p. 26.
- ^
Formal and narrow methods adopted in the 1990s:
正式 和 狭窄 方法在 1990 年代采用:- Russell & Norvig (2021, pp. 24–26)
拉塞尔与诺维格 (2021, pp. 24–26) - McCorduck (2004, pp. 486–487)
麦考杜克 (2004, pp. 486–487)
- Russell & Norvig (2021, pp. 24–26)
- ^
AI widely used in the late 1990s:
人工智能在 1990 年代末广泛应用- Kurzweil (2005, p. 265)
库兹韦尔 (2005, 第 265 页) - NRC (1999, pp. 216–222)
NRC (1999, pp. 216–222) - Newquist (1994, pp. 189–201)
Newquist (1994, 第 189–201 页)
- Kurzweil (2005, p. 265)
- ^ Wong (2023). 黄 (2023)。
- ^
Moore's Law and AI:
摩尔定律 和人工智能:- Russell & Norvig (2021, pp. 14, 27)
拉塞尔与诺维格 (2021, 页码 14, 27)
- Russell & Norvig (2021, pp. 14, 27)
- ^ Jump up to: a b c Clark (2015b).
克拉克 (2015b)。 - ^
Big data:
大数据:
- Russell & Norvig (2021, p. 26)
拉塞尔与诺维格 (2021, 第 26 页)
- Russell & Norvig (2021, p. 26)
- ^ Sagar, Ram (3 June 2020). "OpenAI Releases GPT-3, The Largest Model So Far". Analytics India Magazine. Archived from the original on 4 August 2020. Retrieved 15 March 2023.
- ^ DiFeliciantonio (2023).
- ^ Goswami (2023).
- ^ Turing (1950), p. 1.
- ^ Turing (1950), Under "The Argument from Consciousness".
- ^ Kirk-Giannini, Cameron Domenico; Goldstein, Simon (16 October 2023). "AI is closer than ever to passing the Turing test for 'intelligence'. What happens when it does?". The Conversation. Retrieved 17 August 2024.
- ^ Russell & Norvig (2021), p. 3.
- ^ Maker (2006).
- ^ McCarthy (1999).
- ^ Minsky (1986).
- ^ "What Is Artificial Intelligence (AI)?". Google Cloud Platform. Archived from the original on 31 July 2023. Retrieved 16 October 2023.
- ^ "One of the Biggest Problems in Regulating AI Is Agreeing on a Definition". carnegieendowment.org. Retrieved 31 July 2024.
- ^ "AI or BS? How to tell if a marketing tool really uses artificial intelligence". The Drum. Retrieved 31 July 2024.
- ^ Nilsson (1983), p. 10.
- ^ Haugeland (1985), pp. 112–117.
- ^
Physical symbol system hypothesis:
- Newell & Simon (1976, p. 116)
- McCorduck (2004, p. 153)
- Russell & Norvig (2021, p. 19)
- ^
Moravec's paradox:
- Moravec (1988, pp. 15–16)
- Minsky (1986, p. 29)
- Pinker (2007, pp. 190–191)
- ^
Dreyfus' critique of AI:
Historical significance and philosophical implications:
- Crevier (1993, pp. 120–132)
- McCorduck (2004, pp. 211–239)
- Russell & Norvig (2021, pp. 981–982)
- Fearn (2007, Chpt. 3)
- ^ Crevier (1993), p. 125.
- ^ Langley (2011).
- ^ Katz (2012).
- ^
Neats vs. scruffies, the historic debate:
- McCorduck (2004, pp. 421–424, 486–489)
- Crevier (1993, p. 168)
- Nilsson (1983, pp. 10–11)
- Russell & Norvig (2021, p. 24)
A classic example of the "scruffy" approach to intelligence:
A modern example of neat AI and its aspirations in the 21st century:
- ^ Pennachin & Goertzel (2007).
- ^ Roberts (2016).
- ^ Russell & Norvig (2021), p. 986.
- ^ Chalmers (1995).
- ^ Dennett (1991).
- ^ Horst (2005).
- ^ Searle (1999).
- ^ Searle (1980), p. 1.
- ^ Russell & Norvig (2021), p. 9817.
- ^
Searle's Chinese room argument:
- Searle (1980). Searle's original presentation of the thought experiment.
- Searle (1999).
- Russell & Norvig (2021, p. 985)
- McCorduck (2004, pp. 443–445)
- Crevier (1993, pp. 269–271)
- ^ Leith, Sam (7 July 2022). "Nick Bostrom: How can we be certain a machine isn't conscious?". The Spectator. Retrieved 23 February 2024.
- ^ Thomson, Jonny (31 October 2022). "Why don't robots have rights?". Big Think. Retrieved 23 February 2024.
- ^ Kateman, Brian (24 July 2023). "AI Should Be Terrified of Humans". Time. Retrieved 23 February 2024.
- ^ Wong, Jeff (10 July 2023). "What leaders need to know about robot rights". Fast Company.
- ^ Hern, Alex (12 January 2017). "Give robots 'personhood' status, EU committee argues". The Guardian. ISSN 0261-3077. Retrieved 23 February 2024.
- ^ Dovey, Dana (14 April 2018). "Experts Don't Think Robots Should Have Rights". Newsweek. Retrieved 23 February 2024.
- ^ Cuddy, Alice (13 April 2018). "Robot rights violate human rights, experts warn EU". euronews. Retrieved 23 February 2024.
- ^
The intelligence explosion and technological singularity:
- Russell & Norvig (2021, pp. 1004–1005)
- Omohundro (2008)
- Kurzweil (2005)
I. J. Good's "intelligence explosion"
Vernor Vinge's "singularity"
- ^ Russell & Norvig (2021), p. 1005.
- ^
Transhumanism:
- Moravec (1988)
- Kurzweil (2005)
- Russell & Norvig (2021, p. 1005)
- ^
AI as evolution:
- Edward Fredkin is quoted in McCorduck (2004, p. 401)
- Butler (1863)
- Dyson (1998)
- ^
AI in myth:
- McCorduck (2004, pp. 4–5)
- ^ McCorduck (2004), pp. 340–400.
- ^ Buttazzo (2001).
- ^ Anderson (2008).
- ^ McCauley (2007).
- ^ Galvan (1997).
AI textbooks
The two most widely used textbooks in 2023 (see the Open Syllabus):
- Russell, Stuart J.; Norvig, Peter (2021). Artificial Intelligence: A Modern Approach (4th ed.). Hoboken: Pearson. ISBN 978-0134610993. LCCN 20190474.
- Rich, Elaine; Knight, Kevin; Nair, Shivashankar B (2010). Artificial Intelligence (3rd ed.). New Delhi: Tata McGraw Hill India. ISBN 978-0070087705.
These were four of the most widely used AI textbooks in 2008:
- Luger, George; Stubblefield, William (2004). Artificial Intelligence: Structures and Strategies for Complex Problem Solving (5th ed.). Benjamin/Cummings. ISBN 978-0-8053-4780-7. Archived from the original on 26 July 2020. Retrieved 17 December 2019.
- Nilsson, Nils (1998). Artificial Intelligence: A New Synthesis. Morgan Kaufmann. ISBN 978-1-55860-467-4. Archived from the original on 26 July 2020. Retrieved 18 November 2019.
- Russell, Stuart J.; Norvig, Peter (2003), Artificial Intelligence: A Modern Approach (2nd ed.), Upper Saddle River, New Jersey: Prentice Hall, ISBN 0-13-790395-2.
- Poole, David; Mackworth, Alan; Goebel, Randy (1998). Computational Intelligence: A Logical Approach. New York: Oxford University Press. ISBN 978-0-19-510270-3. Archived from the original on 26 July 2020. Retrieved 22 August 2020.
Later editions:
- Poole, David; Mackworth, Alan (2017). Artificial Intelligence: Foundations of Computational Agents (2nd ed.). Cambridge University Press. ISBN 978-1-107-19539-4. Archived from the original on 7 December 2017. Retrieved 6 December 2017.
Other textbooks:
- Ertel, Wolfgang (2017). Introduction to Artificial Intelligence (2nd ed.). ISBN 978-3319584867.
- Ciaramella, Alberto; Ciaramella, Marco (2024). Introduction to Artificial Intelligence: from data analysis to generative AI (1st ed.). ISBN 978-8894787603.
History of AI
- Crevier, Daniel (1993). AI: The Tumultuous Search for Artificial Intelligence. New York, NY: BasicBooks. ISBN 0-465-02997-3.
- McCorduck, Pamela (2004), Machines Who Think (2nd ed.), Natick, MA: A. K. Peters, Ltd., ISBN 1-56881-205-1.
- Newquist, H. P. (1994). The Brain Makers: Genius, Ego, And Greed In The Quest For Machines That Think. New York: Macmillan/SAMS. ISBN 978-0-672-30412-5.
Other sources
- AI & ML in Fusion
- AI & ML in Fusion, video lecture. Archived 2 July 2023 at the Wayback Machine.
- Alter, Alexandra; Harris, Elizabeth A. (20 September 2023), "Franzen, Grisham and Other Prominent Authors Sue OpenAI", The New York Times
- Altman, Sam; Brockman, Greg; Sutskever, Ilya (22 May 2023). "Governance of Superintelligence". openai.com. Archived from the original on 27 May 2023. Retrieved 27 May 2023.
- Anderson, Susan Leigh (2008). "Asimov's "three laws of robotics" and machine metaethics". AI & Society. 22 (4): 477–493. doi:10.1007/s00146-007-0094-5. S2CID 1809459.
- Anderson, Michael; Anderson, Susan Leigh (2011). Machine Ethics. Cambridge University Press.
- Arntz, Melanie; Gregory, Terry; Zierahn, Ulrich (2016), "The risk of automation for jobs in OECD countries: A comparative analysis", OECD Social, Employment, and Migration Working Papers 189
- Asada, M.; Hosoda, K.; Kuniyoshi, Y.; Ishiguro, H.; Inui, T.; Yoshikawa, Y.; Ogino, M.; Yoshida, C. (2009). "Cognitive developmental robotics: a survey". IEEE Transactions on Autonomous Mental Development. 1 (1): 12–34. doi:10.1109/tamd.2009.2021702. S2CID 10168773.
- "Ask the AI experts: What's driving today's progress in AI?". McKinsey & Company. Archived from the original on 13 April 2018. Retrieved 13 April 2018.
- Barfield, Woodrow; Pagallo, Ugo (2018). Research handbook on the law of artificial intelligence. Cheltenham, UK: Edward Elgar Publishing. ISBN 978-1-78643-904-8. OCLC 1039480085.
- Beal, J.; Winston, Patrick (2009), "The New Frontier of Human-Level Artificial Intelligence", IEEE Intelligent Systems, 24: 21–24, doi:10.1109/MIS.2009.75, hdl:1721.1/52357, S2CID 32437713
- Berdahl, Carl Thomas; Baker, Lawrence; Mann, Sean; Osoba, Osonde; Girosi, Federico (7 February 2023). "Strategies to Improve the Impact of Artificial Intelligence on Health Equity: Scoping Review". JMIR AI. 2: e42936. doi:10.2196/42936. ISSN 2817-1705. PMC 11041459. PMID 38875587. S2CID 256681439.
- Berlinski, David (2000). The Advent of the Algorithm. Harcourt Books. ISBN 978-0-15-601391-8. OCLC 46890682. Archived from the original on 26 July 2020. Retrieved 22 August 2020.
- Berryhill, Jamie; Heang, Kévin Kok; Clogher, Rob; McBride, Keegan (2019). Hello, World: Artificial Intelligence and its Use in the Public Sector (PDF). Paris: OECD Observatory of Public Sector Innovation. Archived (PDF) from the original on 20 December 2019. Retrieved 9 August 2020.
- Bertini, M; Del Bimbo, A; Torniai, C (2006). "Automatic annotation and semantic retrieval of video sequences using multimedia ontologies". MM '06 Proceedings of the 14th ACM international conference on Multimedia. 14th ACM international conference on Multimedia. Santa Barbara: ACM. pp. 679–682.
- Bostrom, Nick (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.
- Bostrom, Nick (2015). "What happens when our computers get smarter than we are?". TED (conference). Archived from the original on 25 July 2020. Retrieved 30 January 2020.
- Brooks, Rodney (10 November 2014). "artificial intelligence is a tool, not a threat". Archived from the original on 12 November 2014.
- Brooks, Rodney (1990). "Elephants Don't Play Chess" (PDF). Robotics and Autonomous Systems. 6 (1–2): 3–15. CiteSeerX 10.1.1.588.7539. doi:10.1016/S0921-8890(05)80025-9. Archived (PDF) from the original on 9 August 2007.
- Buiten, Miriam C (2019). "Towards Intelligent Regulation of Artificial Intelligence". European Journal of Risk Regulation. 10 (1): 41–59. doi:10.1017/err.2019.8. ISSN 1867-299X.
- Bushwick, Sophie (16 March 2023), "What the New GPT-4 AI Can Do", Scientific American
- Butler, Samuel (13 June 1863). "Darwin among the Machines". Letters to the Editor. The Press. Christchurch, New Zealand. Archived from the original on 19 September 2008. Retrieved 16 October 2014 – via Victoria University of Wellington.
- Buttazzo, G. (July 2001). "Artificial consciousness: Utopia or real possibility?". Computer. 34 (7): 24–30. doi:10.1109/2.933500.
- Cambria, Erik; White, Bebo (May 2014). "Jumping NLP Curves: A Review of Natural Language Processing Research [Review Article]". IEEE Computational Intelligence Magazine. 9 (2): 48–57. doi:10.1109/MCI.2014.2307227. S2CID 206451986.
- Cellan-Jones, Rory (2 December 2014). "Stephen Hawking warns artificial intelligence could end mankind". BBC News. Archived from the original on 30 October 2015. Retrieved 30 October 2015.
- Chalmers, David (1995). "Facing up to the problem of consciousness". Journal of Consciousness Studies. 2 (3): 200–219. Archived from the original on 8 March 2005. Retrieved 11 October 2018.
- Challa, Subhash; Moreland, Mark R.; Mušicki, Darko; Evans, Robin J. (2011). Fundamentals of Object Tracking. Cambridge University Press. doi:10.1017/CBO9780511975837. ISBN 978-0-521-87628-5.
- Christian, Brian (2020). The Alignment Problem: Machine learning and human values. W. W. Norton & Company. ISBN 978-0-393-86833-3. OCLC 1233266753.
- Ciresan, D.; Meier, U.; Schmidhuber, J. (2012). "Multi-column deep neural networks for image classification". 2012 IEEE Conference on Computer Vision and Pattern Recognition. pp. 3642–3649. arXiv:1202.2745. doi:10.1109/cvpr.2012.6248110. ISBN 978-1-4673-1228-8. S2CID 2161592.
- Clark, Jack (2015b). "Why 2015 Was a Breakthrough Year in Artificial Intelligence". Bloomberg.com. Archived from the original on 23 November 2016. Retrieved 23 November 2016.
- CNA (12 January 2019). "Commentary: Bad news. Artificial intelligence is biased". CNA. Archived from the original on 12 January 2019. Retrieved 19 June 2020.
- Cybenko, G. (1988). Continuous valued neural networks with two hidden layers are sufficient (Report). Department of Computer Science, Tufts University.
- Deng, L.; Yu, D. (2014). "Deep Learning: Methods and Applications" (PDF). Foundations and Trends in Signal Processing. 7 (3–4): 1–199. doi:10.1561/2000000039. Archived (PDF) from the original on 14 March 2016. Retrieved 18 October 2014.
- Dennett, Daniel (1991). Consciousness Explained. The Penguin Press. ISBN 978-0-7139-9037-9.
- DiFeliciantonio, Chase (3 April 2023). "AI has already changed the world. This report shows how". San Francisco Chronicle. Archived from the original on 19 June 2023. Retrieved 19 June 2023.
- Dickson, Ben (2 May 2022). "Machine learning: What is the transformer architecture?". TechTalks. Retrieved 22 November 2023.
- Dockrill, Peter (27 June 2022), "Robots With Flawed AI Make Sexist And Racist Decisions, Experiment Shows", Science Alert, archived from the original on 27 June 2022
- Domingos, Pedro (2015). The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World. Basic Books. ISBN 978-0465065707.
- Dreyfus, Hubert (1972). What Computers Can't Do. New York: MIT Press. ISBN 978-0-06-011082-6.
- Dreyfus, Hubert; Dreyfus, Stuart (1986). Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer. Oxford: Blackwell. ISBN 978-0-02-908060-3. Archived from the original on 26 July 2020. Retrieved 22 August 2020.
- Dyson, George (1998). Darwin among the Machines. Allan Lane Science. ISBN 978-0-7382-0030-9. Archived from the original on 26 July 2020. Retrieved 22 August 2020.
- Edelson, Edward (1991). The Nervous System. New York: Chelsea House. ISBN 978-0-7910-0464-7. Archived from the original on 26 July 2020. Retrieved 18 November 2019.
- Edwards, Benj (17 May 2023). "Poll: AI poses risk to humanity, according to majority of Americans". Ars Technica. Archived from the original on 19 June 2023. Retrieved 19 June 2023.
- Evans, Woody (2015). "Posthuman Rights: Dimensions of Transhuman Worlds". Teknokultura. 12 (2). doi:10.5209/rev_TK.2015.v12.n2.49072.
- Fearn, Nicholas (2007). The Latest Answers to the Oldest Questions: A Philosophical Adventure with the World's Greatest Thinkers. New York: Grove Press. ISBN 978-0-8021-1839-4.
- Ford, Martin; Colvin, Geoff (6 September 2015). "Will robots create more jobs than they destroy?". The Guardian. Archived from the original on 16 June 2018. Retrieved 13 January 2018.
- Fox News (2023). "Fox News Poll" (PDF). Fox News. Archived (PDF) from the original on 12 May 2023. Retrieved 19 June 2023.
- Frank, Michael (22 September 2023). "US Leadership in Artificial Intelligence Can Shape the 21st Century Global Order". The Diplomat. Retrieved 8 December 2023.
Instead, the United States has developed a new area of dominance that the rest of the world views with a mixture of awe, envy, and resentment: artificial intelligence... From AI models and research to cloud computing and venture capital, U.S. companies, universities, and research labs – and their affiliates in allied countries – appear to have an enormous lead in both developing cutting-edge AI and commercializing it. The value of U.S. venture capital investments in AI start-ups exceeds that of the rest of the world combined.
- Frey, Carl Benedikt; Osborne, Michael A (1 January 2017). "The future of employment: How susceptible are jobs to computerisation?". Technological Forecasting and Social Change. 114: 254–280. CiteSeerX 10.1.1.395.416. doi:10.1016/j.techfore.2016.08.019. ISSN 0040-1625.
- "From not working to neural networking". The Economist. 2016. Archived from the original on 31 December 2016. Retrieved 26 April 2018.
- Galvan, Jill (1 January 1997). "Entering the Posthuman Collective in Philip K. Dick's "Do Androids Dream of Electric Sheep?"". Science Fiction Studies. 24 (3): 413–429. JSTOR 4240644.
- Geist, Edward Moore (9 August 2015). "Is artificial intelligence really an existential threat to humanity?". Bulletin of the Atomic Scientists. Archived from the original on 30 October 2015. Retrieved 30 October 2015.
- Gertner, Jon (18 July 2023). "Wikipedia's Moment of Truth – Can the online encyclopedia help teach A.I. chatbots to get their facts right — without destroying itself in the process?". The New York Times. Archived from the original on 18 July 2023. Retrieved 19 July 2023.
- Gibbs, Samuel (27 October 2014). "Elon Musk: artificial intelligence is our biggest existential threat". The Guardian. Archived from the original on 30 October 2015. Retrieved 30 October 2015.
- Goffrey, Andrew (2008). "Algorithm". In Fuller, Matthew (ed.). Software studies: a lexicon. Cambridge, Mass.: MIT Press. pp. 15–20. ISBN 978-1-4356-4787-9.
- Goldman, Sharon (14 September 2022). "10 years later, deep learning 'revolution' rages on, say AI pioneers Hinton, LeCun and Li". VentureBeat. Retrieved 8 December 2023.
- Good, I. J. (1965), Speculations Concerning the First Ultraintelligent Machine
- Goodfellow, Ian; Bengio, Yoshua; Courville, Aaron (2016), Deep Learning, MIT Press, archived from the original on 16 April 2016, retrieved 12 November 2017
- Goodman, Bryce; Flaxman, Seth (2017). "EU regulations on algorithmic decision-making and a 'right to explanation'". AI Magazine. 38 (3): 50. arXiv:1606.08813. doi:10.1609/aimag.v38i3.2741. S2CID 7373959.
- Government Accountability Office (13 September 2022). Consumer Data: Increasing Use Poses Risks to Privacy. gao.gov (Report).
- Grant, Nico; Hill, Kashmir (22 May 2023). "Google's Photo App Still Can't Find Gorillas. And Neither Can Apple's". The New York Times.
- Goswami, Rohan (5 April 2023). "Here's where the A.I. jobs are". CNBC. Archived from the original on 19 June 2023. Retrieved 19 June 2023.
- Harari, Yuval Noah (October 2018). "Why Technology Favors Tyranny". The Atlantic. Archived from the original on 25 September 2021. Retrieved 23 September 2021.
- Harari, Yuval Noah (2023). "AI and the future of humanity". YouTube.
- Haugeland, John (1985). Artificial Intelligence: The Very Idea. Cambridge, Mass.: MIT Press. ISBN 978-0-262-08153-5.
- Henderson, Mark (24 April 2007). "Human rights for robots? We're getting carried away". The Times Online. London. Archived from the original on 31 May 2014. Retrieved 31 May 2014.
- Hinton, G.; Deng, L.; Yu, D.; Dahl, G.; Mohamed, A.; Jaitly, N.; Senior, A.; Vanhoucke, V.; Nguyen, P.; Sainath, T.; Kingsbury, B. (2012). "Deep Neural Networks for Acoustic Modeling in Speech Recognition – The shared views of four research groups". IEEE Signal Processing Magazine. 29 (6): 82–97. Bibcode:2012ISPM...29...82H. doi:10.1109/msp.2012.2205597. S2CID 206485943.
- Holley, Peter (28 January 2015). "Bill Gates on dangers of artificial intelligence: 'I don't understand why some people are not concerned'". The Washington Post. ISSN 0190-8286. Archived from the original on 30 October 2015. Retrieved 30 October 2015.
- Hornik, Kurt; Stinchcombe, Maxwell; White, Halbert (1989). Multilayer Feedforward Networks are Universal Approximators (PDF). Neural Networks. Vol. 2. Pergamon Press. pp. 359–366.
- Horst, Steven (2005). "The Computational Theory of Mind". The Stanford Encyclopedia of Philosophy. Archived from the original on 6 March 2016. Retrieved 7 March 2016.
- Howe, J. (November 1994). "Artificial Intelligence at Edinburgh University: a Perspective". Archived from the original on 15 May 2007. Retrieved 30 August 2007.
- IGM Chicago (30 June 2017). "Robots and Artificial Intelligence". www.igmchicago.org. Archived from the original on 1 May 2019. Retrieved 3 July 2019.
- Iphofen, Ron; Kritikos, Mihalis (3 January 2019). "Regulating artificial intelligence and robotics: ethics by design in a digital society". Contemporary Social Science. 16 (2): 170–184. doi:10.1080/21582041.2018.1563803. ISSN 2158-2041. S2CID 59298502.
- Jordan, M. I.; Mitchell, T. M. (16 July 2015). "Machine learning: Trends, perspectives, and prospects". Science. 349 (6245): 255–260. Bibcode:2015Sci...349..255J. doi:10.1126/science.aaa8415. PMID 26185243. S2CID 677218.
- Kahneman, Daniel (2011). Thinking, Fast and Slow. Macmillan. ISBN 978-1-4299-6935-2. Archived from the original on 15 March 2023. Retrieved 8 April 2012.
- Kahneman, Daniel; Slovic, D.; Tversky, Amos (1982). "Judgment under uncertainty: Heuristics and biases". Science. 185 (4157). New York: Cambridge University Press: 1124–1131. Bibcode:1974Sci...185.1124T. doi:10.1126/science.185.4157.1124. ISBN 978-0-521-28414-1. PMID 17835457. S2CID 143452957.
- Kasperowicz, Peter (1 May 2023). "Regulate AI? GOP much more skeptical than Dems that government can do it right: poll". Fox News. Archived from the original on 19 June 2023. Retrieved 19 June 2023.
- Katz, Yarden (1 November 2012). "Noam Chomsky on Where Artificial Intelligence Went Wrong". The Atlantic. Archived from the original on 28 February 2019. Retrieved 26 October 2014.
- "Kismet". MIT Artificial Intelligence Laboratory, Humanoid Robotics Group. Archived from the original on 17 October 2014. Retrieved 25 October 2014.
- Kissinger, Henry (1 November 2021). "The Challenge of Being Human in the Age of AI". The Wall Street Journal. Archived from the original on 4 November 2021. Retrieved 4 November 2021.
- Kobielus, James (27 November 2019). "GPUs Continue to Dominate the AI Accelerator Market for Now". InformationWeek. Archived from the original on 19 October 2021. Retrieved 11 June 2020.
- Kuperman, G. J.; Reichley, R. M.; Bailey, T. C. (1 July 2006). "Using Commercial Knowledge Bases for Clinical Decision Support: Opportunities, Hurdles, and Recommendations". Journal of the American Medical Informatics Association. 13 (4): 369–371. doi:10.1197/jamia.M2055. PMC 1513681. PMID 16622160.
- Kurzweil, Ray (2005). The Singularity is Near. Penguin Books. ISBN 978-0-670-03384-3.
- Langley, Pat (2011). "The changing science of machine learning". Machine Learning. 82 (3): 275–279. doi:10.1007/s10994-011-5242-y.
- Larson, Jeff; Angwin, Julia (23 May 2016). "How We Analyzed the COMPAS Recidivism Algorithm". ProPublica. Archived from the original on 29 April 2019. Retrieved 19 June 2020.
- Laskowski, Nicole (November 2023). "What is Artificial Intelligence and How Does AI Work? TechTarget". Enterprise AI. Retrieved 30 October 2023.
- Law Library of Congress (U.S.). Global Legal Research Directorate, issuing body. (2019). Regulation of artificial intelligence in selected jurisdictions. LCCN 2019668143. OCLC 1110727808.
- Lee, Timothy B. (22 August 2014). "Will artificial intelligence destroy humanity? Here are 5 reasons not to worry". Vox. Archived from the original on 30 October 2015. Retrieved 30 October 2015.
- Lenat, Douglas; Guha, R. V. (1989). Building Large Knowledge-Based Systems. Addison-Wesley. ISBN 978-0-201-51752-1.
- Lighthill, James (1973). "Artificial Intelligence: A General Survey". Artificial Intelligence: a paper symposium. Science Research Council.
- Lipartito, Kenneth (6 January 2011), The Narrative and the Algorithm: Genres of Credit Reporting from the Nineteenth Century to Today (PDF) (Unpublished manuscript), doi:10.2139/ssrn.1736283, S2CID 166742927, archived (PDF) from the original on 9 October 2022
- Lohr, Steve (2017). "Robots Will Take Jobs, but Not as Fast as Some Fear, New Report Says". The New York Times. Archived from the original on 14 January 2018. Retrieved 13 January 2018.
- Lungarella, M.; Metta, G.; Pfeifer, R.; Sandini, G. (2003). "Developmental robotics: a survey". Connection Science. 15 (4): 151–190. CiteSeerX 10.1.1.83.7615. doi:10.1080/09540090310001655110. S2CID 1452734.
- "Machine Ethics". aaai.org. Archived from the original on 29 November 2014.
- Madrigal, Alexis C. (27 February 2015). "The case against killer robots, from a guy actually working on artificial intelligence". Fusion.net. Archived from the original on 4 February 2016. Retrieved 31 January 2016.
- Mahdawi, Arwa (26 June 2017). "What jobs will still be around in 20 years? Read this to prepare your future". The Guardian. Archived from the original on 14 January 2018. Retrieved 13 January 2018.
- Maker, Meg Houston (2006), AI@50: AI Past, Present, Future, Dartmouth College, archived from the original on 8 October 2008, retrieved 16 October 2008
- Marmouyet, Françoise (15 December 2023). "Google's Gemini: is the new AI model really better than ChatGPT?". The Conversation. Retrieved 25 December 2023.
- Minsky, Marvin (1986), The Society of Mind, Simon and Schuster
- Maschafilm (2010). "Content: Plug & Pray Film – Artificial Intelligence – Robots". plugandpray-film.de. Archived from the original on 12 February 2016.
- McCarthy, John; Minsky, Marvin; Rochester, Nathan; Shannon, Claude (1955). "A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence". Archived from the original on 26 August 2007. Retrieved 30 August 2007.
- McCarthy, John (2007), "From Here to Human-Level AI", Artificial Intelligence: 171
- McCarthy, John (1999), What is AI?, archived from the original on 4 December 2022, retrieved 4 December 2022
- McCauley, Lee (2007). "AI armageddon and the three laws of robotics". Ethics and Information Technology. 9 (2): 153–164. CiteSeerX 10.1.1.85.8904. doi:10.1007/s10676-007-9138-2. S2CID 37272949.
- McGarry, Ken (1 December 2005). "A survey of interestingness measures for knowledge discovery". The Knowledge Engineering Review. 20 (1): 39–61. doi:10.1017/S0269888905000408. S2CID 14987656.
- McGaughey, E (2022), Will Robots Automate Your Job Away? Full Employment, Basic Income, and Economic Democracy, p. 51(3) Industrial Law Journal 511–559, doi:10.2139/ssrn.3044448, S2CID 219336439, SSRN 3044448, archived from the original on 31 January 2021, retrieved 27 May 2023
- Merkle, Daniel; Middendorf, Martin (2013). "Swarm Intelligence". In Burke, Edmund K.; Kendall, Graham (eds.). Search Methodologies: Introductory Tutorials in Optimization and Decision Support Techniques. Springer Science & Business Media. ISBN 978-1-4614-6940-7.
- Minsky, Marvin (1967), Computation: Finite and Infinite Machines, Englewood Cliffs, N.J.: Prentice-Hall
- Moravec, Hans (1988). Mind Children. Harvard University Press. ISBN 978-0-674-57616-2. Archived from the original on 26 July 2020. Retrieved 18 November 2019.
- Morgenstern, Michael (9 May 2015). "Automation and anxiety". The Economist. Archived from the original on 12 January 2018. Retrieved 13 January 2018.
- Müller, Vincent C.; Bostrom, Nick (2014). "Future Progress in Artificial Intelligence: A Poll Among Experts" (PDF). AI Matters. 1 (1): 9–11. doi:10.1145/2639475.2639478. S2CID 8510016. Archived (PDF) from the original on 15 January 2016.
- Neumann, Bernd; Möller, Ralf (January 2008). "On scene interpretation with description logics". Image and Vision Computing. 26 (1): 82–101. doi:10.1016/j.imavis.2007.08.013. S2CID 10767011.
- Nilsson, Nils (1995), "Eyes on the Prize", AI Magazine, vol. 16, pp. 9–17
- Newell, Allen; Simon, H. A. (1976). "Computer Science as Empirical Inquiry: Symbols and Search". Communications of the ACM. 19 (3): 113–126. doi:10.1145/360018.360022.
- Nicas, Jack (7 February 2018). "How YouTube Drives People to the Internet's Darkest Corners". The Wall Street Journal. ISSN 0099-9660. Retrieved 16 June 2018.
- Nilsson, Nils (1983). "Artificial Intelligence Prepares for 2001" (PDF). AI Magazine. 1 (1). Archived (PDF) from the original on 17 August 2020. Retrieved 22 August 2020. Presidential Address to the Association for the Advancement of Artificial Intelligence.
- NRC (United States National Research Council) (1999). "Developments in Artificial Intelligence". Funding a Revolution: Government Support for Computing Research. National Academy Press.
- Omohundro, Steve (2008). The Nature of Self-Improving Artificial Intelligence. Presented and distributed at the 2007 Singularity Summit, San Francisco, CA.
- Oudeyer, P-Y. (2010). "On the impact of robotics in behavioral and cognitive sciences: from insect navigation to human cognitive development" (PDF). IEEE Transactions on Autonomous Mental Development. 2 (1): 2–16. doi:10.1109/tamd.2009.2039057. S2CID 6362217. Archived (PDF) from the original on 3 October 2018. Retrieved 4 June 2013.
- Pennachin, C.; Goertzel, B. (2007). "Contemporary Approaches to Artificial General Intelligence". Artificial General Intelligence. Cognitive Technologies. Berlin, Heidelberg: Springer. pp. 1–30. doi:10.1007/978-3-540-68677-4_1. ISBN 978-3-540-23733-4.
- Pinker, Steven (2007) [1994], The Language Instinct, Perennial Modern Classics, Harper, ISBN 978-0-06-133646-1
- Poria, Soujanya; Cambria, Erik; Bajpai, Rajiv; Hussain, Amir (September 2017). "A review of affective computing: From unimodal analysis to multimodal fusion". Information Fusion. 37: 98–125. doi:10.1016/j.inffus.2017.02.003. hdl:1893/25490. S2CID 205433041. Archived from the original on 23 March 2023. Retrieved 27 April 2021.
- Rawlinson, Kevin (29 January 2015). "Microsoft's Bill Gates insists AI is a threat". BBC News. Archived from the original on 29 January 2015. Retrieved 30 January 2015.
- Reisner, Alex (19 August 2023), "Revealed: The Authors Whose Pirated Books are Powering Generative AI", The Atlantic
- Roberts, Jacob (2016). "Thinking Machines: The Search for Artificial Intelligence". Distillations. Vol. 2, no. 2. pp. 14–23. Archived from the original on 19 August 2018. Retrieved 20 March 2018.
- Robitzski, Dan (5 September 2018). "Five experts share what scares them the most about AI". Archived from the original on 8 December 2019. Retrieved 8 December 2019.
- "Robots could demand legal rights". BBC News. 21 December 2006. Archived from the original on 15 October 2019. Retrieved 3 February 2011.
- Rose, Steve (11 July 2023). "AI Utopia or dystopia?". The Guardian Weekly. pp. 42–43.
- Russell, Stuart (2019). Human Compatible: Artificial Intelligence and the Problem of Control. United States: Viking. ISBN 978-0-525-55861-3. OCLC 1083694322.
- Sainato, Michael (19 August 2015). "Stephen Hawking, Elon Musk, and Bill Gates Warn About Artificial Intelligence". Observer. Archived from the original on 30 October 2015. Retrieved 30 October 2015.
- Sample, Ian (5 November 2017). "Computer says no: why making AIs fair, accountable and transparent is crucial". The Guardian. Retrieved 30 January 2018.
- Rothman, Denis (7 October 2020). "Exploring LIME Explanations and the Mathematics Behind It". Codemotion. Retrieved 25 November 2023.
- Scassellati, Brian (2002). "Theory of mind for a humanoid robot". Autonomous Robots. 12 (1): 13–24. doi:10.1023/A:1013298507114. S2CID 1979315.
- Schmidhuber, J. (2015). "Deep Learning in Neural Networks: An Overview". Neural Networks. 61: 85–117. arXiv:1404.7828. doi:10.1016/j.neunet.2014.09.003. PMID 25462637. S2CID 11715509.
- Schmidhuber, Jürgen (2022). "Annotated History of Modern AI and Deep Learning".
- Schulz, Hannes; Behnke, Sven (1 November 2012). "Deep Learning". KI – Künstliche Intelligenz. 26 (4): 357–363. doi:10.1007/s13218-012-0198-z. ISSN 1610-1987. S2CID 220523562.
- Searle, John (1980). "Minds, Brains and Programs" (PDF). Behavioral and Brain Sciences. 3 (3): 417–457. doi:10.1017/S0140525X00005756. S2CID 55303721. Archived (PDF) from the original on 17 March 2019. Retrieved 22 August 2020.
- Searle, John (1999). Mind, language and society. New York: Basic Books. ISBN 978-0-465-04521-1. OCLC 231867665. Archived from the original on 26 July 2020. Retrieved 22 August 2020.
- Simon, H. A. (1965), The Shape of Automation for Men and Management, New York: Harper & Row
- Simonite, Tom (31 March 2016). "How Google Plans to Solve Artificial Intelligence". MIT Technology Review.
- Smith, Craig S. (15 March 2023). "ChatGPT-4 Creator Ilya Sutskever on AI Hallucinations and AI Democracy". Forbes. Retrieved 25 December 2023.
- Smoliar, Stephen W.; Zhang, HongJiang (1994). "Content based video indexing and retrieval". IEEE MultiMedia. 1 (2): 62–72. doi:10.1109/93.311653. S2CID 32710913.
- Solomonoff, Ray (1956). An Inductive Inference Machine (PDF). Dartmouth Summer Research Conference on Artificial Intelligence. Archived (PDF) from the original on 26 April 2011. Retrieved 22 March 2011 – via std.com, pdf scanned copy of the original. Later published as
Solomonoff, Ray (1957). "An Inductive Inference Machine". IRE Convention Record. Vol. Section on Information Theory, part 2. pp. 56–62.
- Stanford University (2023). "Artificial Intelligence Index Report 2023/Chapter 6: Policy and Governance" (PDF). AI Index. Archived (PDF) from the original on 19 June 2023. Retrieved 19 June 2023.
- Tao, Jianhua; Tan, Tieniu (2005). Affective Computing and Intelligent Interaction. Affective Computing: A Review. Lecture Notes in Computer Science. Vol. 3784. Springer. pp. 981–995. doi:10.1007/11573548. ISBN 978-3-540-29621-8.
- Taylor, Josh; Hern, Alex (2 May 2023). "'Godfather of AI' Geoffrey Hinton quits Google and warns over dangers of misinformation". The Guardian.
- Thompson, Derek (23 January 2014). "What Jobs Will the Robots Take?". The Atlantic. Archived from the original on 24 April 2018. Retrieved 24 April 2018.
- Thro, Ellen (1993). Robotics: The Marriage of Computers and Machines. New York: Facts on File. ISBN 978-0-8160-2628-9. Archived from the original on 26 July 2020. Retrieved 22 August 2020.
- Toews, Rob (3 September 2023). "Transformers Revolutionized AI. What Will Replace Them?". Forbes. Retrieved 8 December 2023.
- Turing, Alan (October 1950), "Computing Machinery and Intelligence", Mind, LIX (236): 433–460, doi:10.1093/mind/LIX.236.433, ISSN 0026-4423
- UNESCO Science Report: the Race Against Time for Smarter Development. Paris: UNESCO. 2021. ISBN 978-92-3-100450-6. Archived from the original on 18 June 2022. Retrieved 18 September 2021.
- Urbina, Fabio; Lentzos, Filippa; Invernizzi, Cédric; Ekins, Sean (7 March 2022). "Dual use of artificial-intelligence-powered drug discovery". Nature Machine Intelligence. 4 (3): 189–191. doi:10.1038/s42256-022-00465-9. PMC 9544280. PMID 36211133. S2CID 247302391.
- Vallance, Chris (30 May 2023). "Artificial intelligence could lead to extinction, experts warn". BBC News. Archived from the original on 17 June 2023. Retrieved 18 June 2023.
- Valinsky, Jordan (11 April 2019), "Amazon reportedly employs thousands of people to listen to your Alexa conversations", CNN.com
- Verma, Yugesh (25 December 2021). "A Complete Guide to SHAP – SHAPley Additive exPlanations for Practitioners". Analytics India Magazine. Retrieved 25 November 2023.
- Vincent, James (7 November 2019). "OpenAI has published the text-generating AI it said was too dangerous to share". The Verge. Archived from the original on 11 June 2020. Retrieved 11 June 2020.
- Vincent, James (15 November 2022). "The scary truth about AI copyright is nobody knows what will happen next". The Verge. Archived from the original on 19 June 2023. Retrieved 19 June 2023.
- Vincent, James (3 April 2023). "AI is entering an era of corporate control". The Verge. Archived from the original on 19 June 2023. Retrieved 19 June 2023.
- Vinge, Vernor (1993). "The Coming Technological Singularity: How to Survive in the Post-Human Era". Vision 21: Interdisciplinary Science and Engineering in the Era of Cyberspace: 11. Bibcode:1993vise.nasa...11V. Archived from the original on 1 January 2007. Retrieved 14 November 2011.
- Waddell, Kaveh (2018). "Chatbots Have Entered the Uncanny Valley". The Atlantic. Archived from the original on 24 April 2018. Retrieved 24 April 2018.
- Wallach, Wendell (2010). Moral Machines. Oxford University Press.
- Wason, P. C.; Shapiro, D. (1966). "Reasoning". In Foss, B. M. (ed.). New horizons in psychology. Harmondsworth: Penguin. Archived from the original on 26 July 2020. Retrieved 18 November 2019.
- Weng, J.; McClelland; Pentland, A.; Sporns, O.; Stockman, I.; Sur, M.; Thelen, E. (2001). "Autonomous mental development by robots and animals" (PDF). Science. 291 (5504): 599–600. doi:10.1126/science.291.5504.599. PMID 11229402. S2CID 54131797. Archived (PDF) from the original on 4 September 2013. Retrieved 4 June 2013 – via msu.edu.
- "What is 'fuzzy logic'? Are there computers that are inherently fuzzy and do not apply the usual binary logic?". Scientific American. 21 October 1999. Archived from the original on 6 May 2018. Retrieved 5 May 2018.
- Williams, Rhiannon (28 June 2023), "Humans may be more likely to believe disinformation generated by AI", MIT Technology Review
- Wirtz, Bernd W.; Weyerer, Jan C.; Geyer, Carolin (24 July 2018). "Artificial Intelligence and the Public Sector – Applications and Challenges". International Journal of Public Administration. 42 (7): 596–615. doi:10.1080/01900692.2018.1498103. ISSN 0190-0692. S2CID 158829602. Archived from the original on 18 August 2020. Retrieved 22 August 2020.
- Wong, Matteo (19 May 2023), "ChatGPT Is Already Obsolete", The Atlantic
- Yudkowsky, E (2008), "Artificial Intelligence as a Positive and Negative Factor in Global Risk" (PDF), Global Catastrophic Risks, Oxford University Press, 2008, Bibcode:2008gcr..book..303Y, archived (PDF) from the original on 19 October 2013, retrieved 24 September 2021
Further reading
- Ashish Vaswani, Noam Shazeer, Niki Parmar et al. "Attention is all you need." Advances in neural information processing systems 30 (2017). Seminal paper on transformers.
- Autor, David H., "Why Are There Still So Many Jobs? The History and Future of Workplace Automation" (2015) 29(3) Journal of Economic Perspectives 3.
- Boden, Margaret, Mind As Machine, Oxford University Press, 2006.
- Cukier, Kenneth, "Ready for Robots? How to Think about the Future of AI", Foreign Affairs, vol. 98, no. 4 (July/August 2019), pp. 192–98. George Dyson, historian of computing, writes (in what might be called "Dyson's Law") that "Any system simple enough to be understandable will not be complicated enough to behave intelligently, while any system complicated enough to behave intelligently will be too complicated to understand." (p. 197.) Computer scientist Alex Pentland writes: "Current AI machine-learning algorithms are, at their core, dead simple stupid. They work, but they work by brute force." (p. 198.)
- Gertner, Jon. (2023) "Wikipedia's Moment of Truth: Can the online encyclopedia help teach A.I. chatbots to get their facts right — without destroying itself in the process?" New York Times Magazine (July 18, 2023) online
- Gleick, James, "The Fate of Free Will" (review of Kevin J. Mitchell, Free Agents: How Evolution Gave Us Free Will, Princeton University Press, 2023, 333 pp.), The New York Review of Books, vol. LXXI, no. 1 (18 January 2024), pp. 27–28, 30. "Agency is what distinguishes us from machines. For biological creatures, reason and purpose come from acting in the world and experiencing the consequences. Artificial intelligences – disembodied, strangers to blood, sweat, and tears – have no occasion for that." (p. 30.)
- Hughes-Castleberry, Kenna, "A Murder Mystery Puzzle: The literary puzzle Cain's Jawbone, which has stumped humans for decades, reveals the limitations of natural-language-processing algorithms", Scientific American, vol. 329, no. 4 (November 2023), pp. 81–82. "This murder mystery competition has revealed that although NLP (natural-language processing) models are capable of incredible feats, their abilities are very much limited by the amount of context they receive. This [...] could cause [difficulties] for researchers who hope to use them to do things such as analyze ancient languages. In some cases, there are few historical records on long-gone civilizations to serve as training data for such a purpose." (p. 82.)
- Immerwahr, Daniel, "Your Lying Eyes: People now use A.I. to generate fake videos indistinguishable from real ones. How much does it matter?", The New Yorker, 20 November 2023, pp. 54–59. "If by 'deepfakes' we mean realistic videos produced using artificial intelligence that actually deceive people, then they barely exist. The fakes aren't deep, and the deeps aren't fake. [...] A.I.-generated videos are not, in general, operating in our media as counterfeited evidence. Their role better resembles that of cartoons, especially smutty ones." (p. 59.)
- Johnston, John (2008) The Allure of Machinic Life: Cybernetics, Artificial Life, and the New AI, MIT Press.
- Jumper, John; Evans, Richard; Pritzel, Alexander; et al. (26 August 2021). "Highly accurate protein structure prediction with AlphaFold". Nature. 596 (7873): 583–589. Bibcode:2021Natur.596..583J. doi:10.1038/s41586-021-03819-2. PMC 8371605. PMID 34265844. S2CID 235959867.
- LeCun, Yann; Bengio, Yoshua; Hinton, Geoffrey (28 May 2015). "Deep learning". Nature. 521 (7553): 436–444. Bibcode:2015Natur.521..436L. doi:10.1038/nature14539. PMID 26017442. S2CID 3074096. Archived from the original on 5 June 2023. Retrieved 19 June 2023.
- Leffer, Lauren, "The Risks of Trusting AI: We must avoid humanizing machine-learning models used in scientific research", Scientific American, vol. 330, no. 6 (June 2024), pp. 80–81.
- Marcus, Gary, "Artificial Confidence: Even the newest, buzziest systems of artificial general intelligence are stymied by the same old problems", Scientific American, vol. 327, no. 4 (October 2022), pp. 42–45.
- Mitchell, Melanie (2019). Artificial intelligence: a guide for thinking humans. New York: Farrar, Straus and Giroux. ISBN 9780374257835.
- Mnih, Volodymyr; Kavukcuoglu, Koray; Silver, David; et al. (26 February 2015). "Human-level control through deep reinforcement learning". Nature. 518 (7540): 529–533. Bibcode:2015Natur.518..529M. doi:10.1038/nature14236. PMID 25719670. S2CID 205242740. Archived from the original on 19 June 2023. Retrieved 19 June 2023. Introduced DQN, which produced human-level performance on some Atari games.
- Press, Eyal, "In Front of Their Faces: Does facial-recognition technology lead police to ignore contradictory evidence?", The New Yorker, 20 November 2023, pp. 20–26.
- Roivainen, Eka, "AI's IQ: ChatGPT aced a [standard intelligence] test but showed that intelligence cannot be measured by IQ alone", Scientific American, vol. 329, no. 1 (July/August 2023), p. 7. "Despite its high IQ, ChatGPT fails at tasks that require real humanlike reasoning or an understanding of the physical and social world.... ChatGPT seemed unable to reason logically and tried to rely on its vast database of... facts derived from online texts."
- Scharre, Paul, "Killer Apps: The Real Dangers of an AI Arms Race", Foreign Affairs, vol. 98, no. 3 (May/June 2019), pp. 135–44. "Today's AI technologies are powerful but unreliable. Rules-based systems cannot deal with circumstances their programmers did not anticipate. Learning systems are limited by the data on which they were trained. AI failures have already led to tragedy. Advanced autopilot features in cars, although they perform well in some circumstances, have driven cars without warning into trucks, concrete barriers, and parked cars. In the wrong situation, AI systems go from supersmart to superdumb in an instant. When an enemy is trying to manipulate and hack an AI system, the risks are even greater." (p. 140.)
- Serenko, Alexander; Michael Dohan (2011). "Comparing the expert survey and citation impact journal ranking methods: Example from the field of Artificial Intelligence" (PDF). Journal of Informetrics. 5 (4): 629–49. doi:10.1016/j.joi.2011.06.002. Archived (PDF) from the original on 4 October 2013. Retrieved 12 September 2013.
- Silver, David; Huang, Aja; Maddison, Chris J.; et al. (28 January 2016). "Mastering the game of Go with deep neural networks and tree search". Nature. 529 (7587): 484–489. Bibcode:2016Natur.529..484S. doi:10.1038/nature16961. PMID 26819042. S2CID 515925. Archived from the original on 18 June 2023. Retrieved 19 June 2023.
- White Paper: On Artificial Intelligence – A European approach to excellence and trust (PDF). Brussels: European Commission. 2020. Archived (PDF) from the original on 20 February 2020. Retrieved 20 February 2020.
External links
- "Artificial Intelligence". Internet Encyclopedia of Philosophy.
"人工智能"。互联网哲学百科全书。 - Thomason, Richmond. "Logic and Artificial Intelligence". In Zalta, Edward N. (ed.). Stanford Encyclopedia of Philosophy.
汤马森,里士满。 "逻辑与人工智能"。在 扎尔塔,爱德华·N.(编)。 斯坦福哲学百科全书。 - Artificial Intelligence. BBC Radio 4 discussion with John Agar, Alison Adam & Igor Aleksander (In Our Time, 8 December 2005).
人工智能。BBC Radio 4 与约翰·阿加尔、艾莉森·亚当和伊戈尔·亚历山大讨论(我们的时代,2005 年 12 月 8 日)。