
Artificial intelligence


Artificial intelligence (AI), in its broadest sense, is intelligence exhibited by machines, particularly computer systems. It is a field of research in computer science that develops and studies methods and software that enable machines to perceive their environment and use learning and intelligence to take actions that maximize their chances of achieving defined goals.[1] Such machines may be called AIs.

Some high-profile applications of AI include advanced web search engines (e.g., Google Search); recommendation systems (used by YouTube, Amazon, and Netflix); interacting via human speech (e.g., Google Assistant, Siri, and Alexa); autonomous vehicles (e.g., Waymo); generative and creative tools (e.g., ChatGPT, Apple Intelligence, and AI art); and superhuman play and analysis in strategy games (e.g., chess and Go). However, many AI applications are not perceived as AI: "A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore."[2][3]

The various subfields of AI research are centered around particular goals and the use of particular tools. The traditional goals of AI research include reasoning, knowledge representation, planning, learning, natural language processing, perception, and support for robotics.[a] General intelligence—the ability to complete any task performable by a human on an at least equal level—is among the field's long-term goals.[4] To reach these goals, AI researchers have adapted and integrated a wide range of techniques, including search and mathematical optimization, formal logic, artificial neural networks, and methods based on statistics, operations research, and economics.[b] AI also draws upon psychology, linguistics, philosophy, neuroscience, and other fields.[5]

Artificial intelligence was founded as an academic discipline in 1956,[6] and the field went through multiple cycles of optimism,[7][8] followed by periods of disappointment and loss of funding, known as AI winter.[9][10] Funding and interest vastly increased after 2012 when deep learning outperformed previous AI techniques.[11] This growth accelerated further after 2017 with the transformer architecture,[12] and by the early 2020s hundreds of billions of dollars were being invested in AI (known as the "AI boom"). The widespread use of AI in the 21st century exposed several unintended consequences and harms in the present and raised concerns about its risks and long-term effects in the future, prompting discussions about regulatory policies to ensure the safety and benefits of the technology.

Goals

The general problem of simulating (or creating) intelligence has been broken into subproblems. These consist of particular traits or capabilities that researchers expect an intelligent system to display. The traits described below have received the most attention and cover the scope of AI research.[a]

Reasoning and problem-solving

Early researchers developed algorithms that imitated step-by-step reasoning that humans use when they solve puzzles or make logical deductions.[13] By the late 1980s and 1990s, methods were developed for dealing with uncertain or incomplete information, employing concepts from probability and economics.[14]

Many of these algorithms are insufficient for solving large reasoning problems because they experience a "combinatorial explosion": They become exponentially slower as the problems grow.[15] Even humans rarely use the step-by-step deduction that early AI research could model. They solve most of their problems using fast, intuitive judgments.[16] Accurate and efficient reasoning is an unsolved problem.

Knowledge representation

An ontology represents knowledge as a set of concepts within a domain and the relationships between those concepts.

Knowledge representation and knowledge engineering[17] allow AI programs to answer questions intelligently and make deductions about real-world facts. Formal knowledge representations are used in content-based indexing and retrieval,[18] scene interpretation,[19] clinical decision support,[20] knowledge discovery (mining "interesting" and actionable inferences from large databases),[21] and other areas.[22]
知识表示知识工程[17] 使人工智能程序能够智能地回答问题并对现实世界的事实进行推理。正式的知识表示用于基于内容的索引和检索,[18] 场景解释,[19] 临床决策支持,[20] 知识发现(从大型 数据库 中挖掘“有趣”的和可操作的推论),[21] 以及其他领域。[22]

A knowledge base is a body of knowledge represented in a form that can be used by a program. An ontology is the set of objects, relations, concepts, and properties used by a particular domain of knowledge.[23] Knowledge bases need to represent things such as objects, properties, categories, and relations between objects;[24] situations, events, states, and time;[25] causes and effects;[26] knowledge about knowledge (what we know about what other people know);[27] default reasoning (things that humans assume are true until they are told differently and will remain true even when other facts are changing);[28] and many other aspects and domains of knowledge.

Among the most difficult problems in knowledge representation are the breadth of commonsense knowledge (the set of atomic facts that the average person knows is enormous);[29] and the sub-symbolic form of most commonsense knowledge (much of what people know is not represented as "facts" or "statements" that they could express verbally).[16] There is also the difficulty of knowledge acquisition, the problem of obtaining knowledge for AI applications.[c]

Planning and decision-making

An "agent" is anything that perceives and takes actions in the world. A rational agent has goals or preferences and takes actions to make them happen.[d][32] In automated planning, the agent has a specific goal.[33] In automated decision-making, the agent has preferences—there are some situations it would prefer to be in, and some situations it is trying to avoid. The decision-making agent assigns a number to each situation (called the "utility") that measures how much the agent prefers it. For each possible action, it can calculate the "expected utility": the utility of all possible outcomes of the action, weighted by the probability that the outcome will occur. It can then choose the action with the maximum expected utility.[34]
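
As a toy illustration of this calculation (the actions, probabilities, and utilities below are invented for the example, not drawn from any real system), a decision-making agent might compute expected utilities like this:

```python
# Illustrative sketch of expected-utility maximization (toy numbers).
# Each action maps to a list of (probability, utility) pairs over its outcomes.
actions = {
    "go_left":  [(0.8, 10), (0.2, -5)],   # likely small reward, small risk
    "go_right": [(0.5, 30), (0.5, -20)],  # high variance
}

def expected_utility(outcomes):
    # Utility of each outcome, weighted by the probability it occurs.
    return sum(p * u for p, u in outcomes)

best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best, expected_utility(actions[best]))  # go_left 7.0
```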

In classical planning, the agent knows exactly what the effect of any action will be.[35] In most real-world problems, however, the agent may not be certain about the situation they are in (it is "unknown" or "unobservable") and it may not know for certain what will happen after each possible action (it is not "deterministic"). It must choose an action by making a probabilistic guess and then reassess the situation to see if the action worked.[36]
经典规划中,智能体确切知道任何行动的效果。[35] 然而,在大多数现实世界的问题中,智能体可能对其所处的情况不确定(它是“未知”或“不可观察的”),并且它可能无法确定每个可能行动后会发生什么(它不是“确定性的”)。它必须通过做出概率猜测来选择一个行动,然后重新评估情况以查看该行动是否有效。[36]

In some problems, the agent's preferences may be uncertain, especially if there are other agents or humans involved. These can be learned (e.g., with inverse reinforcement learning), or the agent can seek information to improve its preferences.[37] Information value theory can be used to weigh the value of exploratory or experimental actions.[38] The space of possible future actions and situations is typically intractably large, so the agents must take actions and evaluate situations while being uncertain of what the outcome will be.

A Markov decision process has a transition model that describes the probability that a particular action will change the state in a particular way and a reward function that supplies the utility of each state and the cost of each action. A policy associates a decision with each possible state. The policy could be calculated (e.g., by iteration), be heuristic, or it can be learned.[39]
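
A minimal sketch of computing such a policy by value iteration, on an invented two-state Markov decision process (all states, actions, and numbers are illustrative):

```python
# Toy value iteration for a two-state Markov decision process.
# transition[s][a] is a list of (probability, next_state); reward[s][a] is immediate reward.
states, actions, gamma = ["s0", "s1"], ["stay", "move"], 0.9
transition = {
    "s0": {"stay": [(1.0, "s0")], "move": [(0.8, "s1"), (0.2, "s0")]},
    "s1": {"stay": [(1.0, "s1")], "move": [(1.0, "s0")]},
}
reward = {"s0": {"stay": 0.0, "move": 1.0}, "s1": {"stay": 2.0, "move": 0.0}}

V = {s: 0.0 for s in states}
for _ in range(100):  # iterate until (approximately) converged
    V = {s: max(reward[s][a] + gamma * sum(p * V[t] for p, t in transition[s][a])
                for a in actions)
         for s in states}

# The policy picks, in each state, the action with the highest expected value.
policy = {s: max(actions, key=lambda a: reward[s][a] +
                 gamma * sum(p * V[t] for p, t in transition[s][a]))
          for s in states}
print(V, policy)
```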

Game theory describes the rational behavior of multiple interacting agents and is used in AI programs that make decisions that involve other agents.[40]

Learning

Machine learning is the study of programs that can improve their performance on a given task automatically.[41] It has been a part of AI from the beginning.[e]

There are several kinds of machine learning. Unsupervised learning analyzes a stream of data and finds patterns and makes predictions without any other guidance.[44] Supervised learning requires a human to label the input data first, and comes in two main varieties: classification (where the program must learn to predict what category the input belongs in) and regression (where the program must deduce a numeric function based on numeric input).[45]

In reinforcement learning, the agent is rewarded for good responses and punished for bad ones. The agent learns to choose responses that are classified as "good".[46] Transfer learning is when the knowledge gained from one problem is applied to a new problem.[47] Deep learning is a type of machine learning that runs inputs through biologically inspired artificial neural networks for all of these types of learning.[48]
强化学习中,代理因良好的反应而获得奖励,因不良反应而受到惩罚。代理学习选择被分类为“好”的反应。[46]迁移学习是指将从一个问题中获得的知识应用于新问题。[47]深度学习是一种机器学习类型,通过生物启发的人工神经网络处理所有这些类型的学习输入。[48]

Computational learning theory can assess learners by computational complexity, by sample complexity (how much data is required), or by other notions of optimization.[49]

Natural language processing

Natural language processing (NLP)[50] allows programs to read, write and communicate in human languages such as English. Specific problems include speech recognition, speech synthesis, machine translation, information extraction, information retrieval and question answering.[51]

Early work, based on Noam Chomsky's generative grammar and semantic networks, had difficulty with word-sense disambiguation[f] unless restricted to small domains called "micro-worlds" (due to the common sense knowledge problem[29]). Margaret Masterman believed that it was meaning and not grammar that was the key to understanding languages, and that thesauri and not dictionaries should be the basis of computational language structure.

Modern deep learning techniques for NLP include word embedding (representing words, typically as vectors encoding their meaning),[52] transformers (a deep learning architecture using an attention mechanism),[53] and others.[54] In 2019, generative pre-trained transformer (or "GPT") language models began to generate coherent text,[55][56] and by 2023, these models were able to get human-level scores on the bar exam, SAT test, GRE test, and many other real-world applications.[57]

Perception

Machine perception is the ability to use input from sensors (such as cameras, microphones, wireless signals, active lidar, sonar, radar, and tactile sensors) to deduce aspects of the world. Computer vision is the ability to analyze visual input.[58]

The field includes speech recognition,[59] image classification,[60] facial recognition, object recognition,[61] object tracking,[62] and robotic perception.[63]

Social intelligence

Kismet, a robot head which was made in the 1990s; a machine that can recognize and simulate emotions[64]

Affective computing is an interdisciplinary umbrella that comprises systems that recognize, interpret, process, or simulate human feeling, emotion, and mood.[65] For example, some virtual assistants are programmed to speak conversationally or even to banter humorously; it makes them appear more sensitive to the emotional dynamics of human interaction, or to otherwise facilitate human–computer interaction.

However, this tends to give naïve users an unrealistic conception of the intelligence of existing computer agents.[66] Moderate successes related to affective computing include textual sentiment analysis and, more recently, multimodal sentiment analysis, wherein AI classifies the affects displayed by a videotaped subject.[67]

General intelligence

A machine with artificial general intelligence should be able to solve a wide variety of problems with breadth and versatility similar to human intelligence.[4]

Techniques

AI research uses a wide variety of techniques to accomplish the goals above.[b]

Search and optimization

AI can solve many problems by intelligently searching through many possible solutions.[68] There are two very different kinds of search used in AI: state space search and local search.

State space search searches through a tree of possible states to try to find a goal state.[69] For example, planning algorithms search through trees of goals and subgoals, attempting to find a path to a target goal, a process called means-ends analysis.[70]
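
A minimal sketch of breadth-first state space search on a toy puzzle (the puzzle and move names are invented for illustration; real planners use richer state descriptions and heuristics):

```python
from collections import deque

# Minimal breadth-first state-space search: find a sequence of +1 / *2 moves
# that turns 1 into a goal number.
def successors(n):
    return [("add1", n + 1), ("double", n * 2)]

def search(start, goal):
    frontier, seen = deque([(start, [])]), {start}
    while frontier:
        state, path = frontier.popleft()
        if state == goal:
            return path
        for move, nxt in successors(state):
            if nxt not in seen and nxt <= goal:  # prune states past the goal
                seen.add(nxt)
                frontier.append((nxt, path + [move]))
    return None

print(search(1, 10))  # ['add1', 'double', 'add1', 'double']: 1 -> 2 -> 4 -> 5 -> 10
```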

Simple exhaustive searches[71] are rarely sufficient for most real-world problems: the search space (the number of places to search) quickly grows to astronomical numbers. The result is a search that is too slow or never completes.[15] "Heuristics" or "rules of thumb" can help prioritize choices that are more likely to reach a goal.[72]

Adversarial search is used for game-playing programs, such as chess or Go. It searches through a tree of possible moves and counter-moves, looking for a winning position.[73]

Illustration of gradient descent for 3 different starting points; two parameters (represented by the plane coordinates) are adjusted in order to minimize the loss function (the height)

Local search uses mathematical optimization to find a solution to a problem. It begins with some form of guess and refines it incrementally.[74]

Gradient descent is a type of local search that optimizes a set of numerical parameters by incrementally adjusting them to minimize a loss function. Variants of gradient descent are commonly used to train neural networks.[75]
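
A minimal sketch of the idea, assuming a one-parameter least-squares model and invented data points:

```python
# Gradient-descent sketch: fit y = w*x to data by minimizing squared error.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]  # roughly y = 2x
w, learning_rate = 0.0, 0.01

for step in range(500):
    # gradient of the loss L(w) = sum((w*x - y)^2) with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data)
    w -= learning_rate * grad  # take a small step downhill

print(round(w, 3))  # converges near 2.04
```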

Another type of local search is evolutionary computation, which aims to iteratively improve a set of candidate solutions by "mutating" and "recombining" them, selecting only the fittest to survive each generation.[76]

Distributed search processes can coordinate via swarm intelligence algorithms. Two popular swarm algorithms used in search are particle swarm optimization (inspired by bird flocking) and ant colony optimization (inspired by ant trails).[77]

Logic

Formal logic is used for reasoning and knowledge representation.[78] Formal logic comes in two main forms: propositional logic (which operates on statements that are true or false and uses logical connectives such as "and", "or", "not" and "implies")[79] and predicate logic (which also operates on objects, predicates and relations and uses quantifiers such as "Every X is a Y" and "There are some Xs that are Ys").[80]

Deductive reasoning in logic is the process of proving a new statement (conclusion) from other statements that are given and assumed to be true (the premises).[81] Proofs can be structured as proof trees, in which nodes are labelled by sentences, and children nodes are connected to parent nodes by inference rules.

Given a problem and a set of premises, problem-solving reduces to searching for a proof tree whose root node is labelled by a solution of the problem and whose leaf nodes are labelled by premises or axioms. In the case of Horn clauses, problem-solving search can be performed by reasoning forwards from the premises or backwards from the problem.[82] In the more general case of the clausal form of first-order logic, resolution is a single, axiom-free rule of inference, in which a problem is solved by proving a contradiction from premises that include the negation of the problem to be solved.[83]
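
A minimal backward-chaining sketch over propositional Horn clauses (the rules are a classic toy syllogism; real systems such as Prolog add variables, unification, and cycle handling):

```python
# Backward chaining over propositional Horn clauses (illustrative only).
# Each rule is (conclusion, [premises]); a fact is a rule with no premises.
rules = [
    ("mortal", ["human"]),
    ("human",  ["greek"]),
    ("greek",  []),          # a fact
]

def prove(goal):
    # The goal holds if some rule concludes it and all its premises can be proved.
    for conclusion, premises in rules:
        if conclusion == goal and all(prove(p) for p in premises):
            return True
    return False

print(prove("mortal"))  # True: mortal <- human <- greek
```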

Inference in both Horn clause logic and first-order logic is undecidable, and therefore intractable. However, backward reasoning with Horn clauses, which underpins computation in the logic programming language Prolog, is Turing complete. Moreover, its efficiency is competitive with computation in other symbolic programming languages.[84]

Fuzzy logic assigns a "degree of truth" between 0 and 1. It can therefore handle propositions that are vague and partially true.[85]
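
A short sketch using the common min/max/complement operators (one of several families of fuzzy operators; the propositions and degrees are invented):

```python
# Fuzzy-logic sketch: degrees of truth in [0, 1] with min/max/1-x operators.
def f_and(a, b): return min(a, b)
def f_or(a, b):  return max(a, b)
def f_not(a):    return 1.0 - a

warm, humid = 0.7, 0.4          # partially true propositions
print(f_and(warm, humid))       # 0.4  "warm and humid"
print(f_or(warm, f_not(humid))) # 0.7  "warm or not humid"
```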

Non-monotonic logics, including logic programming with negation as failure, are designed to handle default reasoning.[28] Other specialized versions of logic have been developed to describe many complex domains.

Probabilistic methods for uncertain reasoning

A simple Bayesian network, with the associated conditional probability tables

Many problems in AI (including in reasoning, planning, learning, perception, and robotics) require the agent to operate with incomplete or uncertain information. AI researchers have devised a number of tools to solve these problems using methods from probability theory and economics.[86] Precise mathematical tools have been developed that analyze how an agent can make choices and plan, using decision theory, decision analysis,[87] and information value theory.[88] These tools include models such as Markov decision processes,[89] dynamic decision networks,[90] game theory and mechanism design.[91]

Bayesian networks[92] are a tool that can be used for reasoning (using the Bayesian inference algorithm),[g][94] learning (using the expectation–maximization algorithm),[h][96] planning (using decision networks)[97] and perception (using dynamic Bayesian networks).[90]
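
A sketch of exact inference by enumeration on an invented two-node network (Rain → WetGrass), applying Bayes' rule directly; the probabilities are illustrative:

```python
# Inference on a two-node Bayesian network: compute P(Rain | WetGrass = true).
p_rain = 0.2
p_wet_given_rain = {True: 0.9, False: 0.3}  # sprinklers sometimes wet the grass anyway

joint_rain    = p_rain * p_wet_given_rain[True]          # P(rain, wet)
joint_no_rain = (1 - p_rain) * p_wet_given_rain[False]   # P(no rain, wet)
posterior = joint_rain / (joint_rain + joint_no_rain)    # Bayes' rule
print(round(posterior, 3))  # 0.429
```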

Probabilistic algorithms can also be used for filtering, prediction, smoothing, and finding explanations for streams of data, thus helping perception systems analyze processes that occur over time (e.g., hidden Markov models or Kalman filters).[90]

Expectation–maximization clustering of Old Faithful eruption data starts from a random guess but then successfully converges on an accurate clustering of the two physically distinct modes of eruption.

Classifiers and statistical learning methods

The simplest AI applications can be divided into two types: classifiers (e.g., "if shiny then diamond"), on one hand, and controllers (e.g., "if diamond then pick up"), on the other hand. Classifiers[98] are functions that use pattern matching to determine the closest match. They can be fine-tuned based on chosen examples using supervised learning. Each pattern (also called an "observation") is labeled with a certain predefined class. All the observations combined with their class labels are known as a data set. When a new observation is received, that observation is classified based on previous experience.[45]

There are many kinds of classifiers in use.[99] The decision tree is the simplest and most widely used symbolic machine learning algorithm.[100] K-nearest neighbor algorithm was the most widely used analogical AI until the mid-1990s, and Kernel methods such as the support vector machine (SVM) displaced k-nearest neighbor in the 1990s.[101] The naive Bayes classifier is reportedly the "most widely used learner"[102] at Google, due in part to its scalability.[103] Neural networks are also used as classifiers.[104]
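
As a sketch of one of these, a minimal k-nearest-neighbor classifier on invented 2-D data (labels and points are toy values):

```python
# Minimal k-nearest-neighbor classifier (k = 3) on a toy 2-D data set.
from math import dist

training = [((1.0, 1.0), "A"), ((1.2, 0.8), "A"), ((0.9, 1.1), "A"),
            ((3.0, 3.0), "B"), ((3.2, 2.9), "B"), ((2.8, 3.1), "B")]

def classify(point, k=3):
    # Find the k training points closest to the query and take a majority vote.
    neighbors = sorted(training, key=lambda ex: dist(point, ex[0]))[:k]
    labels = [label for _, label in neighbors]
    return max(set(labels), key=labels.count)

print(classify((1.1, 0.9)))  # "A"
print(classify((2.9, 3.2)))  # "B"
```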

Artificial neural networks

A neural network is an interconnected group of nodes, akin to the vast network of neurons in the human brain.

An artificial neural network is based on a collection of nodes also known as artificial neurons, which loosely model the neurons in a biological brain. It is trained to recognise patterns; once trained, it can recognise those patterns in fresh data. There is an input, at least one hidden layer of nodes and an output. Each node applies a function and once the weight crosses its specified threshold, the data is transmitted to the next layer. A network is typically called a deep neural network if it has at least 2 hidden layers.[104]

Learning algorithms for neural networks use local search to choose the weights that will get the right output for each input during training. The most common training technique is the backpropagation algorithm.[105] Neural networks learn to model complex relationships between inputs and outputs and find patterns in data. In theory, a neural network can learn any function.[106]
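
A minimal backpropagation sketch: an invented two-layer network learning XOR (architecture, hyperparameters, and seed are arbitrary choices for the example; convergence is typical at these settings but not guaranteed):

```python
import numpy as np

# A tiny two-layer network trained by backpropagation to learn XOR.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # hidden layer, 4 units
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # output layer
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(10000):
    h = sigmoid(X @ W1 + b1)                     # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)          # backward pass (chain rule)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 0.5 * (h.T @ d_out); b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * (X.T @ d_h);   b1 -= 0.5 * d_h.sum(0)

print(out.round(2).ravel())  # approaches [0, 1, 1, 0]
```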

In feedforward neural networks the signal passes in only one direction.[107] Recurrent neural networks feed the output signal back into the input, which allows short-term memories of previous input events. Long short-term memory is the most successful network architecture for recurrent networks.[108] Perceptrons[109] use only a single layer of neurons; deep learning[110] uses multiple layers. Convolutional neural networks strengthen the connection between neurons that are "close" to each other—this is especially important in image processing, where a local set of neurons must identify an "edge" before the network can identify an object.[111]
前馈神经网络中,信号仅朝一个方向传递。[107]递归神经网络将输出信号反馈到输入中,这允许对先前输入事件的短期记忆。长短期记忆是递归网络中最成功的网络架构。[108]感知器[109]仅使用单层神经元;深度学习[110]使用多层。卷积神经网络增强了“接近”彼此的神经元之间的连接——这在图像处理中尤为重要,因为一组局部神经元必须识别“边缘”,然后网络才能识别对象。[111]

Deep learning

Deep learning[110] uses several layers of neurons between the network's inputs and outputs. The multiple layers can progressively extract higher-level features from the raw input. For example, in image processing, lower layers may identify edges, while higher layers may identify the concepts relevant to a human such as digits, letters, or faces.[112]

Deep learning has profoundly improved the performance of programs in many important subfields of artificial intelligence, including computer vision, speech recognition, natural language processing, image classification,[113] and others. The reason that deep learning performs so well in so many applications is not known as of 2023.[114] The sudden success of deep learning in 2012–2015 did not occur because of some new discovery or theoretical breakthrough (deep neural networks and backpropagation had been described by many people, as far back as the 1950s)[i] but because of two factors: the incredible increase in computer power (including the hundred-fold increase in speed by switching to GPUs) and the availability of vast amounts of training data, especially the giant curated datasets used for benchmark testing, such as ImageNet.[j]

GPT

Generative pre-trained transformers (GPT) are large language models (LLMs) that generate text based on the semantic relationships between words in sentences. Text-based GPT models are pretrained on a large corpus of text that can be from the Internet. The pretraining consists of predicting the next token (a token being usually a word, subword, or punctuation). Throughout this pretraining, GPT models accumulate knowledge about the world and can then generate human-like text by repeatedly predicting the next token. Typically, a subsequent training phase makes the model more truthful, useful, and harmless, usually with a technique called reinforcement learning from human feedback (RLHF). Current GPT models are prone to generating falsehoods called "hallucinations", although this can be reduced with RLHF and quality data. They are used in chatbots, which allow people to ask a question or request a task in simple text.[122][123]
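
To illustrate the idea of next-token prediction only, a toy sketch using bigram counts (real GPT models use transformers over subword tokens, not bigram counting; the corpus here is invented):

```python
from collections import Counter, defaultdict
import random

# "Pretraining": count how often each token follows each other token.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def generate(token, length=8):
    # Generate text by repeatedly sampling the next token from observed frequencies.
    out = [token]
    for _ in range(length):
        options = counts[out[-1]]
        token = random.choices(list(options), weights=options.values())[0]
        out.append(token)
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the rug . the dog"
```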

Current models and services include Gemini (formerly Bard), ChatGPT, Grok, Claude, Copilot, and LLaMA.[124] Multimodal GPT models can process different types of data (modalities) such as images, videos, sound, and text.[125]

Specialized hardware and software

In the late 2010s, graphics processing units (GPUs) that were increasingly designed with AI-specific enhancements and used with specialized TensorFlow software had replaced previously used central processing units (CPUs) as the dominant means for large-scale (commercial and academic) machine learning models' training.[126] Specialized programming languages such as Prolog were used in early AI research,[127] but general-purpose programming languages like Python have become predominant.[128]

Applications

AI and machine learning technology is used in most of the essential applications of the 2020s, including: search engines (such as Google Search), targeting online advertisements, recommendation systems (offered by Netflix, YouTube or Amazon), driving internet traffic, targeted advertising (AdSense, Facebook), virtual assistants (such as Siri or Alexa), autonomous vehicles (including drones, ADAS and self-driving cars), automatic language translation (Microsoft Translator, Google Translate), facial recognition (Apple's Face ID or Microsoft's DeepFace and Google's FaceNet) and image labeling (used by Facebook, Apple's iPhoto and TikTok). The deployment of AI may be overseen by a Chief automation officer (CAO).

Health and medicine

The application of AI in medicine and medical research has the potential to increase patient care and quality of life.[129] Through the lens of the Hippocratic Oath, medical professionals are ethically compelled to use AI, if applications can more accurately diagnose and treat patients.

For medical research, AI is an important tool for processing and integrating big data. This is particularly important for organoid and tissue engineering development which use microscopy imaging as a key technique in fabrication.[130] It has been suggested that AI can overcome discrepancies in funding allocated to different fields of research.[130] New AI tools can deepen the understanding of biomedically relevant pathways. For example, AlphaFold 2 (2021) demonstrated the ability to approximate, in hours rather than months, the 3D structure of a protein.[131] In 2023, it was reported that AI-guided drug discovery helped find a class of antibiotics capable of killing two different types of drug-resistant bacteria.[132] In 2024, researchers used machine learning to accelerate the search for Parkinson's disease drug treatments. Their aim was to identify compounds that block the clumping, or aggregation, of alpha-synuclein (the protein that characterises Parkinson's disease). They were able to speed up the initial screening process ten-fold and reduce the cost by a thousand-fold.[133][134]

Games

Game playing programs have been used since the 1950s to demonstrate and test AI's most advanced techniques.[135] Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov, on 11 May 1997.[136] In 2011, in a Jeopardy! quiz show exhibition match, IBM's question answering system, Watson, defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin.[137] In March 2016, AlphaGo won 4 out of 5 games of Go in a match with Go champion Lee Sedol, becoming the first computer Go-playing system to beat a professional Go player without handicaps. Then, in 2017, it defeated Ke Jie, who was the best Go player in the world.[138] Other programs handle imperfect-information games, such as the poker-playing program Pluribus.[139] DeepMind developed increasingly generalistic reinforcement learning models, such as with MuZero, which could be trained to play chess, Go, or Atari games.[140] In 2019, DeepMind's AlphaStar achieved grandmaster level in StarCraft II, a particularly challenging real-time strategy game that involves incomplete knowledge of what happens on the map.[141] In 2021, an AI agent competed in a PlayStation Gran Turismo competition, winning against four of the world's best Gran Turismo drivers using deep reinforcement learning.[142] In 2024, Google DeepMind introduced SIMA, a type of AI capable of autonomously playing nine previously unseen open-world video games by observing screen output, as well as executing short, specific tasks in response to natural language instructions.[143]

Mathematics

In mathematics, special forms of formal step-by-step reasoning are used. In contrast, LLMs such as GPT-4 Turbo, Gemini Ultra, Claude Opus, LLaMa-2 or Mistral Large work with probabilistic models, which can produce wrong answers in the form of hallucinations. Therefore, they need not only a large database of mathematical problems to learn from but also methods such as supervised fine-tuning or trained classifiers with human-annotated data to improve answers for new problems and learn from corrections.[144] A 2024 study showed that some language models performed poorly when reasoning about math problems not included in their training data, even problems with only minor deviations from the training data.[145]

Alternatively, dedicated models for mathematical problem solving with higher precision for the outcome, including proofs of theorems, have been developed, such as AlphaTensor, AlphaGeometry and AlphaProof, all from Google DeepMind,[146] Llemma from EleutherAI,[147] or Julius.[148]

When natural language is used to describe mathematical problems, converters transform such prompts into a formal language such as Lean to define mathematical tasks.
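
For illustration, a small statement formalized in Lean (Lean 4 syntax; this example is a sketch, and exact lemma names can vary between Lean and Mathlib versions):

```lean
-- "The sum of two even natural numbers is even", stated from the definition.
theorem even_add_even (m n : Nat) (hm : ∃ k, m = 2 * k) (hn : ∃ k, n = 2 * k) :
    ∃ k, m + n = 2 * k := by
  obtain ⟨a, ha⟩ := hm
  obtain ⟨b, hb⟩ := hn
  -- m + n = 2*a + 2*b = 2*(a + b), using distributivity of * over +.
  exact ⟨a + b, by rw [ha, hb, Nat.mul_add]⟩
```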

Some models have been developed to solve challenging problems and reach good results in benchmark tests, others to serve as educational tools in mathematics.[149]

Finance

Finance is one of the fastest growing sectors where applied AI tools are being deployed: from retail online banking to investment advice and insurance, where automated "robot advisers" have been in use for some years.[150]

World Pensions experts like Nicolas Firzli insist it may be too early to see the emergence of highly innovative AI-informed financial products and services: "the deployment of AI tools will simply further automatise things: destroying tens of thousands of jobs in banking, financial planning, and pension advice in the process, but I’m not sure it will unleash a new wave of [e.g., sophisticated] pension innovation."[151]

Military

Various countries are deploying AI military applications.[152] The main applications enhance command and control, communications, sensors, integration and interoperability.[153] Research is targeting intelligence collection and analysis, logistics, cyber operations, information operations, and semiautonomous and autonomous vehicles.[152] AI technologies enable coordination of sensors and effectors, threat detection and identification, marking of enemy positions, target acquisition, coordination and deconfliction of distributed Joint Fires between networked combat vehicles involving manned and unmanned teams.[153] AI was incorporated into military operations in Iraq and Syria.[152]

In November 2023, US Vice President Kamala Harris disclosed a declaration signed by 31 nations to set guardrails for the military use of AI. The commitments include using legal reviews to ensure the compliance of military AI with international laws, and being cautious and transparent in the development of this technology.[154]

Generative AI

Vincent van Gogh in watercolour created by generative AI software

In the early 2020s, generative AI gained widespread prominence. GenAI is AI capable of generating text, images, videos, or other data using generative models,[155][156] often in response to prompts.[157][158]

In March 2023, 58% of U.S. adults had heard about ChatGPT and 14% had tried it.[159] The increasing realism and ease-of-use of AI-based text-to-image generators such as Midjourney, DALL-E, and Stable Diffusion sparked a trend of viral AI-generated photos. Widespread attention was gained by a fake photo of Pope Francis wearing a white puffer coat, the fictional arrest of Donald Trump, and a hoax of an attack on the Pentagon, as well as the usage in professional creative arts.[160][161]

Agents

Artificial intelligence (AI) agents are software entities designed to perceive their environment, make decisions, and take actions autonomously to achieve specific goals. These agents can interact with users, their environment, or other agents. AI agents are used in various applications, including virtual assistants, chatbots, autonomous vehicles, game-playing systems, and industrial robotics. AI agents operate within the constraints of their programming, available computational resources, and hardware limitations. This means they are restricted to performing tasks within their defined scope and have finite memory and processing capabilities. In real-world applications, AI agents often face time constraints for decision-making and action execution. Many AI agents incorporate learning algorithms, enabling them to improve their performance over time through experience or training. Using machine learning, AI agents can adapt to new situations and optimise their behaviour for their designated tasks.[162][163][164]

Other industry-specific tasks

There are also thousands of successful AI applications used to solve specific problems for specific industries or institutions. In a 2017 survey, one in five companies reported having incorporated "AI" in some offerings or processes.[165] A few examples are energy storage, medical diagnosis, military logistics, applications that predict the result of judicial decisions, foreign policy, or supply chain management.

AI applications for evacuation and disaster management are growing. AI has been used to investigate if and how people evacuated in large-scale and small-scale evacuations, using historical data from GPS, videos or social media. Further, AI can provide real-time information on evacuation conditions.[166][167][168]

In agriculture, AI has helped farmers identify areas that need irrigation, fertilization, pesticide treatments or increasing yield. Agronomists use AI to conduct research and development. AI has been used to predict the ripening time for crops such as tomatoes, monitor soil moisture, operate agricultural robots, conduct predictive analytics, classify livestock pig call emotions, automate greenhouses, detect diseases and pests, and save water.

Artificial intelligence is used in astronomy to analyze increasing amounts of available data and applications, mainly for "classification, regression, clustering, forecasting, generation, discovery, and the development of new scientific insights" for example for discovering exoplanets, forecasting solar activity, and distinguishing between signals and instrumental effects in gravitational wave astronomy. It could also be used for activities in space such as space exploration, including analysis of data from space missions, real-time science decisions of spacecraft, space debris avoidance, and more autonomous operation.

Ethics

AI has potential benefits and potential risks. AI may be able to advance science and find solutions for serious problems: Demis Hassabis of DeepMind hopes to "solve intelligence, and then use that to solve everything else".[169] However, as the use of AI has become widespread, several unintended consequences and risks have been identified.[170] In-production systems can sometimes not factor ethics and bias into their AI training processes, especially when the AI algorithms are inherently unexplainable in deep learning.[171]

Risks and harm

Machine learning algorithms require large amounts of data. The techniques used to acquire this data have raised concerns about privacy, surveillance and copyright.

AI-powered devices and services, such as virtual assistants and IoT products, continuously collect personal information, raising concerns about intrusive data gathering and unauthorized access by third parties. The loss of privacy is further exacerbated by AI's ability to process and combine vast amounts of data, potentially leading to a surveillance society where individual activities are constantly monitored and analyzed without adequate safeguards or transparency.

Sensitive user data collected may include online activity records, geolocation data, video or audio.[172] For example, in order to build speech recognition algorithms, Amazon has recorded millions of private conversations and allowed temporary workers to listen to and transcribe some of them.[173] Opinions about this widespread surveillance range from those who see it as a necessary evil to those for whom it is clearly unethical and a violation of the right to privacy.[174]

AI developers argue that this is the only way to deliver valuable applications, and have developed several techniques that attempt to preserve privacy while still obtaining the data, such as data aggregation, de-identification and differential privacy.[175] Since 2016, some privacy experts, such as Cynthia Dwork, have begun to view privacy in terms of fairness. Brian Christian wrote that experts have pivoted "from the question of 'what they know' to the question of 'what they're doing with it'."[176]
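
A minimal sketch of one such technique, the Laplace mechanism of differential privacy, with invented parameters (real deployments must also track the privacy budget across queries):

```python
import random

# Laplace mechanism: add noise scaled to sensitivity/epsilon before releasing a count.
def private_count(true_count, epsilon=0.5, sensitivity=1.0):
    scale = sensitivity / epsilon
    # The difference of two i.i.d. exponential variables is Laplace-distributed.
    noise = random.expovariate(1 / scale) - random.expovariate(1 / scale)
    return true_count + noise

print(round(private_count(1000)))  # the true count, plus calibrated noise
```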

Generative AI is often trained on unlicensed copyrighted works, including in domains such as images or computer code; the output is then used under the rationale of "fair use". Experts disagree about how well and under what circumstances this rationale will hold up in courts of law; relevant factors may include "the purpose and character of the use of the copyrighted work" and "the effect upon the potential market for the copyrighted work".[177][178] Website owners who do not wish to have their content scraped can indicate it in a "robots.txt" file.[179] In 2023, leading authors (including John Grisham and Jonathan Franzen) sued AI companies for using their work to train generative AI.[180][181] Another discussed approach is to envision a separate sui generis system of protection for creations generated by AI to ensure fair attribution and compensation for human authors.[182]

Dominance by tech giants

The commercial AI scene is dominated by Big Tech companies such as Alphabet Inc., Amazon, Apple Inc., Meta Platforms, and Microsoft.[183][184][185] Some of these players already own the vast majority of existing cloud infrastructure and computing power from data centers, allowing them to entrench further in the marketplace.[186][187]

Substantial power needs and other environmental impacts

In January 2024, the International Energy Agency (IEA) released Electricity 2024, Analysis and Forecast to 2026, forecasting electric power use.[188] This is the first IEA report to make projections for data centers and power consumption for artificial intelligence and cryptocurrency. The report states that power demand for these uses might double by 2026, with additional electric power usage equal to electricity used by the whole Japanese nation.[189]

Prodigious power consumption by AI is responsible for the growth of fossil fuels use, and might delay closings of obsolete, carbon-emitting coal energy facilities. There is a feverish rise in the construction of data centers throughout the US, making large technology firms (e.g., Microsoft, Meta, Google, Amazon) into voracious consumers of electric power. Projected electric consumption is so immense that there is concern that it will be fulfilled no matter the source. A ChatGPT search involves the use of 10 times the electrical energy as a Google search. The large firms are in haste to find power sources – from nuclear energy to geothermal to fusion. The tech firms argue that – in the long view – AI will be eventually kinder to the environment, but they need the energy now. AI makes the power grid more efficient and "intelligent", will assist in the growth of nuclear power, and track overall carbon emissions, according to technology firms.[190]

A 2024 Goldman Sachs Research Paper, AI Data Centers and the Coming US Power Demand Surge, found "US power demand (is) likely to experience growth not seen in a generation…." and forecasts that, by 2030, US data centers will consume 8% of US power, as opposed to 3% in 2022, presaging growth for the electrical power generation industry by a variety of means.[191] Data centers' need for more and more electrical power is such that they might max out the electrical grid. The Big Tech companies counter that AI can be used to maximize the utilization of the grid by all.[192]

In 2024, the Wall Street Journal reported that big AI companies have begun negotiations with US nuclear power providers to provide electricity to the data centers. In March 2024 Amazon purchased a Pennsylvania nuclear-powered data center for $650 million (US).[193]

Misinformation

YouTube, Facebook and others use recommender systems to guide users to more content. These AI programs were given the goal of maximizing user engagement (that is, the only goal was to keep people watching). The AI learned that users tended to choose misinformation, conspiracy theories, and extreme partisan content, and, to keep them watching, the AI recommended more of it. Users also tended to watch more content on the same subject, so the AI led people into filter bubbles where they received multiple versions of the same misinformation.[194] This convinced many users that the misinformation was true, and ultimately undermined trust in institutions, the media and the government.[195] The AI program had correctly learned to maximize its goal, but the result was harmful to society. After the U.S. election in 2016, major technology companies took steps to mitigate the problem.[citation needed]
YouTubeFacebook 和其他平台使用 推荐系统 来引导用户获取更多内容。这些人工智能程序的目标是 最大化 用户参与度(也就是说,唯一的目标是让人们持续观看)。人工智能发现用户倾向于选择 错误信息阴谋论 和极端的 党派 内容,为了让他们继续观看,人工智能推荐了更多此类内容。用户还倾向于观看同一主题的更多内容,因此人工智能将人们引导到 过滤气泡 中,在那里他们接收到了同一错误信息的多个版本。[194] 这让许多用户相信错误信息是真实的,最终削弱了对机构、媒体和政府的信任。[195] 该人工智能程序确实学会了最大化其目标,但结果对社会造成了伤害。在 2016 年美国大选后,主要科技公司采取措施来缓解这一问题 [需要引用]

In 2022, generative AI began to create images, audio, video and text that are indistinguishable from real photographs, recordings, films, or human writing. It is possible for bad actors to use this technology to create massive amounts of misinformation or propaganda.[196] AI pioneer Geoffrey Hinton expressed concern about AI enabling "authoritarian leaders to manipulate their electorates" on a large scale, among other risks.[197]

Algorithmic bias and fairness

Machine learning applications will be biased[k] if they learn from biased data.[199] The developers may not be aware that the bias exists.[200] Bias can be introduced by the way training data is selected and by the way a model is deployed.[201][199] If a biased algorithm is used to make decisions that can seriously harm people (as it can in medicine, finance, recruitment, housing or policing) then the algorithm may cause discrimination.[202] The field of fairness studies how to prevent harms from algorithmic biases.

On June 28, 2015, Google Photos's new image labeling feature mistakenly identified Jacky Alcine and a friend as "gorillas" because they were black. The system was trained on a dataset that contained very few images of black people,[203] a problem called "sample size disparity".[204] Google "fixed" this problem by preventing the system from labelling anything as a "gorilla". Eight years later, in 2023, Google Photos still could not identify a gorilla, and neither could similar products from Apple, Facebook, Microsoft and Amazon.[205]

COMPAS is a commercial program widely used by U.S. courts to assess the likelihood of a defendant becoming a recidivist. In 2016, Julia Angwin at ProPublica discovered that COMPAS exhibited racial bias, despite the fact that the program was not told the races of the defendants. Although the error rate for both whites and blacks was calibrated equal at exactly 61%, the errors for each race were different—the system consistently overestimated the chance that a black person would re-offend and would underestimate the chance that a white person would not re-offend.[206] In 2017, several researchers[l] showed that it was mathematically impossible for COMPAS to accommodate all possible measures of fairness when the base rates of re-offense were different for whites and blacks in the data.[208]

A program can make biased decisions even if the data does not explicitly mention a problematic feature (such as "race" or "gender"). The feature will correlate with other features (like "address", "shopping history" or "first name"), and the program will make the same decisions based on these features as it would on "race" or "gender".[209] Moritz Hardt said "the most robust fact in this research area is that fairness through blindness doesn't work."[210]
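
A minimal sketch of this effect on synthetic data (the variable names and distributions are invented for illustration): the protected attribute is withheld from the model, yet a correlated proxy reproduces the disparity.

```python
# "Fairness through blindness" on synthetic data: the model never sees the
# protected attribute, but a correlated proxy feature carries the bias back in.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)           # protected attribute (never shown to the model)
proxy = group + rng.normal(0, 0.3, n)   # correlated feature, e.g. a neighbourhood code
outcome = (group + rng.normal(0, 0.5, n) > 0.5).astype(int)  # historically biased labels

model = LogisticRegression().fit(proxy.reshape(-1, 1), outcome)
pred = model.predict(proxy.reshape(-1, 1))

# Predictions still differ sharply by group, via the proxy alone.
print("positive rate, group 0:", pred[group == 0].mean())
print("positive rate, group 1:", pred[group == 1].mean())
```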

Criticism of COMPAS highlighted that machine learning models are designed to make "predictions" that are only valid if we assume that the future will resemble the past. If they are trained on data that includes the results of racist decisions in the past, machine learning models must predict that racist decisions will be made in the future. If an application then uses these predictions as recommendations, some of these "recommendations" will likely be racist.[211] Thus, machine learning is not well suited to help make decisions in areas where there is hope that the future will be better than the past. It is descriptive rather than prescriptive.[m]

Bias and unfairness may go undetected because the developers are overwhelmingly white and male: among AI engineers, about 4% are black and 20% are women.[204]

There are various conflicting definitions and mathematical models of fairness. These notions depend on ethical assumptions, and are influenced by beliefs about society. One broad category is distributive fairness, which focuses on the outcomes, often identifying groups and seeking to compensate for statistical disparities. Representational fairness tries to ensure that AI systems do not reinforce negative stereotypes or render certain groups invisible. Procedural fairness focuses on the decision process rather than the outcome. The most relevant notions of fairness may depend on the context, notably the type of AI application and the stakeholders. The subjectivity in the notions of bias and fairness makes it difficult for companies to operationalize them. Having access to sensitive attributes such as race or gender is also considered by many AI ethicists to be necessary in order to compensate for biases, but it may conflict with anti-discrimination laws.[198]
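
As a concrete illustration of how such notions can conflict, the hypothetical sketch below scores the same predictions under two distributive-fairness metrics: the two groups look identical under demographic parity (equal selection rates) but differ under equalized odds (unequal true-positive rates).

```python
# Two fairness metrics disagreeing on the same toy predictions.
import numpy as np

y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])   # actual outcomes (hypothetical)
y_pred = np.array([1, 0, 1, 0, 1, 0, 1, 0])   # model decisions (hypothetical)
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])   # group membership

for g in (0, 1):
    m = group == g
    selection_rate = y_pred[m].mean()          # demographic parity
    tpr = y_pred[m & (y_true == 1)].mean()     # equalized odds (TPR component)
    print(f"group {g}: selection rate {selection_rate:.2f}, TPR {tpr:.2f}")
# Both groups are selected at rate 0.50, but the true-positive rates are
# 0.67 vs 1.00: "fair" by one definition, unequal by the other.
```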

At its 2022 Conference on Fairness, Accountability, and Transparency (ACM FAccT 2022), held in Seoul, South Korea, the Association for Computing Machinery presented and published findings recommending that, until AI and robotics systems can be demonstrated to be free of bias errors, they should be considered unsafe, and that the use of self-learning neural networks trained on vast, unregulated sources of flawed internet data should be curtailed.[213]

Lack of transparency

Many AI systems are so complex that their designers cannot explain how they reach their decisions.[214] This is particularly true of deep neural networks, in which there are a large number of non-linear relationships between inputs and outputs. Some popular explainability techniques nevertheless exist.[215]

It is impossible to be certain that a program is operating correctly if no one knows how exactly it works. There have been many cases where a machine learning program passed rigorous tests, but nevertheless learned something different than what the programmers intended. For example, a system that could identify skin diseases better than medical professionals was found to actually have a strong tendency to classify images with a ruler as "cancerous", because pictures of malignancies typically include a ruler to show the scale.[216] Another machine learning system designed to help effectively allocate medical resources was found to classify patients with asthma as being at "low risk" of dying from pneumonia. Having asthma is actually a severe risk factor, but since the patients having asthma would usually get much more medical care, they were relatively unlikely to die according to the training data. The correlation between asthma and low risk of dying from pneumonia was real, but misleading.[217]

People who have been harmed by an algorithm's decision have a right to an explanation.[218] Doctors, for example, are expected to clearly and completely explain to their colleagues the reasoning behind any decision they make. Early drafts of the European Union's General Data Protection Regulation in 2016 included an explicit statement that this right exists.[n] Industry experts noted that this is an unsolved problem with no solution in sight. Regulators argued that nevertheless the harm is real: if the problem has no solution, the tools should not be used.[219]

DARPA established the XAI ("Explainable Artificial Intelligence") program in 2014 to try to solve these problems.[220]

Several approaches aim to address the transparency problem. SHAP makes it possible to visualise the contribution of each feature to the output.[221] LIME can locally approximate a model's outputs with a simpler, interpretable model.[222] Multitask learning provides a large number of outputs in addition to the target classification. These other outputs can help developers deduce what the network has learned.[223] Deconvolution, DeepDream and other generative methods can allow developers to see what different layers of a deep network for computer vision have learned, and produce output that can suggest what the network is learning.[224] For generative pre-trained transformers, Anthropic developed a technique based on dictionary learning that associates patterns of neuron activations with human-understandable concepts.[225]
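
For illustration, the sketch below applies the open-source shap package to a scikit-learn model; the dataset and model are arbitrary choices made for this example, not ones discussed in the cited sources.

```python
# Feature attribution with SHAP: visualise each feature's contribution to a
# model's predictions (toy setup; any fitted model and dataset would do).
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(random_state=0).fit(X, y)

explainer = shap.Explainer(model, X)     # auto-selects a tree-aware explainer
shap_values = explainer(X.iloc[:100])    # one contribution per feature per prediction
shap.plots.beeswarm(shap_values)         # plot which features drive the output
```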

Bad actors and weaponized AI

Artificial intelligence provides a number of tools that are useful to bad actors, such as authoritarian governments, terrorists, criminals or rogue states.

A lethal autonomous weapon is a machine that locates, selects and engages human targets without human supervision.[o] Widely available AI tools can be used by bad actors to develop inexpensive autonomous weapons and, if produced at scale, they are potentially weapons of mass destruction.[227] Even when used in conventional warfare, they are unlikely to be able to reliably choose targets and could potentially kill an innocent person.[227] In 2014, 30 nations (including China) supported a ban on autonomous weapons under the United Nations' Convention on Certain Conventional Weapons; however, the United States and others disagreed.[228] By 2015, over fifty countries were reported to be researching battlefield robots.[229]

AI tools make it easier for authoritarian governments to efficiently control their citizens in several ways. Face and voice recognition allow widespread surveillance. Machine learning, operating on this data, can classify potential enemies of the state and prevent them from hiding. Recommendation systems can precisely target propaganda and misinformation for maximum effect. Deepfakes and generative AI aid in producing misinformation. Advanced AI can make authoritarian centralized decision making more competitive than liberal and decentralized systems such as markets. It lowers the cost and difficulty of digital warfare and advanced spyware.[230] All these technologies have been available since 2020 or earlier—AI facial recognition systems are already being used for mass surveillance in China.[231][232]

There are many other ways in which AI is expected to help bad actors, some of which cannot be foreseen. For example, machine-learning AI is able to design tens of thousands of toxic molecules in a matter of hours.[233]

Technological unemployment

Economists have frequently highlighted the risks of redundancies from AI, and speculated about unemployment if there is no adequate social policy for full employment.[234]

In the past, technology has tended to increase rather than reduce total employment, but economists acknowledge that "we're in uncharted territory" with AI.[235] A survey of economists showed disagreement about whether the increasing use of robots and AI will cause a substantial increase in long-term unemployment, but they generally agree that it could be a net benefit if productivity gains are redistributed.[236] Risk estimates vary; for example, in the 2010s, Michael Osborne and Carl Benedikt Frey estimated 47% of U.S. jobs are at "high risk" of potential automation, while an OECD report classified only 9% of U.S. jobs as "high risk".[p][238] The methodology of speculating about future employment levels has been criticised as lacking evidential foundation, and for implying that technology, rather than social policy, creates unemployment, as opposed to redundancies.[234] In April 2023, it was reported that 70% of the jobs for Chinese video game illustrators had been eliminated by generative artificial intelligence.[239][240]

Unlike previous waves of automation, many middle-class jobs may be eliminated by artificial intelligence; The Economist stated in 2015 that "the worry that AI could do to white-collar jobs what steam power did to blue-collar ones during the Industrial Revolution" is "worth taking seriously".[241] Jobs at extreme risk range from paralegals to fast food cooks, while job demand is likely to increase for care-related professions ranging from personal healthcare to the clergy.[242]

From the early days of the development of artificial intelligence, there have been arguments, for example, those put forward by Joseph Weizenbaum, about whether tasks that can be done by computers actually should be done by them, given the difference between computers and humans, and between quantitative calculation and qualitative, value-based judgement.[243]

Existential risk

It has been argued AI will become so powerful that humanity may irreversibly lose control of it. This could, as physicist Stephen Hawking stated, "spell the end of the human race".[244] This scenario has been common in science fiction, in which a computer or robot suddenly develops a human-like "self-awareness" (or "sentience" or "consciousness") and becomes a malevolent character.[q] These sci-fi scenarios are misleading in several ways.

First, AI does not require human-like "sentience" to be an existential risk. Modern AI programs are given specific goals and use learning and intelligence to achieve them. Philosopher Nick Bostrom argued that if one gives almost any goal to a sufficiently powerful AI, it may choose to destroy humanity to achieve it (he used the example of a paperclip factory manager).[246] Stuart Russell gives the example of a household robot that tries to find a way to kill its owner to prevent it from being unplugged, reasoning that "you can't fetch the coffee if you're dead."[247] In order to be safe for humanity, a superintelligence would have to be genuinely aligned with humanity's morality and values so that it is "fundamentally on our side".[248]

Second, Yuval Noah Harari argues that AI does not require a robot body or physical control to pose an existential risk. The essential parts of civilization are not physical. Things like ideologies, law, government, money and the economy are made of language; they exist because there are stories that billions of people believe. The current prevalence of misinformation suggests that an AI could use language to convince people to believe anything, even to take actions that are destructive.[249]

The opinions amongst experts and industry insiders are mixed, with sizable fractions both concerned and unconcerned by risk from eventual superintelligent AI.[250] Personalities such as Stephen Hawking, Bill Gates, and Elon Musk,[251] as well as AI pioneers such as Yoshua Bengio, Stuart Russell, Demis Hassabis, and Sam Altman, have expressed concerns about existential risk from AI.

In May 2023, Geoffrey Hinton announced his resignation from Google in order to be able to "freely speak out about the risks of AI" without "considering how this impacts Google."[252] He notably mentioned risks of an AI takeover,[253] and stressed that in order to avoid the worst outcomes, establishing safety guidelines will require cooperation among those competing in use of AI.[254]

In 2023, many leading AI experts issued the joint statement that "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war".[255]

Other researchers, however, spoke in favor of a less dystopian view. AI pioneer Juergen Schmidhuber did not sign the joint statement, emphasising that in 95% of all cases, AI research is about making "human lives longer and healthier and easier."[256] While the tools that are now being used to improve lives can also be used by bad actors, "they can also be used against the bad actors."[257][258] Andrew Ng also argued that "it's a mistake to fall for the doomsday hype on AI—and that regulators who do will only benefit vested interests."[259] Yann LeCun "scoffs at his peers' dystopian scenarios of supercharged misinformation and even, eventually, human extinction."[260] In the early 2010s, experts argued that the risks are too distant in the future to warrant research or that humans will be valuable from the perspective of a superintelligent machine.[261] However, after 2016, the study of current and future risks and possible solutions became a serious area of research.[262]

Ethical machines and alignment

Friendly AI are machines that have been designed from the beginning to minimize risks and to make choices that benefit humans. Eliezer Yudkowsky, who coined the term, argues that developing friendly AI should be a higher research priority: it may require a large investment and it must be completed before AI becomes an existential risk.[263]

Machines with intelligence have the potential to use their intelligence to make ethical decisions. The field of machine ethics provides machines with ethical principles and procedures for resolving ethical dilemmas.[264] The field of machine ethics is also called computational morality,[264] and was founded at an AAAI symposium in 2005.[265]

Other approaches include Wendell Wallach's "artificial moral agents"[266] and Stuart J. Russell's three principles for developing provably beneficial machines.[267]

Open source

Active organizations in the AI open-source community include Hugging Face,[268] Google,[269] EleutherAI and Meta.[270] Various AI models, such as Llama 2, Mistral or Stable Diffusion, have been made open-weight,[271][272] meaning that their architecture and trained parameters (the "weights") are publicly available. Open-weight models can be freely fine-tuned, which allows companies to specialize them with their own data and for their own use-case.[273] Open-weight models are useful for research and innovation but can also be misused. Since they can be fine-tuned, any built-in security measure, such as objecting to harmful requests, can be trained away until it becomes ineffective. Some researchers warn that future AI models may develop dangerous capabilities (such as the potential to drastically facilitate bioterrorism) and that once released on the Internet, they can't be deleted everywhere if needed. They recommend pre-release audits and cost-benefit analyses.[274]
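
For illustration, a minimal sketch of how an open-weight model is typically obtained and run with the Hugging Face transformers library. The model name is one example of an open-weight release; some, such as Llama 2, also require accepting a license before the weights can be downloaded.

```python
# Downloading and running an open-weight model (architecture + public weights).
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "meta-llama/Llama-2-7b-hf"   # an open-weight model; license acceptance required
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

inputs = tokenizer("Open-weight models can be", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
# The same loaded weights can then be fine-tuned on private data, which is
# also what makes built-in safety behaviour removable, as described above.
```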

Frameworks

Artificial intelligence projects can have their ethical permissibility tested while designing, developing, and implementing an AI system. An AI framework such as the Care and Act Framework, which contains the SUM values and was developed by the Alan Turing Institute, tests projects in four main areas:[275][276]

  • Respect the dignity of individual people
  • Connect with other people sincerely, openly, and inclusively
  • Care for the wellbeing of everyone
  • Protect social values, justice, and the public interest

Other developments in ethical frameworks include those decided upon during the Asilomar Conference, the Montreal Declaration for Responsible AI, and the IEEE's Ethics of Autonomous Systems initiative, among others;[277] however, these principles are not without criticism, especially with regard to the choice of the people who contribute to these frameworks.[278]

Promotion of the wellbeing of the people and communities that these technologies affect requires consideration of the social and ethical implications at all stages of AI system design, development and implementation, and collaboration between job roles such as data scientists, product managers, data engineers, domain experts, and delivery managers.[279]

In 2024, the UK AI Safety Institute released 'Inspect', a testing toolset for AI safety evaluations. It is freely available on GitHub under an MIT open-source licence and can be extended with third-party packages. It can be used to evaluate AI models in a range of areas including core knowledge, ability to reason, and autonomous capabilities.[280]

Regulation

The first global AI Safety Summit was held in 2023 with a declaration calling for international co-operation.

The regulation of artificial intelligence is the development of public sector policies and laws for promoting and regulating AI; it is therefore related to the broader regulation of algorithms.[281] The regulatory and policy landscape for AI is an emerging issue in jurisdictions globally.[282] According to the AI Index at Stanford, the annual number of AI-related laws passed in the 127 survey countries jumped from one passed in 2016 to 37 passed in 2022 alone.[283][284] Between 2016 and 2020, more than 30 countries adopted dedicated strategies for AI.[285] Most EU member states had released national AI strategies, as had Canada, China, India, Japan, Mauritius, the Russian Federation, Saudi Arabia, United Arab Emirates, U.S., and Vietnam. Others were in the process of elaborating their own AI strategy, including Bangladesh, Malaysia and Tunisia.[285] The Global Partnership on Artificial Intelligence was launched in June 2020, stating a need for AI to be developed in accordance with human rights and democratic values, to ensure public confidence and trust in the technology.[285] Henry Kissinger, Eric Schmidt, and Daniel Huttenlocher published a joint statement in November 2021 calling for a government commission to regulate AI.[286] In 2023, OpenAI leaders published recommendations for the governance of superintelligence, which they believe may happen in less than 10 years.[287] In 2023, the United Nations also launched an advisory body to provide recommendations on AI governance; the body comprises technology company executives, government officials and academics.[288] In 2024, the Council of Europe created the first international legally binding treaty on AI, called the "Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law". It was adopted by the European Union, the United States, the United Kingdom, and other signatories.[289]

In a 2022 Ipsos survey, attitudes towards AI varied greatly by country; 78% of Chinese citizens, but only 35% of Americans, agreed that "products and services using AI have more benefits than drawbacks".[283] A 2023 Reuters/Ipsos poll found that 61% of Americans agree, and 22% disagree, that AI poses risks to humanity.[290] In a 2023 Fox News poll, 35% of Americans thought it "very important", and an additional 41% thought it "somewhat important", for the federal government to regulate AI, versus 13% responding "not very important" and 8% responding "not at all important".[291][292]

In November 2023, the first global AI Safety Summit was held in Bletchley Park in the UK to discuss the near and far term risks of AI and the possibility of mandatory and voluntary regulatory frameworks.[293] 28 countries including the United States, China, and the European Union issued a declaration at the start of the summit, calling for international co-operation to manage the challenges and risks of artificial intelligence.[294][295] In May 2024 at the AI Seoul Summit, 16 global AI tech companies agreed to safety commitments on the development of AI.[296][297]

History

The study of mechanical or "formal" reasoning began with philosophers and mathematicians in antiquity. The study of logic led directly to Alan Turing's theory of computation, which suggested that a machine, by shuffling symbols as simple as "0" and "1", could simulate any conceivable form of mathematical reasoning.[298][299] This, along with concurrent discoveries in cybernetics, information theory and neurobiology, led researchers to consider the possibility of building an "electronic brain".[r] They developed several areas of research that would become part of AI,[301] such as McCulloch and Pitts' design for "artificial neurons" in 1943,[115] and Turing's influential 1950 paper 'Computing Machinery and Intelligence', which introduced the Turing test and showed that "machine intelligence" was plausible.[302][299]

The field of AI research was founded at a workshop at Dartmouth College in 1956.[s][6] The attendees became the leaders of AI research in the 1960s.[t] They and their students produced programs that the press described as "astonishing":[u] computers were learning checkers strategies, solving word problems in algebra, proving logical theorems and speaking English.[v][7] Artificial intelligence laboratories were set up at a number of British and U.S. universities in the latter 1950s and early 1960s.[299]

Researchers in the 1960s and the 1970s were convinced that their methods would eventually succeed in creating a machine with general intelligence and considered this the goal of their field.[306] In 1965 Herbert Simon predicted, "machines will be capable, within twenty years, of doing any work a man can do".[307] In 1967 Marvin Minsky agreed, writing that "within a generation ... the problem of creating 'artificial intelligence' will substantially be solved".[308] They had, however, underestimated the difficulty of the problem.[w] In 1974, both the U.S. and British governments cut off exploratory research in response to the criticism of Sir James Lighthill[310] and ongoing pressure from the U.S. Congress to fund more productive projects.[311] Minsky's and Papert's book Perceptrons was understood as proving that artificial neural networks would never be useful for solving real-world tasks, thus discrediting the approach altogether.[312] The "AI winter", a period when obtaining funding for AI projects was difficult, followed.[9]

In the early 1980s, AI research was revived by the commercial success of expert systems,[313] a form of AI program that simulated the knowledge and analytical skills of human experts. By 1985, the market for AI had reached over a billion dollars. At the same time, Japan's fifth generation computer project inspired the U.S. and British governments to restore funding for academic research.[8] However, beginning with the collapse of the Lisp Machine market in 1987, AI once again fell into disrepute, and a second, longer-lasting winter began.[10]

Up to this point, most of AI's funding had gone to projects that used high-level symbols to represent mental objects like plans, goals, beliefs, and known facts. In the 1980s, some researchers began to doubt that this approach would be able to imitate all the processes of human cognition, especially perception, robotics, learning and pattern recognition,[314] and began to look into "sub-symbolic" approaches.[315] Rodney Brooks rejected "representation" in general and focussed directly on engineering machines that move and survive.[x] Judea Pearl, Lotfi Zadeh and others developed methods that handled incomplete and uncertain information by making reasonable guesses rather than precise logic.[86][320] But the most important development was the revival of "connectionism", including neural network research, by Geoffrey Hinton and others.[321] In 1990, Yann LeCun successfully showed that convolutional neural networks can recognize handwritten digits, the first of many successful applications of neural networks.[322]

AI gradually restored its reputation in the late 1990s and early 21st century by exploiting formal mathematical methods and by finding specific solutions to specific problems. This "narrow" and "formal" focus allowed researchers to produce verifiable results and collaborate with other fields (such as statistics, economics and mathematics).[323] By 2000, solutions developed by AI researchers were being widely used, although in the 1990s they were rarely described as "artificial intelligence".[324] However, several academic researchers became concerned that AI was no longer pursuing its original goal of creating versatile, fully intelligent machines. Beginning around 2002, they founded the subfield of artificial general intelligence (or "AGI"), which had several well-funded institutions by the 2010s.[4]

Deep learning began to dominate industry benchmarks in 2012 and was adopted throughout the field.[11] For many specific tasks, other methods were abandoned.[y] Deep learning's success was based on both hardware improvements (faster computers,[326] graphics processing units, cloud computing[327]) and access to large amounts of data[328] (including curated datasets,[327] such as ImageNet). Deep learning's success led to an enormous increase in interest and funding in AI.[z] The amount of machine learning research (measured by total publications) increased by 50% in the years 2015–2019.[285]

In 2016, issues of fairness and the misuse of technology were catapulted into center stage at machine learning conferences, publications vastly increased, funding became available, and many researchers re-focussed their careers on these issues. The alignment problem became a serious field of academic study.[262]

In the late 2010s and early 2020s, AGI companies began to deliver programs that created enormous interest. In 2016, AlphaGo, developed by DeepMind, beat the world champion Go player. The program was taught only the rules of the game and developed strategy by itself. GPT-3 is a large language model that was released in 2020 by OpenAI and is capable of generating high-quality human-like text.[329] These programs, and others, inspired an aggressive AI boom, where large companies began investing billions in AI research. According to AI Impacts, about $50 billion annually was invested in "AI" around 2022 in the U.S. alone and about 20% of the new U.S. Computer Science PhD graduates have specialized in "AI".[330] About 800,000 "AI"-related U.S. job openings existed in 2022.[331]

Philosophy

Defining artificial intelligence

Alan Turing wrote in 1950 "I propose to consider the question 'can machines think'?"[332] He advised changing the question from whether a machine "thinks", to "whether or not it is possible for machinery to show intelligent behaviour".[332] He devised the Turing test, which measures the ability of a machine to simulate human conversation.[302] Since we can only observe the behavior of the machine, it does not matter if it is "actually" thinking or literally has a "mind". Turing notes that we cannot determine these things about other people but "it is usual to have a polite convention that everyone thinks."[333]

The Turing test can provide some evidence of intelligence, but it penalizes non-human intelligent behavior.[334]

Russell and Norvig agree with Turing that intelligence must be defined in terms of external behavior, not internal structure.[1] However, they are critical that the test requires the machine to imitate humans. "Aeronautical engineering texts," they wrote, "do not define the goal of their field as making 'machines that fly so exactly like pigeons that they can fool other pigeons.'"[335] AI founder John McCarthy agreed, writing that "Artificial intelligence is not, by definition, simulation of human intelligence".[336]
拉塞尔诺维格同意图灵的观点,即智能必须根据外部行为而非内部结构来定义。[1] 然而,他们批评这个测试要求机器模仿人类。“航空工程的文本,”他们写道,“并不将其领域的目标定义为制造‘像鸽子一样飞行的机器,以至于它们可以欺骗其他鸽子。'[335] 人工智能创始人约翰·麦卡锡同意这一观点,写道“人工智能的定义并不是模拟人类智能”。[336]

McCarthy defines intelligence as "the computational part of the ability to achieve goals in the world".[337] Another AI founder, Marvin Minsky, similarly describes it as "the ability to solve hard problems".[338] The leading AI textbook defines it as the study of agents that perceive their environment and take actions that maximize their chances of achieving defined goals.[1] These definitions view intelligence in terms of well-defined problems with well-defined solutions, where both the difficulty of the problem and the performance of the program are direct measures of the "intelligence" of the machine; no other philosophical discussion is required, and may not even be possible.

Another definition has been adopted by Google,[339] a major practitioner in the field of AI. This definition stipulates the ability of systems to synthesize information as the manifestation of intelligence, similar to the way it is defined in biological intelligence.

Some authors have suggested that, in practice, the definition of AI is vague and difficult to pin down, with contention as to whether classical algorithms should be categorised as AI,[340] and with many companies during the early 2020s AI boom using the term as a marketing buzzword even if they did "not actually use AI in a material way".[341]

Evaluating approaches to AI

No established unifying theory or paradigm has guided AI research for most of its history.[aa] The unprecedented success of statistical machine learning in the 2010s eclipsed all other approaches (so much so that some sources, especially in the business world, use the term "artificial intelligence" to mean "machine learning with neural networks"). This approach is mostly sub-symbolic, soft and narrow. Critics argue that these questions may have to be revisited by future generations of AI researchers.

Symbolic AI and its limits

Symbolic AI (or "GOFAI")[343] simulated the high-level conscious reasoning that people use when they solve puzzles, express legal reasoning and do mathematics. Such programs were highly successful at "intelligent" tasks such as algebra or IQ tests. In the 1960s, Newell and Simon proposed the physical symbol systems hypothesis: "A physical symbol system has the necessary and sufficient means of general intelligent action."[344]

However, the symbolic approach failed on many tasks that humans solve easily, such as learning, recognizing an object or commonsense reasoning. Moravec's paradox is the discovery that high-level "intelligent" tasks were easy for AI, but low level "instinctive" tasks were extremely difficult.[345] Philosopher Hubert Dreyfus had argued since the 1960s that human expertise depends on unconscious instinct rather than conscious symbol manipulation, and on having a "feel" for the situation, rather than explicit symbolic knowledge.[346] Although his arguments had been ridiculed and ignored when they were first presented, eventually, AI research came to agree with him.[ab][16]

The issue is not resolved: sub-symbolic reasoning can make many of the same inscrutable mistakes that human intuition does, such as algorithmic bias. Critics such as Noam Chomsky argue continuing research into symbolic AI will still be necessary to attain general intelligence,[348][349] in part because sub-symbolic AI is a move away from explainable AI: it can be difficult or impossible to understand why a modern statistical AI program made a particular decision. The emerging field of neuro-symbolic artificial intelligence attempts to bridge the two approaches.

Neat vs. scruffy

"Neats" hope that intelligent behavior is described using simple, elegant principles (such as logic, optimization, or neural networks). "Scruffies" expect that it necessarily requires solving a large number of unrelated problems. Neats defend their programs with theoretical rigor, scruffies rely mainly on incremental testing to see if they work. This issue was actively discussed in the 1970s and 1980s,[350] but eventually was seen as irrelevant. Modern AI has elements of both.

Soft vs. hard computing

Finding a provably correct or optimal solution is intractable for many important problems.[15] Soft computing is a set of techniques, including genetic algorithms, fuzzy logic and neural networks, that are tolerant of imprecision, uncertainty, partial truth and approximation. Soft computing was introduced in the late 1980s and most successful AI programs in the 21st century are examples of soft computing with neural networks.
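
As a small illustration of the soft-computing style, the toy genetic algorithm below (a standard textbook formulation, not code from any cited source) evolves bit-strings toward an optimum through selection, crossover and random mutation, accepting approximate progress instead of proving optimality:

```python
# A toy genetic algorithm: maximise the number of 1-bits in a 20-bit string.
import random

LENGTH = 20

def fitness(bits):
    return sum(bits)

def mutate(bits, rate=0.05):
    return [b ^ (random.random() < rate) for b in bits]  # flip bits at random

def crossover(a, b):
    cut = random.randrange(1, LENGTH)
    return a[:cut] + b[cut:]

population = [[random.randint(0, 1) for _ in range(LENGTH)] for _ in range(30)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    if fitness(population[0]) == LENGTH:   # good enough: stop
        break
    parents = population[:10]              # selection: keep the fittest
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(len(population) - len(parents))]
    population = parents + children

print(f"best string found after {generation} generations:", population[0])
```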

Narrow vs. general AI

AI researchers are divided as to whether to pursue the goals of artificial general intelligence and superintelligence directly or to solve as many specific problems as possible (narrow AI) in hopes these solutions will lead indirectly to the field's long-term goals.[351][352] General intelligence is difficult to define and difficult to measure, and modern AI has had more verifiable successes by focusing on specific problems with specific solutions. The experimental sub-field of artificial general intelligence studies this area exclusively.

Machine consciousness, sentience, and mind

The philosophy of mind does not know whether a machine can have a mind, consciousness and mental states, in the same sense that human beings do. This issue considers the internal experiences of the machine, rather than its external behavior. Mainstream AI research considers this issue irrelevant because it does not affect the goals of the field: to build machines that can solve problems using intelligence. Russell and Norvig add that "[t]he additional project of making a machine conscious in exactly the way humans are is not one that we are equipped to take on."[353] However, the question has become central to the philosophy of mind. It is also typically the central question at issue in artificial intelligence in fiction.

Consciousness

David Chalmers identified two problems in understanding the mind, which he named the "hard" and "easy" problems of consciousness.[354] The easy problem is understanding how the brain processes signals, makes plans and controls behavior. The hard problem is explaining how this feels or why it should feel like anything at all, assuming we are right in thinking that it truly does feel like something (Dennett's consciousness illusionism says this is an illusion). While human information processing is easy to explain, human subjective experience is difficult to explain. For example, it is easy to imagine a color-blind person who has learned to identify which objects in their field of view are red, but it is not clear what would be required for the person to know what red looks like.[355]

Computationalism and functionalism

Computationalism is the position in the philosophy of mind that the human mind is an information processing system and that thinking is a form of computing. Computationalism argues that the relationship between mind and body is similar or identical to the relationship between software and hardware and thus may be a solution to the mind–body problem. This philosophical position was inspired by the work of AI researchers and cognitive scientists in the 1960s and was originally proposed by philosophers Jerry Fodor and Hilary Putnam.[356]

Philosopher John Searle characterized this position as "strong AI": "The appropriately programmed computer with the right inputs and outputs would thereby have a mind in exactly the same sense human beings have minds."[ac] Searle counters this assertion with his Chinese room argument, which attempts to show that, even if a machine perfectly simulates human behavior, there is still no reason to suppose it also has a mind.[360]

AI welfare and rights

It is difficult or impossible to reliably evaluate whether an advanced AI is sentient (has the ability to feel), and if so, to what degree.[361] But if there is a significant chance that a given machine can feel and suffer, then it may be entitled to certain rights or welfare protection measures, similarly to animals.[362][363] Sapience (a set of capacities related to high intelligence, such as discernment or self-awareness) may provide another moral basis for AI rights.[362] Robot rights are also sometimes proposed as a practical way to integrate autonomous agents into society.[364]

In 2017, the European Union considered granting "electronic personhood" to some of the most capable AI systems. Similarly to the legal status of companies, it would have conferred rights but also responsibilities.[365] Critics argued in 2018 that granting rights to AI systems would downplay the importance of human rights, and that legislation should focus on user needs rather than speculative futuristic scenarios. They also noted that robots lacked the autonomy to take part in society on their own.[366][367]

Progress in AI increased interest in the topic. Proponents of AI welfare and rights often argue that AI sentience, if it emerges, would be particularly easy to deny. They warn that this may be a moral blind spot analogous to slavery or factory farming, which could lead to large-scale suffering if sentient AI is created and carelessly exploited.[363][362]

Future

Superintelligence and the singularity

A superintelligence is a hypothetical agent that would possess intelligence far surpassing that of the brightest and most gifted human mind.[352]

If research into artificial general intelligence produced sufficiently intelligent software, it might be able to reprogram and improve itself. The improved software would be even better at improving itself, leading to what I. J. Good called an "intelligence explosion" and Vernor Vinge called a "singularity".[368]

However, technologies cannot improve exponentially indefinitely, and typically follow an S-shaped curve, slowing when they reach the physical limits of what the technology can do.[369]
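
Such saturating growth is commonly modelled by the logistic function (a standard formula, given here for illustration rather than taken from the cited source):

$$f(t) = \frac{L}{1 + e^{-k(t - t_0)}}$$

where $L$ is the ceiling imposed by physical limits, $k$ the initial growth rate, and $t_0$ the inflection point after which growth slows.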

Transhumanism

Robot designer Hans Moravec, cyberneticist Kevin Warwick, and inventor Ray Kurzweil have predicted that humans and machines will merge in the future into cyborgs that are more capable and powerful than either. This idea, called transhumanism, has roots in Aldous Huxley and Robert Ettinger.[370]

Edward Fredkin argues that "artificial intelligence is the next stage in evolution", an idea first proposed by Samuel Butler's "Darwin among the Machines" as far back as 1863, and expanded upon by George Dyson in his 1998 book Darwin Among the Machines: The Evolution of Global Intelligence.[371]

In fiction

The word "robot" itself was coined by Karel Čapek in his 1921 play R.U.R., the title standing for "Rossum's Universal Robots".

Thought-capable artificial beings have appeared as storytelling devices since antiquity,[372] and have been a persistent theme in science fiction.[373]

A common trope in these works began with Mary Shelley's Frankenstein, where a human creation becomes a threat to its masters. This includes such works as Arthur C. Clarke's and Stanley Kubrick's 2001: A Space Odyssey (both 1968), with HAL 9000, the murderous computer in charge of the Discovery One spaceship, as well as The Terminator (1984) and The Matrix (1999). In contrast, the rare loyal robots such as Gort from The Day the Earth Stood Still (1951) and Bishop from Aliens (1986) are less prominent in popular culture.[374]

Isaac Asimov introduced the Three Laws of Robotics in many stories, most notably with the "Multivac" super-intelligent computer. Asimov's laws are often brought up during lay discussions of machine ethics;[375] while almost all artificial intelligence researchers are familiar with Asimov's laws through popular culture, they generally consider the laws useless for many reasons, one of which is their ambiguity.[376]

Several works use AI to force us to confront the fundamental question of what makes us human, showing us artificial beings that have the ability to feel, and thus to suffer. This appears in Karel Čapek's R.U.R., the films A.I. Artificial Intelligence and Ex Machina, as well as the novel Do Androids Dream of Electric Sheep?, by Philip K. Dick. Dick considers the idea that our understanding of human subjectivity is altered by technology created with artificial intelligence.[377]

See also

Explanatory notes

  1. ^ This list of intelligent traits is based on the topics covered by the major AI textbooks, including: Russell & Norvig (2021), Luger & Stubblefield (2004), Poole, Mackworth & Goebel (1998) and Nilsson (1998)
  2. ^ This list of tools is based on the topics covered by the major AI textbooks, including: Russell & Norvig (2021), Luger & Stubblefield (2004), Poole, Mackworth & Goebel (1998) and Nilsson (1998)
  3. ^ It is among the reasons that expert systems proved to be inefficient for capturing knowledge.[30][31]
  4. ^ "Rational agent" is general term used in economics, philosophy and theoretical artificial intelligence. It can refer to anything that directs its behavior to accomplish goals, such as a person, an animal, a corporation, a nation, or in the case of AI, a computer program.
  5. ^ Alan Turing discussed the centrality of learning as early as 1950, in his classic paper "Computing Machinery and Intelligence".[42] In 1956, at the original Dartmouth AI summer conference, Ray Solomonoff wrote a report on unsupervised probabilistic machine learning: "An Inductive Inference Machine".[43]
  6. ^ See AI winter § Machine translation and the ALPAC report of 1966
  7. ^ Compared with symbolic logic, formal Bayesian inference is computationally expensive. For inference to be tractable, most observations must be conditionally independent of one another. AdSense uses a Bayesian network with over 300 million edges to learn which ads to serve.[93]
  8. ^ Expectation–maximization, one of the most popular algorithms in machine learning, allows clustering in the presence of unknown latent variables.[95]
  9. ^ Some form of deep neural networks (without a specific learning algorithm) were described by: Warren S. McCulloch and Walter Pitts (1943)[115] Alan Turing (1948);[116] Karl Steinbuch and Roger David Joseph (1961).[117] Deep or recurrent networks that learned (or used gradient descent) were developed by: Frank Rosenblatt(1957);[116] Oliver Selfridge (1959);[117] Alexey Ivakhnenko and Valentin Lapa (1965);[118] Kaoru Nakano (1971);[119] Shun-Ichi Amari (1972);[119] John Joseph Hopfield (1982).[119] Precursors to backpropagation were developed by: Henry J. Kelley (1960);[116] Arthur E. Bryson (1962);[116] Stuart Dreyfus (1962);[116] Arthur E. Bryson and Yu-Chi Ho (1969);[116] Backpropagation was independently developed by: Seppo Linnainmaa (1970);[120] Paul Werbos (1974).[116]
  10. ^ Geoffrey Hinton said, of his work on neural networks in the 1990s, "our labeled datasets were thousands of times too small. [And] our computers were millions of times too slow."[121]
  11. ^ In statistics, a bias is a systematic error or deviation from the correct value. But in the context of fairness, it refers to a tendency in favor or against a certain group or individual characteristic, usually in a way that is considered unfair or harmful. A statistically unbiased AI system that produces disparate outcomes for different demographic groups may thus be viewed as biased in the ethical sense.[198]
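    To make the distinction concrete, a minimal sketch (plain Python, with made-up outcomes) of a group-level selection-rate check, which can flag disparity in the fairness sense even when each individual prediction is statistically unbiased:

        # Hypothetical (group, approved) decisions -- illustrative only
        decisions = [("A", True), ("A", True), ("A", True), ("A", False),
                     ("B", True), ("B", False), ("B", False), ("B", False)]

        def selection_rate(group):
            outcomes = [ok for g, ok in decisions if g == group]
            return sum(outcomes) / len(outcomes)

        # A disparate-impact style ratio well below 1.0 signals group-level disparity.
        print(selection_rate("A"), selection_rate("B"),
              selection_rate("B") / selection_rate("A"))  # 0.75 0.25 0.333...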
  12. ^ Including Jon Kleinberg (Cornell University), Sendhil Mullainathan (University of Chicago), Cynthia Chouldechova (Carnegie Mellon) and Sam Corbett-Davis (Stanford)[207]
  13. ^ Moritz Hardt (a director at the Max Planck Institute for Intelligent Systems) argues that machine learning "is fundamentally the wrong tool for a lot of domains, where you're trying to design interventions and mechanisms that change the world."[212]
  14. ^ When the law was passed in 2018, it still contained a form of this provision.
  15. ^ This is the United Nations' definition, and includes things like land mines as well.[226]
  16. ^ See table 4; 9% is both the OECD average and the U.S. average.[237]
  17. ^ Sometimes called a "robopocalypse"[245]
  18. ^ "Electronic brain" was the term used by the press around this time.[298][300]
  19. ^ Daniel Crevier wrote, "the conference is generally recognized as the official birthdate of the new science."[303] Russell and Norvig called the conference "the inception of artificial intelligence."[115]
  20. ^ Russell and Norvig wrote "for the next 20 years the field would be dominated by these people and their students."[304]
    拉塞尔诺维格 写道:“在接下来的 20 年里,这个领域将由这些人及其学生主导。”304
  21. ^ Russell and Norvig wrote "it was astonishing whenever a computer did anything kind of smartish".[305]
    拉塞尔诺维格 写道:“每当计算机做出任何聪明的事情时,真是令人惊讶。”305
  22. ^ The programs described are Arthur Samuel's checkers program for the IBM 701, Daniel Bobrow's STUDENT, Newell and Simon's Logic Theorist and Terry Winograd's SHRDLU.
  23. ^ Russell and Norvig write: "in almost all cases, these early systems failed on more difficult problems"[309]
    拉塞尔诺维格 写道:“在几乎所有情况下,这些早期系统在更困难的问题上失败了。”309
  24. ^ Embodied approaches to AI[316] were championed by Hans Moravec[317] and Rodney Brooks[318] and went by many names: Nouvelle AI[318] and developmental robotics.[319]
  25. ^ Matteo Wong wrote in The Atlantic: "Whereas for decades, computer-science fields such as natural-language processing, computer vision, and robotics used extremely different methods, now they all use a programming method called "deep learning." As a result, their code and approaches have become more similar, and their models are easier to integrate into one another."[325]
  26. ^ Jack Clark wrote in Bloomberg: "After a half-decade of quiet breakthroughs in artificial intelligence, 2015 has been a landmark year. Computers are smarter and learning faster than ever", and noted that the number of software projects that use machine learning at Google increased from a "sporadic usage" in 2012 to more than 2,700 projects in 2015.[327]
  27. ^ Nils Nilsson wrote in 1983: "Simply put, there is wide disagreement in the field about what AI is all about."[342]
  28. ^ Daniel Crevier wrote that "time has proven the accuracy and perceptiveness of some of Dreyfus's comments. Had he formulated them less aggressively, constructive actions they suggested might have been taken much earlier."[347]
  29. ^ Searle presented this definition of "Strong AI" in 1999.[357] Searle's original formulation was "The appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states."[358] Strong AI is defined similarly by Russell and Norvig: "Strong AI – the assertion that machines that do so are actually thinking (as opposed to simulating thinking)."[359]

References

  1. ^ a b c Russell & Norvig (2021), pp. 1–4.
  2. ^ "AI set to exceed human brain power". CNN.com (July 26, 2006). Archived 2008-02-19 at the Wayback Machine.
  3. ^ Kaplan, Andreas; Haenlein, Michael (2019). "Siri, Siri, in my hand: Who's the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence". Business Horizons. 62: 15–25. doi:10.1016/j.bushor.2018.08.004. ISSN 0007-6813. S2CID 158433736.
  4. ^ a b c Artificial general intelligence:
    Proposal for the modern version:
    Warnings of overspecialization in AI from leading researchers:
  5. ^ Russell & Norvig (2021, §1.2).
  6. ^ a b Dartmouth workshop:
    The proposal:
  7. ^ a b Successful programs of the 1960s:
  8. ^ a b Funding initiatives in the early 1980s: Fifth Generation Project (Japan), Alvey (UK), Microelectronics and Computer Technology Corporation (US), Strategic Computing Initiative (US):
  9. ^ a b First AI Winter, Lighthill report, Mansfield Amendment
  10. ^ a b Second AI Winter:
  11. ^ a b Deep learning revolution, AlexNet:
  12. ^ Toews (2023).
  13. ^ Problem-solving, puzzle solving, game playing, and deduction:
  14. ^ Uncertain reasoning:
  15. ^ a b c Intractability and efficiency and the combinatorial explosion:
  16. ^ a b c Psychological evidence of the prevalence of sub-symbolic reasoning and knowledge:
  17. ^ Knowledge representation and knowledge engineering:
    知识表示知识工程:
  18. ^ Smoliar & Zhang (1994).
  19. ^ Neumann & Möller (2008).
  20. ^ Kuperman, Reichley & Bailey (2006).
  21. ^ McGarry (2005).
  22. ^ Bertini, Del Bimbo & Torniai (2006).
  23. ^ Russell & Norvig (2021), p. 272.
  24. ^ Representing categories and relations: Semantic networks, description logics, inheritance (including frames, and scripts):
  25. ^ Representing events and time: Situation calculus, event calculus, fluent calculus (including solving the frame problem):
  26. ^ Causal calculus:
  27. ^ Representing knowledge about knowledge: Belief calculus, modal logics:
  28. ^ a b Default reasoning, Frame problem, default logic, non-monotonic logics, circumscription, closed world assumption, abduction:
    (Poole et al. place abduction under "default reasoning"; Luger et al. place it under "uncertain reasoning").
  29. ^ a b Breadth of commonsense knowledge:
  30. ^ Newquist (1994), p. 296.
  31. ^ Crevier (1993), pp. 204–208.
  32. ^ Russell & Norvig (2021), p. 528.
  33. ^ Automated planning:
  34. ^ Automated decision making, Decision theory:
  35. ^ Classical planning:
  36. ^ Sensorless or "conformant" planning, contingent planning, replanning (a.k.a. online planning):
  37. ^ Uncertain preferences:
    Inverse reinforcement learning:
  38. ^ Information value theory:
  39. ^ Markov decision process:
  40. ^ Game theory and multi-agent decision theory:
  41. ^ Learning:
  42. ^ Turing (1950).
  43. ^ Solomonoff (1956).
  44. ^ Unsupervised learning:
  45. ^ a b Supervised learning:
  46. ^ Reinforcement learning:
  47. ^ Transfer learning:
  48. ^ "Artificial Intelligence (AI): What Is AI and How Does It Work? | Built In". builtin.com. Retrieved 30 October 2023.
    "人工智能 (AI):什么是 AI 以及它是如何工作的? | Built In"builtin.com。检索于 30 October 2023
  49. ^ Computational learning theory:
  50. ^ Natural language processing (NLP):
  51. ^ Subproblems of NLP:
  52. ^ Russell & Norvig (2021), pp. 856–858.
  53. ^ Dickson (2022).
  54. ^ Modern statistical and deep learning approaches to NLP:
  55. ^ Vincent (2019).
  56. ^ Russell & Norvig (2021), pp. 875–878.
  57. ^ Bushwick (2023).
  58. ^ Computer vision:
  59. ^ Russell & Norvig (2021), pp. 849–850.
  60. ^ Russell & Norvig (2021), pp. 895–899.
  61. ^ Russell & Norvig (2021), pp. 899–901.
  62. ^ Challa et al. (2011).
  63. ^ Russell & Norvig (2021), pp. 931–938.
  64. ^ MIT AIL (2014).
  65. ^ Affective computing:
  66. ^ Waddell (2018).
  67. ^ Poria et al. (2017).
  68. ^ Search algorithms:
  69. ^ State space search:
  70. ^ Russell & Norvig (2021), §11.2.
  71. ^ Uninformed searches (breadth first search, depth-first search and general state space search):
  72. ^ Heuristic or informed searches (e.g., greedy best first and A*):
  73. ^ Adversarial search:
  74. ^ Local or "optimization" search:
  75. ^ Singh Chauhan, Nagesh (18 December 2020). "Optimization Algorithms in Neural Networks". KDnuggets. Retrieved 13 January 2024.
  76. ^ Evolutionary computation:
  77. ^ Merkle & Middendorf (2013).
  78. ^ Logic:
  79. ^ Propositional logic:
  80. ^ First-order logic and features such as equality:
  81. ^ Logical inference:
  82. ^ Logical deduction as search:
  83. ^ Resolution and unification:
  84. ^ Warren, D.H.; Pereira, L.M.; Pereira, F. (1977). "Prolog-the language and its implementation compared with Lisp". ACM SIGPLAN Notices. 12 (8): 109–115. doi:10.1145/872734.806939.
  85. ^ Fuzzy logic:
  86. ^ a b Stochastic methods for uncertain reasoning:
  87. ^ Decision theory and decision analysis:
  88. ^ Information value theory:
  89. ^ Markov decision processes and dynamic decision networks:
  90. ^ a b c Stochastic temporal models:
    Hidden Markov model:
    Kalman filters:
    Dynamic Bayesian networks:
  91. ^ Game theory and mechanism design:
    博弈论机制设计:
  92. ^ Bayesian networks:
  93. ^ Domingos (2015), chapter 6.
  94. ^ Bayesian inference algorithm:
  95. ^ Domingos (2015), p. 210.
  96. ^ Bayesian learning and the expectation–maximization algorithm:
    贝叶斯学习期望最大化算法:
  97. ^ Bayesian decision theory and Bayesian decision networks:
  98. ^ Statistical learning methods and classifiers:
  99. ^ Ciaramella, Alberto; Ciaramella, Marco (2024). Introduction to Artificial Intelligence: from data analysis to generative AI. ISBN 978-8894787603.
  100. ^ Decision trees:
  101. ^ Non-parameteric learning models such as K-nearest neighbor and support vector machines:
  102. ^ Domingos (2015), p. 152.
  103. ^ Naive Bayes classifier:
  104. ^ a b Neural networks:
  105. ^ Gradient calculation in computational graphs, backpropagation, automatic differentiation:
  106. ^ Universal approximation theorem:
    The theorem:
  107. ^ Feedforward neural networks:
  108. ^ Recurrent neural networks:
  109. ^ Perceptrons:
  110. ^ a b Deep learning:
  111. ^ Convolutional neural networks:
  112. ^ Deng & Yu (2014), pp. 199–200.
  113. ^ Ciresan, Meier & Schmidhuber (2012).
  114. ^ Russell & Norvig (2021), p. 751.
  115. ^ a b c Russell & Norvig (2021), p. 17.
  116. ^ a b c d e f g Russell & Norvig (2021), p. 785.
  117. ^ a b Schmidhuber (2022), §5.
  118. ^ Schmidhuber (2022), §6.
  119. ^ a b c Schmidhuber (2022), §7.
  120. ^ Schmidhuber (2022), §8.
  121. ^ Quoted in Christian (2020, p. 22)
  122. ^ Smith (2023).
  123. ^ "Explained: Generative AI". 9 November 2023.
    "解释:生成性人工智能"。2023 年 11 月 9 日。
  124. ^ "AI Writing and Content Creation Tools". MIT Sloan Teaching & Learning Technologies. Retrieved 25 December 2023.
    "人工智能写作与内容创作工具"。麻省理工学院斯隆教学与学习技术。检索于 25 December 2023
  125. ^ Marmouyet (2023).
  126. ^ Kobielus (2019).
  127. ^ Thomason, James (21 May 2024). "Mojo Rising: The resurgence of AI-first programming languages". VentureBeat. Retrieved 26 May 2024.
  128. ^ Wodecki, Ben (5 May 2023). "7 AI Programming Languages You Need to Know". AI Business.
  129. ^ Davenport, T; Kalakota, R (June 2019). "The potential for artificial intelligence in healthcare". Future Healthc J. 6 (2): 94–98. doi:10.7861/futurehosp.6-2-94. PMC 6616181. PMID 31363513.
  130. ^ a b Bax, Monique; Thorpe, Jordan; Romanov, Valentin (December 2023). "The future of personalized cardiovascular medicine demands 3D and 4D printing, stem cells, and artificial intelligence". Frontiers in Sensors. 4. doi:10.3389/fsens.2023.1294721. ISSN 2673-5067.
  131. ^ Jumper, J; Evans, R; Pritzel, A (2021). "Highly accurate protein structure prediction with AlphaFold". Nature. 596 (7873): 583–589. Bibcode:2021Natur.596..583J. doi:10.1038/s41586-021-03819-2. PMC 8371605. PMID 34265844.
  132. ^ "AI discovers new class of antibiotics to kill drug-resistant bacteria". 20 December 2023.
    "人工智能发现新类抗生素以杀死耐药细菌"。2023 年 12 月 20 日。
  133. ^ "AI speeds up drug design for Parkinson's ten-fold". Cambridge University. 17 April 2024.
  134. ^ Horne, Robert I.; Andrzejewska, Ewa A.; Alam, Parvez; Brotzakis, Z. Faidon; Srivastava, Ankit; Aubert, Alice; Nowinska, Magdalena; Gregory, Rebecca C.; Staats, Roxine; Possenti, Andrea; Chia, Sean; Sormanni, Pietro; Ghetti, Bernardino; Caughey, Byron; Knowles, Tuomas P. J.; Vendruscolo, Michele (17 April 2024). "Discovery of potent inhibitors of α-synuclein aggregation using structure-based iterative learning". Nature Chemical Biology. 20 (5). Nature: 634–645. doi:10.1038/s41589-024-01580-x. PMC 11062903. PMID 38632492.
  135. ^ Grant, Eugene F.; Lardner, Rex (25 July 1952). "The Talk of the Town – It". The New Yorker. ISSN 0028-792X. Retrieved 28 January 2024.
  136. ^ Anderson, Mark Robert (11 May 2017). "Twenty years on from Deep Blue vs Kasparov: how a chess match started the big data revolution". The Conversation. Retrieved 28 January 2024.
  137. ^ Markoff, John (16 February 2011). "Computer Wins on 'Jeopardy!': Trivial, It's Not". The New York Times. ISSN 0362-4331. Retrieved 28 January 2024.
  138. ^ Byford, Sam (27 May 2017). "AlphaGo retires from competitive Go after defeating world number one 3–0". The Verge. Retrieved 28 January 2024.
  139. ^ Brown, Noam; Sandholm, Tuomas (30 August 2019). "Superhuman AI for multiplayer poker". Science. 365 (6456): 885–890. Bibcode:2019Sci...365..885B. doi:10.1126/science.aay2400. ISSN 0036-8075. PMID 31296650.
  140. ^ "MuZero: Mastering Go, chess, shogi and Atari without rules". Google DeepMind. 23 December 2020. Retrieved 28 January 2024.
    "MuZero:无规则掌握围棋、国际象棋、将棋和雅达利"谷歌深度学习。2020 年 12 月 23 日。检索于 28 January 2024
  141. ^ Sample, Ian (30 October 2019). "AI becomes grandmaster in 'fiendishly complex' StarCraft II". The Guardian. ISSN 0261-3077. Retrieved 28 January 2024.
  142. ^ Wurman, P. R.; Barrett, S.; Kawamoto, K. (2022). "Outracing champion Gran Turismo drivers with deep reinforcement learning". Nature. 602 (7896): 223–228. Bibcode:2022Natur.602..223W. doi:10.1038/s41586-021-04357-7. PMID 35140384.
  143. ^ Wilkins, Alex (13 March 2024). "Google AI learns to play open-world video games by watching them". New Scientist. Retrieved 21 July 2024.
  144. ^ Uesato, J. et al.: Improving mathematical reasoning with process supervision. openai.com, May 31, 2023. Retrieved 2024-08-07.
  145. ^ Srivastava, Saurabh (29 February 2024). "Functional Benchmarks for Robust Evaluation of Reasoning Performance, and the Reasoning Gap". arXiv:2402.19450 [cs.AI].
  146. ^ Roberts, Siobhan (25 July 2024). "AI achieves silver-medal standard solving International Mathematical Olympiad problems". The New York Times. Retrieved 7 August 2024.
  147. ^ LLEMMA. eleuther.ai. Retrieved 2024-08-07.
  148. ^ AI Math. Caesars Labs, 2024. Retrieved 2024-08-07.
  149. ^ Alex McFarland: 7 Best AI for Math Tools. unite.ai. Retrieved 2024-08-07
  150. ^ Matthew Finio & Amanda Downie: IBM Think 2024 Primer, "What is Artificial Intelligence (AI) in Finance?" 8 Dec. 2023
  151. ^ M. Nicolas, J. Firzli: Pensions Age/European Pensions magazine, "Artificial Intelligence: Ask the Industry" May–June 2024 https://videovoice.org/ai-in-finance-innovation-entrepreneurship-vs-over-regulation-with-the-eus-artificial-intelligence-act-wont-work-as-intended/.
  152. ^ a b c Congressional Research Service (2019). Artificial Intelligence and National Security (PDF). Washington, DC: Congressional Research Service. PD-notice
  153. ^ a b Slyusar, Vadym (2019). "Artificial intelligence as the basis of future control networks". ResearchGate. doi:10.13140/RG.2.2.30247.50087.
  154. ^ Knight, Will. "The US and 30 Other Nations Agree to Set Guardrails for Military AI". Wired. ISSN 1059-1028. Retrieved 24 January 2024.
  155. ^ Newsom, Gavin; Weber, Shirley N. (6 September 2023). "Executive Order N-12-23" (PDF). Executive Department, State of California. Archived (PDF) from the original on 21 February 2024. Retrieved 7 September 2023.
  156. ^ Pinaya, Walter H. L.; Graham, Mark S.; Kerfoot, Eric; Tudosiu, Petru-Daniel; Dafflon, Jessica; Fernandez, Virginia; Sanchez, Pedro; Wolleb, Julia; da Costa, Pedro F.; Patel, Ashay (2023). "Generative AI for Medical Imaging: extending the MONAI Framework". arXiv:2307.15208 [eess.IV].
  157. ^ Griffith, Erin; Metz, Cade (27 January 2023). "Anthropic Said to Be Closing In on $300 Million in New A.I. Funding". The New York Times. Archived from the original on 9 December 2023. Retrieved 14 March 2023.
  158. ^ Lanxon, Nate; Bass, Dina; Davalos, Jackie (10 March 2023). "A Cheat Sheet to AI Buzzwords and Their Meanings". Bloomberg News. Archived from the original on 17 November 2023. Retrieved 14 March 2023.
  159. ^ Marcelline, Marco (27 May 2023). "ChatGPT: Most Americans Know About It, But Few Actually Use the AI Chatbot". PCMag. Retrieved 28 January 2024.
  160. ^ Lu, Donna (31 March 2023). "Misinformation, mistakes and the Pope in a puffer: what rapidly evolving AI can – and can't – do". The Guardian. ISSN 0261-3077. Retrieved 28 January 2024.
  161. ^ Hurst, Luke (23 May 2023). "How a fake image of a Pentagon explosion shared on Twitter caused a real dip on Wall Street". euronews. Retrieved 28 January 2024.
  162. ^ Poole, David; Mackworth, Alan (2023). Artificial Intelligence, Foundations of Computational Agents (3rd ed.). Cambridge University Press. doi:10.1017/9781009258227. ISBN 9781009258197.
  163. ^ Russell, Stuart; Norvig, Peter (2020). Artificial Intelligence: A Modern Approach (4th ed.). Pearson. ISBN 9780134610993.
  164. ^ "Why agents are the next frontier of generative AI". McKinsey Digital. 24 July 2024. Retrieved 10 August 2024.
  165. ^ Ransbotham, Sam; Kiron, David; Gerbert, Philipp; Reeves, Martin (6 September 2017). "Reshaping Business With Artificial Intelligence". MIT Sloan Management Review. Archived from the original on 13 February 2024.
  166. ^ Sun, Yuran; Zhao, Xilei; Lovreglio, Ruggiero; Kuligowski, Erica (1 January 2024), Naser, M. Z. (ed.), "8 – AI for large-scale evacuation modeling: promises and challenges", Interpretable Machine Learning for the Analysis, Design, Assessment, and Informed Decision Making for Civil Infrastructure, Woodhead Publishing Series in Civil and Structural Engineering, Woodhead Publishing, pp. 185–204, ISBN 978-0-12-824073-1, retrieved 28 June 2024.
  167. ^ Gomaa, Islam; Adelzadeh, Masoud; Gwynne, Steven; Spencer, Bruce; Ko, Yoon; Bénichou, Noureddine; Ma, Chunyun; Elsagan, Nour; Duong, Dana; Zalok, Ehab; Kinateder, Max (1 November 2021). "A Framework for Intelligent Fire Detection and Evacuation System". Fire Technology. 57 (6): 3179–3185. doi:10.1007/s10694-021-01157-3. ISSN 1572-8099.
  168. ^ Zhao, Xilei; Lovreglio, Ruggiero; Nilsson, Daniel (1 May 2020). "Modelling and interpreting pre-evacuation decision-making using machine learning". Automation in Construction. 113: 103140. doi:10.1016/j.autcon.2020.103140. ISSN 0926-5805.
  169. ^ Simonite (2016).
  170. ^ Russell & Norvig (2021), p. 987.
  171. ^ Laskowski (2023).
  172. ^ GAO (2022).
  173. ^ Valinsky (2019).
  174. ^ Russell & Norvig (2021), p. 991.
  175. ^ Russell & Norvig (2021), pp. 991–992.
  176. ^ Christian (2020), p. 63.
  177. ^ Vincent (2022).
  178. ^ Kopel, Matthew. "Copyright Services: Fair Use". Cornell University Library. Retrieved 26 April 2024.
  179. ^ Burgess, Matt. "How to Stop Your Data From Being Used to Train AI". Wired. ISSN 1059-1028. Retrieved 26 April 2024.
  180. ^ Reisner (2023).
  181. ^ Alter & Harris (2023).
  182. ^ "Getting the Innovation Ecosystem Ready for AI. An IP policy toolkit" (PDF). WIPO.
    "为人工智能准备创新生态系统。知识产权政策工具包"(PDF)世界知识产权组织
  183. ^ Hammond, George (27 December 2023). "Big Tech is spending more than VC firms on AI startups". Ars Technica. Archived from the original on 10 January 2024.
  184. ^ Wong, Matteo (24 October 2023). "The Future of AI Is GOMA". The Atlantic. Archived from the original on 5 January 2024.
  185. ^ "Big tech and the pursuit of AI dominance". The Economist. 26 March 2023. Archived from the original on 29 December 2023.
    "大型科技公司与人工智能主导权的追求"经济学人。2023 年 3 月 26 日。存档于 2023 年 12 月 29 日的原文。
  186. ^ Fung, Brian (19 December 2023). "Where the battle to dominate AI may be won". CNN Business. Archived from the original on 13 January 2024.
  187. ^ Metz, Cade (5 July 2023). "In the Age of A.I., Tech's Little Guys Need Big Friends". The New York Times.
  188. ^ "Electricity 2024 – Analysis". IEA. 24 January 2024. Retrieved 13 July 2024.
  189. ^ Calvert, Brian (28 March 2024). "AI already uses as much energy as a small country. It's only the beginning". Vox. New York, New York.
  190. ^ Halper, Evan; O'Donovan, Caroline (21 June 2024). "AI is exhausting the power grid. Tech firms are seeking a miracle solution". Washington Post.
  191. ^ Davenport, Carly. "AI Data Centers and the Coming US Power Demand Surge" (PDF). Goldman Sachs.
  192. ^ Ryan, Carol (12 April 2024). "Energy-Guzzling AI Is Also the Future of Energy Savings". Wall Street Journal. Dow Jones.
  193. ^ Hiller, Jennifer (1 July 2024). "Tech Industry Wants to Lock Up Nuclear Power for AI". Wall Street Journal. Dow Jones.
  194. ^ Nicas (2018).
  195. ^ Rainie, Lee; Keeter, Scott; Perrin, Andrew (22 July 2019). "Trust and Distrust in America". Pew Research Center. Archived from the original on 22 February 2024.
  196. ^ Williams (2023).
  197. ^ Taylor & Hern (2023).
  198. ^ a b Samuel, Sigal (19 April 2022). "Why it's so damn hard to make AI fair and unbiased". Vox. Retrieved 24 July 2024.
  199. ^ a b Rose (2023).
  200. ^ CNA (2019).
  201. ^ Goffrey (2008), p. 17.
  202. ^ Berdahl et al. (2023); Goffrey (2008, p. 17); Rose (2023); Russell & Norvig (2021, p. 995)
  203. ^ Christian (2020), p. 25.
  204. ^ a b Russell & Norvig (2021), p. 995.
  205. ^ Grant & Hill (2023).
  206. ^ Larson & Angwin (2016).
  207. ^ Christian (2020), pp. 67–70.
  208. ^ Christian (2020, pp. 67–70); Russell & Norvig (2021, pp. 993–994)
  209. ^ Russell & Norvig (2021, p. 995); Lipartito (2011, p. 36); Goodman & Flaxman (2017, p. 6); Christian (2020, pp. 39–40, 65)
  210. ^ Quoted in Christian (2020, p. 65).
  211. ^ Russell & Norvig (2021, p. 994); Christian (2020, pp. 40, 80–81)
  212. ^ Quoted in Christian (2020, p. 80)
  213. ^ Dockrill (2022).
  214. ^ Sample (2017).
  215. ^ "Black Box AI". 16 June 2023.
    "黑箱人工智能"。2023 年 6 月 16 日。
  216. ^ Christian (2020), p. 110.
  217. ^ Christian (2020), pp. 88–91.
  218. ^ Christian (2020, p. 83); Russell & Norvig (2021, p. 997)
  219. ^ Christian (2020), p. 91.
  220. ^ Christian (2020), p. 83.
  221. ^ Verma (2021).
  222. ^ Rothman (2020).
  223. ^ Christian (2020), pp. 105–108.
  224. ^ Christian (2020), pp. 108–112.
  225. ^ Ropek, Lucas (21 May 2024). "New Anthropic Research Sheds Light on AI's 'Black Box'". Gizmodo. Retrieved 23 May 2024.
  226. ^ Russell & Norvig (2021), p. 989.
  227. ^ a b Russell & Norvig (2021), pp. 987–990.
  228. ^ Russell & Norvig (2021), p. 988.
  229. ^ Robitzski (2018); Sainato (2015)
  230. ^ Harari (2018).
  231. ^ Buckley, Chris; Mozur, Paul (22 May 2019). "How China Uses High-Tech Surveillance to Subdue Minorities". The New York Times.
  232. ^ "Security lapse exposed a Chinese smart city surveillance system". 3 May 2019. Archived from the original on 7 March 2021. Retrieved 14 September 2020.
  233. ^ Urbina et al. (2022).
  234. ^ a b E. McGaughey, 'Will Robots Automate Your Job Away? Full Employment, Basic Income, and Economic Democracy' (2022), 51(3) Industrial Law Journal 511–559. Archived 27 May 2023 at the Wayback Machine.
  235. ^ Ford & Colvin (2015); McGaughey (2022)
  236. ^ IGM Chicago (2017).
  237. ^ Arntz, Gregory & Zierahn (2016), p. 33.
  238. ^ Lohr (2017); Frey & Osborne (2017); Arntz, Gregory & Zierahn (2016, p. 33)
  239. ^ Zhou, Viola (11 April 2023). "AI is already taking video game illustrators' jobs in China". Rest of World. Retrieved 17 August 2023.
  240. ^ Carter, Justin (11 April 2023). "China's game art industry reportedly decimated by growing AI use". Game Developer. Retrieved 17 August 2023.
  241. ^ Morgenstern (2015).
  242. ^ Mahdawi (2017); Thompson (2014)
  243. ^ Tarnoff, Ben (4 August 2023). "Lessons from Eliza". The Guardian Weekly. pp. 34–39.
  244. ^ Cellan-Jones (2014).
  245. ^ Russell & Norvig (2021), p. 1001.
  246. ^ Bostrom (2014).
  247. ^ Russell (2019).
  248. ^ Bostrom (2014); Müller & Bostrom (2014); Bostrom (2015).
  249. ^ Harari (2023).
  250. ^ Müller & Bostrom (2014).
  251. ^ Leaders' concerns about the existential risks of AI around 2015:
  252. ^ ""Godfather of artificial intelligence" talks impact and potential of new AI". CBS News. 25 March 2023. Archived from the original on 28 March 2023. Retrieved 28 March 2023.
    ""人工智能教父"谈论新人工智能的影响和潜力"CBS 新闻。2023 年 3 月 25 日。 存档于 2023 年 3 月 28 日的原文。检索于 28 March 2023
  253. ^ Pittis, Don (4 May 2023). "Canadian artificial intelligence leader Geoffrey Hinton piles on fears of computer takeover". CBC.
  254. ^ "'50–50 chance' that AI outsmarts humanity, Geoffrey Hinton says". Bloomberg BNN. 14 June 2024. Retrieved 6 July 2024.
    "'50–50 的机会'人工智能超越人类,杰弗里·辛顿说"彭博 BNN。2024 年 6 月 14 日。检索于 6 July 2024
  255. ^ Valance (2023).
  256. ^ Taylor, Josh (7 May 2023). "Rise of artificial intelligence is inevitable but should not be feared, 'father of AI' says". The Guardian. Retrieved 26 May 2023.
  257. ^ Colton, Emma (7 May 2023). "'Father of AI' says tech fears misplaced: 'You cannot stop it'". Fox News. Retrieved 26 May 2023.
  258. ^ Jones, Hessie (23 May 2023). "Juergen Schmidhuber, Renowned 'Father Of Modern AI,' Says His Life's Work Won't Lead To Dystopia". Forbes. Retrieved 26 May 2023.
  259. ^ McMorrow, Ryan (19 December 2023). "Andrew Ng: 'Do we think the world is better off with more or less intelligence?'". Financial Times. Retrieved 30 December 2023.
  260. ^ Levy, Steven (22 December 2023). "How Not to Be Stupid About AI, With Yann LeCun". Wired. Retrieved 30 December 2023.
  261. ^ Arguments that AI is not an imminent risk:
  262. ^ a b Christian (2020), pp. 67, 73.
  263. ^ Yudkowsky (2008).
  264. ^ a b Anderson & Anderson (2011).
  265. ^ AAAI (2014).
  266. ^ Wallach (2010).
  267. ^ Russell (2019), p. 173.
  268. ^ Stewart, Ashley; Melton, Monica. "Hugging Face CEO says he's focused on building a 'sustainable model' for the $4.5 billion open-source-AI startup". Business Insider. Retrieved 14 April 2024.
  269. ^ Wiggers, Kyle (9 April 2024). "Google open sources tools to support AI model development". TechCrunch. Retrieved 14 April 2024.
  270. ^ Heaven, Will Douglas (12 May 2023). "The open-source AI boom is built on Big Tech's handouts. How long will it last?". MIT Technology Review. Retrieved 14 April 2024.
  271. ^ Brodsky, Sascha (19 December 2023). "Mistral AI's New Language Model Aims for Open Source Supremacy". AI Business.
  272. ^ Edwards, Benj (22 February 2024). "Stability announces Stable Diffusion 3, a next-gen AI image generator". Ars Technica. Retrieved 14 April 2024.
  273. ^ Marshall, Matt (29 January 2024). "How enterprises are using open source LLMs: 16 examples". VentureBeat.
  274. ^ Piper, Kelsey (2 February 2024). "Should we make our most powerful AI models open source to all?". Vox. Retrieved 14 April 2024.
  275. ^ Alan Turing Institute (2019). "Understanding artificial intelligence ethics and safety" (PDF).
  276. ^ Alan Turing Institute (2023). "AI Ethics and Governance in Practice" (PDF).
  277. ^ Floridi, Luciano; Cowls, Josh (23 June 2019). "A Unified Framework of Five Principles for AI in Society". Harvard Data Science Review. 1 (1). doi:10.1162/99608f92.8cd550d1. S2CID 198775713.
  278. ^ Buruk, Banu; Ekmekci, Perihan Elif; Arda, Berna (1 September 2020). "A critical perspective on guidelines for responsible and trustworthy artificial intelligence". Medicine, Health Care and Philosophy. 23 (3): 387–399. doi:10.1007/s11019-020-09948-1. ISSN 1572-8633. PMID 32236794. S2CID 214766800.
  279. ^ Kamila, Manoj Kumar; Jasrotia, Sahil Singh (1 January 2023). "Ethical issues in the development of artificial intelligence: recognizing the risks". International Journal of Ethics and Systems. ahead-of-print (ahead-of-print). doi:10.1108/IJOES-05-2023-0107. ISSN 2514-9369. S2CID 259614124.
  280. ^ "AI Safety Institute releases new AI safety evaluations platform". UK Government. 10 May 2024. Retrieved 14 May 2024.
    "人工智能安全研究所发布新的人工智能安全评估平台"。英国政府。2024 年 5 月 10 日。检索于 14 May 2024
  281. ^ Regulation of AI to mitigate risks:
  282. ^ a b Vincent (2023).
  283. ^ Stanford University (2023).
  284. ^ a b c d UNESCO (2021).
  285. ^ Kissinger (2021).
  286. ^ Altman, Brockman & Sutskever (2023).
  287. ^ VOA News (25 October 2023). "UN Announces Advisory Body on Artificial Intelligence".
  288. ^ "Council of Europe opens first ever global treaty on AI for signature". Council of Europe. 5 September 2024. Retrieved 17 September 2024.
    "欧洲委员会首次全球人工智能条约开放签署"欧洲委员会。2024 年 9 月 5 日。检索于 17 September 2024
  289. ^ Edwards (2023).
  290. ^ Kasperowicz (2023).
  291. ^ Fox News (2023).
  292. ^ Milmo, Dan (3 November 2023). "Hope or Horror? The great AI debate dividing its pioneers". The Guardian Weekly. pp. 10–12.
  293. ^ "The Bletchley Declaration by Countries Attending the AI Safety Summit, 1–2 November 2023". GOV.UK. 1 November 2023. Archived from the original on 1 November 2023. Retrieved 2 November 2023.
    "2023 年 11 月 1 日至 2 日参加人工智能安全峰会的国家布莱奇利宣言"英国政府官网。2023 年 11 月 1 日。从原文存档于 2023 年 11 月 1 日。检索于 2 November 2023
  294. ^ "Countries agree to safe and responsible development of frontier AI in landmark Bletchley Declaration". GOV.UK (Press release). Archived from the original on 1 November 2023. Retrieved 1 November 2023.
  295. ^ "Second global AI summit secures safety commitments from companies". Reuters. 21 May 2024. Retrieved 23 May 2024.
    "第二届全球人工智能峰会获得企业安全承诺"。路透社。2024 年 5 月 21 日。检索于 23 May 2024
  296. ^ "Frontier AI Safety Commitments, AI Seoul Summit 2024". gov.uk. 21 May 2024. Archived from the original on 23 May 2024. Retrieved 23 May 2024.
    "前沿人工智能安全承诺,2024 年首尔人工智能峰会"。gov.uk。2024 年 5 月 21 日。从原文存档于 2024 年 5 月 23 日。检索于 23 May 2024
  297. ^ a b Russell & Norvig (2021), p. 9.
  298. ^ a b c Copeland, J., ed. (2004). The Essential Turing: the ideas that gave birth to the computer age. Oxford, England: Clarendon Press. ISBN 0-19-825079-7.
  299. ^ "Google books ngram".
    "谷歌图书 n-gram"
  300. ^ AI's immediate precursors:
  301. ^ a b Turing's original publication of the Turing test in "Computing machinery and intelligence":
    Historical influence and philosophical implications:
  302. ^ Crevier (1993), pp. 47–49.
  303. ^ Russell & Norvig (2003), p. 17.
  304. ^ Russell & Norvig (2003), p. 18.
  305. ^ Newquist (1994), p. 86.
  306. ^ Simon (1965, p. 96) quoted in Crevier (1993, p. 109)
  307. ^ Minsky (1967, p. 2) quoted in Crevier (1993, p. 109)
  308. ^ Russell & Norvig (2021), p. 21.
  309. ^ Lighthill (1973).
  310. ^ NRC (1999), pp. 212–213.
  311. ^ Russell & Norvig (2021), p. 22.
  312. ^ Expert systems:
  313. ^ Russell & Norvig (2021), p. 24.
  314. ^ Nilsson (1998), p. 7.
  315. ^ McCorduck (2004), pp. 454–462.
  316. ^ Moravec (1988).
  317. ^ a b Brooks (1990).
  318. ^ Developmental robotics:
  319. ^ Russell & Norvig (2021), p. 25.
  320. ^
  321. ^ Russell & Norvig (2021), p. 26.
  322. ^ Formal and narrow methods adopted in the 1990s:
    正式狭窄 方法在 1990 年代采用:
  323. ^ AI widely used in the late 1990s:
  324. ^ Wong (2023).
  325. ^ Moore's Law and AI:
  326. ^ a b c Clark (2015b).
  327. ^ Big data:
  328. ^ Sagar, Ram (3 June 2020). "OpenAI Releases GPT-3, The Largest Model So Far". Analytics India Magazine. Archived from the original on 4 August 2020. Retrieved 15 March 2023.
  329. ^ DiFeliciantonio (2023).
  330. ^ Goswami (2023).
  331. ^ a b Turing (1950), p. 1.
  332. ^ Turing (1950), Under "The Argument from Consciousness".
  333. ^ Kirk-Giannini, Cameron Domenico; Goldstein, Simon (16 October 2023). "AI is closer than ever to passing the Turing test for 'intelligence'. What happens when it does?". The Conversation. Retrieved 17 August 2024.
  334. ^ Russell & Norvig (2021), p. 3.
  335. ^ Maker (2006).
  336. ^ McCarthy (1999).
  337. ^ Minsky (1986).
  338. ^ "What Is Artificial Intelligence (AI)?". Google Cloud Platform. Archived from the original on 31 July 2023. Retrieved 16 October 2023.
    "什么是人工智能(AI)?"谷歌云平台已归档于 2023 年 7 月 31 日的原文。检索于 16 October 2023
  339. ^ "One of the Biggest Problems in Regulating AI Is Agreeing on a Definition". carnegieendowment.org. Retrieved 31 July 2024.
    "监管人工智能的最大问题之一是达成一致的定义"卡内基国际和平基金会。检索于 31 July 2024
  340. ^ "AI or BS? How to tell if a marketing tool really uses artificial intelligence". The Drum. Retrieved 31 July 2024.
    "人工智能还是废话?如何判断一个营销工具是否真的使用人工智能"。检索于 31 July 2024
  341. ^ Nilsson (1983), p. 10.
  342. ^ Haugeland (1985), pp. 112–117.
  343. ^ Physical symbol system hypothesis:
    Historical significance:
  344. ^ Moravec's paradox:
  345. ^ Dreyfus' critique of AI:
    Historical significance and philosophical implications:
  346. ^ Crevier (1993), p. 125.
  347. ^ Langley (2011).
  348. ^ Katz (2012).
  349. ^ Neats vs. scruffies, the historic debate:
    A classic example of the "scruffy" approach to intelligence:
    A modern example of neat AI and its aspirations in the 21st century:
  350. ^ Pennachin & Goertzel (2007).
  351. ^ a b Roberts (2016).
  352. ^ Russell & Norvig (2021), p. 986.
  353. ^ Chalmers (1995).
  354. ^ Dennett (1991).
  355. ^ Horst (2005).
  356. ^ Searle (1999).
  357. ^ Searle (1980), p. 1.
  358. ^ Russell & Norvig (2021), p. 981.
  359. ^ Searle's Chinese room argument:
    Discussion:
  360. ^ Leith, Sam (7 July 2022). "Nick Bostrom: How can we be certain a machine isn't conscious?". The Spectator. Retrieved 23 February 2024.
  361. ^ a b c Thomson, Jonny (31 October 2022). "Why don't robots have rights?". Big Think. Retrieved 23 February 2024.
  362. ^ a b Kateman, Brian (24 July 2023). "AI Should Be Terrified of Humans". Time. Retrieved 23 February 2024.
  363. ^ Wong, Jeff (10 July 2023). "What leaders need to know about robot rights". Fast Company.
  364. ^ Hern, Alex (12 January 2017). "Give robots 'personhood' status, EU committee argues". The Guardian. ISSN 0261-3077. Retrieved 23 February 2024.
  365. ^ Dovey, Dana (14 April 2018). "Experts Don't Think Robots Should Have Rights". Newsweek. Retrieved 23 February 2024.
  366. ^ Cuddy, Alice (13 April 2018). "Robot rights violate human rights, experts warn EU". euronews. Retrieved 23 February 2024.
  367. ^ The Intelligence explosion and technological singularity:
    I. J. Good's "intelligence explosion"
    Vernor Vinge's "singularity"
  368. ^ Russell & Norvig (2021), p. 1005.
  369. ^ Transhumanism:
  370. ^ AI as evolution:
  371. ^ AI in myth:
  372. ^ McCorduck (2004), pp. 340–400.
  373. ^ Buttazzo (2001).
  374. ^ Anderson (2008).
  375. ^ McCauley (2007).
  376. ^ Galvan (1997).

AI textbooks

The two most widely used textbooks in 2023 (see the Open Syllabus):

These were four of the most widely used AI textbooks in 2008:

Later editions:

Other textbooks:

History of AI

Other sources

Further reading