
Artificial intelligence act: Council and Parliament strike a deal on the first rules for AI in the world

This press release was updated on 2 February 2024 to add the final compromise text with a view to agreement.

Following 3-day ‘marathon’ talks, the Council presidency and the European Parliament’s negotiators have reached a provisional agreement on the proposal on harmonised rules on artificial intelligence (AI), the so-called artificial intelligence act. The draft regulation aims to ensure that AI systems placed on the European market and used in the EU are safe and respect fundamental rights and EU values. This landmark proposal also aims to stimulate investment and innovation on AI in Europe.

Carme Artigas, Spanish secretary of state for digitalisation and artificial intelligence
This is a historical achievement, and a huge milestone towards the future! Today’s agreement effectively addresses a global challenge in a fast-evolving technological environment on a key area for the future of our societies and economies. And in this endeavour, we managed to keep an extremely delicate balance: boosting innovation and uptake of artificial intelligence across Europe whilst fully respecting the fundamental rights of our citizens.

The AI act is a flagship legislative initiative with the potential to foster the development and uptake of safe and trustworthy AI across the EU’s single market by both private and public actors. The main idea is to regulate AI based on the latter’s capacity to cause harm to society following a ‘risk-based’ approach: the higher the risk, the stricter the rules. As the first legislative proposal of its kind in the world, it can set a global standard for AI regulation in other jurisdictions, just as the GDPR has done, thus promoting the European approach to tech regulation in the world stage.

The main elements of the provisional agreement

Compared to the initial Commission proposal, the main new elements of the provisional agreement can be summarised as follows:

  • rules on high-impact general-purpose AI models that can cause systemic risk in the future, as well as on high-risk AI systems
  • a revised system of governance with some enforcement powers at EU level
  • extension of the list of prohibitions but with the possibility to use remote biometric identification by law enforcement authorities in public spaces, subject to safeguards
  • better protection of rights through the obligation for deployers of high-risk AI systems to conduct a fundamental rights impact assessment prior to putting an AI system into use.

In more concrete terms, the provisional agreement covers the following aspects:

Definitions and scope

To ensure that the definition of an AI system provides sufficiently clear criteria for distinguishing AI from simpler software systems, the compromise agreement aligns the definition with the approach proposed by the OECD.

The provisional agreement also clarifies that the regulation does not apply to areas outside the scope of EU law and should not, in any case, affect member states’ competences in national security or any entity entrusted with tasks in this area. Furthermore, the AI act will not apply to systems which are used exclusively for military or defence purposes. Similarly, the agreement provides that the regulation would not apply to AI systems used for the sole purpose of research and innovation, or for people using AI for non-professional reasons. 

Classification of AI systems as high-risk and prohibited AI practices

The compromise agreement provides for a horizontal layer of protection, including a high-risk classification, to ensure that AI systems that are not likely to cause serious fundamental rights violations or other significant risks are not captured. AI systems presenting only limited risk would be subject to very light transparency obligations, for example disclosing that the content was AI-generated so users can make informed decisions on further use.

A wide range of high-risk AI systems would be authorised, but subject to a set of requirements and obligations to gain access to the EU market. These requirements have been clarified and adjusted by the co-legislators in such a way that they are more technically feasible and less burdensome for stakeholders to comply with, for example as regards the quality of data, or in relation to the technical documentation that should be drawn up by SMEs to demonstrate that their high-risk AI systems comply with the requirements.

Since AI systems are developed and distributed through complex value chains, the compromise agreement includes changes clarifying the allocation of responsibilities and roles of the various actors in those chains, in particular providers and users of AI systems. It also clarifies the relationship between responsibilities under the AI Act and responsibilities that already exist under other legislation, such as the relevant EU data protection or sectorial legislation.

For some uses of AI, risk is deemed unacceptable and, therefore, these systems will be banned from the EU. The provisional agreement bans, for example, cognitive behavioural manipulation, the untargeted scraping of facial images from the internet or CCTV footage, emotion recognition in the workplace and educational institutions, social scoring, biometric categorisation to infer sensitive data, such as sexual orientation or religious beliefs, and some cases of predictive policing for individuals.

Law enforcement exceptions

Considering the specificities of law enforcement authorities and the need to preserve their ability to use AI in their vital work, several changes to the Commission proposal were agreed relating to the use of AI systems for law enforcement purposes. Subject to appropriate safeguards, these changes are meant to reflect the need to respect the confidentiality of sensitive operational data in relation to their activities. For example, an emergency procedure was introduced allowing law enforcement agencies to deploy a high-risk AI tool that has not passed the conformity assessment procedure in case of urgency. However, a specific mechanism has been also introduced to ensure that fundamental rights will be sufficiently protected against any potential misuses of AI systems.

Moreover, as regards the use of real-time remote biometric identification systems in publicly accessible spaces, the provisional agreement clarifies the objectives where such use is strictly necessary for law enforcement purposes and for which law enforcement authorities should therefore be exceptionally allowed to use such systems. The compromise agreement provides for additional safeguards and limits these exceptions to cases of victims of certain crimes, prevention of genuine, present, or foreseeable threats, such as terrorist attacks, and searches for people suspected of the most serious crimes.

General purpose AI systems and foundation models

New provisions have been added to take into account situations where AI systems can be used for many different purposes (general purpose AI), and where general-purpose AI technology is subsequently integrated into another high-risk system. The provisional agreement also addresses the specific cases of general-purpose AI (GPAI) systems.

Specific rules have also been agreed for foundation models, large systems capable of competently performing a wide range of distinctive tasks, such as generating video, text, and images, conversing in natural language, computing, or generating computer code. The provisional agreement provides that foundation models must comply with specific transparency obligations before they are placed on the market. A stricter regime was introduced for ‘high impact’ foundation models. These are foundation models trained on large amounts of data, with advanced complexity, capabilities, and performance well above the average, which can disseminate systemic risks along the value chain.

A new governance architecture

Following the new rules on GPAI models and the obvious need for their enforcement at EU level, an AI Office within the Commission is set up, tasked with overseeing these most advanced AI models, contributing to fostering standards and testing practices, and enforcing the common rules in all member states. A scientific panel of independent experts will advise the AI Office on GPAI models by contributing to the development of methodologies for evaluating the capabilities of foundation models, advising on the designation and the emergence of high impact foundation models, and monitoring possible material safety risks related to foundation models.

The AI Board, which would comprise member states’ representatives, will remain as a coordination platform and an advisory body to the Commission and will give an important role to member states on the implementation of the regulation, including the design of codes of practice for foundation models. Finally, an advisory forum for stakeholders, such as industry representatives, SMEs, start-ups, civil society, and academia, will be set up to provide technical expertise to the AI Board.

Penalties

The fines for violations of the AI act were set as a percentage of the offending company’s global annual turnover in the previous financial year or a predetermined amount, whichever is higher. This would be €35 million or 7% for violations of the banned AI applications, €15 million or 3% for violations of the AI act’s obligations and €7.5 million or 1.5% for the supply of incorrect information. However, the provisional agreement provides for more proportionate caps on administrative fines for SMEs and start-ups in case of infringements of the provisions of the AI act.
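The ‘whichever is higher’ rule above is simple arithmetic. A minimal sketch for illustration only (the function name and the example turnover figure are hypothetical, not taken from the act):

```python
def max_fine_eur(turnover_eur: float, fixed_cap_eur: float, pct: float) -> float:
    """Return the maximum possible fine: the higher of a fixed amount
    or a percentage of global annual turnover in the previous financial year."""
    return max(fixed_cap_eur, turnover_eur * pct)

# A company with €2 billion global annual turnover violating a banned
# AI practice (€35 million or 7%, whichever is higher):
print(max_fine_eur(2_000_000_000, 35_000_000, 0.07))  # 140000000.0
```

For large companies the turnover-based percentage dominates; for smaller ones the fixed cap applies (and, per the agreement, SMEs and start-ups benefit from more proportionate caps).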

The compromise agreement also makes clear that a natural or legal person may make a complaint to the relevant market surveillance authority concerning non-compliance with the AI act and may expect that such a complaint will be handled in line with the dedicated procedures of that authority.

Transparency and protection of fundamental rights

The provisional agreement provides for a fundamental rights impact assessment before a high-risk AI system is put on the market by its deployers. The provisional agreement also provides for increased transparency regarding the use of high-risk AI systems. Notably, some provisions of the Commission proposal have been amended to indicate that certain users of a high-risk AI system that are public entities will also be obliged to register in the EU database for high-risk AI systems. Moreover, newly added provisions put emphasis on an obligation for users of an emotion recognition system to inform natural persons when they are being exposed to such a system.

Measures in support of innovation

With a view to creating a legal framework that is more innovation-friendly and to promoting evidence-based regulatory learning, the provisions concerning measures in support of innovation have been substantially modified compared to the Commission proposal.

Notably, it has been clarified that AI regulatory sandboxes, which are supposed to establish a controlled environment for the development, testing and validation of innovative AI systems, should also allow for testing of innovative AI systems in real world conditions. Furthermore, new provisions have been added allowing testing of AI systems in real world conditions, under specific conditions and safeguards. To alleviate the administrative burden for smaller companies, the provisional agreement includes a list of actions to be undertaken to support such operators and provides for some limited and clearly specified derogations. 

Entry into force

The provisional agreement provides that the AI act should apply two years after its entry into force, with some exceptions for specific provisions.

Next steps

Following today’s provisional agreement, work will continue at technical level in the coming weeks to finalise the details of the new regulation. The presidency will submit the compromise text to the member states’ representatives (Coreper) for endorsement once this work has been concluded.

The entire text will need to be confirmed by both institutions and undergo legal-linguistic revision before formal adoption by the co-legislators.

Background information

The Commission proposal, presented in April 2021, is a key element of the EU’s policy to foster the development and uptake across the single market of safe and lawful AI that respects fundamental rights.

The proposal follows a risk-based approach and lays down a uniform, horizontal legal framework for AI that aims to ensure legal certainty.  The draft regulation aims to promote investment and innovation in AI, enhance governance and effective enforcement of existing law on fundamental rights and safety, and facilitate the development of a single market for AI applications. It goes hand in hand with other initiatives, including the coordinated plan on artificial intelligence which aims to accelerate investment in AI in Europe. On 6 December 2022, the Council reached an agreement for a general approach (negotiating mandate) on this file and entered interinstitutional talks with the European Parliament (‘trilogues’) in mid-June 2023.

Last review: 28 March 2024