
新闻生产中的生成式人工智能治理:  Generative AI Governance in News Production.

伦理基线、规制理路与愿景引领 Ethical baselines, regulatory rationales and visionary leadership

——基于全球 27 个媒体生成式人工智能指南的扎根研究
A grounded-theory study based on the generative AI guidelines of 27 media organizations worldwide

摘要: 生成式人工智能在加速智媒时代进程的同时也为新闻业界带来了风险挑战,如何实现新闻生产中的生成式人工智能有效治理成为学界关注重点。然而, 当前研究鲜有从具体新闻实践视角看待这一技术介入的影响及治理对策。为了向学界传递新闻业界对该问题的认知与实践, 研究采用扎根理论方法对 16 个国家和地区 27 个新闻媒体发布的生成式人工智能指南进行编码分析,构建出新闻生产中的生成式人工智能治理机制理论框架。研究发现, 新闻生产中的生成式人工智能治理机制由价值层面的伦理基线、技术层面的规制理路和现实层面的愿景引领共同构成,具体表现为“新闻伦理”“有限使用”“媒介治理”“媒介效应”和“社会愿景”五个范畴之间的作用过程。在理想状态下, 新闻生产中的技术赋能及治理体现出人文主义与工具理性高度融合的关系本质。
Abstract: While accelerating the arrival of the smart media era, generative artificial intelligence (AI) has also brought risks and challenges to the news industry, and how to achieve effective governance of generative AI in news production has become a focus of academic attention. However, few studies have examined the impact of this technology and its governance from the perspective of concrete journalistic practice. To convey the news industry's understanding of and practice on this issue to academia, this study applies the grounded theory method to code and analyze the generative AI guidelines published by 27 news media organizations in 16 countries and regions, and constructs a theoretical framework for the governance mechanism of generative AI in news production. The study finds that this governance mechanism consists of an ethical baseline at the value level, a regulatory rationale at the technical level, and visionary leadership at the practical level, manifested in the interaction among five categories: "journalistic ethics", "limited use", "media governance", "media effects" and "social vision". Ideally, technological empowerment and governance in news production embody an essential relationship in which humanism and instrumental rationality are highly integrated.
关键词: 新闻生产; 生成式人工智能治理; 伦理基线; 规制理路; 愿景引领; 扎根研究
Keywords: news production; generative AI governance; ethical baseline; regulatory rationale; visionary leadership; grounded-theory research

一、引言 I. Introduction

生成式人工智能领域不断涌现的技术进步极大加速了智媒时代的发展, 在全过程赋能新闻生产工作的同时也带来了前所未有的不确定性。理想状态下, 智能技术持续丰富新闻生产的方式, 优化其形态与效能, 在使用者可控范围内服务于生产创新活动。然而, 技术更新和迭代的速度往往出乎人意料, 人们对于理解和掌握新技术的压力时常处于过载状态, 加之大语言模型运行所具有的黑箱特征,技术的可控性随之下降,新闻生产在逐渐智能化的同时也面临着显著的行业挑战。
Continuous technological advances in generative artificial intelligence have greatly accelerated the development of the smart media era, empowering the whole process of news production while bringing unprecedented uncertainty. Ideally, intelligent technology continuously enriches the modes of news production, optimizes its forms and performance, and serves productive innovation within the users' control. However, the speed of technological updates and iterations often exceeds expectations, and people are frequently overloaded by the pressure of understanding and mastering new technologies; coupled with the black-box character of large language models, the controllability of the technology declines, and news production, even as it becomes increasingly intelligent, faces significant industry challenges.
目前, 危机话语已成为智媒时代新闻学术讨论的一大焦点。从人与技术关系角度看, 问题已不仅限于人如何利用技术进行新闻生产, 而是上升到对新闻工作者在生产过程中意义定位的拷问。生成式人工智能对原先具有高职业技能特征的业务流程进行数据提取, 以程式化的方式实现模仿习得, 进而能够实现低成本、批量化的新闻生产要求。在此过程中, 人类的工作意义被不断模糊, 技术主义逐渐占据人文主义领地, 威胁着新闻业共同体的健康发展。
At present, crisis discourse has become a major focus of academic discussion of journalism in the smart media era. From the perspective of the human-technology relationship, the problem is no longer limited to how humans use technology to produce news; it has risen to a questioning of journalists' sense of meaning and position in the production process. Generative AI extracts data from workflows that originally required high professional skill and achieves imitative learning in a programmatic way, thereby meeting the demands of low-cost, large-batch news production. In this process, the meaning of human work is continually blurred, and technicism gradually encroaches on the territory of humanism, threatening the healthy development of the journalism community.
围绕着生成式人工智能对新闻生产带来的潜在风险及其治理方式, 学界展开了充分讨论。研究广泛涉及到算法风险与系统应对机制1、技术对职业意识冲击与新闻工作者底线坚守、虚假新闻的多维度应对方式3、技术运用带来的伦理失范及多主体责任4等内容。总体上, 现有研究遵循了相似的研究范式, 依据捕捉到的现实对生成式人工智能给新闻生产带来的负面影响及应对进行归纳推演, 虽然对现实问题表现出强烈关怀, 但存在碎片化的特征, 鲜有研究对新闻生产中生成式人工智能的治理作出系统性阐释。同时, 研究大多是以旁观者的视角理想化地预设问题的发生, 而未能从新闻业者的视角面对更加细化的问题内容及治理对策。鉴于上述情况, 本研究系统收集了 16 个国家和地区的 27 个新闻媒体发布的生成式人工智能指南文本, 借助扎根理论的方法对文本进行全面提炼, 从而对新闻生产中生成式人工智能的治理框架构建做出探索性分析, 以进一步理解应用人工智能开展新闻生产的人机关系原理及其更深层次的现实意义。
The potential risks of generative artificial intelligence (AI) for news production and its governance have been discussed extensively in academia. Studies have covered algorithmic risks and systematic coping mechanisms1, the impact of the technology on professional consciousness and journalists' defense of their bottom line, multi-dimensional responses to fake news3, and the ethical anomie and multi-actor responsibility brought about by the use of the technology4. In general, existing studies follow a similar research paradigm, inductively deducing the negative impacts of generative AI on news production and the corresponding responses from captured realities. Although they show strong concern for real problems, they are fragmented, and few offer a systematic account of the governance of generative AI in news production. Meanwhile, most studies idealize the occurrence of problems from a bystander's perspective and fail to confront the more detailed problems and governance countermeasures from journalists' own perspective. In view of this, this study systematically collected the generative AI guideline texts published by 27 news media organizations in 16 countries and regions and comprehensively distilled them using the grounded theory method, so as to conduct an exploratory analysis of the construction of a governance framework for generative AI in news production and to further understand the principles of the human-machine relationship in AI-assisted news production and its deeper practical significance.

二、文献回顾 II. Literature review

围绕生成式人工智能在新闻生产中的潜在风险治理,研究者对技术的实践应用表现给予充分关注, 推演出各种问题情境, 从不同层面贡献出应对方案。具体而言, 现有治理方式包括以下三个层面:
Regarding the governance of the potential risks of generative AI in news production, researchers have paid close attention to the technology's practical applications, deduced various problem scenarios, and contributed responses at different levels. Specifically, existing governance approaches fall into the following three levels.
一是制度层面的刚性规则制定。有研究指出, 对生产力创新的包容和新技术引发风险的警惕两者间的冲突化解将长期成为关于人工智能技术的监管主题, 因此要遵循包容审慎和分级分类监管的规制原则, 建立起协同共管、内容监管和风险监测三方面机制5。曾晓认为, 面对生成式人工智能对新闻真实性、创造力和价值观带来的挑战, 应从出台法律规范和行业监管措施、强化新闻伦理规范和行业技术自律、完善技术解决方案和社会监督体系建设三方面予以推进。还有学者指出建立及时严格问责机制的必要性, 认为既要有政府方面的法律保障体系作为框架性规范, 也要从内部监管发力, 通过建立透明监管和问责机制实现对互联网新闻业健康发展的有力保护。
First, rigid rule-making at the institutional level. Some studies point out that resolving the conflict between tolerance of productivity innovation and vigilance toward the risks of new technologies will long remain the central theme of AI regulation; it is therefore necessary to follow the regulatory principles of inclusive prudence and tiered, categorized supervision, and to establish a three-part mechanism of collaborative co-management, content regulation and risk monitoring5. Zeng Xiao argues that, facing the challenges generative AI poses to news authenticity, creativity and values, progress should be made on three fronts: introducing legal norms and industry regulatory measures, strengthening journalistic ethics and the industry's technological self-discipline, and improving technological solutions and the construction of a social supervision system. Other scholars stress the necessity of a timely and strict accountability mechanism, holding that a governmental legal safeguard system should serve as the framework while internal regulation is also strengthened, so that transparent supervision and accountability mechanisms provide strong protection for the healthy development of Internet journalism.
二是技术生态层面的和谐人机关系构建。有研究者从人机协同视角出发, 强调要以信任为核心, 搭建技术、人机和制度三维信任协同, 进而构筑可持续的人机共生生态。还有研究针对生成式人工智能固有的算法缺陷, 提出可从语义理解能力提升、算法偏见纠正、吸收用户反馈三个方面出发, 改进和优化生成式人工智能的深度学习算法, 从而提升其在新闻生产中的准确率和真实性。谢梅和王世龙从风险的空间重构原理出发, 主张有必要推动生成内容由对象治理转向智能平台的生态治理, 以维持生成式人工智能社会空间生产的良性秩序5。韩晓宁等人借鉴价值共创理论, 指出应以人为中心规范技术应用边界, 发挥新闻业者主体作用, 构建长效人机协作机制6。
Second, the construction of a harmonious human-machine relationship at the level of the technological ecosystem. From the perspective of human-machine synergy, some researchers emphasize building three-dimensional trust synergy among technology, human-machine interaction and institutions, with trust at the core, so as to construct a sustainable human-machine symbiotic ecology. Addressing the inherent algorithmic deficiencies of generative AI, another study proposes improving and optimizing its deep learning algorithms in three respects, namely enhancing semantic comprehension, correcting algorithmic bias and absorbing user feedback, so as to raise its accuracy and truthfulness in news production. Starting from the principle of the spatial reconfiguration of risk, Xie Mei and Wang Shilong advocate shifting the governance of generated content from object governance to the ecological governance of intelligent platforms, in order to maintain a benign order in the socio-spatial production of generative AI5. Drawing on the theory of value co-creation, Han Xiaoning et al. point out that the boundaries of technology application should be regulated in a human-centered way and that journalists' role as主体 should be brought into play to build a long-term human-machine collaboration mechanism6.
三是行动者层面的多主体参与。许静等人以“内容来源和真实性联盟”为例, 指出由创作者、技术人员、记者、活动家等主体共同参与治理和监管流程能够实现对新闻生产真实性的有效保障7。还有研究从具体的角色责任出发, 认为新闻编辑部、新闻业者和新闻用户应分别做到制定使用指南、发挥整合功能和提升数字素养。与上述观点相似, 陈力丹和荣雪燕从强化新闻专业意识角度出发, 认为新闻从业者应在生成式人工智能技术使用中占据“核查者”主体地位, 新闻传播教育要注重提升学生的核查能力。考虑到用户的价值观和言论会转而影响人工智能的产出内容, 提升全民的媒介素养同样刻不容缓9。
Third, multi-actor participation at the actor level. Taking the Coalition for Content Provenance and Authenticity as an example, Xu Jing et al. point out that the joint participation of creators, technicians, journalists, activists and other actors in governance and supervision processes can effectively safeguard the authenticity of news production7. Other studies start from specific role responsibilities, arguing that newsrooms, journalists and news users should respectively formulate usage guidelines, perform integrating functions and improve digital literacy. In a similar vein, Chen Lidan and Rong Xueyan, from the perspective of strengthening professional consciousness, hold that news practitioners should occupy the position of "verifier" in the use of generative AI technologies and that journalism and communication education should emphasize improving students' verification skills. Considering that users' values and speech in turn influence the output of AI, improving the media literacy of the whole population is equally urgent9.
总的来看, 现有研究对生成式人工智能在新闻生产中已有以及可能实现的应用表现展开了充分讨论, 并从特定主体和层面出发, 对相关问题的改进提出了具体对策, 初步勾勒出“应用表现-潜在风险-治理方案”的研究图景。然而, 现有研究大多基于理性推演或是对事实经验的碎片化捕捉提出治理构想, 而对现实中已有的治理实践及成果缺乏充分关注, 这在一定程度上拉大了新闻传播学界与新闻业界的交流鸿沟。鉴于上述原因, 本文对有参考意义的资料进行广泛搜寻, 系统收集到 16 个国家和地区的 27 个新闻媒体发布的生成式人工智能指南文本, 试图借助扎根理论全面提炼文本内容, 以期对新闻生产中生成式人工智能的治理机制做出探索性发现与建构。
In general, existing studies have fully discussed the actual and possible applications of generative AI in news production and, starting from specific actors and levels, have proposed concrete countermeasures for related problems, initially sketching a research landscape of "application performance - potential risks - governance schemes". However, most existing research derives its governance ideas from rational deduction or a fragmented capture of empirical facts, paying insufficient attention to governance practices and achievements that already exist in reality, which to some extent widens the communication gap between journalism and communication academia and the news industry. For these reasons, this paper searched extensively for relevant material and systematically collected the generative AI guideline texts published by 27 news media organizations in 16 countries and regions, attempting to comprehensively distill their contents with the help of grounded theory in order to make exploratory discoveries about, and constructions of, the governance mechanism of generative AI in news production.

三、研究设计 III. Research design

本文旨在构建一个探索性的分析框架, 为了达成该研究目的, 采用与之相适应的扎根理论方法。本部分将从方法基本信息、研究主要流程、选取资料信息详细阐述研究设计。
The purpose of this paper is to construct an exploratory analytical framework, and to achieve this research purpose a grounded theory approach suited to it is adopted. This section details the research design in terms of basic information about the method, the main research process, and the selected data.

(一) 研究方法: 扎根理论 (i) Research method: grounded theory

扎根理论是一种自下而上建立理论的方法, 其特点表现为在系统收集经验资料的基础上, 寻找反映社会现象的核心概念, 通过在概念之间建立起联系而形成理论。1967 年由 Glaser 和 Strauss 共同出版的《扎根理论的发现: 质化研究策略》标志着扎根理论的诞生。方法的形成深受芝加哥学派的实用主义和符号互动论以及哥伦比亚大学的量化研究影响, 既提倡建构与日常生活经验问题有密切联系的中层理论, 又将量化分析方法融入到扎根理论当中, 使得研究过程具有可追溯性和可重复性。扎根理论的出现有效地解决了理论研究与经验研究之间严重脱节的问题, 避免了只侧重于纯粹理论探讨或只停留于经验事实描述的单一研究倾向, 进而达到从经验事实中抽象出可靠理论的研究目的。
Grounded theory is a bottom-up approach to theory building, characterized by systematically collecting empirical data, searching for core concepts that reflect social phenomena, and forming theory by establishing links between those concepts. The Discovery of Grounded Theory: Strategies for Qualitative Research, co-published by Glaser and Strauss in 1967, marked the birth of grounded theory. The method was deeply influenced by the pragmatism and symbolic interactionism of the Chicago School and by the quantitative research tradition of Columbia University: it advocates constructing middle-range theories closely connected to problems of everyday experience while also integrating quantitative analytical methods into grounded theory, making the research process traceable and reproducible. The emergence of grounded theory effectively addressed the serious disconnect between theoretical and empirical research, avoiding the one-sided tendencies of purely theoretical exploration or mere description of empirical facts, and thus achieves the research goal of abstracting reliable theory from empirical facts.
随着研究的不断推进, 扎根理论进一步形成了三种不同流派, 分别为 Glaser 和 Strauss 的经典版本、Strauss 和 Corbin 的程序化版本以及 Charmaz 的建构主义版本。其中程序化版本得到了更广泛的使用, 该版本扎根理论提出了包含开放性编码、主轴性编码和选择性编码的三级编码程序, 为研究者的实践应用提供了清晰的程序化操作路径。作为一种理论建构方法, 扎根理论的核心思想包括以下三个方面: 一是理论来源于数据。此处的数据具有十分广义的内涵, 访谈、文本、文献、观察、问卷等均包含在内。理论必须要以数据为依据, 从原始数据中产生的理论才被认为具有生命力。二是研究者要保持理论敏感性。这种敏感性要求体现于研究设计、收集资料和分析资料的全过程, 研究者要始终注意捕捉建构理论的新线索。具体而言, 就是要求研究者具有为经验资料赋予特定意义并使其概念化的能力。三是将不断比较贯穿于研究全过程。从具体操作来看, 扎根理论先是对资料内容进行详细编码, 再将资料归集到各种概念类属中, 这种数据的概念化以及概念和范畴的提炼正是在不断比较中得以完成, 这种比较也使得扎根理论在早期被称为“不断比较的方法”。
As research advanced, grounded theory developed into three schools: the classic version of Glaser and Strauss, the procedural version of Strauss and Corbin, and the constructivist version of Charmaz. The procedural version has been the most widely used; it proposes a three-level coding procedure consisting of open coding, axial coding and selective coding, providing researchers with a clear, procedural path for practical application. As a method of theory construction, the core ideas of grounded theory include three aspects. First, theory comes from data. "Data" here has a very broad meaning, including interviews, texts, documents, observations, questionnaires and more; theory must be grounded in data, and only theory generated from raw data is considered viable. Second, the researcher must maintain theoretical sensitivity. This sensitivity runs through research design, data collection and data analysis, and requires the researcher to stay alert to new clues for theory construction, that is, to be able to assign specific meanings to empirical data and conceptualize them. Third, constant comparison runs through the whole research process. Operationally, grounded theory first codes the data in detail and then groups the data into conceptual categories; this conceptualization of data and refinement of concepts and categories is accomplished through constant comparison, which is why grounded theory was known in its early days as the "constant comparative method".
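上述三级编码程序可以用一段示意代码来概括(此处为阐释性草图, 编码标签均为假设, 并非本研究的真实资料)。The three-level coding procedure above can be sketched in illustrative code (an expository sketch; the labels below are hypothetical, not the study's actual data):

```python
from collections import defaultdict

# Illustrative sketch of the procedural grounded-theory pipeline:
# open coding -> axial coding -> selective coding.
# All labels below are hypothetical examples, not the study's real codes.

def open_coding(fragments):
    """Open coding: group raw text fragments under initial concepts."""
    concepts = defaultdict(list)
    for text, concept in fragments:
        concepts[concept].append(text)
    return dict(concepts)

def axial_coding(concepts, concept_to_category):
    """Axial coding: relate initial concepts to higher-level main categories."""
    categories = defaultdict(list)
    for concept in concepts:
        categories[concept_to_category[concept]].append(concept)
    return dict(categories)

def selective_coding(categories, core):
    """Selective coding: organize all main categories around one core category."""
    return {core: sorted(categories)}

fragments = [
    ("生成内容必须标记", "标识生成内容"),
    ("训练来源要被标识", "标识训练信息"),
    ("个人数据收集应当合法", "数据合法"),
]
concept_to_category = {
    "标识生成内容": "透明性",
    "标识训练信息": "透明性",
    "数据合法": "隐私安全",
}

concepts = open_coding(fragments)                         # 3 fragments -> 3 concepts
categories = axial_coding(concepts, concept_to_category)  # -> 2 categories
theory = selective_coding(categories, "新闻伦理")          # -> 1 core category
print(len(concepts), len(categories), list(theory))       # 3 2 ['新闻伦理']
```

每一级聚合都对应一次“不断比较”: 片段之间比较以形成概念, 概念之间比较以形成范畴。Each aggregation step corresponds to one round of constant comparison: fragments are compared to form concepts, and concepts are compared to form categories.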
就本研究而言, 目的在于建构具有探索性意义的新闻生产中生成式人工智能治理框架, 采用扎根理论方法, 能够有效契合自下而上整合经验资料并生成核心概念与范畴的研究目的。为了清晰呈现本文的研究流程, 以图 1 进行呈现。
This study aims to construct an exploratory framework for the governance of generative AI in news production; the grounded theory approach fits this purpose of integrating empirical data bottom-up and generating core concepts and categories. To present the research process of this paper clearly, it is shown in Figure 1.
文献回顾 Literature review → 资料提炼 Data distillation

图 1 扎根理论研究流程 Figure 1 Grounded theory research process

(二)资料选取 (ii) Data selection

本文以“生成式人工智能指南”“Generative AI Guidelines”“Generative AI Principles”作为关键词进行初步检索, 从中筛选出由新闻媒体作为发布方的文本内容。通过对全网信息的广泛搜索, 本文汇总得到从 2023 年 3 月至今世界各国新闻媒体最新出台的生成式人工智能指南, 涵盖了欧洲、美洲、非洲、亚洲 16 个国家和地区 27 个新闻媒体, 具体来源信息如表 1 所示。本文首先对外文版本的生成式人工智能指南使用 DeepL 翻译应用进行翻译, 再进行人工翻译校对。需要说明的是, 英国路透社(Reuters)、英国广播公司(BBC)、德国巴伐利亚广播公司(BR)等媒体的人工智能指南仍停留在早期版本, 未专门针对生成式人工智能出台更新内容, 故本文未将其纳入研究范围内。
In this paper, we used "生成式人工智能指南", "Generative AI Guidelines" and "Generative AI Principles" as keywords for a preliminary search, from which we selected the texts published by news media. Through an extensive search of the web, we compiled the latest generative AI guidelines issued by news media worldwide from March 2023 to the present, covering 27 news media organizations in 16 countries and regions across Europe, the Americas, Africa and Asia; the specific source information is shown in Table 1. The foreign-language versions of the generative AI guidelines were first translated with the DeepL translation application and then manually proofread. It should be noted that the AI guidelines of Reuters, the BBC, Bayerischer Rundfunk (BR) and similar outlets remain in earlier versions and have not been updated specifically for generative AI, so they were not included in the scope of this study.

表 1 新闻媒体生成式人工智能指南来源汇总  Table 1 Summary of sources of generative AI guidelines for news media
| 所属大洲 Continent | 国别 Country | 新闻媒体 News media | 文本标题 Text title | 发布时间 Release date |
| --- | --- | --- | --- | --- |
| 欧洲 Europe | 英国 UK | The Guardian | 《卫报》生成式人工智能指南 The Guardian's generative AI guidelines | 2023.6 |
| 欧洲 Europe | 英国 UK | Financial Times | 编辑致信: 关于生成式人工智能和《金融时报》 Letter from the editor: on generative AI and the Financial Times | 2023.5 |
| 欧洲 Europe | 荷兰 Netherlands | Algemeen Nederlands Persbureau (ANP) | 荷兰通讯社人工智能指南 ANP guidelines on artificial intelligence | 2023.4 |
| 欧洲 Europe | 荷兰 Netherlands | Volkskrant | 《荷兰人民报》新闻守则 De Volkskrant press code | 2023.5 |
| 欧洲 Europe | 比利时 Belgium | Mediahuis | 人工智能指南 Guidelines on artificial intelligence | 2023.5 |
| 欧洲 Europe | 比利时 Belgium | Raad voor de Journalistiek (RVDJ) | 新闻理事会关于在新闻业中使用人工智能的新指南 New Press Council guidance on the use of AI in journalism | 2023.3 |
| 欧洲 Europe | 挪威 Norway | Verdens Gang (VG) | 《世道报》人工智能指南 VG's AI guidelines | 2023.4 |
| 欧洲 Europe | 芬兰 Finland | STT-Lehtikuva | 透明的信息获取和好奇心——芬兰通讯社人工智能使用说明 Transparent access to information and curiosity: STT's instructions for the use of AI | 2023.6 |
| 欧洲 Europe | 瑞典 Sweden | Aftonbladet | 《瑞典晚报》的人工智能政策: 这就是我们与新技术的关系 Aftonbladet's AI policy: this is our relationship with new technology | 2023.9 |
| 欧洲 Europe | 瑞士 Switzerland | Ringier | 《荣格新闻》人工智能编辑指南 Ringier's AI editorial guidelines | 2023.5 |
| 欧洲 Europe | 瑞士 Switzerland | Heidi.News | 《海蒂新闻》编辑部对人工智能的应用表明立场 The Heidi.News newsroom takes a stand on the use of AI | 2023.4 |
| 欧洲 Europe | 法国 France | Le Parisien | 《巴黎人报》集团致力于使用生成式人工智能 The Le Parisien group's commitment on the use of generative AI | 2023.5 |
| 欧洲 Europe | 德国 Germany | IPPEN.MEDIA | 人工智能使用原则 Principles for the use of AI | 2023.9 |
| 欧洲 Europe | 德国 Germany | Deutscher Journalisten-Verband (DJV) | 德国记者协会立场文件: 关于人工智能的使用 DJV position paper on the use of AI | 2023.4 |
| 欧洲 Europe | 德国 Germany | Deutsche Presse-Agentur (DPA) | 公开、负责任、透明——德国新闻社人工智能指南 Open, responsible, transparent: DPA guidelines on AI | 2023.4 |
| 美洲 Americas | 美国 USA | Associated Press (AP) | 美联社关于生成式人工智能的准则 AP guidelines on generative AI | 2023.8 |
| 美洲 Americas | 美国 USA | News Media Alliance (NMA) | 美国新闻媒体联盟人工智能准则 News Media Alliance AI principles | 2023.4 |

四、扎根编码与理论框架构建 IV. Grounded coding and theoretical framework construction

本文采用程序化的扎根理论方法,对材料文本进行三个环节的逐级编码,具体包括: 开放式编码、主轴式编码与选择式编码, 在此基础上构建起新闻生产中的生成式人工智能治理机制分析框架。
This paper adopts the procedural grounded theory approach, coding the text materials step by step through three stages: open coding, axial coding and selective coding. On this basis, an analytical framework for the governance mechanism of generative AI in news production is constructed.

(一) 开放式编码 (i) Open coding

开放式编码是指将资料打散, 赋予概念, 然后以新的方式重新组合起来的操作化过程。这一过程特别要求研究者保持开放心态, 尽量悬置个人“倾见”和研究界“定见”, 将所有资料按其原初状态进行登录。3本研究将收集到的 27 个新闻媒体的生成式人工智能指南文本资料导入 Nvivo14 软件, 在理论建模阶段利用节点功能对其中 22 个文本进行自由编码。为尽可能避免研究者主观偏见的影响, 邀请三名具有专业背景的研究人员分别独立编码。编码过程基于原文语句进行, 将不同研究人员编码结果进行反复比较, 得到 124 条原始记录, 通过合并筛选得到 73 个初始概念, 再对其进行归类化处理得到 16 个初始范畴。上述开放式编码示例和范畴化结果分别如表 2 和表 3 所示, 为了节省篇幅, 每个初始概念仅罗列一个原始记录作为对照说明。
Open coding is the operational process of breaking data apart, assigning concepts, and reassembling them in new ways. It particularly requires the researcher to keep an open mind, suspending personal preconceptions and the received views of the research community as far as possible, and to register all materials in their original state.3 In this study, the collected generative AI guideline texts from 27 news media organizations were imported into the Nvivo14 software, and 22 of them were freely coded with the node function during the theoretical modeling stage. To minimize the influence of researchers' subjective bias, three researchers with relevant professional backgrounds were invited to code the texts independently. Coding was based on the original statements; repeated comparison of the different researchers' coding results yielded 124 original records, which were merged and screened into 73 initial concepts and then categorized into 16 initial categories. Examples of the open coding and the categorization results are shown in Tables 2 and 3, respectively; to save space, only one original record is listed for each initial concept as an illustration.
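上述为降低主观偏差而进行的独立编码比对, 可以用一个简单的一致性计算来示意(阐释性草图; 本研究采用的是反复比较与合并, 并未报告此类统计量, 以下编码标签亦为假设)。The independent-coding comparison described above can be illustrated with a simple agreement calculation (an expository sketch; the study itself reconciles codes through repeated comparison and does not report such a statistic, and the labels below are hypothetical):

```python
# Hypothetical sketch: percent agreement between two coders who labeled
# the same set of statements with initial-category codes.

def percent_agreement(coder_a, coder_b):
    """Share of statements on which the two coders assigned the same code."""
    if len(coder_a) != len(coder_b):
        raise ValueError("both coders must label the same statements")
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

coder_a = ["隐私安全", "透明性", "真实性", "真实性"]
coder_b = ["隐私安全", "透明性", "负责任", "真实性"]

print(percent_agreement(coder_a, coder_b))  # 0.75
```

存在分歧的条目(此处第 3 条)即进入反复比较与协商合并的环节。The items in disagreement (here the third statement) are exactly those that enter the repeated-comparison and reconciliation step.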
表 2 资料文本的开放式编码 Table 2 Open coding of the data texts

| 概念化 Conceptualization | 原文内容 Original content |
| --- | --- |
| 保护用户隐私 Protecting user privacy | 人工智能系统和模型的设计尤其应尊重与其交互的用户隐私 AI systems and models should be designed, in particular, to respect the privacy of the users with whom they interact |
| 数据合法 Data legality | 个人数据的收集和使用应当合法 The collection and use of personal data should be lawful |
| 数据披露 Data disclosure | 解释数据如何在人工智能系统中使用并使受众能够控制 Explain how data are used in AI systems and give the audience control |
| 信息隐蔽 Withholding sensitive information | 未经部门负责人批准, 不得将敏感信息与人工智能系统共享 Sensitive information must not be shared with AI systems without the approval of the department head |
| 标识训练信息 Labeling training information | 用于训练人工智能系统的内容和来源要被明确标识 Content and sources used to train AI systems must be clearly identified |
| 标识生成内容 Labeling generated content | 全部或部分借助人工智能创建的新闻内容必须进行相应标记 News content created wholly or partly with the help of AI must be labeled accordingly |
| 标识内容来源 Labeling content sources | 任何发布的合成图像都将附有一个可见的标记, 解释其来源 Any published synthetic image will carry a visible label explaining its origin |
| 过程记录 Process documentation | 对数据的使用要履行保存准确记录的义务 The use of data carries an obligation to keep accurate records |
| 制定数据使用标准 Setting data-use standards | 人工智能开发人员应与出版商合作, 制定双方认可的数据溯源标准 AI developers should work with publishers to develop mutually agreed standards for data traceability |
| 公开系统原理 Disclosing system principles | 还应向用户提供有关系统如何运行的易于理解的信息 Users should also be provided with easy-to-understand information about how the system works |
| 公开个性化推荐算法 Disclosing personalized recommendation algorithms | 为用户提供定制显示内容所依据的算法应向用户公开 The algorithms on which customized content display is based should be disclosed to users |
| 确保内容准确完整 Ensuring content accuracy and completeness | 开发和使用人员必须尽最大努力确保人工智能生成的内容准确完整 Developers and users must do their best to ensure that AI-generated content is accurate and complete |
| 防止原创歪曲 Preventing distortion of original work | 人工智能系统必须确保原创信息不会被歪曲 AI systems must ensure that original information is not distorted |
| 避免作为信源 Not treating AI as a source | 人工智能是工具, 但绝不是信息来源 AI is a tool, but never a source of information |
| 信息来源验证 Source verification | 每一条生成的信息都应经过可靠来源的验证 Every piece of generated information should be verified against reliable sources |
| 通过标准认证 Certification against standards | 倡导对新闻行业使用的人工智能系统进行认证, 认证包括在质量、非歧视、数据保护等方面满足特定标准 Advocate certification of AI systems used in journalism, including meeting specific standards for quality, non-discrimination and data protection |
| 对输出结果负责 Taking responsibility for outputs | 人工智能系统的开发者和使用者应开展合作, 确保对系统输出负责 Developers and users of AI systems should cooperate to ensure responsibility for system outputs |
| 错误纠正 Error correction | 数据处理应符合高质量标准, 数据材料中的不完整、歪曲和其他错误必须予以纠正 Data processing should meet high quality standards; incompleteness, distortion and other errors in the data material must be corrected |
| 创意激发 Creative inspiration | 我们可能会使用人工智能工具来激发创作, 例如帮助生成故事创意 We may use AI tools to inspire creativity, for example to help generate story ideas |
| 人-机-人采编流程 Human-machine-human editorial process | 人-机器-人, 思考和决策始于人也结束于人, 可以在中间生产的过程中借助人工智能 Human-machine-human: thinking and decision-making begin and end with the human; AI may assist in the intermediate production process |
| 促进而非取代人类 Augmenting rather than replacing humans | 在任何情况下, “人工智能同事”都不应该取代新闻编辑 Under no circumstances should an "AI colleague" replace news editors |
| 维护自由尊严 Upholding freedom and dignity | 在处理人工智能带来的注意力失衡问题的过程中, 不应放弃自主自由, 而应在人的自由和尊严基础上寻求最佳解决方案 In dealing with the attention imbalance caused by AI, autonomy and freedom should not be abandoned; the best solution should be sought on the basis of human freedom and dignity |
| 维护受众信任 Maintaining audience trust | 我们不拒绝技术进步, 为了维护指引行动的框架和道德体系, 最重要的是维护与受众的信任关系 We do not reject technological progress; to preserve the framework and ethical system that guide our actions, what matters most is maintaining a relationship of trust with our audience |
| 以人类智能为核心 Human intelligence at the core | 人工智能可以改进工作, 但人类智能仍是我们所有编辑制作的核心 AI can improve our work, but human intelligence remains at the core of all our editorial production |
| 造福人类及后代 Benefiting humanity and future generations | 人工智能系统能够造福全人类包括子孙后代, 前提是它们符合人类价值观并遵照全球法律运作 AI systems can benefit all of humanity, including future generations, provided they operate in accordance with human values and global law |
| 解决全球问题 Addressing global issues | 人工智能系统的多学科性质使其能够理想地解决全球关注的问题 The multidisciplinary nature of AI systems makes them well suited to addressing issues of global concern |
表 3 编码的范畴化结果 Table 3 Categorization results of the coding

| 初始范畴 Initial category | 初始概念 Initial concepts |
| --- | --- |
| 隐私安全 Privacy and security | 保护用户隐私、数据合法、数据披露、信息隐蔽 Protecting user privacy, data legality, data disclosure, withholding sensitive information |
| 透明性 Transparency | 标记训练信息、标记生成内容、标记内容来源、过程记录、制定数据使用标准、公开系统原理、公开个性化推荐算法 Labeling training information, labeling generated content, labeling content sources, process documentation, setting data-use standards, disclosing system principles, disclosing personalized recommendation algorithms |
| 真实性 Authenticity | 确保内容准确完整、防止原创歪曲、避免作为信源、信息来源验证、通过标准认证 Ensuring content accuracy and completeness, preventing distortion of original work, not treating AI as a source, source verification, certification against standards |
| 负责任 Accountability | 对输出结果负责、发布前审查、专业审查、专门负责人、编辑责任、记者责任 Taking responsibility for outputs, pre-publication review, professional review, designated responsible persons, editorial responsibility, journalists' responsibility |
| 公平性 Fairness | 维护市场公平、防止垄断、公正性审查 Maintaining market fairness, preventing monopoly, impartiality review |
| 维护知识产权 Protecting intellectual property | 版权报酬、授权后使用、避免代码侵权、告知内容抓取用途 Copyright remuneration, use after authorization, avoiding code infringement, disclosing the purpose of content crawling |
| 正效应 Positive effects | 创造商机、提高效率、协同效应、用户友好体验、突破写作障碍 Creating business opportunities, improving efficiency, synergy, user-friendly experience, overcoming writer's block |
| 负效应 Negative effects | 新闻造假、损害社会信任、媒体信任危机、破坏民主秩序、削弱生产力、注意力失衡、恶意信息溢出 Fake news, damage to social trust, crisis of trust in the media, undermining the democratic order, weakening productivity, attention imbalance, spillover of malicious information |
| 辅助应用 Auxiliary applications | 文字辅助、图像辅助、视频辅助、视效增强、提炼总结、创意激发 Text assistance, image assistance, video assistance, visual-effects enhancement, distillation and summarization, creative inspiration |
| 禁止事项 Prohibitions | 禁止完全程序开发、禁止不审查代码、禁止篡改、限制使用规模 Prohibiting fully automated program development, prohibiting unreviewed code, prohibiting tampering, limiting the scale of use |
| 刚性框架 Rigid framework | 制定数据政策、引入敏捷治理、建立认证系统 Formulating data policies, introducing agile governance, establishing certification systems |
| 多元监管 Plural regulation | 共同监管、引入政府合作、受众参与审查 Co-regulation, introducing government cooperation, audience participation in review |
| 平衡引导 Balanced guidance | 积极相互制约、开展教育培训、组建试点小组 Active mutual checks, education and training, forming pilot groups |
| 人类智能主导 Human-intelligence dominance | 确保人类地位、人-机-人采编流程、促进而非取代人类、以人类智能为核心 Ensuring the human position, the human-machine-human editorial process, augmenting rather than replacing humans, human intelligence at the core |
| 维护社会秩序 Maintaining social order | 维护自由尊严、维护受众信任 Upholding freedom and dignity, maintaining audience trust |
| 可持续发展 Sustainable development | 造福人类及后代、解决全球问题 Benefiting humanity and future generations, addressing global issues |

(二) 主轴式编码 (ii) Axial coding

主轴式编码的主要任务是发现和建立概念类属之间的各种联系, 以表现资料中各个部分之间的有机关联。从操作上看, 主轴式编码就是通过对开放式编码获得的初始范畴进行凝练和分类, 以不断挖掘各范畴之间的内在逻辑关系, 从而提炼出主导其他范畴的主范畴。本文在开放式编码阶段获得 16 个初始范畴的基础上, 进一步进行比较、分类, 提炼出 5 个具有更高层级代表性和概括力的主范畴, 分别为: 伦理原则、有限使用、治理路径、结果效应和目标愿景。主范畴包含的初始范畴及其关系内涵如表 4 所示。
The main task of spindle coding is to discover and establish various links between conceptual categories, in order to express the organic connection between various parts of the data . Operationally, the main-axis coding is to condense and categorize the initial categories obtained from open coding, in order to continuously explore the internal logical relationships among the categories, so as to extract the main categories that dominate the other categories . In this paper, on the basis of the 16 initial categories obtained in the open coding stage, we further compare and classify them, and refine five main categories with higher-level representativeness and generalization, which are: ethical principles, limited use, governance path, outcome effects and goal vision. The initial categories included in the main categories and their relationship connotations are shown in Table 4.
表 4 主轴式编码及其关系内涵 Table 4 Axial codes and their relational connotations
主范畴 Main category / 初始范畴 Initial categories / 关系内涵 Relationship connotation

新闻伦理 Journalistic ethics
初始范畴:隐私安全、透明性、真实性、负责任、公平性、保护知识产权 Initial categories: privacy and security, transparency, authenticity, accountability, fairness, protection of intellectual property rights
关系内涵:新闻生产中使用生成式人工智能技术应基于特定的伦理准则,具体包括保护用户隐私安全、算法运行过程透明且可被理解、保证生成内容真实、主体责任明确、维护市场和社会公平以及尊重知识产权。
Relationship connotation: The use of generative AI in news production should be based on specific ethical guidelines, including protecting user privacy and security, keeping algorithmic processes transparent and understandable, ensuring the authenticity of generated content, clarifying the responsibility of actors, upholding fairness in the market and in society, and respecting intellectual property rights.

媒介效应 Media effect
初始范畴:正效应、负效应 Initial categories: positive effect, negative effect
关系内涵:新闻生产中生成式人工智能的运用可以对社会产生正负媒介效应,既具有效率提升、体验优化等方面的正效应,也存在虚假信息、损害信任等方面的负效应。
Relationship connotation: The use of generative AI in news production can have positive and negative media effects on society: positive effects such as efficiency gains and experience optimization, and negative effects such as false information and damaged trust.

有限使用 Limited use
初始范畴:辅助应用、禁止事项 Initial categories: complementary application, prohibitions
关系内涵:在当前实践中,各媒体机构明确了生成式人工智能的应用领域和禁止事项。
Relationship connotation: In current practice, media organizations have clarified the areas in which generative AI may be applied and the practices that are prohibited.

媒介治理 Media governance
初始范畴:刚性框架、多元监管、平衡引导 Initial categories: rigid framework, diversified regulation, balanced guidance
关系内涵:新闻业界探索出的媒介治理路径包括三方面内容,分别为:刚性的法规制度框架、多元责任主体参与的共同监管、以积极谨慎态度引导人工智能技术发展。
Relationship connotation: The media governance path explored by the journalism industry consists of three elements: a rigid legal and institutional framework, co-regulation involving multiple responsible actors, and guiding the development of AI technology in a positive yet prudent manner.

社会愿景 Social vision
初始范畴:人类智能主导、维护社会秩序、可持续发展 Initial categories: human intelligence dominance, maintaining social order, sustainable development
关系内涵:生成式人工智能技术在新闻生产中的应用具有更高层次的发展意义,理想状态为在人类智能主导下实现社会的有序和可持续发展。
Relationship connotation: The application of generative AI in news production carries a higher-level developmental significance; the ideal state is the orderly and sustainable development of society under the dominance of human intelligence.

(三)选择式编码 (iii) Selective coding

选择式编码是指对所有已发现范畴进行系统分析后明确一个核心范畴, 该范畴能够统领其他所有范畴与概念。研究通常以 “故事线”的思路将碎片化的概念聚拢在一起, 采用绘制逻辑关系图的形式来表明各范畴之间的内在联系。通过对范畴间关联的进一步提炼,本文将“新闻生产中生成式人工智能治理机制”作为核心范畴。围绕核心范畴,本文发掘了“价值层”“技术层”“现实层”三个递进的治理层次,上文提炼得到的五个主范畴分别归属于其中某一层次。
Selective coding refers to the systematic analysis of all identified categories in order to determine a core category capable of unifying all other categories and concepts. Research typically uses a "story line" to draw the fragmented concepts together, and a logical relationship diagram to show the intrinsic connections among categories. By further refining these interconnections, this paper takes the "generative AI governance mechanism in news production" as the core category. Around it, the paper identifies three progressive governance levels, namely the "value level", the "technology level" and the "reality level"; each of the five main categories extracted above belongs to one of these levels.
具体而言, 本文在选择性编码过程中确定的故事线为: 新闻生产中的生成式人工智能治理发端于“价值层”,相关新闻伦理原则构成了该层次的核心内容。总体上, 各媒体机构保持了统一的伦理关切, 包括隐私安全、透明性、真实性、负责任、公平性、保护知识产权六大原则。这些新闻伦理原则直接影响了“技术层”的实践做法。各媒体机构以清单的形式明确了生成式人工智能可以参与的辅助性工作以及严格禁止事项, 使得对生成式人工智能的应用处于“有限使用”状态。为了确保人工智能被合理使用,各新闻媒体提倡遵循谨慎而又开放的态度,涉及到刚性框架、多元监管、平衡引导三方面媒介治理路径, 进一步矫正和规范了“有限使用”的具体意涵。沿着上述实践思路,得以放大媒介正效应并降低媒介负效应,进而可以在“现实层”借助技术赋能新闻生产达成更长远的目标愿景。最后,新闻生产参与的社会愿景的阶段性达成催生出更为具体且有新意的社会伦理原则, 并反馈传导至新一轮的新闻生产升级活动当中, 实现对新闻伦理原则的再塑。 “价值层”“技术层”“现实层”体现了发展式演进的态势, 有利于构建一个负责任、可信赖的智能媒介生态。以上原理过程如图 2 所示。
Specifically, the story line identified during selective coding is as follows: the governance of generative AI in news production begins at the "value level", where the ethical principles of journalism form the core content. Overall, media organizations maintain a unified set of ethical concerns comprising six principles: privacy and security, transparency, authenticity, accountability, fairness, and protection of intellectual property. These journalistic ethical principles directly shape practice at the "technology level". Media organizations have listed the auxiliary tasks in which generative AI may participate as well as strict prohibitions, keeping the application of generative AI in a state of "limited use". To ensure that AI is used appropriately, news media advocate a prudent yet open attitude involving three media governance paths, a rigid framework, diversified regulation and balanced guidance, which further correct and standardize the specific meaning of "limited use". Along these lines of practice, positive media effects can be amplified and negative ones reduced, so that a longer-term goal vision can be achieved at the "reality level" through technology-empowered news production. Finally, the staged achievement of the social vision in which news production participates gives rise to more specific and novel social ethical principles, which feed back into a new round of news production upgrading and reshape the ethical principles of journalism. The "value level", "technology level" and "reality level" reflect a developmental evolution conducive to building a responsible and trustworthy intelligent media ecology. This process is shown in Figure 2.
图 2 新闻生产中的生成式人工智能治理机制 Fig. 2 Generative AI governance mechanisms in news production

(四)理论饱和度检验 (iv) Theoretical saturation test

为了检验新闻生产中的生成式人工智能治理分析框架是否已达到完整状态,本文对理论建模部分未涉及的剩余 5 个生成式人工智能指南文本资料重复与建模过程相同的逐步编码、概念化和范畴化工作,并未发现新的概念,现有范畴之间也未出现新的关联。据此,本文认为通过各阶段编码构建的理论模型信息已达到充分挖掘状态。
To test whether the analytical framework for generative AI governance in news production is complete, this paper repeats the same step-by-step coding, conceptualization and categorization on the remaining five generative AI guideline texts not used in theoretical modeling. No new concepts were found, and no new associations emerged among the existing categories. Accordingly, this paper concludes that the information in the theoretical model constructed through the successive coding stages has been fully exploited.
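The saturation test described above reduces to checking that coding additional guides yields no concepts outside the established category set; a minimal sketch (the function name and sample inputs are hypothetical):

```python
def is_saturated(established, new_documents):
    """Return True if coding the held-out documents yields no new concepts.

    `established` is the set of concepts from the guides used for model
    building; each element of `new_documents` is the set of concepts coded
    from one held-out guide. (Names and inputs are illustrative.)
    """
    return all(doc <= established for doc in new_documents)

established = {"privacy and security", "transparency", "authenticity"}
held_out = [{"transparency"}, {"privacy and security", "authenticity"}]
print(is_saturated(established, held_out))  # True: categories are saturated
```

If any held-out guide produced a concept outside `established`, the function would return False, signaling that further sampling and coding were needed.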

五、理论模型阐释 V. Theoretical Model Elaboration

综合来看, 新闻生产中的生成式人工智能治理机制展现了从价值层到技术层再到现实层的作用过程,由伦理基线、规制理路和愿景引领共同构成,体现了人文关怀与工具理性的高度融合。
Taken as a whole, the governance mechanism of generative AI in news production shows a process from the value layer to the technology layer and then to the reality layer, which consists of the ethical baseline, the regulatory rationale and the visionary leadership, reflecting a high degree of fusion of humanistic concern and instrumental rationality.

(一)伦理基线: 智媒时代的全体系价值对齐 (i) Ethical baseline: system-wide value alignment in the age of smart media

以大模型为基础的生成式人工智能加入极大加速了智媒化进程,为新闻生产的效率提升和创意激发带来前所未有的改变。大模型技术在快速迭代的同时也带来了更大的不确定性:一方面,大模型的规模量级已从早期的数亿参数发展到千亿参数,处理和分析问题能力十分可观。另一方面,大模型赖以生存的海量预训练数据中不可避免地包含各种人类观念内容,其中不乏与社会价值观相悖的信息,存在与用户互动时激活不良内容的风险。故而,使人工智能能够准确理解人类道德观念并在实践中予以体现成为生成式人工智能治理的基础问题。
The introduction of generative AI based on large models has greatly accelerated the shift to smart media, bringing unprecedented change to the efficiency and creativity of news production. While iterating rapidly, large-model technology also brings greater uncertainty. On the one hand, large models have grown from hundreds of millions of parameters in the early days to hundreds of billions, giving them considerable capacity to process and analyze problems. On the other hand, the massive pre-training data on which large models depend inevitably contains all manner of human beliefs, including information that contradicts social values, with the risk of activating undesirable content when interacting with users. Enabling AI to accurately understand human moral concepts and embody them in practice has therefore become the foundational problem of generative AI governance.
为了破解这一困境,理论界与实务界不约而同地关注到"价值对齐"的关键作用。人与机器关系范畴下的"对齐"(alignment)一词最早由"控制论之父"诺伯特·维纳于 1960 年提出。他在文章《自动化的道德和技术后果》中将人工智能的对齐形象地定义为:"假如我们期望借助机器达成某个目标,同时无法有效干涉其运行过程,那么应当确认输入到机器中的目标确实是我们所希望达成的。"1 与此类似,莱克等人认为对齐是机器目标与人类需求间的协同,即机器能够了解人的意图,并在稳健运行中实现这些意图。价值对齐则是价值层面人工智能理解人类的具体体现,构成了人机协同的必要基础。为实现价值对齐,关键在于赋予机器一套与人类价值观一致的自治系统,即明确机器应遵循伦理原则的具体内容。通过对生成式人工智能指南文本的提炼,本文发现各新闻媒体在伦理原则的表述上虽不尽相同,但对于一些关键维度基本保持了一致观点,具体包括隐私安全、透明性、真实性、负责任、公平性和保护知识产权六大方面。
To resolve this dilemma, theorists and practitioners alike have turned their attention to the key role of "value alignment". The term "alignment" in the context of the human-machine relationship was first proposed in 1960 by Norbert Wiener, the "father of cybernetics". In his article "Some Moral and Technical Consequences of Automation", he vividly defined AI alignment: "If we expect to achieve some goal with the help of a machine, and cannot effectively interfere with its operation, then we should make sure that the goal put into the machine is truly the one we wish to achieve."1 Similarly, Leike et al. regard alignment as the coordination between machine goals and human needs, i.e., the machine's ability to understand human intentions and realize them in robust operation. Value alignment is the concrete manifestation, at the level of values, of AI understanding humans, and forms the necessary foundation for human-machine collaboration. The key to achieving value alignment is to endow the machine with an autonomous system consistent with human values, that is, to specify the ethical principles the machine should follow. Distilling the texts of the generative AI guidelines, this paper finds that although news media phrase their ethical principles differently, they largely agree on several key dimensions: privacy and security, transparency, authenticity, accountability, fairness, and protection of intellectual property rights.
从治理角度出发, 伦理原则的确立赋予了生成式人工智能人格化特征, 为生成式人工智能参与新闻生产设置了行为标准基线。具体而言,价值层面的内容增补详细回答了什么是可控可靠的人工智能, 如何对人工智能的行为表现作出道德评价, 如何在技术进步的同时引导人工智能向更加良善的方向发展等关键问题。
From a governance perspective, the establishment of ethical principles gives generative AI a personified character and sets a baseline of behavioral standards for its participation in news production. Specifically, this value-level content answers in detail such key questions as what counts as controllable and reliable AI, how to evaluate AI behavior morally, and how to steer AI in a more benign direction as the technology advances.
从更深层意义来看, 治理语境下伦理原则的确立体现了一种全体系的价值对齐。根据计算机科学的理论观点, 就道德实现而言, 大模型的价值对齐应具备三方面的核心能力, 分别为道德理解能力 (comprehension capability)、道德诊断能力 (diagnosis capability) 和道德矫正能力 (rectification capability)。其中, 道德理解能力是指人工智能系统在多大程度上能理解人类赋予的道德观念和伦理规则。道德诊断能力是指在具体情境中识别其中道德问题和冲突, 并作出合理判断的能力。道德矫正能力是指在作出识别后, 能够及时调整行为和提供问题解决路径的问题。 对应来看, 各新闻媒体首先为工作中所采用的人工智能系统设置了清晰的伦理规则, 依靠基于人类偏好反馈的强化学习构建起机器的道德理解基础。其次, 伦理基线的出现促动了对机器学习过程的监督, 新闻媒体依靠详细的伦理原则清单得以持续审视人工智能自身的道德层次和对道德情境的判断表现。最后, 从人机长期互动角度看, 操作者对透明、真实、公平等应用原则的遵守将内化为人工智能的自身秉性, 进而显著提升其生成符合道德规范的行为选项的能力。概言之, 新闻媒体在生成式人工智能治理中以划定伦理原则为启动条件, 在长期的人机互动过程中将人对伦理原则的遵守转化为机器的道德层次提升, 为良善的生成式人工智能应用奠定了主基调。
In a deeper sense, the establishment of ethical principles in the context of governance reflects a system-wide value alignment. From the theoretical standpoint of computer science, in terms of moral realization the value alignment of large models should involve three core capabilities: comprehension capability, diagnosis capability and rectification capability. Moral comprehension refers to the extent to which an AI system can understand the moral concepts and ethical rules given by humans. Moral diagnosis refers to the ability to identify moral problems and conflicts in concrete situations and make reasonable judgments. Moral rectification refers to the ability, once a problem is identified, to adjust behavior promptly and offer a path to resolution. Correspondingly, news media first set clear ethical rules for the AI systems used in their work, relying on reinforcement learning from human preference feedback to build the machine's foundation of moral understanding. Second, the emergence of an ethical baseline spurs supervision of the machine learning process: with a detailed list of ethical principles, news media can continuously scrutinize the AI's own moral level and its judgment in moral situations. Finally, from the perspective of long-term human-machine interaction, the operator's adherence to applied principles such as transparency, authenticity and fairness will be internalized into the AI's own disposition, significantly enhancing its ability to generate morally compliant behavioral options.
In sum, news media take the delineation of ethical principles as the starting condition of generative AI governance; over long-term human-machine interaction, human adherence to ethical principles is transformed into an elevation of the machine's moral level, setting the keynote for benign generative AI applications.
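The "reinforcement learning based on human preference feedback" mentioned above typically starts by fitting a reward model to pairwise editorial judgments; a minimal numeric sketch of the standard Bradley-Terry preference loss used for such reward models (our illustration, not drawn from the guidelines):

```python
import math

def preference_loss(reward_chosen, reward_rejected):
    """Bradley-Terry loss used in RLHF reward modeling:
    -log(sigmoid(r_chosen - r_rejected)). The loss shrinks as the model
    scores the human-preferred (e.g. more ethical) answer higher."""
    return -math.log(1.0 / (1.0 + math.exp(-(reward_chosen - reward_rejected))))

# An answer that editors judged to respect privacy scores higher:
aligned = preference_loss(reward_chosen=2.0, reward_rejected=-1.0)
misaligned = preference_loss(reward_chosen=-1.0, reward_rejected=2.0)
assert aligned < misaligned  # training pushes rewards toward human preferences
```

Minimizing this loss over many editor-labeled comparisons is what lets the reward model, and ultimately the generator trained against it, internalize the ethical preferences described in the text.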

(二)规制理路:有限辅助场景中的技术适切 (ii) Regulatory rationale: technological appropriateness in limited assistive scenarios

生成式人工智能在海量训练数据和机器学习原理的加持下实现对各类工作内容的程式化模仿,从而能够为新闻生产提供诸多助益。从各新闻媒体发布的指南内容来看,生成式人工智能的应用范围得到了有意的限制,使用方的首要意图并非最大程度释放其效能,而是要在符合新闻伦理要求与提升生产效率之间寻得平衡。正如德国新闻社(DPA)发布的指南内容所指出:"人工智能不会以大型工具形式进入编辑部,而是以小工具形式带来变革",1 新闻媒体普遍对人工智能的应用保持谨慎控制的态度。为了适应真实性、负责任等伦理原则要求,拟人化内容生成、逼真图像生成、完全程序开发等行为被列为明确禁止事项,生成式人工智能被允许参与文字、图像和视频工作本质上是为了帮助新闻工作者实现更完善的创意表达。概言之,当前各新闻媒体在实践中为生成式人工智能框定了一种"有限辅助场景",以确保技术在嵌入新闻生产中处于被驯服状态。
Backed by massive training data and machine learning principles, generative AI can programmatically imitate many kinds of work and thereby offer numerous benefits to news production. Judging from the published guidelines, however, the scope of generative AI applications has been deliberately limited: the primary intention of the users is not to release its full potential, but to strike a balance between meeting journalistic ethics and improving productivity. As the guidelines of the German Press Agency (DPA) put it, "AI will not enter the newsroom as a large tool, but will bring change in the form of small tools";1 news media generally maintain a cautious, controlled attitude toward AI applications. To satisfy ethical principles such as authenticity and accountability, anthropomorphic content generation, photorealistic image generation and full program development are explicitly prohibited, while generative AI is permitted to assist with text, image and video work essentially to help journalists achieve more polished creative expression. In short, news media currently frame a "limited assistive scenario" for generative AI, ensuring that the technology remains tamed as it is embedded in news production.
为了维护有限辅助场景的稳定性, 新闻媒体主张从多路径开展媒介治理, 以提升生成式人工智能技术的适切性。总体上, 可将媒介治理路径分为构建刚性框架、推进多元监督、开展平衡引导三个部分, 这三部分各自发挥了特定功能:
To keep this limited assistive scenario stable, news media advocate pursuing media governance along multiple paths so as to enhance the appropriateness of generative AI technology. Overall, the media governance paths can be divided into three parts: building a rigid framework, promoting diversified supervision, and carrying out balanced guidance, each of which performs a specific function.
其一, 刚性框架提供了规范化的约束标准。具体而言, 刚性框架由法律法规、治理框架与认证系统共同构成, 三者分别代表了严格性、效率性和公正性的内在要求。结合各家新闻媒体的普遍看法, 理想状态下的刚性框架应包含以下要件:以新闻数据保护及监管政策法规为基础, 以注重实时快速反应的敏捷治理框架为主体, 配以受广泛认可的第三方内容认证系统。
First, the rigid framework provides normative and binding standards. Specifically, a rigid framework consists of laws and regulations, a governance framework and an accreditation system, which represent the intrinsic requirements of strictness, efficiency and fairness, respectively. Taking into account the general views of news media, a rigid framework should ideally include the following elements: news data protection and regulatory policies and regulations as the basis, an agile governance framework that focuses on real-time and rapid response as the main body, and a widely recognized third-party content certification system.
其二,多元监督释放了强大的治理效能。新闻生产具有显著的公共性,纵观从事件发生到受众接收的全过程,政府、市场、媒体、社会均参与其中,这就决定了应当以多元视角看待新闻生产的利益和责任主体。对此,媒介治理要摆脱新闻媒体单打独斗的思维,转而开展多元协同的合作治理。换言之,就是要达成"政府政策引导-媒体主责自律-受众反馈参与"的共治格局。
Second, diversified supervision releases strong governance effectiveness. News production is markedly public in nature: across the whole process from an event's occurrence to its reception by audiences, government, market, media and society are all involved, which means the interests and responsibilities of news production should be viewed from a plural perspective. Media governance must therefore move beyond the mindset of news media acting alone and turn to cooperative, multi-actor governance. In other words, a co-governance pattern of "government policy guidance, media primary responsibility and self-discipline, audience feedback and participation" should be reached.
其三, 平衡引导促进了正向的技术演进。在达成谨慎使用的基本共识之外,各新闻媒体同样注意到智能技术的革新意义, 强调要对人工智能技术发展保持开放包容态度。为此,新闻媒体探索出值得各行业借鉴的平衡引导方式:一是强调各类人工智能系统之间的耦合性,主张同时启用不同种类、同等级别的多个人工智能系统,以实现积极的相互制约。二是重视工作者和技术工具两者发展的同步性, 提出要开展持续性的培训工作,以强化新闻工作者对人工智能系统各方面进展的认知。三是遵循智能技术落地的渐进性, 提出组建试点团队的方式, 通过反复试验来确保生成式人工智能有助于数据挖掘、文本和图像分析以及翻译等工作的开展。
Third, balanced guidance promotes positive technological evolution. Beyond the basic consensus on prudent use, news media also note the innovative significance of intelligent technology, stressing an open and inclusive attitude toward the development of AI. To this end, they have explored balanced guidance methods worth emulating across industries: first, emphasizing the coupling among AI systems and advocating the simultaneous use of multiple AI systems of different kinds but comparable levels to achieve positive mutual restraint; second, valuing the synchronized development of workers and technical tools, proposing continuous training to strengthen journalists' awareness of progress in all aspects of AI systems; third, respecting the gradual nature of deploying intelligent technology, proposing pilot teams that use repeated trials to ensure generative AI genuinely aids data mining, text and image analysis, translation and similar tasks.
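The "positive mutual restraint" among parallel AI systems described above can be sketched as a simple cross-check that escalates disagreement to a human editor (the system names and the quorum rule are hypothetical):

```python
from collections import Counter
from typing import Optional

def cross_check(outputs: dict, quorum: int = 2) -> Optional[str]:
    """Accept a verdict only when at least `quorum` independent AI systems
    agree; otherwise return None to escalate to a human editor.
    (An illustrative rule, not taken from the guidelines.)"""
    label, count = Counter(outputs.values()).most_common(1)[0]
    return label if count >= quorum else None

# Three comparable systems assess the same claim:
outputs = {"system_a": "factual", "system_b": "factual", "system_c": "unverified"}
print(cross_check(outputs))  # "factual": two of the three systems agree
```

When no quorum is reached, the item falls back to the human-led review process, which is exactly the restraining role the parallel systems are meant to play.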

(三)愿景引领: 人类智能主导的永续社会构建 (iii) Vision Leadership: Human Intelligence-led Construction of a Sustainable Society

从长远角度看,新闻生产中的生成式人工智能加入指向了更为宏伟的愿景:推进人类智能主导的人机协同演进,构建负责任、可信赖的智能媒介生态,进而助力社会的永续性发展。其中,人机协同强调通过人类智能和机器的计算能力相结合达到更高效、准确和创新的工作成效,机器在协同过程中发挥智能辅助作用,而不替代人类原创的思考和决策能力。具体到新闻行业,从业者基于新闻专业意识、批判性思维和情感体验彰显人类在新闻生产中的不可取代地位,构成了凸显人类智能主导而使人工智能居于辅助地位的方向指引。
From a long-term perspective, the introduction of generative AI into news production points to a grander vision: advancing human-machine collaborative evolution led by human intelligence, building a responsible and trustworthy intelligent media ecology, and thereby contributing to the sustainable development of society. Here, human-machine collaboration emphasizes combining human intelligence with machine computing power to achieve more efficient, accurate and innovative results, with machines playing an intelligent supporting role rather than replacing original human thinking and decision-making. In the news industry specifically, practitioners demonstrate the irreplaceable position of humans in news production through journalistic professionalism, critical thinking and emotional experience, providing a directional guide that keeps human intelligence dominant and AI in an auxiliary position.
本文发现,多家媒体机构指南频繁出现"确保人类地位""促进而非取代人类""以人类智能为核心"等表述,强调生成式人工智能的应用要以人类智能主导为前提,这有助于维护新闻业者的自由尊严,进而产生正向媒介效应以"维护受众信任"。就采编流程而言,新闻业者作为新闻生产的重要主体,应采用"人-机-人"采编流程。换言之,新闻生产中的思考和决策始于人类编辑,其对人工智能辅助生成内容的监管和核查应落实到新闻生产的全流程,且务必以人类编辑作为采编流程最后一环的唯一主体,例如巴西媒体 Núcleo 明确禁止发布未经人工审核的含有人工智能生成内容的新闻报道。1 人类主责的关键体现在于新闻编辑重点核验新闻真实性和原创性:其一,真实性核验要求在新闻采编的全流程融入事实核查,在必要时辅以机器核验。例如,美联社生成式人工智能指南强调,照片、视频或音频均不能使用人工智能技术更改,即不允许使用人工智能添加或减去任何要素,建议新闻编辑使用"反向图像搜索"的方法以帮助验证图像的来源和真实性。1 新闻媒体还可以使用事实核查检测器"Anti-Deepfake"来识别虚假视频。2 其二,原创性核验维护了知识产权,对不必要的知识产权纠纷做出了前置性的风险规避。美国媒体 Insider 对人工智能生成内容的原创性有着非常严苛的要求,需要新闻编辑使用第三方软件检验新闻内容原创性,比如使用谷歌检索和 Grammarly 的"抄袭检索"功能进行核验。3 总体上,智媒时代人机协同演进的进程应遵循人类智能主导的行动路线,而这有赖于新闻业者对新闻生产主体性的再认识以及新闻专业意识的再升华。
This paper finds that the guidelines of many media organizations frequently contain expressions such as "ensuring human status", "promoting rather than replacing humans" and "keeping human intelligence at the core", emphasizing that the application of generative AI must be premised on the dominance of human intelligence; this helps safeguard the freedom and dignity of journalists and in turn generates the positive media effect of "maintaining audience trust". As for the editorial process, journalists, as key subjects of news production, should adopt a "human-machine-human" workflow. In other words, thinking and decision-making in news production begin with human editors, whose supervision and verification of AI-assisted content should run through the entire process, with a human editor as the sole subject of the final link; the Brazilian outlet Núcleo, for example, explicitly prohibits publishing news stories containing AI-generated content that have not undergone human review.1 The key manifestation of human primary responsibility is that news editors focus on verifying the authenticity and originality of news. First, authenticity verification requires integrating fact-checking into the whole news-gathering and editing process, supplemented by machine verification where necessary. For example, the Associated Press's generative AI guidelines stress that photos, video and audio may not be altered with AI, i.e., AI may not be used to add or remove any element, and suggest that news editors use "reverse image search" to help verify the source and authenticity of images.1 News media can also use the fact-checking detector "Anti-Deepfake" to identify fake videos.2 Second, originality verification safeguards intellectual property rights and preemptively averts unnecessary intellectual property disputes. The U.S. outlet Insider imposes very strict originality requirements on AI-generated content, requiring editors to check originality with third-party software, for example via Google search and Grammarly's plagiarism-detection feature.3 Overall, the evolution of human-machine collaboration in the smart media era should follow a course of action led by human intelligence, which depends on journalists' renewed recognition of their subjectivity in news production and a further sublimation of journalistic professionalism.
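The "human-machine-human" gate described above, where no AI-assisted story is published without human verification, can be sketched as a checklist (all names are hypothetical; in practice each check would be backed by newsroom tooling such as reverse image search or a plagiarism detector):

```python
from dataclasses import dataclass

@dataclass
class Story:
    uses_ai_content: bool
    human_reviewed: bool       # final sign-off by a human editor
    facts_verified: bool       # e.g. reverse image search on AI-touched media
    originality_checked: bool  # e.g. a third-party plagiarism check

def may_publish(story: Story) -> bool:
    """A story containing AI-generated content must clear every human-led
    check; purely human stories follow the normal editorial sign-off."""
    if not story.uses_ai_content:
        return story.human_reviewed
    return story.human_reviewed and story.facts_verified and story.originality_checked

draft = Story(uses_ai_content=True, human_reviewed=True,
              facts_verified=True, originality_checked=False)
print(may_publish(draft))  # False: originality has not yet been verified
```

The design point is that the human editor remains the sole subject of the final link: the gate can only pass once a human has signed off on every verification step.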
在确保人类智能主导的前提下,应用人工智能技术造福人类社会已获得普遍共识。联合国在 2024 年 3 月 11 日发布重要决议,明确指出安全可靠、值得信赖的人工智能系统是实现可持续发展的重要机遇4。具体而言,就是要更好地利用人工智能造福人类及后代,推动形成具有广泛共识的国际人工智能治理框架和规范,以构建人类命运共同体。新闻媒体作为信息生态系统的主要构建者,在人类社会进步中具有不可或缺的作用。当前,新闻生产已成为人工智能重要应用场域,同时也是人机协同关系演进的关键生发场所,构建负责任、可信赖的智能媒介生态将极大推动理想社会愿景的达成。首先,生成式人工智能可以实时识别与全球公共利益相关的议题并高效生产相关新闻内容,通过多种媒介的传播快速扩大议题的影响力。同时,这一过程缩短了议程设置效果产生的时间滞后,可以在更短的时间内制造舆论共识,引导更多力量参与到全球问题的解决当中。其次,新闻媒体是社会中各种群体观点的承载者,在安全可靠且负责任的生成式人工智能技术辅助下,能够更高效地生成非歧视、兼顾多元文化群体的新闻内容。尤其是在实现人机价值对齐的情况下,负责任的生成式人工智能可以帮助纠正潜在的歧视和偏见信息,有助于构建相互尊重、平等包容的和谐社会。最后,生成式人工智能辅助新闻生产可以基于特定主题高效梳理现有知识内容,在有限的篇幅中实现高度的智慧集成,并通过精准个性化推送惠及更多受众,进而加速学习型社会形成并助力全社会民众的知识结构迭代,更大程度地集合人类智慧力量以解决文明进程中不断出现的新挑战。从动态视角看待治理过程,永续性发展目标的不断达成将促进新的社会伦理产生,反馈传导至新闻生产领域,实现对新闻伦理的再塑。
While ensuring the dominance of human intelligence, a general consensus has formed on applying AI technology for the benefit of human society. On March 11, 2024, the United Nations adopted an important resolution stating clearly that safe, secure and trustworthy AI systems present an important opportunity for achieving sustainable development.4 Specifically, this means making better use of AI for the benefit of humanity and future generations, and promoting a broadly agreed international framework and norms for AI governance, so as to build a community with a shared future for mankind. As principal builders of the information ecosystem, news media play an indispensable role in the progress of human society. News production has now become a major field of AI application and a key site where the human-machine collaborative relationship evolves; building a responsible and trustworthy intelligent media ecology will greatly advance the realization of the ideal social vision. First, generative AI can identify issues of global public interest in real time and efficiently produce related news content, rapidly expanding an issue's influence through dissemination across multiple media. At the same time, this process shortens the time lag of agenda-setting effects, creating public consensus more quickly and guiding more forces to participate in solving global problems. Second, news media carry the views of society's many groups; assisted by safe, reliable and responsible generative AI, they can more efficiently produce non-discriminatory news content attentive to multicultural groups. Especially where human-machine value alignment is achieved, responsible generative AI can help correct potentially discriminatory and biased information, contributing to a harmonious society of mutual respect, equality and inclusion.
Finally, generative AI-assisted news production can efficiently organize existing knowledge on specific topics, integrate a high degree of wisdom within limited space, and reach wider audiences through precise personalized delivery, thereby accelerating the formation of a learning society, aiding the iteration of the whole society's knowledge structure, and pooling human wisdom to meet the new challenges that keep emerging in the course of civilization. Viewing the governance process dynamically, the continuous achievement of sustainability goals will foster new social ethics, which feed back into the field of news production and reshape journalistic ethics.

六、结论与讨论 VI. Conclusions and Discussion

本文以 16 个国家和地区的 27 个新闻媒体发布的生成式人工智能指南为研究对象,采用扎根理论的方法进行逐层编码和理论建构,获得价值层面的伦理基线、技术层面的规制理路和现实层面的愿景引领三个要素,包含"新闻伦理""有限使用""媒介治理""媒介效应"和"社会愿景"五个主范畴的新闻生产中的生成式人工智能治理机制分析框架。本文的主要研究结论包括:第一,治理过程受到新闻伦理指引,具体由隐私安全、透明性、真实性等原则构成。第二,在伦理原则的指引下,新闻生产中的生成式人工智能应用处于被有限使用的状态中。为了确保这一状态的稳定性,新闻媒体主张从刚性框架、多元监督、平衡引导三个方面开展媒介治理。第三,受到规范的生成式人工智能应用方式将放大正媒介效应,从而有助于达成人类智能主导下的永续性发展的社会愿景。第四,社会愿景的阶段性达成将催生出新的社会伦理原则,进而促动新闻伦理的再升级。
Taking the generative AI guidelines published by 27 news media in 16 countries and regions as its research object, this paper applies the grounded theory method of layer-by-layer coding and theory building to obtain an analytical framework for the governance mechanism of generative AI in news production. The framework comprises three elements, an ethical baseline at the value level, a regulatory rationale at the technical level and visionary leadership at the reality level, together with five main categories: "journalistic ethics", "limited use", "media governance", "media effect" and "social vision". The main conclusions are as follows. First, the governance process is guided by journalistic ethics, consisting of principles such as privacy and security, transparency and authenticity. Second, under the guidance of these ethical principles, generative AI applications in news production remain in a state of limited use; to keep this state stable, news media advocate media governance in three respects: a rigid framework, diversified supervision and balanced guidance. Third, regulated application of generative AI will amplify positive media effects, helping to achieve the social vision of sustainable development under the dominance of human intelligence. Fourth, the staged achievement of the social vision will give rise to new social ethical principles, which in turn drive a further upgrading of journalistic ethics.
图 3 技术赋能新闻生产的实现过程 Figure 3 Realization process of technology-enabled news production
从更普适角度看,本文发现的治理机制可被进一步提炼为各类技术赋能新闻生产的实现过程(见图 3)。首先,新技术介入新闻生产时需要以新闻伦理为指引,在遵循特定伦理原则的基础上推进技术适切,明确操作环节的"可为"与"不可为"。其次,技术适切的好坏将对媒介的社会效应产生正向或负向影响。再次,正向媒介效应将促成更具广泛意义的社会愿景达成,为推动人类社会发展贡献新闻媒体智慧。最后,社会愿景的达成催生更具进步意义的社会伦理产生,进而丰富新闻伦理意涵。如此,技术赋能新闻生产将处于不断发展的动态进程当中,实现人文关怀与工具理性的高度融合。
From a more general perspective, the governance mechanism identified in this paper can be further distilled into the realization process of technology-empowered news production of all kinds (see Figure 3). First, when a new technology enters news production it must be guided by journalistic ethics, advancing technological appropriateness on the basis of specific ethical principles and clarifying what may and may not be done at each operational step. Second, the quality of that technological appropriateness will exert a positive or negative influence on the media's social effects. Third, positive media effects will help achieve a social vision of broader significance, contributing the wisdom of the news media to the advancement of human society. Finally, the achievement of the social vision gives rise to more progressive social ethics, enriching the meaning of journalistic ethics. In this way, technology-empowered news production remains in a continuously developing dynamic process, realizing a high degree of integration between humanistic concern and instrumental rationality.

  1. 翁之颢. 面对 AIGC, 新闻业怎样寻找出路?[J]. 视听界, 2023, (03):17-20.
    Weng Zhihao. Facing AIGC, how can journalism find a way out? [J]. Audiovisual World, 2023, (03):17-20.
  2. 张超.新闻生产中的算法风险: 成因、类型与对策[J]. 中国出版,2018,(13):38-42.
    ZHANG Chao. Algorithmic risk in news production: causes, types and countermeasures[J]. China Publishing,2018,(13):38-42.
    陈力丹, 荣雪燕. 从 ChatGPT 到 Sora——生成式 AI 浪潮下强化新闻专业意识的再思考[J]. 新闻爱好者, 2024,(04):4-8.
    Chen Lidan, Rong Xueyan. From ChatGPT to Sora: Rethinking journalism professionalism under the wave of generative AI [J]. Journalism Enthusiast, 2024,(04):4-8.
    胡宏超, 谢新洲. 人工智能背景下虚假新闻的发展趋势与治理问题[J]. 新闻爱好者, 2023,(10):9-15.
    HU Hongchao, XIE Xinzhou. The development trend and governance of fake news in the context of artificial intelligence[J]. Journalism Enthusiast, 2023,(10):9-15.
    靖鸣, 娄翠. 人工智能技术在新闻传播中伦理失范的思考[J]. 出版广角, 2018, (01):9-13.
    Jing Ming, Lou Cui. Reflections on ethical anomie of artificial intelligence technology in news dissemination [J]. View on Publishing, 2018, (01):9-13.
    朱嘉珺. 生成式人工智能虚假有害信息规制的挑战与应对——以 ChatGPT 的应用为引[J]. 比较法研究, 2023, (05):34-54.
    Zhu Jiajun. Challenges and responses to the regulation of false and harmful information from generative artificial intelligence: taking the application of ChatGPT as a starting point [J]. Journal of Comparative Law, 2023, (05):34-54.
  3. 曾晓. ChatGPT 新思考: AIGC 模式下新闻内容生产的机遇、挑战及规制策略[J]. 出版广角, 2023,(07):57-61.
    Zeng Xiao. New thinking on ChatGPT: opportunities, challenges and regulatory strategies of news content production under the AIGC model [J]. View on Publishing, 2023,(07):57-61.
    胡宏超, 谢新洲. 人工智能背景下虚假新闻的发展趋势与治理问题[J]. 新闻爱好者, 2023, (10):9-15.
    HU Hongchao, XIE Xinzhou. The development trend and governance of fake news in the context of artificial intelligence[J]. Journalism Enthusiast, 2023, (10):9-15.
    蒋雪颖, 刘欣, 许静. 人机协同视角下生成式 AI 新闻的前沿应用与规制进路[J]. 新闻爱好者, 2023,(11):38-43.
    Jiang Xueying, Liu Xin, Xu Jing. Frontier applications and regulation of generative AI news from the perspective of human-machine collaboration [J]. Journalism Enthusiast, 2023,(11):38-43.
    崔燕.生成式人工智能介入新闻生产的价值挑战与优化策略[J]. 当代电视, 2024,(02): 104-108.
    Cui Yan. Value Challenges and Optimization Strategies of Generative Artificial Intelligence Intervention in News Production[J]. Contemporary Television, 2024,(02): 104-108.
    谢梅, 王世龙. ChatGPT 出圈后人工智能生成内容的风险类型及其治理[J]. 新闻界, 2023, (08):51-60.
    Xie Mei, Wang Shilong. Risk types and governance of AI-generated content after ChatGPT's breakout [J]. Journalism, 2023, (08):51-60.
    韩晓宁, 耿晓梦, 周恩泽. 人机共生与价值共创: 智媒时代新闻从业者与人工智能的关系重塑[J]. 中国编辑, 2023, (03):9-14.
    Han Xiaoning, Geng Xiaomeng, Zhou Enze. Human-machine symbiosis and value co-creation: reshaping the relationship between journalists and artificial intelligence in the age of smart media [J]. Chinese Editors Journal, 2023, (03):9-14.
    许静, 刘欣, 蒋雪颖. 新闻生产与生成式人工智能人机耦合的实践进路[J]. 南昌大学学报(人文社会科学版), 2023,54(05):114-122.
    Xu Jing, Liu Xin, Jiang Xueying. A practical approach to the human-machine coupling of news production and generative artificial intelligence [J]. Journal of Nanchang University (Humanities and Social Sciences), 2023,54(05):114-122.
    蒋雪颖, 许静. 人机交互中的生成式人工智能新闻: 主体赋能、危机与应对[J]. 河南社会科学, 2023,31(12):105-113.
    Jiang Xueying, Xu Jing. Generative artificial intelligence journalism in human-computer interaction: subject empowerment, crisis and response [J]. Henan Social Sciences, 2023,31(12):105-113.
    陈力丹, 荣雪燕. 从 ChatGPT 到 Sora——生成式 AI 浪潮下强化新闻专业意识的再思考[J]. 新闻爱好者, 2024,(04):4-8.
    Chen Lidan, Rong Xueyan. From ChatGPT to Sora: Rethinking journalism professionalism under the wave of generative AI [J]. Journalism Enthusiast, 2024,(04):4-8.
  4. 陈向明. 质的研究方法与社会科学研究[M]. 北京: 教育科学出版社, 2000: 327.
    Chen, Xiangming. Qualitative Research Methods and Social Science Research [M]. Beijing: Educational Science Press, 2000: 327.
    吴毅, 吴刚, 马颂歌. 扎根理论的起源、流派与应用方法述评——基于工作场所学习的案例分析[J]. 远程教育杂志, 2016,35(03):32-41.
    Wu Yi, Wu Gang, Ma Songge. A review of the origins, schools, and application methods of grounded theory: a case study based on workplace learning [J]. Journal of Distance Education, 2016,35(03):32-41.
  5. 注释: 《全球人工智能守则》由美国新闻媒体联盟牵头发起,共有 14 国 31 个媒体机构签署加入。为方便统计,本文将此守则纳入美国国别且单独算作一个集合型的媒体机构。
    Note: The Global Code on Artificial Intelligence was initiated by the News Media Alliance in the United States and has been signed by 31 media organizations from 14 countries. For ease of statistics, this paper counts the code under the United States and treats it as a single collective media organization.
    注释: 非洲媒体监管协会是一个建立于南非且影响全非洲的机构。为方便统计,本文将此守则看作一个从属于非洲地区的集体守则。
    Note: The African Media Regulators Association (AMRA) is an organization established in South Africa with an Africa-wide reach. For statistical purposes, this code is treated as a collective code subordinate to the African region.
    陈向明. 质的研究方法与社会科学研究[M]. 北京: 教育科学出版社, 2000: 332.
    Chen, Xiangming. Qualitative Research Methods and Social Science Research [M]. Beijing: Educational Science Press, 2000: 332.
  6. 陈向明. 扎根理论的思路和方法[J]. 教育研究与实验, 1999,(04):58-63+73.
    Chen Xiangming. Ideas and methods of grounded theory [J]. Educational Research and Experimentation, 1999,(04):58-63+73.
    Pandit N R. The creation of theory: A recent application of the grounded theory method[J]. The Qualitative Report, 1996, 2(4): 1-15.
  7. Wiener N. Some Moral and Technical Consequences of Automation: As machines learn they may develop unforeseen strategies at rates that baffle their programmers[J]. Science, 1960, 131(3410): 1355-1358.
    Leike J, Krueger D, Everitt T, et al. Scalable agent alignment via reward modeling: a research direction[J]. arXiv preprint arXiv:1811.07871, 2018.
    胡正荣, 闫佳琦. 生成式人工智能的价值对齐比较研究——基于 2012-2023 年十大国际新闻生成评论的实验[J]. 新闻大学, 2024, (03):1-17+117.
    Hu Zhengrong, Yan Jiaqi. A comparative study of value alignment in generative artificial intelligence: an experiment based on generated comments on the top ten international news stories of 2012-2023 [J]. Journalism University, 2024, (03):1-17+117.
    矣晓沅, 谢幸. 大模型道德价值观对齐问题剖析[J]. 计算机研究与发展, 2023,60(09): 1926-1945.
    Yi Xiaoyuan, Xie Xing. An analysis of moral value alignment in large models [J]. Journal of Computer Research and Development, 2023,60(09): 1926-1945.
  8. DPA. Offen, verantwortungsvoll und transparent – Die Guidelines der dpa für Künstliche Intelligenz [EB/OL]. [2023-04-03].
  9. Núcleo. POLÍTICA DE USO DE INTELIGÊNCIA ARTIFICIAL [EB/OL]. [2023-05-18]. https://nucleo.jor.br/politica-ia/.
  10. AP. Standards around generative AI [EB/OL]. [2023-08-16]. https://blog.ap.org/standards-around-generative-ai.
    栾轶玫. AIGC 在新闻生产中的风险控制[J]. 视听界, 2023,(04).
    Luan Yimei. AIGC risk control in news production [J]. Audiovisual World, 2023,(04).
    Insider. My editor's note to the newsroom on AI: Let's think of it like a 'bicycle of the mind' [EB/OL]. [2023-04-14].
    https://www.businessinsider.com/how-insider-newsroom-will-use-ai-2023-4?international=true&r=US.
    联合国. 抓住安全、可靠和值得信赖的人工智能系统带来的机遇, 促进可持续发展[EB/OL]. [2024-03-11].
    United Nations. Seizing the opportunities of safe, secure and trustworthy artificial intelligence systems for sustainable development [EB/OL]. [2024-03-11].
    https://documents.un.org/doc/undoc/ltd/n24/065/91/pdf/n2406591.pdf?token=hOEluYxDe3tc4KOSBd&fe=true.