Bilingual snapshot of https://www.theguardian.com/technology/article/2024/jul/14/ais-oppenheimer-moment-autonomous-weapons... saved by the user on 2024-07-16 14:30, with translation provided by 沉浸式翻譯 (Immersive Translate).
A drone operated by AI
Composite: The Guardian/Getty Images

AI’s ‘Oppenheimer moment’: autonomous weapons enter the battlefield

The military use of AI-enabled weapons is growing, and the industry that provides them is booming

A squad of soldiers is under attack and pinned down by rockets in the close quarters of urban combat. One of them makes a call over his radio, and within moments a fleet of small autonomous drones equipped with explosives fly through the town square, entering buildings and scanning for enemies before detonating on command. One by one the suicide drones seek out and kill their targets. A voiceover on the video, a fictional ad for multibillion-dollar Israeli weapons company Elbit Systems, touts the AI-enabled drones’ ability to “maximize lethality and combat tempo”.

While defense companies like Elbit promote their new advancements in artificial intelligence (AI) with sleek dramatizations, the technology they are developing is increasingly entering the real world.

The Ukrainian military has used AI-equipped drones mounted with explosives to fly into battlefields and strike at Russian oil refineries. American AI systems identified targets in Syria and Yemen for airstrikes earlier this year. The Israel Defense Forces, meanwhile, used another kind of AI-enabled targeting system to label as many as 37,000 Palestinians as suspected militants during the first weeks of its war in Gaza.

A drone with AI integration used to detect explosive devices in humanitarian de-mining in the Zhytomyr region of Ukraine in 2023. Photograph: Maxym Marusenko/NurPhoto/Shutterstock

Growing conflicts around the world have acted as both accelerant and testing ground for AI warfare, experts say, while making it even more evident how unregulated the nascent field is. The expansion of AI in conflict has shown that national militaries have an immense appetite for the technology, despite how unpredictable and ethically fraught it can be. The result is a multibillion-dollar AI arms race that is drawing in Silicon Valley giants and states around the world.

The refrain among diplomats and weapons manufacturers is that AI-enabled warfare and autonomous weapons systems have reached their “Oppenheimer moment”, a reference to J Robert Oppenheimer’s development of the atomic bomb during the second world war. Depending on who is invoking the physicist, the phrase is either a triumphant prediction of a new, peaceful era of American hegemony or a grim warning of a horrifically destructive power.

Elbit Systems is developing AI-enabled offensive drones to ‘maximize lethality and combat tempo’ on the battlefield. Photograph: Baz Ratner/Reuters

Altogether, the US military has more than 800 active AI-related projects and requested $1.8bn worth of funding for AI in the 2024 budget alone. The flurry of investment and development has also intensified longstanding debates about the future of conflict. As the pace of innovation speeds ahead, autonomous weapons experts warn that these systems are entrenching themselves into militaries and governments around the world in ways that may fundamentally change society’s relationship with technology and war.

Palantir has become involved in AI projects including what it calls the US army’s ‘first AI-defined vehicle’. Photograph: Budrul Chukrut/Sopa Images/Rex/Shutterstock

“There’s a risk that over time we see humans ceding more judgment to machines,” said Paul Scharre, executive vice-president and director of studies at the Center for a New American Security thinktank. “We could look back 15 or 20 years from now and realize we crossed a very significant threshold.”

The AI boom comes for warfare

While the rapid advancements in AI in recent years have created a surge of investment, the move toward increasingly autonomous weapons systems in warfare goes back decades. Advancements had rarely appeared in public discourse, however, and instead were the subject of scrutiny among a relatively small group of academics, human rights workers and military strategists.

What has changed, researchers say, is both increased public attention to everything AI and genuine breakthroughs in the technology. Whether a weapon is truly “autonomous” has always been the subject of debate. Experts and researchers say autonomy is better understood as a spectrum rather than a binary, but they generally agree that machines are now able to make more decisions without human input than ever before.

Composite: The Guardian/Getty Images

The increasing appetite for combat tools that blend human and machine intelligence has led to an influx of money to companies and government agencies that promise they can make warfare smarter, cheaper and faster.

The Pentagon plans to spend $1bn by 2025 on its Replicator Initiative, which aims to develop swarms of unmanned combat drones that will use artificial intelligence to seek out threats. The air force wants to allocate around $6bn over the next five years to research and development of unmanned collaborative combat aircraft, seeking to build a fleet of 1,000 AI-enabled fighter jets that can fly autonomously. The Department of Defense has also secured hundreds of millions of dollars in recent years to fund its secretive AI initiative known as Project Maven, a venture focused on technologies like automated target recognition and surveillance.

Demonstrators protest Google’s contract with Israel to provide facial recognition and other technologies amid the Israel-Hamas war, on 14 December 2023. Photograph: Santiago Mejia/AP

Military demand for increased AI and autonomy has been a boon for tech and defense companies, which have won huge contracts to help develop various weapons projects. Anduril, a company that is developing lethal autonomous attack drones, unmanned fighter jets and underwater vehicles, is reportedly seeking a $12.5bn valuation. Founded by Palmer Luckey – a 31-year-old, pro-Trump tech billionaire who sports Hawaiian shirts and a soul patch – Anduril secured a contract earlier this year to help build the Pentagon’s unmanned warplane program. The Pentagon has already sent hundreds of the company’s drones to Ukraine, and last month approved the potential sale of $300m worth of its Altius-600M-V attack drones to Taiwan. Anduril’s pitch deck, according to Luckey, claims the company will “save western civilization”.

Palantir, the tech and surveillance company founded by billionaire Peter Thiel, has become involved in AI projects ranging from Ukrainian de-mining efforts to building what it calls the US army’s “first AI-defined vehicle”. In May, the Pentagon announced it awarded Palantir a $480m contract for its AI technology that helps with identifying hostile targets. The military is already using the company’s technology in at least two military operations in the Middle East.

Helsing was valued at $5.4bn this month after raising almost $500m on the back of its AI defense software. Photograph: Pavlo Gonchar/Sopa Images/Rex/Shutterstock

Anduril and Palantir, respectively named after a legendary sword and magical seeing stone in The Lord of the Rings, represent just a slice of the international gold rush into AI warfare. Helsing, which was founded in Germany, was valued at $5.4bn this month after raising almost $500m on the back of its AI defense software. Elbit Systems meanwhile received about $760m in munitions contracts in 2023 from the Israeli ministry of defense, it disclosed in a financial filing from March. The company reported around $6bn in revenue last year.

“The money that we’re seeing being poured into autonomous weapons and the use of things like AI targeting systems is extremely concerning,” said Catherine Connolly, monitoring and research manager for the organization Stop Killer Robots.

Big tech companies also appear more willing to embrace the defense industry and its use of AI than in years past. In 2018, Google employees protested the company’s involvement in the military’s Project Maven, arguing that it violated ethical and moral responsibilities. Google ultimately caved to the pressure and severed its ties with the project. Since then, however, the tech giant has secured a $1.2bn deal with the Israeli government and military to provide cloud computing services and artificial intelligence capabilities.

Google’s response has changed, too. After employees protested against the Israeli military contract earlier this year, Google fired dozens of them. CEO Sundar Pichai bluntly told staff that “this is a business”. Similar protests at Amazon in 2022 over its involvement with the Israeli military resulted in no change of corporate policy.

A double black box

As money flows into defense tech, researchers warn that many of these companies and technologies are able to operate with extremely little transparency and accountability. Defense contractors are generally protected from liability when their products accidentally do not work as intended, even when the results are deadly, and the classified tendencies of the US national security apparatus means that companies and governments are not obligated to share the details of how these systems work.

When governments take already secretive and proprietary AI technologies and then place them within the clandestine world of national security, it creates what University of Virginia law professor Ashley Deeks calls a “double black box”. The dynamic makes it extremely difficult for the public to know whether these systems are operating correctly or ethically. Often, it appears that they leave wide margins for mistakes. In Israel, an investigation from +972 Magazine reported that the military relied on information from an AI system to determine targets for airstrikes despite knowing that the software made errors in around 10% of cases.

The proprietary nature of these systems means that arms monitors sometimes even rely on analyzing drones that have been downed in combat zones such as Ukraine to get an idea of how they actually function.

“I’ve seen a lot of areas of AI in the commercial space where there’s a lot of hype. The term ‘AI’ gets thrown around a lot. And once you look under the hood, it’s maybe not as sophisticated as the advertising,” Scharre said.

A human in the loop

While companies and national militaries are reticent to give details on how their systems actually operate, they do engage in broader debates around moral responsibilities and regulations. A common concept among diplomats and weapons manufacturers alike when discussing the ethics of AI-enabled warfare is that there should always be a “human in the loop” to make decisions instead of ceding total control to machines. However, there is little agreement on how to implement human oversight.

Activists from the Campaign to Stop Killer Robots stage a protest at the Brandenburg Gate in Berlin, Germany, on 21 March 2019. Photograph: Annegret Hilse/Reuters

“Everyone can get on board with that concept, while simultaneously everybody can disagree about what it actually means in practice,” said Rebecca Crootof, a law professor at the University of Richmond and an expert on autonomous warfare. “It isn’t that useful in terms of actually directing technological design decisions.” Crootof is also the first visiting fellow at the US Defense Advanced Research Projects Agency, or Darpa, but agreed to speak in an independent capacity.

Complex questions of human psychology and accountability throw a wrench into the high-level discussions of humans in loops. An example that researchers cite from the tech industry is the self-driving car, which often puts a “human in the loop” by allowing a person to regain control of the vehicle when necessary. But if a self-driving car makes a mistake or influences a human being to make a wrong decision, is it fair to put the person in the driver’s seat in charge? If a self-driving car cedes control to a human moments before a crash, who is at fault?

Protesters gather outside the gates of Elbit System’s factory in Leicester, UK, on 10 July 2024. Photograph: Martin Pope/Zuma Press Wire/Rex/Shutterstock

“Researchers have written about a sort of ‘moral crumple zone’ where we sometimes have humans sitting in the cockpit or driver’s seat just so that we have someone to blame when things go wrong,” Scharre said.

A struggle to regulate

At a meeting in Vienna in late April of this year, international organizations and diplomats from 143 countries gathered for a conference held on regulating the use of AI and autonomous weapons in war. After years of failed attempts at any comprehensive treaties or binding UN security council resolutions on these technologies, the plea to countries from Austria’s foreign minister, Alexander Schallenberg, was more modest than an outright ban on autonomous weapons.

“At least let us make sure that the most profound and far-reaching decision, who lives and who dies, remains in the hands of humans and not of machines,” Schallenberg told the audience.

Organizations such as the International Committee of the Red Cross and Stop Killer Robots have called for prohibitions on specific types of autonomous weapons systems for more than a decade, as well as overall rules that would govern how the technology can be deployed. These would cover certain uses such as being able to commit harm against people without human input or limit the types of combat areas that they can be used in.

A drone with AI integration is used to de-mine in the Zhytomyr region of Ukraine on 20 September 2023. Photograph: Maxym Marusenko/NurPhoto/Shutterstock

The proliferation of the technology has also forced arms control advocates to change some of their language, an acknowledgment that they are losing time in the fight for regulation.

“We called for a preemptive ban on fully autonomous weapons systems,” said Mary Wareham, deputy director of the crisis, conflict and arms division at Human Rights Watch. “That ‘preemptive’ word is no longer used nowadays, because we’ve come so much closer to autonomous weapons.”

Increasing the checks on how autonomous weapons can be produced and used in warfare has extensive international support – except among the states most responsible for creating and utilizing the technology. Russia, China, the United States, Israel, India, South Korea and Australia all disagree that there should be any new international law around autonomous weapons.

Defense companies and their influential owners are also pushing back on regulations. Luckey, Anduril’s founder, has made vague commitments to having a “human in the loop” in the company’s technology while publicly opposing regulation and bans on autonomous weapons. Palantir’s CEO, Alex Karp, has repeatedly invoked Oppenheimer, characterizing autonomous weapons and AI as a global race for supremacy against geopolitical foes like Russia and China.

Soldiers from the British army used an AI engine during an exercise in Estonia on 2 June 2021. Photograph: Mike Whitehurst/Ministry of defence/Crown Copyright/PA

This lack of regulations is not a problem unique to autonomous weapons, experts say, and is part of a broader issue that international legal regimes don’t have good answers for when a technology malfunctions or a combatant makes a mistake in conflict zones. But the concern from experts and arms control advocates is that once these technologies are developed and integrated into militaries, they will be here to stay and even harder to regulate.

“Once weapons are embedded into military support structures, it becomes more difficult to give them up, because they’re counting on it,” Scharre said. “It’s not just a financial investment – states are counting on using it as how they think about their national defense.”

If development of autonomous weapons and AI is anything like other military technologies, there is also the likelihood that their use will trickle down into domestic law enforcement and border patrol agencies to entrench the technology even further.

“A lot of the time the technologies that are used in war come home,” Connolly said.

The increased attention to autonomous weapons systems and AI over the last year has also given regulation advocates some hope that political pressure in favor of establishing international treaties will grow. They also point to efforts such as the campaign to ban landmines, in which Human Rights Watch director Wareham was a prominent figure, as proof that there is always time for states to walk back their use of weapons of war.

“It’s not going to be too late. It’s never too late, but I don’t want to get to the point where we’re saying: ‘How many more civilians must die before we take action on this?’” Wareham said. “We’re getting very, very close now to saying that.”
