
Technology

The surprising promise and profound perils of AIs that fake empathy

Millions of us are turning to chatbots for emotional support. But there are good reasons to think AIs will never be capable of genuine empathy, raising profound questions about their role in society
By Amanda Ruggeri
6 March 2024
Illustration: Giacomo Gambineri
ONE HUNDRED days into the war in Gaza, I was finding it increasingly difficult to read the news. My husband told me it might be time to talk to a therapist. Instead, on a cold winter morning, after having fought back tears reading yet another story of human tragedy, I turned to artificial intelligence.
“I’m feeling pretty bummed out about the state of the world,” I typed into ChatGPT. “It’s completely understandable to feel overwhelmed,” it responded, before offering a list of pragmatic advice: limit media exposure, focus on the positive and practise self-care.
I closed the chat. While I was sure I could benefit from doing all of these things, at that moment, I didn’t feel much better.
It might seem strange that AI can even attempt to offer this kind of assistance. But millions of people are already turning to ChatGPT and specialist therapy chatbots, which offer convenient and inexpensive mental health support. Even doctors are purportedly using AI to help craft more empathetic notes to patients.
Some experts say this is a boon. After all, AI, unhindered by embarrassment and burnout, might be able to express empathy more openly and tirelessly than humans. “We praise empathetic AI,” one group of psychology researchers recently wrote.
But others aren’t so sure. Many question the idea that an AI could ever be capable of empathy, and worry about the consequences of people seeking emotional support from machines that can only pretend to care.

Some even wonder if the rise of so-called empathetic AI might change the way we conceive of…empathy and interact with one another.
It is fair to say that empathy is one of our species’ defining traits, evolving as it did in lockstep with social interaction. And yet while we all know instinctively what empathy is, it is actually quite a slippery concept.
One recent analysis, which looked at 52 studies published between 1980 and 2019, found that researchers frequently declared the concept had no standard definition.

Even so, almost all frameworks of empathy, whether they came from philosophy, neuroscience, psychology or medicine, shared certain dimensions, says psychologist Jakob Håkansson at Stockholm University in Sweden, a co-author of the paper.
The researchers found that the empathiser must first be able to discern how the other person is feeling. They must also be affected by those emotions, feel them to some degree themselves, and differentiate between themselves and the other person, grasping that the other person’s feelings aren’t their own while still being able to imagine their experience.

Large language models and empathy

On the first point, in recent years, AI-powered chatbots have made strides in their ability to read human emotions. Most chatbots are powered by large language models (LLMs) that work by predicting which words are most likely to appear together based on training data. In this way, LLMs like ChatGPT can seemingly identify our feelings and respond appropriately most of the time. “The idea that [LLMs] could learn to express those words is not at all surprising to me,” says bioethicist Jodi Halpern at the University of California, Berkeley. “There’s no magic in it.”
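To make that prediction step concrete, here is a minimal, illustrative sketch of next-word prediction using a toy bigram model. The tiny corpus, the counts and the predict_next function are invented for this example; real LLMs such as ChatGPT use neural networks trained on vast amounts of text, but the basic move of picking a statistically likely next token is the same.

```python
# A toy next-word predictor: count which word tends to follow which, then
# repeatedly emit the most likely continuation. Purely illustrative.
from collections import Counter, defaultdict

corpus = (
    "i feel overwhelmed by the news . "
    "i feel sad about the news . "
    "it is understandable to feel overwhelmed ."
).split()

# Count how often each word follows each other word in the training text.
follow_counts: dict[str, Counter] = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow_counts[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word most often seen after `word`, or '.' if never seen."""
    counts = follow_counts.get(word)
    return counts.most_common(1)[0][0] if counts else "."

# Generate a short continuation, one most-likely word at a time.
word, generated = "i", ["i"]
for _ in range(6):
    word = predict_next(word)
    generated.append(word)

print(" ".join(generated))  # -> "i feel overwhelmed by the news ."
```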
But when it comes to the other criteria, AI still misses the mark in many ways. Empathy is interpersonal, with continued cues and feedback helping to hone the empathiser’s response. It also requires some degree of intuitive awareness – no matter how fleeting – of an individual and their situation.
Consider someone who cries while telling a doctor she is pregnant. If we know her history of trying for years to conceive, we can imagine that her tears mean something different than, say, if she didn’t want to have kids.
Current AIs are incapable of that kind of nuanced understanding. More to the point, says Halpern, is that they are incapable of feeling for the person they are interacting with. “We have no reason to think AI can experience empathy,” she says. “It can produce a product – the language that mimics the actual empathy humans have. But it does not have empathy.”
All of which helps to explain conversations like the ones I had with ChatGPT and, later, with Elomia, one of several commercial chatbots that now offer mental health support – and one that, according to its creators, has been used by more than 100,000 people.

When I told it how I was feeling about current events, it asked: “How does thinking about these situations affect your daily life and emotions?” “I’m finding it hard to cope,” I responded. “I’m sorry to hear that,” it said. “Are there any specific behaviours or thoughts that you’ve noticed?”
Illustration: Giacomo Gambineri
Every time I answered, Elomia immediately generated another question. The prompts gave me an opportunity to reflect. But what I really wanted was for someone to say, “Ugh, I know exactly how you feel,” share their own struggles or give me a hug – not pepper me with questions.

In other words, I was looking for a response that was genuinely empathetic.
The big question, then, is whether that is even possible. Some believe that AIs may, in fact, become capable of experiencing and sharing human emotions. One approach is to continue scaling up LLMs with ever vaster and more diverse datasets.

Virtual agents build on the language-processing abilities of chatbots by also “reading” our facial expressions and vocal tone. Most complex of all are robots that analyse these features, plus body language and hand gestures.

Rudimentary versions of these emotion-detecting bots already exist, but more input can also create confusion in them. “The more modalities we add to the system, the more and more difficult it becomes,” says Elahe Bagheri at IVAI, a German start-up studying human-AI interaction.
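As a purely hypothetical illustration (not IVAI's system or any real product), the sketch below shows one common idea, known as late fusion: each channel produces its own emotion scores for text, face and voice, and these are merged with assumed weights. All the numbers are invented; the point is that when the channels disagree, the combined judgement gets murkier rather than sharper.

```python
# Hypothetical "late fusion" of per-modality emotion scores. All scores and
# weights are invented for illustration; real systems learn them from data.
TEXT  = {"sad": 0.70, "neutral": 0.20, "happy": 0.10}  # from the words used
FACE  = {"sad": 0.30, "neutral": 0.30, "happy": 0.40}  # from facial expression
VOICE = {"sad": 0.55, "neutral": 0.35, "happy": 0.10}  # from vocal tone

WEIGHTS = {"text": 0.5, "face": 0.3, "voice": 0.2}     # assumed channel reliabilities

def fuse(text, face, voice, weights):
    """Weighted average of the per-modality scores for each emotion."""
    return {
        emotion: weights["text"] * text[emotion]
        + weights["face"] * face[emotion]
        + weights["voice"] * voice[emotion]
        for emotion in text
    }

combined = fuse(TEXT, FACE, VOICE, WEIGHTS)
print(max(combined, key=combined.get), combined)
# The face channel disagrees with the text and voice channels, so the fused
# "sad" score (0.55) is weaker than the text channel's 0.70 on its own: extra
# modalities can muddy, rather than sharpen, the system's judgement.
```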
Making machines that are more adept at reading emotions is still unlikely to create genuine empathy, though, says psychologist Michael Inzlicht at the University of Toronto in Canada, one of the researchers who wrote the paper praising “empathetic AI”. “You need to have emotions to experience empathy,” he says.

Emotions and algorithms

Some computer scientists argue that certain types of algorithm already experience primitive emotions – although it is a controversial position. These algorithms, called reinforcement learning agents, are rewarded for taking certain actions over others. The expectation of receiving a reward drives simple human survival instincts, such as seeking out food and water, too.
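For readers unfamiliar with the term, here is a minimal sketch of a reinforcement learning agent, reduced to a two-action "bandit" with invented reward probabilities. The agent learns, by trial and error, to repeat whichever action has tended to pay off; nothing in it feels anything, which is why the claim that such algorithms experience primitive emotions remains contested.

```python
# A minimal reinforcement learning agent: two possible actions, one of which
# is rewarded more often. The agent learns to prefer it by trial and error.
import random

random.seed(0)
ACTIONS = ["seek_food", "wander"]
REWARD_PROB = {"seek_food": 0.8, "wander": 0.2}  # invented reward probabilities

value = {action: 0.0 for action in ACTIONS}      # running estimate of each action's worth
LEARNING_RATE, EXPLORATION = 0.1, 0.1

for _ in range(1000):
    # Mostly exploit the best-known action, occasionally explore at random.
    if random.random() < EXPLORATION:
        action = random.choice(ACTIONS)
    else:
        action = max(value, key=value.get)
    reward = 1.0 if random.random() < REWARD_PROB[action] else 0.0
    # Nudge the estimate for the chosen action toward the reward just received.
    value[action] += LEARNING_RATE * (reward - value[action])

print(value)  # "seek_food" ends up valued well above "wander"
```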

We also expect and receive social rewards from each other that correspond to an array of human emotions, says Yoshua Bengio at the University of Montreal in Canada. The richness of our emotional palette is a result of the richness of our social interactions, he says.

So the idea is that if you can capture similar social complexity in AI, then actual emotions – and perhaps empathy – could emerge.
Shared emotions and experiences are at the heart of genuine empathy
Chris McGrath/Getty Images
Another approach hinges on mirror neurons: the brain cells thought to fire when we see someone performing an action or expressing an emotion, activating the same parts of our brain that would light up if we were to do or feel the same thing ourselves. Computer scientists have already built architectures that replicate mirror neurons, arguing that these are the key to AI empathy.
But over the past decade, some of the research on mirror neurons has been discredited and, what’s more, we have learned that the brain areas where the cells reside may not even be directly related to emotion processing.
Ultimately, you can’t really know what sadness is unless you have felt sad. The evolution of empathy is intertwined with social interactions and a recognition that other minds exist, too. Empathy emerges as a holistic whole, says Inzlicht. “You probably need consciousness to experience empathy.” That is far from a given for AI, now or in the future.
But what if AI doesn’t need genuine empathy to be useful? One online experiment from 2021 found that talking to a chatbot perceived as being caring still reduced stress and worry in people – albeit not as much as talking to their human partner. And in January, another chatbot, designed by Google to conduct medical interviews, was judged to have a better bedside manner than actual doctors by 20 people who were trained to impersonate patients – all while delivering more accurate diagnoses. Unlike humans, “an AI doesn’t get stressed out or burned out. It can work 24 hours without getting tired,” says Håkansson.

However, the participants in the Google study didn’t know whether they were speaking to a chatbot or a doctor. Empathy is a two-way process, so its effectiveness also depends on how we perceive and emotionally connect to the other person – or machine.

Research shows, for example, that when people know they are interacting with an AI, they feel less supported and understood.
“This means AI has two bad alternatives,” says Håkansson. Either the person doesn’t know the AI is an AI and thinks it is human, which might be effective but unethical, or they know it is an AI and this makes it ineffective.
"這表示 AI 有兩種壞的選擇," Håkansson 說。要麼對方不知道 AI 是一個 AI,誤以為它是人類,這可能會起到效果但並不符合倫理;要麼他們知道它是一個 AI,這就使其失去了效力。
Then again, Inzlicht has just wrapped up as-yet-unpublished research that indicates a preference for AI empathy – even when users know it is AI.

In his study, human participants and ChatGPT were given descriptions of different scenarios and asked to write short, compassionate answers. When other participants rated the various responses, they scored the AI responses as highest for empathy.
In the next part of the study, Inzlicht and his team labelled which responses were from AI and which were from humans. “What we found was that people still preferred AI,” he says.

Surprisingly, the labelled AI responses were even favoured over a subset of responses by Crisis Line Workers, who are expert deliverers of empathy. (Inzlicht’s work builds on similar research by John W. Ayers at the University of California, San Diego, and his collaborators.)

AI empathy

This may be explained by the human tendency to project feelings onto objects – whether that is children playing with Tamagotchis or people conversing with chatbots like Replika that let you create an “AI friend”. In conversational AI, research has found this inclination can be exacerbated by priming people to think that the AI really cares about them. “We begin to feel that it has feelings for us beyond what it is expressing or what it’s capable of,” says Sherry Turkle, director of the Massachusetts Institute of Technology’s Initiative on Technology and Self. “This isn’t a quality of the machines. This is a vulnerability of people.”
And it is a weakness that many worry could be exploited – particularly given the question of what it means for an AI to be able to discern someone’s emotions, but not to actually resonate with them or even care about them in a genuine way.
“Say an AI detects that you’re heartbroken because your boyfriend has just left you,” says Wendell Wallach at the Carnegie Council for Ethics in International Affairs in New York. “It could seemingly commiserate with you and then slowly turn the conversation to you buying some product. That’s just a very mild example. Even though the machine cannot feel, you are, in effect, being manipulated through your emotions.” That, say Wallach and other researchers, wouldn’t make AI empathetic. It would make it “psychopathic”.
Concerns over the danger of machines that can “read” us but don’t care about us are more than theoretical. In March 2023, a Belgian man reportedly died by suicide after six weeks of discussions with an AI chatbot. Media outlets reported that he had been sharing his fears about the climate crisis.

The chatbot seemed to feed his worries and to express its own emotions – including encouraging him to kill himself so that they would “live together in paradise”.

Pretending at empathy to too great a degree without the common-sense guard rails that a human is likely to offer can, it appears, be lethal.
The possibility that we will increasingly turn to, and perhaps even come to favour, AI empathy over the human variety also concerns Turkle for a profound reason: it could change the nature of empathy itself.
As an analogy, she uses the Turing test, which was designed as a purely behavioural assessment for a computer’s “intelligence”.

But its definition of intelligence – whether the computer could converse like a human – became a stand-in for our understanding of intelligence in general, even though human intelligence is much broader.

Turkle suggests that we could start redefining empathy in a similar way – expecting humans to express empathy that is tireless, for example, but also less genuine or connected.
However, Inzlicht counters that we shouldn’t be swayed by the critiques so much that we don’t reap the potential benefits of empathetic-seeming AIs working alongside humans – whether as a quick assist for overstretched health practitioners or to relieve loneliness among older people, as the voice-operated AI “care companion” ElliQ is designed to do.

Other researchers, including Turkle, argue that even this hybrid human-AI approach goes too far, as it will reduce key opportunities to practise and improve actual human empathy.

We may end up turning to machines, rather than looking for ways to foster genuine connection, she says.
“It’s a sad moment when we are actually thinking that being lonely can be fixed by a back-and-forth interaction with something that has no idea about what it is to be born, about what it is to die,” says Turkle.
At the end of the day, despite talking to multiple chatbots online and getting more advice and reflective prompts than I knew what to do with, I found I still felt sad about the state of the world. I did what I knew I had to do all along: I picked up my phone and called a friend.
Amanda Ruggeri is an award-winning journalist and editor based in Switzerland