AI Doesn’t Threaten Humanity. Its Owners Do

We shouldn’t be afraid of AI taking over humanity; we should fear the fact that our humanity hasn’t kept up with our technology

[Illustration of a man in a suit with text bubbles and graphic elements representing artificial intelligence. Credit: Moor Studio/Getty Images]

In April a lawsuit revealed that Google Chrome’s private browsing mode, known as “Incognito,” was not actually as private as we might think. Google was still collecting data, which it has now agreed to destroy, and its “private” browsing does not actually stop websites or your Internet service provider, such as Comcast or AT&T, from tracking your activities.

In fact, that information harvesting is the whole business model of our digital and smart-device-enabled world. All of our habits and behaviors are monitored, reduced to “data” for machine learning AI, and the findings are used to manipulate us for other people’s gains.

It doesn’t have to be this way. AI could be used more ethically for everyone’s benefit. We shouldn’t fear AI as a technology. We should instead worry about who owns AI and how its owners wield AI to invade privacy and erode democracy.

No surprise, tech companies, state entities, corporations and other private interests increasingly invade our privacy and spy on us. Insurance companies monitor their clients’ sleep apnea machines to deny coverage for improper use. Children’s toys spy on playtime and collect data about our kids. Period tracker apps share with Facebook and other third parties (including state authorities in abortion-restricted states) when women last had sex, their contraceptive practices, menstrual details and even their moods. Home security cameras surveil customers and are susceptible to hackers. Medical apps share personal information with lawyers. Data brokers, companies that track people across platforms and technologies, amplify these trespasses by selling bundled user profiles to anyone willing to pay.

This explicit spying is obvious and feels wrong at a visceral level. What’s even more sinister, however, is how the resulting data are used—not only sold to advertisers or any private interest that seeks to influence our behavior, but deployed for AI training in order to improve machine learning. Potentially this could be a good thing. Humanity could learn more about itself, discovering our shortcomings and how we might address them. That could assist individuals in getting help and meeting their needs.

Instead, machine learning is used to predict and prescribe, that is, to estimate who we are and what would most likely influence us and change our behavior. One such behavior is getting us to “engage” more with technology and generate more data. AI is being used to try to know us better than we know ourselves, get us addicted to technology, and influence us without our awareness, consent or best interests in mind. In other words, AI is not helping humanity address our shortcomings; it’s exploiting our vulnerabilities so private interests can guide how we think, act and feel.

A Facebook whistleblower made this all clear several years ago. To meet its revenue goals, the platform used AI to keep people on the platform longer. This meant finding the perfect amount of anger-inducing and provocative content, so that bullying, conspiracies, hate speech, disinformation and other harmful communications flourished. Experimenting on users without their knowledge, the company designed addictive features into the technology, despite knowing that this harmed teenage girls. A United Nations report labeled Facebook a “useful instrument” for spreading hate in an attempted genocide in Myanmar, and the company admitted the platform’s role in amplifying violence. Corporations and other interests can thus use AI to learn our psychological weaknesses, draw out the most insecure version of ourselves and push our buttons to achieve their own desired ends.

So when we use our phones, computers, home security systems, health devices, smart watches, cars, toys, home assistants, apps, gadgets and what have you, they are also using us. As we search, we are searched. As we narrate our lives on social media, our stories and scrolling are captured. Despite feeling free and in control, we are subtly being guided (or “nudged” in benevolent tech speak) towards constrained ideas and outcomes. Based on previous behaviors, we are offered a flattering and hyperindividualized world that amplifies and confirms our biases, using our own interests and personalities against us to keep us coming back for more. Employing AI in this manner might be good for business, but it’s disastrous for the empathy and informed deliberations required for democracy.

Even as tech companies ask us to accept cookies or belatedly seek our consent, these efforts are not made in good faith. They give us an illusion of privacy even as “improving” the companies’ services relies on machines learning more about us than we know ourselves and finding patterns in our behavior that no one knew they were looking for. Even the developers of AI don’t know exactly how it works, and therefore can’t meaningfully tell us what we’re consenting to.

Under the current business model, the advances of AI and robot technology will enrich the few while making life more difficult for the many. Sure, you could argue that people will benefit from the potential advances (and tech industry-enthralled economists undoubtedly will so argue) in health, design and whatever efficiencies AI might bring. But this is less meaningful when people have been robbed of their dignity, blamed for not keeping up, and continuously spied on and manipulated for someone else’s gain.

We shouldn’t be afraid of AI taking over humanity; we should fear the fact that our humanity hasn’t kept up with our technology. Instead of enabling a world where we work less and live more, billionaires have designed a system to reward the few at the expense of the many. While AI has done and will continue to do great things, it has also been used to make people more anxious, precarious and self-centered, as well as less free. Until we truly learn to care about one another and consider the good of all, technology will continue to ensnare us rather than emancipate us. There’s no such thing as artificial ethics, and human principles must guide our technology, not the other way around. This starts by asking who owns AI and how it might be employed in everyone’s best interest. The future belongs to us all.

This is an opinion and analysis article, and the views expressed by the author or authors are not necessarily those of Scientific American.

Joseph Jones is an assistant professor of media at West Virginia University, where he teaches courses about media ethics, law, history, sociology, philosophy and power. His research focuses on AI and the political economy of our mediascape, and on how media create meaning in our everyday lives and invite us to understand the world and our place in it.
