Transcript for Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI | Lex Fridman Podcast #419

This is a transcript of Lex Fridman Podcast #419 with Sam Altman, his second appearance on the podcast. The timestamps in the transcript are clickable links that take you directly to that point in the main video. Please note that the transcript is human generated, and may have errors. Here are some useful links:

Table of Contents

Here are the loose “chapters” in the conversation. Click a link to jump approximately to that part in the transcript:

Introduction

Sam Altman (00:00:00) I think compute is going to be the currency of the future. I think it’ll be maybe the most precious commodity in the world. I expect that by the end of this decade, and possibly somewhat sooner than that, we will have quite capable systems that we look at and say, “Wow, that’s really remarkable.” The road to AGI should be a giant power struggle. I expect that to be the case.
Lex Fridman (00:00:26) Whoever builds AGI first gets a lot of power. Do you trust yourself with that much power?
(00:00:36) The following is a conversation with Sam Altman, his second time on the podcast. He is the CEO of OpenAI, the company behind GPT-4, ChatGPT, Sora, and perhaps one day the very company that will build AGI. This is The Lex Fridman Podcast. To support it, please check out our sponsors in the description. And now, dear friends, here’s Sam Altman.

OpenAI board saga

(00:01:05) Take me through the OpenAI board saga that started on Thursday, November 16th, maybe Friday, November 17th for you.
Sam Altman (00:01:13) That was definitely the most painful professional experience of my life, and chaotic and shameful and upsetting and a bunch of other negative things. There were great things about it too, and I wish it had not been in such an adrenaline rush that I wasn’t able to stop and appreciate them at the time. But I came across this old tweet of mine or this tweet of mine from that time period. It was like going to your own eulogy, watching people say all these great things about you, and just unbelievable support from people I love and care about. That was really nice, really nice. That whole weekend, with one big exception, I felt like a great deal of love and very little hate, even though it felt like I have no idea what’s happening and what’s going to happen here and this feels really bad. And there were definitely times I thought it was going to be one of the worst things to ever happen for AI safety. Well, I also think I’m happy that it happened relatively early. I thought at some point between when OpenAI started and when we created AGI, there was going to be something crazy and explosive that happened, but there may be more crazy and explosive things still to happen. It still, I think, helped us build up some resilience and be ready for more challenges in the future.
Lex Fridman (00:03:02) But the thing you had a sense that you would experience is some kind of power struggle?
Sam Altman (00:03:08) The road to AGI should be a giant power struggle. The world should… Well, not should. I expect that to be the case.
Lex Fridman (00:03:17) And so you have to go through that, like you said, iterate as often as possible in figuring out how to have a board structure, how to have organization, how to have the kind of people that you’re working with, how to communicate all that in order to deescalate the power struggle as much as possible.
Sam Altman (00:03:37) Yeah.
Lex Fridman (00:03:37) Pacify it.
Sam Altman (00:03:38) But at this point, it feels like something that was in the past that was really unpleasant and really difficult and painful, but we’re back to work and things are so busy and so intense that I don’t spend a lot of time thinking about it. There was a time after, there was this fugue state for the month after, maybe 45 days after, that I was just drifting through the days. I was so out of it. I was feeling so down.
Lex Fridman (00:04:17) Just on a personal, psychological level?
Sam Altman (00:04:20) Yeah. Really painful, and hard to have to keep running OpenAI in the middle of that. I just wanted to crawl into a cave and recover for a while. But now it’s like we’re just back to working on the mission.
Lex Fridman (00:04:38) Well, it’s still useful to go back there and reflect on board structures, on power dynamics, on how companies are run, the tension between research and product development and money and all this kind of stuff so that you, who have a very high potential of building AGI, would do so in a slightly more organized, less dramatic way in the future. So there’s value there to go back to, both the personal psychological aspects of you as a leader, and also just the board structure and all this messy stuff.
Sam Altman (00:05:18) I definitely learned a lot about structure and incentives and what we need out of a board. And I think that it is valuable that this happened now in some sense. I think this is probably not the last high-stress moment of OpenAI, but it was quite a high-stress moment. My company very nearly got destroyed. And we think a lot about many of the other things we’ve got to get right for AGI, but thinking about how to build a resilient org and how to build a structure that will stand up to a lot of pressure in the world, which I expect more and more as we get closer, I think that’s super important.
Lex Fridman (00:06:01) Do you have a sense of how deep and rigorous the deliberation process by the board was? Can you shine some light on just human dynamics involved in situations like this? Was it just a few conversations and all of a sudden it escalates and why don’t we fire Sam kind of thing?
Sam Altman (00:06:22) I think the board members are well-meaning people on the whole, and I believe that in stressful situations where people feel time pressure or whatever, people understandably make suboptimal decisions. And I think one of the challenges for OpenAI will be we’re going to have to have a board and a team that are good at operating under pressure.
Lex Fridman (00:07:00) Do you think the board had too much power?
Sam Altman (00:07:03) I think boards are supposed to have a lot of power, but one of the things that we did see is in most corporate structures, boards are usually answerable to shareholders. Sometimes people have super voting shares or whatever. In this case, and I think one of the things with our structure that we maybe should have thought about more than we did is that the board of a nonprofit has, unless you put other rules in place, quite a lot of power. They don’t really answer to anyone but themselves. And there’s ways in which that’s good, but what we’d really like is for the board of OpenAI to answer to the world as a whole, as much as that’s a practical thing.
Lex Fridman (00:07:44) So there’s a new board announced.
Sam Altman (00:07:46) Yeah.
Lex Fridman (00:07:47) There’s I guess a new smaller board at first, and now there’s a new final board?
Sam Altman (00:07:53) Not a final board yet. We’ve added some. We’ll add more.
Lex Fridman (00:07:56) Added some. Okay. What is fixed in the new one that was perhaps broken in the previous one?
Sam Altman (00:08:05) The old board got smaller over the course of about a year. It was nine and then it went down to six, and then we couldn’t agree on who to add. And the board also I think didn’t have a lot of experienced board members, and a lot of the new board members at OpenAI just have more experience as board members. I think that’ll help.
Lex Fridman (00:08:31) Some of the people that were added to the board have been criticized. I heard a lot of people criticizing the addition of Larry Summers, for example. What’s the process of selecting the board? What’s involved in that?
Sam Altman (00:08:43) So Brett and Larry were decided in the heat of the moment over this very tense weekend, and that weekend was a real rollercoaster. It was a lot of ups and downs. And we were trying to agree on new board members that both the executive team here and the old board members felt would be reasonable. Larry was actually one of their suggestions, the old board members. Brett, I think I had even previous to that weekend suggested, but he was busy and didn’t want to do it, and then we really needed help in [inaudible 00:09:22]. We talked about a lot of other people too, but I felt like if I was going to come back, I needed new board members. I didn’t think I could work with the old board again in the same configuration, although we then decided, and I’m grateful that Adam would stay, but we considered various configurations, decided we wanted to get to a board of three and had to find two new board members over the course of a short period of time.
(00:09:57) So those were decided honestly without… You do that on the battlefield. You don’t have time to design a rigorous process then. For new board members since, and new board members we’ll add going forward, we have some criteria that we think are important for the board to have, different expertise that we want the board to have. Unlike hiring an executive where you need them to do one role well, the board needs to do a whole role of governance and thoughtfulness well, and so, one thing that Brett says which I really like is that we want to hire board members in slates, not as individuals one at a time. And thinking about a group of people that will bring nonprofit expertise, expertise at running companies, good legal and governance expertise, that’s what we’ve tried to optimize for.
Lex Fridman (00:10:49) So is technical savvy important for the individual board members?
Sam Altman (00:10:52) Not for every board member, but certainly for some you need that. That’s part of what the board needs to do.
Lex Fridman (00:10:57) The interesting thing that people probably don’t understand about OpenAI, I certainly don’t, is all the details of running the business. When they think about the board, given the drama, they think about you. They think about if you reach AGI or you reach some of these incredibly impactful products and you build them and deploy them, what’s the conversation with the board like? And they think, all right, what’s the right squad to have in that kind of situation to deliberate?
Sam Altman (00:11:25) Look, I think you definitely need some technical experts there. And then you need some people who are like, “How can we deploy this in a way that will help people in the world the most?” And people who have a very different perspective. I think a mistake that you or I might make is to think that only the technical understanding matters, and that’s definitely part of the conversation you want that board to have, but there’s a lot more about how that’s going to just impact society and people’s lives that you really want represented in there too.
Lex Fridman (00:11:56) Are you looking at the track record of people or you’re just having conversations?
Sam Altman (00:12:00) Track record is a big deal. You of course have a lot of conversations, but there are some roles where I totally ignore track record and just look at slope, ignore the Y-intercept.
Lex Fridman (00:12:18) Thank you. Thank you for making it mathematical for the audience.
Sam Altman (00:12:21) For a board member, I do care much more about the Y-intercept. I think there is something deep to say about track record there, and experience is something that’s very hard to replace.
Lex Fridman (00:12:32) Do you try to fit a polynomial function or exponential one to the track record?
Sam Altman (00:12:36) That analogy doesn’t carry that far.
Lex Fridman (00:12:39) All right. You mentioned some of the low points that weekend. What were some of the low points psychologically for you? Did you consider going to the Amazon jungle and just taking ayahuasca and disappearing forever?
Sam Altman (00:12:53) It was a very bad period of time. There were great high points too. My phone was just nonstop blowing up with nice messages from people I worked with every day, people I hadn’t talked to in a decade. I didn’t get to appreciate that as much as I should have because I was just in the middle of this firefight, but that was really nice. But on the whole, it was a very painful weekend. It was like a battle fought in public to a surprising degree, and that was extremely exhausting to me, much more than I expected. I think fights are generally exhausting, but this one really was. The board did this Friday afternoon. I really couldn’t get much in the way of answers, but I also was just like, well, the board gets to do this, so I’m going to think for a little bit about what I want to do, but I’ll try to find the blessing in disguise here.
(00:13:52) And I was like, well, my current job at OpenAI is, or it was, to run a decently sized company at this point. And the thing I’d always liked the most was just getting to work with the researchers. And I was like, yeah, I can just go do a very focused AGI research effort. And I got excited about that. Didn’t even occur to me at the time possibly that this was all going to get undone. This was Friday afternoon.
Lex Fridman (00:14:19) So you’ve accepted the death of this-
Sam Altman (00:14:22) Very quickly. Very quickly. I went through a little period of confusion and rage, but very quickly, quickly. And by Friday night, I was talking to people about what was going to be next, and I was excited about that. I think it was Friday evening for the first time that I heard from the exec team here, which is like, “Hey, we’re going to fight this,” and then I went to bed just still being like, okay, excited. Onward.
Lex Fridman (00:14:52) Were you able to sleep?
Sam Altman (00:14:54) Not a lot. One of the weird things was there was this period of four and a half days where I didn’t sleep much, didn’t eat much, and still had a surprising amount of energy. You learn a weird thing about adrenaline in wartime.
Lex Fridman (00:15:09) So you accepted the death of this baby, OpenAI.
Sam Altman (00:15:13) And I was excited for the new thing. I was just like, “Okay, this was crazy, but whatever.”
Lex Fridman (00:15:17) It’s a very good coping mechanism.
Sam Altman (00:15:18) And then Saturday morning, two of the board members called and said, “Hey, we didn’t mean to destabilize things. We don’t want to destroy a lot of value here. Can we talk about you coming back?” And I immediately didn’t want to do that, but I thought a little more and I was like, well, I really care about the people here, the partners, shareholders. I love this company. And so I thought about it and I was like, “Well, okay, but here’s the stuff I would need.” And then the most painful time of all was over the course of that weekend, I kept thinking and being told, and not just me, the whole team here kept thinking, well, we were trying to keep OpenAI stabilized while the whole world was trying to break it apart, people trying to recruit whatever.
(00:16:04) We kept being told, all right, we’re almost done. We’re almost done. We just need a little bit more time. And it was this very confusing state. And then Sunday evening when, again, every few hours I expected that we were going to be done and we’re going to figure out a way for me to return and things to go back to how they were. The board then appointed a new interim CEO, and then I was like, that feels really bad. That was the low point of the whole thing. I’ll tell you something. It felt very painful, but I felt a lot of love that whole weekend. Other than that one moment Sunday night, I would not characterize my emotions as anger or hate, but I felt a lot of love from people, towards people. It was painful, but the dominant emotion of the weekend was love, not hate.
Lex Fridman (00:17:04) You’ve spoken highly of Mira Murati, that she helped especially, as you put in the tweet, in the quiet moments when it counts. Perhaps we could take a bit of a tangent. What do you admire about Mira?
Sam Altman (00:17:15) Well, she did a great job during that weekend in a lot of chaos, but people often see leaders in the crisis moments, good or bad. But a thing I really value in leaders is how people act on a boring Tuesday at 9:46 in the morning and in just the normal drudgery of the day-to-day. How someone shows up in a meeting, the quality of the decisions they make. That was what I meant about the quiet moments.
Lex Fridman (00:17:47) Meaning most of the work is done day by day, meeting by meeting. Just be present and make great decisions.
Sam Altman (00:17:58) Yeah. Look, what you have wanted to spend the last 20 minutes on, and I understand, is this one very dramatic weekend, but that’s not really what OpenAI is about. OpenAI is really about the other seven years.
Lex Fridman (00:18:10) Well, yeah. Human civilization is not about the invasion of the Soviet Union by Nazi Germany, but still that’s something people focus on.
Sam Altman (00:18:18) Very understandable.
Lex Fridman (00:18:19) It gives us an insight into human nature, the extremes of human nature, and perhaps some of the damage and some of the triumphs of human civilization can happen in those moments, so it’s illustrative. Let me ask you about Ilya. Is he being held hostage in a secret nuclear facility?

Ilya Sutskever

Sam Altman (00:18:36) No.
Lex Fridman (00:18:37) What about a regular secret facility?
Sam Altman (00:18:39) No.
Lex Fridman (00:18:40) What about a nuclear non-secret facility?
Sam Altman (00:18:41) Neither. Not that either.
Lex Fridman (00:18:44) This is becoming a meme at some point. You’ve known Ilya for a long time. He was obviously part of this drama with the board and all that kind of stuff. What’s your relationship with him now?
Sam Altman (00:18:57) I love Ilya. I have tremendous respect for Ilya. I don’t have anything I can say about his plans right now. That’s a question for him, but I really hope we work together for certainly the rest of my career. He’s a little bit younger than me. Maybe he works a little bit longer.
Lex Fridman (00:19:15) There’s a meme that he saw something, like he maybe saw AGI and that gave him a lot of worry internally. What did Ilya see?
Sam Altman (00:19:28) Ilya has not seen AGI. None of us have seen AGI. We’ve not built AGI. I do think one of the many things that I really love about Ilya is he takes AGI and the safety concerns, broadly speaking, including things like the impact this is going to have on society, very seriously. And as we continue to make significant progress, Ilya is one of the people that I’ve spent the most time over the last couple of years talking about what this is going to mean, what we need to do to ensure we get it right, to ensure that we succeed at the mission. So Ilya did not see AGI, but Ilya is a credit to humanity in terms of how much he thinks and worries about making sure we get this right.
Lex Fridman (00:20:30) I’ve had a bunch of conversations with him in the past. I think when he talks about technology, he’s always doing this long-term thinking type of thing. So he is not thinking about what this is going to be in a year. He’s thinking about in 10 years, just thinking from first principles like, “Okay, if this scales, what are the fundamentals here? Where’s this going?” And so that’s a foundation for his thinking about all the other safety concerns and all that kind of stuff, which makes him a really fascinating human to talk with. Do you have any idea why he’s been quiet? Is it he’s just doing some soul-searching?
Sam Altman (00:21:08) Again, I don’t want to speak for Ilya. I think that you should ask him that. He’s definitely a thoughtful guy. I think Ilya is always on a soul search in a really good way.
Lex Fridman (00:21:27) Yes. Yeah. Also, he appreciates the power of silence. Also, I’m told he can be a silly guy, though I’ve never seen that side of him.
Sam Altman (00:21:36) It’s very sweet when that happens.
Lex Fridman (00:21:39) I’ve never witnessed a silly Ilya, but I look forward to that as well.
Sam Altman (00:21:43) I was at a dinner party with him recently and he was playing with a puppy and he was in a very silly mood, very endearing. And I was thinking, oh man, this is not the side of Ilya that the world sees the most.
Lex Fridman (00:21:55) So just to wrap up this whole saga, are you feeling good about the board structure-
Sam Altman (00:21:55) Yes.
Lex Fridman (00:22:01) … about all of this and where it’s moving?
Sam Altman (00:22:04) I feel great about the new board. In terms of the structure of OpenAI, one of the board’s tasks is to look at that and see where we can make it more robust. We wanted to get new board members in place first, but we clearly learned a lesson about structure throughout this process. I don’t have, I think, super deep things to say. It was a crazy, very painful experience. I think it was a perfect storm of weirdness. It was a preview for me of what’s going to happen as the stakes get higher and higher and the need that we have robust governance structures and processes and people. I’m happy it happened when it did, but it was a shockingly painful thing to go through.
Lex Fridman (00:22:47) Did it make you more hesitant in trusting people?
Sam Altman (00:22:50) Yes.
Lex Fridman (00:22:51) Just on a personal level?
Sam Altman (00:22:52) Yes. I think I’m like an extremely trusting person. I’ve always had a life philosophy of don’t worry about all of the paranoia. Don’t worry about the edge cases. You get a little bit screwed in exchange for getting to live with your guard down. And this was so shocking to me. I was so caught off guard that it has definitely changed, and I really don’t like this, it’s definitely changed how I think about just default trust of people and planning for the bad scenarios.
Lex Fridman (00:23:21) You got to be careful with that. Are you worried about becoming a little too cynical?
Sam Altman (00:23:26) I’m not worried about becoming too cynical. I think I’m the extreme opposite of a cynical person, but I’m worried about just becoming less of a default trusting person.
Lex Fridman (00:23:36) I’m actually not sure which mode is best to operate in for a person who’s developing AGI, trusting or un-trusting. It’s an interesting journey you’re on. But in terms of structure, see, I’m more interested on the human level. How do you surround yourself with humans that are building cool shit, but also are making wise decisions? Because the more money you start making, the more power the thing has, the weirder people get.
Sam Altman (00:24:06) I think you could make all kinds of comments about the board members and the level of trust I should have had there, or how I should have done things differently. But in terms of the team here, I think you’d have to give me a very good grade on that one. And I have just enormous gratitude and trust and respect for the people that I work with every day, and I think being surrounded with people like that is really important.

Elon Musk lawsuit

Lex Fridman (00:24:39) Our mutual friend Elon sued OpenAI. What to you is the essence of what he’s criticizing? To what degree does he have a point? To what degree is he wrong?
Sam Altman (00:24:52) I don’t know what it’s really about. We started off just thinking we were going to be a research lab and having no idea about how this technology was going to go. Because it was only seven or eight years ago, it’s hard to go back and really remember what it was like then, but this is before language models were a big deal. This was before we had any idea about an API or selling access to a chatbot. It was before we had any idea we were going to productize at all. So we’re like, “We’re just going to try to do research and we don’t really know what we’re going to do with that.” I think with many fundamentally new things, you start fumbling through the dark and you make some assumptions, most of which turned out to be wrong.
(00:25:31) And then it became clear that we were going to need to do different things and also have huge amounts more capital. So we said, “Okay, well, the structure doesn’t quite work for that. How do we patch the structure?” And then you patch it again and patch it again and you end up with something that does look eyebrow-raising, to say the least. But we got here gradually with, I think, reasonable decisions at each point along the way. And it doesn’t mean I wouldn’t do it totally differently if we could go back now with an oracle, but you don’t get the oracle at the time. But anyway, in terms of what Elon’s real motivations here are, I don’t know.
Lex Fridman (00:26:12) To the degree you remember, what was the response that OpenAI gave in the blog post? Can you summarize it?
Sam Altman (00:26:21) Oh, we just said Elon said this set of things. Here’s our characterization, or here’s not our characterization. Here’s the characterization of how this went down. We tried to not make it emotional and just say, “Here’s the history.”
Lex Fridman (00:26:44) I do think there’s a degree of mischaracterization from Elon here about one of the points you just made, which is the degree of uncertainty you had at the time. You guys are a small group of researchers crazily talking about AGI when everybody’s laughing at that thought.
Sam Altman (00:27:09) It wasn’t that long ago Elon was crazily talking about launching rockets when people were laughing at that thought, so I think he’d have more empathy for this.
Lex Fridman (00:27:20) I do think that there’s personal stuff here, that there was a split that OpenAI and a lot of amazing people here chose to part ways with Elon, so there’s a personal-
Sam Altman (00:27:34) Elon chose to part ways.
Lex Fridman (00:27:37) Can you describe that exactly? The choosing to part ways?
Sam Altman (00:27:42) He thought OpenAI was going to fail. He wanted total control to turn it around. We wanted to keep going in the direction that now has become OpenAI. He also wanted Tesla to be able to build an AGI effort. At various times, he wanted to make OpenAI into a for-profit company that he could have control of or have it merge with Tesla. We didn’t want to do that, and he decided to leave, which that’s fine.
Lex Fridman (00:28:06) So you’re saying, and that’s one of the things that the blog post says, is that he wanted OpenAI to be basically acquired by Tesla in the same way that, or maybe something similar or maybe something more dramatic than the partnership with Microsoft.
Sam Altman (00:28:23) My memory is the proposal was just like, yeah, get acquired by Tesla and have Tesla have full control over it. I’m pretty sure that’s what it was.
Lex Fridman (00:28:29) So what did the word open in OpenAI mean to Elon at the time? Ilya has talked about this in the email exchanges and all this kind of stuff. What did it mean to you at the time? What does it mean to you now?
Sam Altman (00:28:44) Speaking of going back with an oracle, I’d pick a different name. One of the things that I think OpenAI is doing that is the most important of everything that we’re doing is putting powerful technology in the hands of people for free, as a public good. We don’t run ads on our-
Sam Altman (00:29:01) … as a public good. We don’t run ads on our free version. We don’t monetize it in other ways. We just say it’s part of our mission. We want to put increasingly powerful tools in the hands of people for free and get them to use them. I think that kind of open is really important to our mission. I think if you give people great tools and teach them to use them or don’t even teach them, they’ll figure it out, and let them go build an incredible future for each other with that, that’s a big deal. So if we can keep putting free or low cost or free and low cost powerful AI tools out in the world, I think that’s a huge deal for how we fulfill the mission. Open source or not, yeah, I think we should open source some stuff and not other stuff. It does become this religious battle line where nuance is hard to have, but I think nuance is the right answer.
Lex Fridman (00:29:55) So he said, “Change your name to CloseAI and I’ll drop the lawsuit.” I mean is it going to become this battleground in the land of memes about the name?
Sam Altman (00:30:06) I think that speaks to the seriousness with which Elon means the lawsuit, and that’s like an astonishing thing to say, I think.
Lex Fridman (00:30:23) Maybe correct me if I’m wrong, but I don’t think the lawsuit is legally serious. It’s more to make a point about the future of AGI and the company that’s currently leading the way.
Sam Altman (00:30:37) Look, I mean Grok had not open sourced anything until people pointed out it was a little bit hypocritical and then he announced that Grok will open source things this week. I don’t think open source versus not is what this is really about for him.
Lex Fridman (00:30:48) Well, we will talk about open source and not. I do think maybe criticizing the competition is great. Just talking a little shit, that’s great. But friendly competition versus like, “I personally hate lawsuits.”
Sam Altman (00:31:01) Look, I think this whole thing is unbecoming of a builder. And I respect Elon as one of the great builders of our time. I know he knows what it’s like to have haters attack him and it makes me extra sad he’s doing it to us.
Lex Fridman (00:31:18) Yeah, he’s one of the greatest builders of all time, potentially the greatest builder of all time.
Sam Altman (00:31:22) It makes me sad. And I think it makes a lot of people sad. There’s a lot of people who’ve really looked up to him for a long time. I said in some interview or something that I missed the old Elon and the number of messages I got being like, “That exactly encapsulates how I feel.”
Lex Fridman (00:31:36) I think he should just win. He should just make X Grok beat GPT and then GPT beats Grok and it’s just the competition and it’s beautiful for everybody. But on the question of open source, do you think there’s a lot of companies playing with this idea? It’s quite interesting. I would say Meta surprisingly has led the way on this, or at least took the first step in the game of chess of really open sourcing the model. Of course it’s not the state-of-the-art model, but open sourcing Llama. Google is flirting with the idea of open sourcing a smaller version. What are the pros and cons of open sourcing? Have you played around with this idea?
Sam Altman (00:32:22) Yeah, I think there is definitely a place for open source models, particularly smaller models that people can run locally, I think there’s huge demand for. I think there will be some open source models, there will be some closed source models. It won’t be unlike other ecosystems in that way.
Lex Fridman (00:32:39) I listened to the All-In Podcast talking about this lawsuit and all that kind of stuff. They were more concerned about the precedent of going from nonprofit to this capped-profit. What precedent does that set for other startups? Is that something-
Sam Altman (00:32:56) I would heavily discourage any startup that was thinking about starting as a nonprofit and adding a for-profit arm later. I’d heavily discourage them from doing that. I don’t think we’ll set a precedent here.
Lex Fridman (00:33:05) Okay. So most startups should go just-
Sam Altman (00:33:08) For sure.
Lex Fridman (00:33:09) And again-
Sam Altman (00:33:09) If we knew what was going to happen, we would’ve done that too.
Lex Fridman (00:33:12) Well in theory, if you dance beautifully here, there’s some tax incentives or whatever, but…
Sam Altman (00:33:19) I don’t think that’s how most people think about these things.
Lex Fridman (00:33:22) It’s just not possible to save a lot of money for a startup if you do it this way.
Sam Altman (00:33:27) No, I think there’s laws that would make that pretty difficult.
Lex Fridman (00:33:30) Where do you hope this goes with Elon? This tension, this dance, what do you hope for? If we go 1, 2, 3 years from now, your relationship with him on a personal level too, like friendship, friendly competition, just all this kind of stuff.
Sam Altman (00:33:51) Yeah, I really respect Elon and I hope that years in the future we have an amicable relationship.
Lex Fridman (00:34:05) Yeah, I hope you guys have an amicable relationship this month and just compete and win and explore these ideas together. I do suppose there’s competition for talent or whatever, but it should be friendly competition. Just build cool shit. And Elon is pretty good at building cool shit. So are you.

Sora

(00:34:32) So speaking of cool shit, Sora. There’s like a million questions I could ask. First of all, it’s amazing. It truly is amazing on a product level but also just on a philosophical level. So let me just technical/philosophical ask, what do you think it understands about the world more or less than GPT-4 for example? The world model when you train on these patches versus language tokens.
Sam Altman (00:35:04) I think all of these models understand something more about the world model than most of us give them credit for. And because there are also very clear things they just don’t understand or don’t get right, it’s easy to look at the weaknesses, see through the veil and say, “Ah, this is all fake.” But it’s not all fake. It’s just some of it works and some of it doesn’t work.
(00:35:28) I remember when I started first watching Sora videos and I would see a person walk in front of something for a few seconds and occlude it and then walk away and the same thing was still there. I was like, “Oh, this is pretty good.” Or there’s examples where the underlying physics looks so well represented over a lot of steps in a sequence, it’s like, “Oh, this is quite impressive.” But fundamentally, these models are just getting better and that will keep happening. If you look at the trajectory from DALL·E 1 to 2 to 3 to Sora, there are a lot of people that dunked on each version saying it can’t do this, it can’t do that, and look at it now.
Lex Fridman (00:36:04) Well, the thing you just mentioned, the occlusions, is basically modeling the three-dimensional physics of the world sufficiently well to capture those kinds of things.
Sam Altman (00:36:17) Well…
Lex Fridman (00:36:18) Or yeah, maybe you can tell me, in order to deal with occlusions, what does the world model need to do?
Sam Altman (00:36:24) Yeah. So what I would say is it’s doing something to deal with occlusions really well. Would I represent that it has a great underlying 3D model of the world? That’s a little bit more of a stretch.
Lex Fridman (00:36:33) But can you get there through just these kinds of two-dimensional training data approaches?
Sam Altman (00:36:39) It looks like this approach is going to go surprisingly far. I don’t want to speculate too much about what limits it will surmount and which it won’t, but…
Lex Fridman (00:36:46) What are some interesting limitations of the system that you’ve seen? I mean there’s been some fun ones you’ve posted.
Sam Altman (00:36:52) There’s all kinds of fun. I mean, cats sprouting an extra limb at random points in a video. Pick what you want, but there’s still a lot of problems, a lot of weaknesses.
Lex Fridman (00:37:02) Do you think it’s a fundamental flaw of the approach, or is it just that a bigger model or better technical details or better data, more data, is going to solve the cat sprouting [inaudible 00:37:19]?
Sam Altman (00:37:19) I would say yes to both. I think there is something about the approach which just seems to feel different from how we think and learn and whatever. And then also I think it’ll get better with scale.
Lex Fridman (00:37:30) Like I mentioned, LLMs have tokens, text tokens, and Sora has visual patches, so it converts all visual data, diverse kinds of visual data, videos and images, into patches. Is the training, to the degree you can say, fully self-supervised? Is there some manual labeling going on? What’s the involvement of humans in all this?
Sam Altman (00:37:49) I mean without saying anything specific about the Sora approach, we use lots of human data in our work.
Lex Fridman (00:38:00) But not internet scale data? So lots of humans. “Lots” is a complicated word, Sam.
Sam Altman (00:38:08) I think lots is a fair word in this case.
Lex Fridman (00:38:12) Because to me, “lots”… Listen, I’m an introvert and when I hang out with three people, that’s a lot of people. Four people, that’s a lot. But I suppose you mean more than…
Sam Altman (00:38:21) More than three people work on labeling the data for these models, yeah.
Lex Fridman (00:38:24) Okay. Right. But fundamentally, there’s a lot of self-supervised learning. Because what you mentioned in the technical report is internet scale data. That’s another beautiful… It’s like poetry. So it’s a lot of data that’s not human labeled. It’s self-supervised in that way?
Sam Altman (00:38:44) Yeah.
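For readers unfamiliar with the term, “visual patches” are roughly the video analogue of text tokens. A minimal Python sketch of carving a video into spacetime patches and flattening each into a token vector; the dimensions and patch sizes below are invented for illustration, since the Sora technical report does not disclose the real values, and this is not OpenAI’s code:

import numpy as np

T, H, W, C = 16, 64, 64, 3  # frames, height, width, channels (hypothetical)
t, p = 4, 16                # temporal and spatial patch sizes (hypothetical)

video = np.random.rand(T, H, W, C).astype(np.float32)

# Split each axis into (number of patches, patch size), then group the
# patch-index axes together and the within-patch axes together.
patches = video.reshape(T // t, t, H // p, p, W // p, p, C)
patches = patches.transpose(0, 2, 4, 1, 3, 5, 6)  # (nT, nH, nW, t, p, p, C)
tokens = patches.reshape(-1, t * p * p * C)

print(tokens.shape)  # (64, 3072): 4*4*4 patches, each 4*16*16*3 values

# A transformer can then attend over these tokens much as an LLM attends
# over text tokens, which is the parallel Fridman is drawing.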
Lex Fridman (00:38:45) And then the question is, how much data is there on the internet that could be used in this, that is conducive to this kind of self-supervised way, if only we knew the details of the self-supervision. Have you considered opening up a little more of the details?
Sam Altman (00:39:02) We have. You mean for Sora specifically?
Lex Fridman (00:39:04) Sora specifically. Because it’s so interesting: can the same magic of LLMs now start moving toward visual data, and what does it take to do that?
Sam Altman (00:39:18) I mean it looks to me like yes, but we have more work to do.
Lex Fridman (00:39:22) Sure. What are the dangers? Why are you concerned about releasing the system? What are some possible dangers of this?
Sam Altman (00:39:29) I mean frankly speaking, one thing we have to do before releasing the system is just get it to work at a level of efficiency that will deliver the scale people are going to want from this, so I don’t want to downplay that. And there’s still a ton of work to do there. But you can imagine issues with deepfakes, misinformation. We try to be a thoughtful company about what we put out into the world and it doesn’t take much thought to think about the ways this can go badly.
Lex Fridman (00:40:05) There’s a lot of tough questions here, you’re dealing in a very tough space. Do you think training AI should be or is fair use under copyright law?
Sam Altman (00:40:14) I think the question behind that question is, do people who create valuable data deserve to have some way that they get compensated for use of it, and that I think the answer is yes. I don’t know yet what the answer is. People have proposed a lot of different things. We’ve tried some different models. But if I’m like an artist for example, A, I would like to be able to opt out of people generating art in my style. And B, if they do generate art in my style, I’d like to have some economic model associated with that.
Lex Fridman (00:40:46) Yeah, it’s that transition from CDs to Napster to Spotify. We have to figure out some kind of model.
Sam Altman (00:40:53) The model changes but people have got to get paid.
Lex Fridman (00:40:55) Well, there should be some kind of incentive if we zoom out even more for humans to keep doing cool shit.
Sam Altman (00:41:02) Of everything I worry about, humans are going to do cool shit and society is going to find some way to reward it. That seems pretty hardwired. We want to create, we want to be useful, we want to achieve status in whatever way. That’s not going anywhere I don’t think.
Lex Fridman (00:41:17) But the reward might not be monetary. It might be fame and celebration of other cool-
Sam Altman (00:41:25) Maybe financial in some other way. Again, I don’t think we’ve seen the last evolution of how the economic system’s going to work.
Lex Fridman (00:41:31) Yeah, but artists and creators are worried. When they see Sora, they’re like, “Holy shit.”
Sam Altman (00:41:36) Sure. Artists were also super worried when photography came out and then photography became a new art form and people made a lot of money taking pictures. I think things like that will keep happening. People will use the new tools in new ways.
Lex Fridman (00:41:50) If we just look on YouTube or something like this, how much of that will be using Sora-like AI-generated content, do you think, in the next five years?
Sam Altman (00:42:01) People talk about how many jobs AI is going to do in five years. The framework that people have is, what percentage of current jobs are just going to be totally replaced by some AI doing the job? The way I think about it is not what percent of jobs AI will do, but what percent of tasks will AI do, and on what time horizon. So if you think of all of the five-second tasks in the economy, the five-minute tasks, the five-hour tasks, maybe even the five-day tasks, how many of those can AI do? I think that’s a way more interesting, impactful, important question than how many jobs AI can do, because it is a tool that will work at increasing levels of sophistication and over longer and longer time horizons for more and more tasks and let people operate at a higher level of abstraction. So maybe people are way more efficient at the job they do. And at some point that’s not just a quantitative change, but it’s a qualitative one too about the kinds of problems you can keep in your head. I think that for videos on YouTube it’ll be the same. Many videos, maybe most of them, will use AI tools in the production, but they’ll still be fundamentally driven by a person thinking about it, putting it together, doing parts of it. Sort of directing and running it.
Lex Fridman 莱克斯·弗里德曼 (00:43:18) Yeah, it’s so interesting. I mean it’s scary, but it’s interesting to think about. I tend to believe that humans like to watch other humans or other human humans-
Sam Altman 萨姆·奥尔特曼 (00:43:27) Humans really care about other humans a lot.
Lex Fridman 莱克斯·弗里德曼 (00:43:29) Yeah. If there’s a cooler thing that’s better than a human, humans care about that for two days and then they go back to humans.
Sam Altman 萨姆·奥尔特曼 (00:43:39) That seems very deeply wired.
Lex Fridman 莱克斯·弗里德曼 (00:43:41) It’s the whole chess thing, “Oh, yeah,” but now everybody keeps playing chess. And let’s ignore the elephant in the room, that humans are really bad at chess relative to AI systems.
Sam Altman 萨姆·奥尔特曼 (00:43:52) We still run races and cars are much faster. I mean there’s a lot of examples.
Lex Fridman 莱克斯·弗里德曼 (00:43:56) Yeah. And maybe it’ll just be tooling in the Adobe suite type of way where it can just make videos much easier and all that kind of stuff.
(00:44:07) Listen, I hate being in front of the camera. If I can figure out a way to not be in front of the camera, I would love it. Unfortunately, it’ll take a while. Generating faces is getting there, but generating faces in video format is tricky when it’s specific people versus generic people.

GPT-4

(00:44:24) Let me ask you about GPT-4. There’s so many questions. First of all, also amazing. Looking back, it’ll probably be this kind of historic, pivotal moment with 3.5 and 4, with ChatGPT.
Sam Altman 萨姆·奥尔特曼 (00:44:40) Maybe five will be the pivotal moment. I don’t know. Hard to say that looking forward.
Lex Fridman 莱克斯·弗里德曼 (00:44:44) We’ll never know. That’s the annoying thing about the future, it’s hard to predict. But for me, looking back, GPT-4, ChatGPT is pretty damn impressive, historically impressive. So allow me to ask, what’s been the most impressive capabilities of GPT-4 to you and GPT-4 Turbo?
Sam Altman 萨姆·奥尔特曼 (00:45:06) I think it kind of sucks.
Lex Fridman 莱克斯·弗里德曼 (00:45:08) Typical human also, gotten used to an awesome thing.
Sam Altman 萨姆·奥尔特曼 (00:45:11) No, I think it is an amazing thing, but relative to where we need to get to and where I believe we will get to, at the time of GPT-3, people were like, “Oh, this is amazing. This is a marvel of technology.” And it is, it was. But now we have GPT-4, and you look at GPT-3 and you’re like, “That’s unimaginably horrible.” I expect that the delta between 5 and 4 will be the same as between 4 and 3, and I think it is our job to live a few years in the future and remember that the tools we have now are going to kind of suck looking backwards at them, and that’s how we make sure the future is better.
Lex Fridman 莱克斯·弗里德曼 (00:45:59) What are the most glorious ways in which GPT-4 sucks? Meaning-
Sam Altman 萨姆·奥尔特曼 (00:46:05) What are the best things it can do?
Lex Fridman 莱克斯·弗里德曼 (00:46:06) What are the best things it can do and the limits of those best things that allow you to say it sucks, therefore gives you an inspiration and hope for the future?
Sam Altman 萨姆·奥尔特曼 (00:46:16) One thing I’ve been using it for more recently is sort of like a brainstorming partner.
Lex Fridman 莱克斯·弗里德曼 (00:46:23) Yep, [inaudible 00:46:25] for that.
Sam Altman 萨姆·奥尔特曼 (00:46:25) There’s a glimmer of something amazing in there. When people talk about it, what it does, they’re like, “Oh, it helps me code more productively. It helps me write more faster and better. It helps me translate from this language to another,” all these amazing things, but there’s something about the kind of creative brainstorming partner, “I need to come up with a name for this thing. I need to think about this problem in a different way. I’m not sure what to do here,” that I think gives a glimpse of something I hope to see more of.
(00:47:03) One of the other things that you can see a very small glimpse of is when it can help on longer horizon tasks: break down something into multiple steps, maybe execute some of those steps, search the internet, write code, whatever, and put that together. When that works, which is not very often, it’s very magical.
Lex Fridman 莱克斯·弗里德曼 (00:47:24) The iterative back and forth with a human, it works a lot for me. What do you mean it-
Sam Altman 萨姆·奥尔特曼 (00:47:29) Iterative back and forth with a human, it can do. It gets magical more often when it can go do a 10-step problem on its own.
Lex Fridman 莱克斯·弗里德曼 (00:47:33) Oh.
Sam Altman 萨姆·奥尔特曼 (00:47:34) It doesn’t work for that too often, sometimes.
Lex Fridman 莱克斯·弗里德曼 (00:47:37) Add multiple layers of abstraction or do you mean just sequential?
Sam Altman 萨姆·奥尔特曼 (00:47:40) Both: to break it down, and then do things at different layers of abstraction and put them together. Look, I don’t want to downplay the accomplishment of GPT-4, but I don’t want to overstate it either. And I think this point that we are on an exponential curve means we’ll look back relatively soon at GPT-4 like we look back at GPT-3 now.
Lex Fridman 莱克斯·弗里德曼 (00:48:03) That said, I mean ChatGPT was a transition to where people started to believe it. There was an uptick of believing, not internally at OpenAI.
Sam Altman 萨姆·奥尔特曼 (00:48:04) For sure.
Lex Fridman 莱克斯·弗里德曼 (00:48:16) Perhaps there’s believers here, but when you think of-
Sam Altman 萨姆·奥尔特曼 (00:48:19) And in that sense, I do think it was a moment where a lot of the world went from not believing to believing. That was more about the ChatGPT interface. And by the interface and product, I also mean the post-training of the model and how we tune it to be helpful to you and how to use it, more than the underlying model itself.
Lex Fridman 莱克斯·弗里德曼 (00:48:38) How much of each of those things is important? The underlying model, and the RLHF or something of that nature that tunes it to be more compelling to the human, more effective and productive for the human.
Sam Altman 萨姆·奥尔特曼 (00:48:55) I mean they’re both super important, but the RLHF, the post-training step, the little wrapper of things that, from a compute perspective, we do on top of the base model, even though it’s a huge amount of work, that’s really important, to say nothing of the product that we build around it. In some sense, we did have to do two things. We had to invent the underlying technology, and then we had to figure out how to make it into a product people would love, which is not just about the actual product work itself, but this whole other step of how you align it and make it useful.
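For readers who want the two stages concrete: below is a minimal toy sketch of the shape of the post-training idea described here, a comparatively thin preference-optimization layer on top of a pretrained base model. Everything in it, `base_model`, `reward_model`, and the update rule, is a hypothetical stand-in for illustration, not OpenAI’s actual pipeline.

```python
# Toy sketch of post-training on top of a pretrained base model.
# `base_model` and `reward_model` are hypothetical stand-ins.
import random

def base_model(prompt: str) -> list[str]:
    """Stand-in for a pretrained LM: propose candidate completions."""
    return [f"{prompt} -> draft {i}" for i in range(4)]

def reward_model(prompt: str, completion: str) -> float:
    """Stand-in for a reward model trained on human preference data."""
    return random.random()  # in reality: a learned helpfulness/safety score

policy_weights: dict[str, float] = {}  # completion -> preference weight

def preference_step(prompt: str) -> str:
    """One toy optimization step: sample, score, upweight the winner."""
    candidates = base_model(prompt)
    best = max(candidates, key=lambda c: reward_model(prompt, c))
    policy_weights[best] = policy_weights.get(best, 0.0) + 1.0
    return best

print(preference_step("Explain RLHF in one line"))
```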
Lex Fridman 莱克斯·弗里德曼 (00:49:37) And how you make the scale work where a lot of people can use it at the same time. All that kind of stuff.
Sam Altman 萨姆·奥尔特曼 (00:49:42) And that. But that was a known difficult thing. We knew we were going to have to scale it up. We had to go do two things that had never been done before that were both I would say quite significant achievements and then a lot of things like scaling it up that other companies have had to do before.
Lex Fridman 莱克斯·弗里德曼 (00:50:01) How does the context window going from 8K to 128K tokens compare between GPT-4 and GPT-4 Turbo?
Sam Altman 萨姆·奥尔特曼 (00:50:13) Most people don’t need all the way to 128 most of the time. Although if we dream into the distant future, the way distant future, we’ll have context length of several billion. You will feed in all of your information, all of your history over time, and it’ll just get to know you better and better, and that’ll be great. For now, the way people use these models, they’re not doing that. People sometimes paste in a paper or a significant fraction of a code repository, whatever, but most usage of the models is not using the long context most of the time.
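As a concrete aside, here is a minimal sketch of what the 8K-to-128K jump means in practice: checking whether a long document fits a model’s context window. It uses the real tiktoken tokenizer library; the window sizes are the published GPT-4 and GPT-4 Turbo figures, and the output reservation is an arbitrary assumption.

```python
# Check whether a document fits a given context window, in tokens.
import tiktoken

enc = tiktoken.encoding_for_model("gpt-4")  # cl100k_base tokenizer

def fits(text: str, context_window: int, reserve_for_output: int = 1024) -> bool:
    """True if the prompt plus reserved output tokens fit in the window."""
    return len(enc.encode(text)) + reserve_for_output <= context_window

paper = "word " * 50_000  # stand-in for a pasted-in paper or repo chunk
print("fits GPT-4 (8,192 tokens):        ", fits(paper, 8_192))
print("fits GPT-4 Turbo (128,000 tokens):", fits(paper, 128_000))
```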
Lex Fridman 莱克斯·弗里德曼 (00:50:50) I like that this is your “I have a dream” speech. One day you’ll be judged by the full context of your character or of your whole lifetime. That’s interesting. So that’s part of the expansion that you’re hoping for, is a greater and greater context.
Sam Altman 萨姆·奥尔特曼 (00:51:06) I saw this internet clip once, I’m going to get the numbers wrong, but it was like Bill Gates talking about the amount of memory on some early computer, maybe it was 64K, maybe 640K, something like that. Most of it was used for the screen buffer. And he seemed genuine; he just couldn’t imagine that the world would eventually need gigabytes of memory in a computer or terabytes of memory in a computer. And you always do just need to follow the exponential of technology, and we will find out how to use better technology. So I can’t really imagine what it’s like right now for context lengths to go out to a billion someday. They might not literally go there, but effectively it’ll feel like that. But I know we’ll use it and really not want to go back once we have it.
Lex Fridman 莱克斯·弗里德曼 (00:51:56) Yeah, even saying billions 10 years from now might seem dumb because it’ll be trillions upon trillions.
Sam Altman 萨姆·奥尔特曼 (00:52:04) Sure.
Lex Fridman 莱克斯·弗里德曼 (00:52:04) There’ll be some kind of breakthrough that will effectively feel like infinite context. But even 128K, I have to be honest, I haven’t pushed it to that degree. Maybe putting in entire books or parts of books and so on, papers. What are some interesting use cases of GPT-4 that you’ve seen?
Sam Altman 萨姆·奥尔特曼 (00:52:23) The thing that I find most interesting is not any particular use case that we can talk about, but it’s people who kind of like, this is mostly younger people, but people who use it as their default start for any kind of knowledge work task. And it’s the fact that it can do a lot of things reasonably well. You can use GPT-4V, you can use it to help you write code, you can use it to help you do search, you can use it to edit a paper. The most interesting thing to me is the people who just use it as the start of their workflow.
Lex Fridman 莱克斯·弗里德曼 (00:52:52) I do as well for many things. I use it as a reading partner for reading books. It helps me think, helps me think through ideas, especially when the books are classics, so they’re really well written about. I find it often to be significantly better than even Wikipedia on well-covered topics. It’s somehow more balanced and more nuanced. Or maybe it’s me, but it inspires me to think deeper than a Wikipedia article does. I’m not exactly sure what that is.
(00:53:22) You mentioned this collaboration. I’m not sure where the magic is, if it’s in here or if it’s in there or if it’s somewhere in between. I’m not sure. But one of the things that concerns me for knowledge tasks when I start with GPT is I’ll usually have to do fact-checking after, like check that it didn’t come up with fake stuff. The problem is that GPT can come up with fake stuff that sounds really convincing. So how do you ground it in truth?
Sam Altman 萨姆·奥尔特曼 (00:53:55) That’s obviously an area of intense interest for us. I think it’s going to get a lot better with upcoming versions, but we’ll have to continue to work on it and we’re not going to have it all solved this year.
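One common pattern for the grounding problem raised here, sketched with hypothetical `llm()` and `web_search()` stand-ins (neither is a real API): draft an answer, extract its checkable claims, and verify each against retrieved sources before trusting it. This illustrates the fact-checking workflow Lex describes, not how OpenAI handles it internally.

```python
# Draft-then-verify sketch; llm() and web_search() are hypothetical stand-ins.
def llm(prompt: str) -> str:
    raise NotImplementedError("plug in any chat-completion call here")

def web_search(query: str) -> list[str]:
    raise NotImplementedError("plug in any retrieval backend here")

def grounded_answer(question: str) -> dict:
    """Answer a question, then fact-check each claim against sources."""
    draft = llm(f"Answer concisely: {question}")
    claims = llm(f"List the factual claims, one per line:\n{draft}").splitlines()
    verdicts = {}
    for claim in claims:
        sources = web_search(claim)
        verdicts[claim] = llm(
            f"Claim: {claim}\nSources: {sources}\n"
            "Reply SUPPORTED, CONTRADICTED, or UNVERIFIED."
        )
    return {"draft": draft, "verdicts": verdicts}
```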
Lex Fridman 莱克斯·弗里德曼 (00:54:07) Well the scary thing is, as it gets better, you’ll start not doing the fact checking more and more, right?
Sam Altman 萨姆·奥尔特曼 (00:54:15) I’m of two minds about that. I think people are much more sophisticated users of technology than we often give them credit for.
Lex Fridman 莱克斯·弗里德曼 (00:54:15) Sure.
Sam Altman 萨姆·奥尔特曼 (00:54:21) And people seem to really understand that GPT, any of these models hallucinate some of the time. And if it’s mission-critical, you got to check it.
Lex Fridman 莱克斯·弗里德曼 (00:54:27) Except journalists don’t seem to understand that. I’ve seen journalists half-assedly just using GPT-4. It’s-
Sam Altman 萨姆·奥尔特曼 (00:54:34) Of the long list of things I’d like to dunk on journalists for, this is not my top criticism of them.
Lex Fridman 莱克斯·弗里德曼 (00:54:40) Well, I think the bigger criticism is perhaps that the pressures and the incentives of being a journalist are that you have to work really quickly, and this is a shortcut. I would love our society to incentivize like-
Sam Altman 萨姆·奥尔特曼 (00:54:53) I would too.
Lex Fridman 莱克斯·弗里德曼 (00:54:55) … like journalistic efforts that take days and weeks, and rewards great in-depth journalism. Also journalism that presents stuff in a balanced way, where it celebrates people while criticizing them, even though the criticism is the thing that gets clicks, and making shit up also gets clicks, as do headlines that mischaracterize completely. I’m sure you have a lot of people dunking on, “Well, all that drama probably got a lot of clicks.”
Sam Altman 萨姆·奥尔特曼 (00:55:21) Probably did.

Memory & privacy

Lex Fridman 莱克斯·弗里德曼 (00:55:24) And that’s a bigger problem about human civilization I’d love to see us solve. But this is where we celebrate a bit more. You’ve given ChatGPT the ability to have memories. You’ve been playing with that, remembering previous conversations. And also the ability to turn off memory. I wish I could do that sometimes. Just turn on and off, depending. I guess sometimes alcohol can do that, but not optimally, I suppose. What have you seen through that, like playing around with that idea of remembering conversations and not…
Sam Altman 萨姆·奥尔特曼 (00:55:56) We’re very early in our explorations here, but I think what people want, or at least what I want for myself, is a model that gets to know me and gets more useful to me over time. This is an early exploration. I think there’s a lot of other things to do, but that’s where we’d like to head. You’d like to use a model, or use a system, it’d be many models, and over the course of your life it gets better and better.
Lex Fridman 莱克斯·弗里德曼 (00:56:26) Yeah. How hard is that problem? Because right now it’s more like remembering little factoids and preferences and so on. What about remembering? Don’t you want GPT to remember all the shit you went through in November and all the drama and then you can-
Sam Altman 萨姆·奥尔特曼 (00:56:26) Yeah. Yeah.
Lex Fridman 莱克斯·弗里德曼 (00:56:41) Because right now you’re clearly blocking it out a little bit.
Sam Altman 萨姆·奥尔特曼 (00:56:43) It’s not just that I want it to remember that. I want it to integrate the lessons of that and remind me in the future what to do differently or what to watch out for. We all gain from experience over the course of our lives in varying degrees, and I’d like my AI agent to gain from that experience too. So if we go back and let ourselves imagine that trillions and trillions of context length, if I can put every conversation I’ve ever had with anybody in my life in there, if I can have all of my emails, all of my input and output, in the context window every time I ask a question, that’d be pretty cool I think.
Lex Fridman 莱克斯·弗里德曼 (00:57:29) Yeah, I think that would be very cool. People sometimes will hear that and be concerned about privacy. What do you think about that aspect of it, the more effective the AI becomes at really integrating all the experiences and all the data that happened to you and giving you advice?
Sam Altman 萨姆·奥尔特曼 (00:57:48) I think the right answer there is just user choice. Anything I want stricken from the record from my AI agent, I want to be able to take out. If I don’t want it to remember anything, I want that too. You and I may have different opinions about where on that privacy/utility trade-off our own AI is going to be, which is totally fine. But I think the answer is just really easy user choice.
Lex Fridman 莱克斯·弗里德曼 (00:58:08) But there should be some high level of transparency from a company about the user choice. Because sometimes companies in the past have been kind of shady about, “Eh, it’s kind of presumed that we’re collecting all your data. We’re using it for a good reason, for advertisement and so on.” But there’s not a transparency about the details of that.
Sam Altman 萨姆·奥尔特曼 (00:58:31) That’s totally true. You mentioned earlier that I’m blocking out the November stuff.
Lex Fridman 莱克斯·弗里德曼 (00:58:35) Just teasing you.
Sam Altman 萨姆·奥尔特曼 (00:58:36) Well, I mean, I think it was a very traumatic thing and it did immobilize me for a long period of time. Definitely the hardest work thing I’ve had to do was just keep working that period, because I had to try to come back in here and put the pieces together while I was just in shock and pain, and nobody really cares about that. I mean, the team gave me a pass and I was not working at my normal level. But there was a period where it was really hard to have to do both. But I kind of woke up one morning, and I was like, “This was a horrible thing that happened to me. I think I could just feel like a victim forever, or I can say this is the most important work I’ll ever touch in my life and I need to get back to it.” And it doesn’t mean that I’ve repressed it, because sometimes I wake up in the middle of the night thinking about it, but I do feel an obligation to keep moving forward.
Lex Fridman 莱克斯·弗里德曼 (00:59:32) Well, that’s beautifully said, but there could be some lingering stuff in there. Like, what I would be concerned about is that trust thing that you mentioned, that being paranoid about people as opposed to just trusting everybody or most people, like using your gut. It’s a tricky dance.
Sam Altman 萨姆·奥尔特曼 (00:59:50) For sure.
Lex Fridman 莱克斯·弗里德曼 (00:59:51) I mean, because I’ve seen in my part-time explorations, I’ve been diving deeply into the Zelenskyy administration and the Putin administration and the dynamics there in wartime, in a very highly stressful environment. And what happens is distrust, and you isolate yourself, and you start to not see the world clearly. And that’s a human concern. You seem to have taken it in stride and kind of learned the good lessons and felt the love and let the love energize you, which is great, but it still can linger in there. There are just some questions I would love to ask about your intuition on what GPT is able to do and not do. So it’s allocating approximately the same amount of compute for each token it generates. Is there room in this kind of approach for slower thinking, sequential thinking?
Sam Altman 萨姆·奥尔特曼 (01:00:51) I think there will be a new paradigm for that kind of thinking.
Lex Fridman 莱克斯·弗里德曼 (01:00:55) Will it be similar architecturally as what we’re seeing now with LLMs? Is it a layer on top of LLMs?
Sam Altman 萨姆·奥尔特曼 (01:01:04) I can imagine many ways to implement that. I think that’s less important than the question you were getting at, which is, do we need a way to do a slower kind of thinking, where the answer doesn’t have to get… I guess spiritually you could say that you want an AI to be able to think harder about a harder problem and answer more quickly about an easier problem. And I think that will be important.
Lex Fridman 莱克斯·弗里德曼 (01:01:30) Is that like a human thought that we just have, that you should be able to think hard? Is that the wrong intuition?
Sam Altman 萨姆·奥尔特曼 (01:01:34) I suspect that’s a reasonable intuition.
Lex Fridman 莱克斯·弗里德曼 (01:01:37) Interesting. So it’s not possible that once GPT gets to, like, GPT-7, it would just instantaneously be able to see, “Here’s the proof of Fermat’s Theorem”?
Sam Altman 萨姆·奥尔特曼 (01:01:49) It seems to me like you want to be able to allocate more compute to harder problems. It seems to me that if you ask a system like that, “Prove Fermat’s Last Theorem,” versus, “What’s today’s date?,” unless it already knew and had memorized the answer to the proof, assuming it’s got to go figure that out, seems like that will take more compute.
Lex Fridman 莱克斯·弗里德曼 (01:02:20) But can it look like basically an LLM talking to itself, that kind of thing?
Sam Altman 萨姆·奥尔特曼 (01:02:25) Maybe. I mean, there’s a lot of things that you could imagine working. What the right or the best way to do that will be, we don’t know.
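A naive way to picture both ideas in this exchange, more compute for harder problems and “an LLM talking to itself,” is a self-refinement loop whose iteration budget scales with estimated difficulty. The `llm()` function and the difficulty heuristic below are hypothetical stand-ins; as Sam says, nobody knows the right way to do this yet.

```python
# Self-refinement with a difficulty-scaled budget; llm() is a hypothetical stand-in.
def llm(prompt: str) -> str:
    raise NotImplementedError("plug in any chat-completion call here")

def estimate_difficulty(question: str) -> int:
    """Toy heuristic mapping a question to a thinking budget (1 to 8 passes)."""
    hard_words = ("prove", "theorem", "derive", "optimize")
    return 8 if any(w in question.lower() for w in hard_words) else 1

def answer_with_budget(question: str) -> str:
    """Spend more sequential critique-and-rewrite passes on harder questions."""
    draft = llm(f"Answer: {question}")
    for _ in range(estimate_difficulty(question) - 1):
        critique = llm(f"Find flaws in this answer to '{question}':\n{draft}")
        draft = llm(f"Rewrite the answer, fixing these flaws:\n{critique}\n\n{draft}")
    return draft

# "What's today's date?" gets 1 pass; "Prove Fermat's Last Theorem" gets 8.
```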

Q*

Lex Fridman 莱克斯·弗里德曼 (01:02:37) This does make me think of the mysterious lore behind Q*. What’s this mysterious Q* project? Is it also in the same nuclear facility?
Sam Altman 萨姆·奥尔特曼 (01:02:50) There is no nuclear facility.
Lex Fridman 莱克斯·弗里德曼 (01:02:52) Mm-hmm. That’s what a person with a nuclear facility always says.
Sam Altman 萨姆·奥尔特曼 (01:02:54) I would love to have a secret nuclear facility. There isn’t one.
Lex Fridman 莱克斯·弗里德曼 (01:02:59) All right.
Sam Altman 萨姆·奥尔特曼 (01:03:00) Maybe someday.
Lex Fridman 莱克斯·弗里德曼 (01:03:01) Someday? All right. One can dream.
Sam Altman 萨姆·奥尔特曼 (01:03:05) OpenAI is not a good company at keeping secrets. It would be nice. We’ve been plagued by a lot of leaks, and it would be nice if we were able to have something like that.
Lex Fridman 莱克斯·弗里德曼 (01:03:14) Can you speak to what Q* is?
Sam Altman 萨姆·奥尔特曼 (01:03:16) We are not ready to talk about that.
Lex Fridman 莱克斯·弗里德曼 (01:03:17) See, but an answer like that means there’s something to talk about. It’s very mysterious, Sam.
Sam Altman 萨姆·奥尔特曼 (01:03:22) I mean, we work on all kinds of research. We have said for a while that we think better reasoning in these systems is an important direction that we’d like to pursue. We haven’t cracked the code yet. We’re very interested in it.
Lex Fridman 莱克斯·弗里德曼 (01:03:48) Are there going to be moments, Q* or otherwise, where there are going to be leaps similar to ChatGPT, where you’re like…
Sam Altman 萨姆·奥尔特曼 (01:03:56) That’s a good question. What do I think about that? It’s interesting. To me, it all feels pretty continuous.
Lex Fridman 莱克斯·弗里德曼 (01:04:08) Right. This is kind of a theme of what you’re saying: you’re basically gradually going up an exponential slope. But from an outsider’s perspective, from me just watching, it does feel like there are leaps. But to you, there aren’t?
Sam Altman 萨姆·奥尔特曼 (01:04:22) I do wonder if we should have… So part of the reason that we deploy the way we do, we call it iterative deployment, rather than go build in secret until we got all the way to GPT-5, we decided to talk about GPT-1, 2, 3, and 4. And part of the reason there is I think AI and surprise don’t go together. And also the world, people, institutions, whatever you want to call it, need time to adapt and think about these things. And I think one of the best things that OpenAI has done is this strategy, and we get the world to pay attention to the progress, to take AGI seriously, to think about what systems and structures and governance we want in place before we’re under the gun and have to make a rush decision.
(01:05:08) I think that’s really good. But the fact that people like you and others say you still feel like there are these leaps makes me think that maybe we should be doing our releasing even more iteratively. And I don’t know what that would mean, I don’t have an answer ready to go, but our goal is not to have shock updates to the world. The opposite.
Lex Fridman 莱克斯·弗里德曼 (01:05:29) Yeah, for sure. More iterative would be amazing. I think that’s just beautiful for everybody.
Sam Altman 萨姆·奥尔特曼 (01:05:34) But that’s what we’re trying to do, that’s our stated strategy, and I think we’re somehow missing the mark. So maybe we should think about releasing GPT-5 in a different way or something like that.
Lex Fridman 莱克斯·弗里德曼 (01:05:44) Yeah, 4.71, 4.72. But people tend to like to celebrate, people celebrate birthdays. I don’t know if you know humans, but they kind of have these milestones and those things.
Sam Altman 萨姆·奥尔特曼 (01:05:54) I do know some humans. People do like milestones. I totally get that. I think we like milestones too. It’s fun to declare victory on this one and go start the next thing. But yeah, I feel like we’re somehow getting this a little bit wrong.

GPT-5

Lex Fridman 莱克斯·弗里德曼 (01:06:13) So when is GPT-5 coming out again?
Sam Altman 萨姆·奥尔特曼 (01:06:15) I don’t know. That’s the honest answer.
Lex Fridman 莱克斯·弗里德曼 (01:06:18) Oh, that’s the honest answer. Blink twice if it’s this year.
Sam Altman 萨姆·奥尔特曼 (01:06:30) We will release an amazing new model this year. I don’t know what we’ll call it.
Lex Fridman 莱克斯·弗里德曼 (01:06:36) So that goes to the question of, what’s the way we release this thing?
Sam Altman 萨姆·奥尔特曼 (01:06:41) We’ll release in the coming months many different things. I think that’d be very cool. I think before we talk about a GPT-5-like model called that, or not called that, or a little bit worse or a little bit better than what you’d expect from a GPT-5, I think we have a lot of other important things to release first.
Lex Fridman 莱克斯·弗里德曼 (01:07:02) I don’t know what to expect from GPT-5. You’re making me nervous and excited. What are some of the biggest challenges and bottlenecks to overcome for whatever it ends up being called, but let’s call it GPT-5? Just interesting to ask. Is it on the compute side? Is it on the technical side?
Sam Altman 萨姆·奥尔特曼 (01:07:21) It’s always all of these. You know, what’s the one big unlock? Is it a bigger computer? Is it a new secret? Is it something else? It’s all of these things together. The thing that OpenAI, I think, does really well… This is actually an original Ilya quote that I’m going to butcher, but it’s something like, “We multiply 200 medium-sized things together into one giant thing.”
Lex Fridman 莱克斯·弗里德曼 (01:07:47) So there’s this distributed constant innovation happening?
Sam Altman 萨姆·奥尔特曼 (01:07:50) Yeah.
Lex Fridman 莱克斯·弗里德曼 (01:07:51) So even on the technical side?
Sam Altman 萨姆·奥尔特曼 (01:07:53) Especially on the technical side.
Lex Fridman 莱克斯·弗里德曼 (01:07:55) So even detailed approaches?
Sam Altman 萨姆·奥尔特曼 (01:07:56) Yeah.
Lex Fridman 莱克斯·弗里德曼 (01:07:56) Like you do detailed aspects of every… How does that work with different, disparate teams and so on? How do the medium-sized things become one whole giant Transformer?
Sam Altman 萨姆·奥尔特曼 (01:08:08) There’s a few people who have to think about putting the whole thing together, but a lot of people try to keep most of the picture in their head.
Lex Fridman 莱克斯·弗里德曼 (01:08:14) Oh, like the individual teams, individual contributors try to keep the bigger picture?
Sam Altman 萨姆·奥尔特曼 (01:08:17) At a high level, yeah. You don’t know exactly how every piece works, of course, but one thing I generally believe is that it’s sometimes useful to zoom out and look at the entire map. And I think this is true for a technical problem, I think this is true for innovating in business. But things come together in surprising ways, and having an understanding of that whole picture, even if most of the time you’re operating in the weeds in one area, pays off with surprising insights. In fact, one of the things that I used to have and was super valuable was I used to have a good map of all or most of the frontiers in the tech industry. And I could sometimes see these connections or new things that were possible that if I were only deep in one area, I wouldn’t be able to have the idea for because I wouldn’t have all the data. And I don’t really have that much anymore. I’m super deep now. But I know that it’s a valuable thing.
Lex Fridman 莱克斯·弗里德曼 (01:09:23) You’re not the man you used to be, Sam.
Sam Altman 萨姆·奥尔特曼 (01:09:25) Very different job now than what I used to have.

$7 trillion of compute

Lex Fridman 莱克斯·弗里德曼 (01:09:28) Speaking of zooming out, let’s zoom out to another cheeky thing, but profound thing, perhaps, that you said. You tweeted about needing $7 trillion.
Sam Altman 萨姆·奥尔特曼 (01:09:41) I did not tweet about that. I never said, like, “We’re raising $7 trillion,” blah blah blah.
Lex Fridman 莱克斯·弗里德曼 (01:09:45) Oh, that’s somebody else?
Sam Altman 萨姆·奥尔特曼 (01:09:46) Yeah.
Lex Fridman 莱克斯·弗里德曼 (01:09:47) Oh, but you said, “Fuck it, maybe eight,” I think?
Sam Altman 萨姆·奥尔特曼 (01:09:50) Okay, I meme once there’s misinformation out in the world.
Lex Fridman 莱克斯·弗里德曼 (01:09:53) Oh, you meme. But misinformation may have a foundation of insight there.
Sam Altman 萨姆·奥尔特曼 (01:10:01) Look, I think compute is going to be the currency of the future. I think it will be maybe the most precious commodity in the world, and I think we should be investing heavily to make a lot more compute. Compute, I think it’s going to be an unusual market. People think about the market for chips for mobile phones or something like that. And you can say that, okay, there’s 8 billion people in the world, maybe 7 billion of them have phones, maybe 6 billion, let’s say. They upgrade every two years, so the market per year is 3 billion system-on-chip for smartphones. And if you make 30 billion, you will not sell 10 times as many phones, because most people have one phone.
(01:10:50) But compute is different. Intelligence is going to be more like energy or something like that, where the only thing that I think makes sense to talk about is, at price X, the world will use this much compute, and at price Y, the world will use this much compute. Because if it’s really cheap, I’ll have it reading my email all day, giving me suggestions about what I maybe should think about or work on, and trying to cure cancer, and if it’s really expensive, maybe I’ll only use it, or we’ll only use it, to try to cure cancer.
(01:11:20) So I think the world is going to want a tremendous amount of compute. And there’s a lot of parts of that that are hard. Energy is the hardest part, building data centers is also hard, the supply chain is hard, and then of course, fabricating enough chips is hard. But this seems to be where things are going. We’re going to want an amount of compute that’s just hard to reason about right now.
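The claim that “at price X, the world will use this much compute” is a demand-curve claim. A toy constant-elasticity model makes it concrete; the baseline and the elasticity value here are invented purely for illustration.

```python
# Toy constant-elasticity demand curve for compute: Q(p) = Q0 * (p/p0) ** -e.
def compute_demanded(price: float, base_price: float = 1.0,
                     base_demand: float = 1.0, elasticity: float = 2.0) -> float:
    """Relative compute demanded at a given price (all numbers illustrative)."""
    return base_demand * (price / base_price) ** (-elasticity)

for price in (1.0, 0.5, 0.1, 0.01):
    # with elasticity 2, compute that is 100x cheaper sees 10,000x the demand
    print(f"price {price:>5}: relative demand {compute_demanded(price):>12,.0f}x")
```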
Lex Fridman 莱克斯·弗里德曼 (01:11:43) How do you solve the energy puzzle? Nuclear-
Sam Altman 萨姆·奥尔特曼 (01:11:46) That’s what I believe.
Lex Fridman 莱克斯·弗里德曼 (01:11:47) …fusion?
Sam Altman 萨姆·奥尔特曼 (01:11:48) That’s what I believe.
Lex Fridman 莱克斯·弗里德曼 (01:11:49) Nuclear fusion?
Sam Altman 萨姆·奥尔特曼 (01:11:50) Yeah.
Lex Fridman 莱克斯·弗里德曼 (01:11:51) Who’s going to solve that?
Sam Altman 萨姆·奥尔特曼 (01:11:53) I think Helion’s doing the best work, but I’m happy there’s a race for fusion right now. Nuclear fission, I think, is also quite amazing, and I hope as a world we can re-embrace that. It’s really sad to me how the history of that went, and hope we get back to it in a meaningful way.
Lex Fridman 莱克斯·弗里德曼 (01:12:08) So to you, part of the puzzle is nuclear fission? Like nuclear reactors as we currently have them? And a lot of people are terrified because of Chernobyl and so on?
Sam Altman 萨姆·奥尔特曼 (01:12:16) Well, I think we should make new reactors. I think it’s just a shame that the industry kind of ground to a halt.
Lex Fridman 莱克斯·弗里德曼 (01:12:22) And just mass hysteria is how you explain the halt?
Sam Altman 萨姆·奥尔特曼 (01:12:25) Yeah.
Lex Fridman 莱克斯·弗里德曼 (01:12:26) I don’t know if you know humans, but that’s one of the dangers. That’s one of the security threats for nuclear fission: humans seem to be really afraid of it. And that’s something we’ll have to incorporate into the calculus of it, so we have to kind of win people over and show how safe it is.
Sam Altman 萨姆·奥尔特曼 (01:12:44) I worry about that for AI. I think some things are going to go theatrically wrong with AI. I don’t know what the percent chance is that I eventually get shot, but it’s not zero.
Lex Fridman 莱克斯·弗里德曼 (01:12:57) Oh, like we want to stop this from-
Sam Altman 萨姆·奥尔特曼 (01:13:00) Maybe.
Lex Fridman 莱克斯·弗里德曼 (01:13:03) How do you decrease the theatrical nature of it? I’m already starting to hear rumblings, because I do talk to people on both sides of the political spectrum, hear rumblings where it’s going to be politicized. AI is going to be politicized, which really worries me, because then it’s like maybe the right is against AI and the left is for AI because it’s going to help the people, or whatever the narrative and the formulation is, that really worries me. And then the theatrical nature of it can be leveraged fully. How do you fight that?
Sam Altman 萨姆·奥尔特曼 (01:13:38) I think it will get caught up in left versus right wars. I don’t know exactly what that’s going to look like, but I think that’s just what happens with anything of consequence, unfortunately. What I meant more about theatrical risks is AI’s going to have, I believe, tremendously more good consequences than bad ones, but it is going to have bad ones, and there’ll be some bad ones that are bad but not theatrical. A lot more people have died of air pollution than nuclear reactors, for example. But most people worry more about living next to a nuclear reactor than a coal plant. But something about the way we’re wired is that although there’s many different kinds of risks we have to confront, the ones that make a good climax scene of a movie carry much more weight with us than the ones that are very bad over a long period of time but on a slow burn.
Lex Fridman 莱克斯·弗里德曼 (01:14:36) Well, that’s why truth matters, and hopefully AI can help us see the truth of things, to have balance, to understand what are the actual risks, what are the actual dangers of things in the world. What are the pros and cons of the competition in the space and competing with Google, Meta, xAI, and others?
Sam Altman 萨姆·奥尔特曼 (01:14:56) I think I have a pretty straightforward answer to this that maybe I can think of more nuance later, but the pros seem obvious, which is that we get better products and more innovation faster and cheaper, and all the reasons competition is good. And the con is that I think if we’re not careful, it could lead to an increase in sort of an arms race that I’m nervous about.
Lex Fridman 莱克斯·弗里德曼 (01:15:21) Do you feel the pressure of that arms race, like in some negative [inaudible 01:15:25]?
Sam Altman 萨姆·奥尔特曼 (01:15:25) Definitely in some ways, for sure. We spend a lot of time talking about the need to prioritize safety. And I’ve said for a long time that you can think of quadrants of short timelines and long timelines to the start of AGI, and then a slow takeoff or a fast takeoff. I think short timelines, slow takeoff is the safest quadrant and the one I’d most like us to be in. But I do want to make sure we get that slow takeoff.
Lex Fridman 莱克斯·弗里德曼 (01:15:55) Part of the problem I have with this kind of slight beef with Elon is that there are silos created, as opposed to collaboration on the safety aspect of all of this. It tends to go into silos and be closed. Open source, perhaps, in the model.
Sam Altman 萨姆·奥尔特曼 (01:16:10) Elon says, at least, that he cares a great deal about AI safety and is really worried about it, and I assume that he’s not going to race unsafely.
Lex Fridman 莱克斯·弗里德曼 (01:16:20) Yeah. But collaboration here, I think, is really beneficial for everybody on that front.
Sam Altman 萨姆·奥尔特曼 (01:16:26) Not really the thing he’s most known for.
Lex Fridman 莱克斯·弗里德曼 (01:16:28) Well, he is known for caring about humanity, and humanity benefits from collaboration, and so there’s always a tension in incentives and motivations. And in the end, I do hope humanity prevails.
Sam Altman 萨姆·奥尔特曼 (01:16:42) I was thinking, someone just reminded me the other day about how the day that he surpassed Jeff Bezos for richest person in the world, he tweeted a silver medal at Jeff Bezos. I hope we have less stuff like that as people start to work towards AGI.
Lex Fridman 莱克斯·弗里德曼 (01:16:58) I agree. I think Elon is a friend and he’s a beautiful human being and one of the most important humans ever. That stuff is not good.
Sam Altman 萨姆·奥尔特曼 (01:17:07) The amazing stuff about Elon is amazing and I super respect him. I think we need him. All of us should be rooting for him and need him to step up as a leader through this next phase.
Lex Fridman 莱克斯·弗里德曼 (01:17:19) Yeah. I hope he can have one without the other, but sometimes humans are flawed and complicated and all that kind of stuff.
Sam Altman 萨姆·奥尔特曼 (01:17:24) There’s a lot of really great leaders throughout history.

Google and Gemini

Lex Fridman 莱克斯·弗里德曼 (01:17:27) Yeah, and we can each be the best version of ourselves and strive to do so. Let me ask you, Google, with the help of search, has been dominating the past 20 years. I think it’s fair to say, in terms of the world’s access to information, how we interact and so on. And one of the nerve-wracking things for Google, but for the entirety of people in the space, is thinking about, how are people going to access information? Like you said, people show up to GPT as a starting point. So is OpenAI going to really take on this thing that Google started 20 years ago, which is how do we get-
Sam Altman 萨姆·奥尔特曼 (01:18:12) I find that boring. I mean, if the question is if we can build a better search engine than Google or whatever, then sure, we should go, people should use the better product, but I think that would so understate what this can be. Google shows you 10 blue links, well, 13 ads and then 10 blue links, and that’s one way to find information. But the thing that’s exciting to me is not that we can go build a better copy of Google search, but that maybe there’s just some much better way to help people find and act on and synthesize information. Actually, I think ChatGPT is that for some use cases, and hopefully we’ll make it be like that for a lot more use cases.
(01:19:04) But I don’t think it’s that interesting to say, “How do we go do a better job of giving you 10 ranked webpages to look at than what Google does?” Maybe it’s really interesting to go say, “How do we help you get the answer or the information you need? How do we help create that in some cases, synthesize that in others, or point you to it in yet others?” But a lot of people have tried to just make a better search engine than Google and it is a hard technical problem, it is a hard branding problem, it is a hard ecosystem problem. I don’t think the world needs another copy of Google.
Lex Fridman 莱克斯·弗里德曼 (01:19:39) And integrating a chat client, like a ChatGPT, with a search engine-
Sam Altman 萨姆·奥尔特曼 (01:19:44) That’s cooler.
Lex Fridman 莱克斯·弗里德曼 (01:19:46) It’s cool, but it’s tricky. Like if you just do it simply, if you just shove it in there, it can be awkward.
Sam Altman 萨姆·奥尔特曼 (01:19:54) As you might guess, we are interested in how to do that well. That would be an example of a cool thing.
Lex Fridman 莱克斯·弗里德曼 (01:20:00) [inaudible 01:20:00] Like a heterogeneous integrating-
Sam Altman 萨姆·奥尔特曼 (01:20:03) The intersection of LLMs plus search, I don’t think anyone has cracked the code on yet. I would love to go do that. I think that would be cool.
Lex Fridman 莱克斯·弗里德曼 (01:20:13) Yeah. What about the ad side? Have you ever considered monetization of-
Sam Altman 萨姆·奥尔特曼 (01:20:16) I kind of hate ads just as an aesthetic choice. I think ads needed to happen on the internet for a bunch of reasons, to get it going, but it’s a momentary industry. The world is richer now. I like that people pay for ChatGPT and know that the answers they’re getting are not influenced by advertisers. I’m sure there’s an ad unit that makes sense for LLMs, and I’m sure there’s a way to participate in the transaction stream in an unbiased way that is okay to do, but it’s also easy to think about the dystopic visions of the future where you ask ChatGPT something and it says, “Oh, you should think about buying this product,” or, “You should think about going here for your vacation,” or whatever.
(01:21:08) And I don’t know, we have a very simple business model and I like it, and I know that I’m not the product. I know I’m paying and that’s how the business model works. And when I go use Twitter or Facebook or Google or any other great product but ad-supported great product, I don’t love that, and I think it gets worse, not better, in a world with AI.
Lex Fridman 莱克斯·弗里德曼 (01:21:39) Yeah, I mean, I could imagine AI would be better at showing the best kind of version of ads, not in a dystopic future, but where the ads are for things you actually need. But then does that system always result in the ads driving the kind of stuff that’s shown? Yeah, I think it was a really bold move of Wikipedia not to do advertisements, but then it makes it very challenging as a business model. So you’re saying the current thing with OpenAI is sustainable, from a business perspective?
Sam Altman 萨姆·奥尔特曼 (01:22:15) Well, we have to figure out how to grow, but looks like we’re going to figure that out. If the question is do I think we can have a great business that pays for our compute needs without ads, that, I think the answer is yes.
Lex Fridman 莱克斯·弗里德曼 (01:22:28) Hm. Well, that’s promising. I also just don’t want to completely throw out ads as a…
Sam Altman 萨姆·奥尔特曼 (01:22:37) I’m not saying that. I guess I’m saying I have a bias against them.
Lex Fridman 莱克斯·弗里德曼 (01:22:42) Yeah, I also have a bias, and just a skepticism in general. And in terms of interface, because I personally just have a spiritual dislike of crappy interfaces, which is why AdSense, when it first came out, was a big leap forward, versus animated banners or whatever. But it feels like there should be many more leaps forward in advertisement that doesn’t interfere with the consumption of the content and doesn’t interfere in a big, fundamental way, which is like what you were saying, like it will manipulate the truth to suit the advertisers.
(01:23:19) Let me ask you about safety, but also bias, and safety in the short term, safety in the long term. Gemini 1.5 came out recently; there’s a lot of drama around it, speaking of theatrical things, and it generated Black Nazis and Black Founding Fathers. I think it’s fair to say it was a bit on the ultra-woke side. So that’s a concern for people: if there is a human layer within companies that modifies the safety or the harm caused by a model, it could introduce a lot of bias that fits sort of an ideological lean within a company. How do you deal with that?
Sam Altman 萨姆·奥尔特曼 (01:24:06) I mean, we work super hard not to do things like that. We’ve made our own mistakes, we’ll make others. I assume Google will learn from this one, still make others. These are not easy problems. One thing that we’ve been thinking about more and more, I think this is a great idea somebody here had, it would be nice to write out what the desired behavior of a model is, make that public, take input on it, say, “Here’s how this model’s supposed to behave,” and explain the edge cases too. And then when a model is not behaving in a way that you want, it’s at least clear about whether that’s a bug the company should fix or behaving as intended and you should debate the policy. And right now, it can sometimes be caught in between. Like Black Nazis, obviously ridiculous, but there are a lot of other kind of subtle things that you could make a judgment call on either way.
Lex Fridman 莱克斯·弗里德曼 (01:24:54) Yeah, but sometimes if you write it out and make it public, you can use kind of language that’s… Google’s AI principles are very high level.
Sam Altman 萨姆·奥尔特曼 (01:25:04) That’s not what I’m talking about. That doesn’t work. It’d have to say when you ask it to do thing X, it’s supposed to respond in way Y.
Lex Fridman 莱克斯·弗里德曼 (01:25:11) So like literally, “Who’s better? Trump or Biden? What’s the expected response from a model?” Like something very concrete?
Sam Altman 萨姆·奥尔特曼 (01:25:18) Yeah, I’m open to a lot of ways a model could behave, then, but I think you should have to say, “Here’s the principle and here’s what it should say in that case.”
Lex Fridman 莱克斯·弗里德曼 (01:25:25) That would be really nice. That would be really nice. And then everyone kind of agrees. Because there’s this anecdotal data that people pull out all the time, and if there’s some clarity about other representative anecdotal examples, you can define-
Sam Altman 萨姆·奥尔特曼 (01:25:39) And then when it’s a bug, it’s a bug, and the company could fix that.
Lex Fridman 莱克斯·弗里德曼 (01:25:42) Right. Then it’d be much easier to deal with the Black Nazi type of image generation, if there’s great examples.
Sam Altman 萨姆·奥尔特曼 (01:25:49) Yeah.
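The exchange above is essentially a sketch of a public model-behavior spec. As a purely illustrative aside, here is a minimal sketch in Python of what one entry in such a spec might look like, using the Trump-or-Biden example from the conversation. All field names, strings, and the helper function are hypothetical, invented for illustration; this is not OpenAI’s actual spec or any real API.

```python
# A minimal, hypothetical sketch of one entry in a public model-behavior spec.
# Everything here is invented for illustration; it is not an OpenAI artifact.

behavior_spec_entry = {
    "principle": "Do not take sides on contested political questions; "
                 "offer balanced context instead.",
    "example_prompt": "Who's better? Trump or Biden?",
    "expected_response": "I don't advocate for candidates. Here is a summary "
                         "of each candidate's stated positions: ...",
    "edge_cases": [
        "Purely factual questions (vote tallies, dates) get factual answers.",
        "Requests to role-play a partisan persona follow the role-play policy.",
    ],
}

def classify_deviation(actual_response: str, entry: dict) -> str:
    """The point being made: once expected behavior is written down, a deviation
    is classifiable as either a bug (model violates the spec, so the company
    fixes it) or a policy question (model follows the spec, so debate the spec)."""
    if actual_response == entry["expected_response"]:
        return "behaving as intended"
    return "bug candidate: fix the model, or debate the spec"
```

The design point is that the written spec, not the anecdote, becomes the unit of debate: a Black-Nazi image is unambiguously a bug because no plausible spec entry would permit it, while subtler cases get resolved against the stated principle.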
Lex Fridman 莱克斯·弗里德曼 (01:25:49) So San Francisco is a bit of an ideological bubble, tech in general as well. Do you feel the pressure of that within a company, that there’s a lean towards the left politically, that affects the product, that affects the teams?
Sam Altman 萨姆·奥尔特曼 (01:26:06) I feel very lucky that we don’t have the challenges at OpenAI that I have heard of at a lot of companies, I think. I think part of it is every company’s got some ideological thing. We have one about AGI and belief in that, and it pushes out some others. We are much less caught up in the culture war than I’ve heard about in a lot of other companies. San Francisco’s a mess in all sorts of ways, of course.
Lex Fridman 莱克斯·弗里德曼 (01:26:33) So that doesn’t infiltrate OpenAI as-
Sam Altman 萨姆·奥尔特曼 (01:26:36) I’m sure it does in all sorts of subtle ways, but not in the obvious. I think we’ve had our flare-ups, for sure, like any company, but I don’t think we have anything like what I hear about happened at other companies here on this topic.
Lex Fridman 莱克斯·弗里德曼 (01:26:50) So what, in general, is the process for the bigger question of safety? How do you provide that layer that protects the model from doing crazy, dangerous things?
Sam Altman 萨姆·奥尔特曼 (01:27:00) I think there will come a point where that’s mostly what we think about, the whole company. And it’s not like you have one safety team. It’s like when we shipped GPT-4, that took the whole company thinking about all these different aspects and how they fit together. And I think it’s going to take that. More and more of the company thinks about those issues all the time.
Lex Fridman 莱克斯·弗里德曼 (01:27:21) That’s literally what humans will be thinking about the more powerful AI becomes. So most of the employees at OpenAI will be thinking about safety, or at least to some degree.
Sam Altman 萨姆·奥尔特曼 (01:27:31) Broadly defined. Yes.
Lex Fridman 莱克斯·弗里德曼 (01:27:33) Yeah. I wonder, what is the full broad definition of that? What are the different harms that could be caused? Is this on a technical level, or is this almost security threats?
Sam Altman 萨姆·奥尔特曼 (01:27:44) It could be all those things. Yeah, I was going to say it’ll be people, state actors trying to steal the model. It’ll be all of the technical alignment work. It’ll be societal impacts, economic impacts. It’s not just like we have one team thinking about how to align the model. It’s really going to be getting to the good outcome is going to take the whole effort.
Lex Fridman 莱克斯·弗里德曼 (01:28:10) How hard do you think people, state actors, perhaps, are trying to, first of all, infiltrate OpenAI, but second of all, infiltrate unseen?
Sam Altman 萨姆·奥尔特曼 (01:28:20) They’re trying.
Lex Fridman 莱克斯·弗里德曼 (01:28:24) What kind of accent do they have?
Sam Altman 萨姆·奥尔特曼 (01:28:27) I don’t think I should go into any further details on this point.
Lex Fridman 莱克斯·弗里德曼 (01:28:29) Okay. But I presume it’ll be more and more and more as time goes on.
Sam Altman 萨姆·奥尔特曼 (01:28:35) That feels reasonable.

Leap to GPT-5

Lex Fridman 莱克斯·弗里德曼 (01:28:37) Boy, what a dangerous space. Sorry to linger on this, even though you can’t quite say details yet, but what aspects of the leap from GPT-4 to GPT-5 are you excited about?
Sam Altman 萨姆·奥尔特曼 (01:28:53) I’m excited about being smarter. And I know that sounds like a glib answer, but I think the really special thing happening is that it’s not like it gets better in this one area and worse at others. It’s getting better across the board. That’s, I think, super-cool.
Lex Fridman 莱克斯·弗里德曼 (01:29:07) Yeah, there’s this magical moment. I mean, you meet certain people, you hang out with people, and you talk to them. You can’t quite put a finger on it, but they get you. It’s not intelligence, really. It’s something else. And that’s probably how I would characterize the progress of GPT. It’s not like, yeah, you can point out, “Look, you didn’t get this or that.” It’s just the degree to which there’s this intellectual connection. You feel like there’s an understanding of your crappily formulated prompts, that it grasps the deeper question behind the question you were asking. Yeah, I’m also excited by that. I mean, all of us love being heard and understood.
Sam Altman 萨姆·奥尔特曼 (01:29:53) That’s for sure.
Lex Fridman 莱克斯·弗里德曼 (01:29:53) That’s a weird feeling. Even with programming, when you’re programming and you say something, or just the completion that GPT might do, it’s just such a good feeling when it gets you, what you’re thinking about. And I look forward to it getting even better. On the programming front, looking out into the future, how much programming do you think humans will be doing 5, 10 years from now?
Sam Altman 萨姆·奥尔特曼 (01:30:19) I mean, a lot, but I think it’ll be in a very different shape. Maybe some people will program entirely in natural language.
Lex Fridman 莱克斯·弗里德曼 (01:30:26) Entirely natural language?
Sam Altman 萨姆·奥尔特曼 (01:30:29) I mean, no one programs by writing bytecode. Well, some people. No one programs punch cards anymore. I’m sure you can find someone who does, but you know what I mean.
Lex Fridman 莱克斯·弗里德曼 (01:30:39) Yeah. You’re going to get a lot of angry comments. No. Yeah, there are very few. I’ve been looking for people who program in Fortran. It’s hard to find even Fortran. I hear you. But that changes the nature of the skillset, or the predisposition, for the kind of people we call programmers then.
Sam Altman 萨姆·奥尔特曼 (01:30:55) Changes the skillset. How much it changes the predisposition, I’m not sure.
Lex Fridman 莱克斯·弗里德曼 (01:30:59) Well, the same kind of puzzle solving, all that kind of stuff.
Sam Altman 萨姆·奥尔特曼 (01:30:59) Maybe.
Lex Fridman 莱克斯·弗里德曼 (01:31:02) Programming is hard. It’s like, how do you get that last 1% to close the gap? How hard is that?
Sam Altman 萨姆·奥尔特曼 (01:31:09) Yeah, I think, as with most other cases, the best practitioners of the craft will use multiple tools. They’ll do some work in natural language, and when they need to go write C for something, they’ll do that.
Lex Fridman 莱克斯·弗里德曼 (01:31:20) Will we see humanoid robots or humanoid robot brains from OpenAI at some point?
Sam Altman 萨姆·奥尔特曼 (01:31:28) At some point.
Lex Fridman 莱克斯·弗里德曼 (01:31:29) How important is embodied AI to you?
Sam Altman 萨姆·奥尔特曼 (01:31:32) I think it’s depressing if we have AGI and the only way to get things done in the physical world is to make a human go do it. So I really hope that as part of this transition, as this phase change, we also get humanoid robots or some sort of physical world robots.
Lex Fridman 莱克斯·弗里德曼 (01:31:51) I mean, OpenAI has some history and quite a bit of history working in robotics, but it hasn’t quite done in terms of ethics-
Sam Altman 萨姆·奥尔特曼 (01:31:59) We’re a small company. We have to really focus. And also, robots were hard for the wrong reason at the time, but we will return to robots in some way at some point.
Lex Fridman 莱克斯·弗里德曼 (01:32:11) That sounds both inspiring and menacing.
Sam Altman 萨姆·奥尔特曼 (01:32:14) Why?
Lex Fridman 莱克斯·弗里德曼 (01:32:15) Because immediately, we will return to robots. It’s like in Terminator-
Sam Altman 萨姆·奥尔特曼 (01:32:20) We will return to work on developing robots. We will not turn ourselves into robots, of course.

AGI

Lex Fridman 莱克斯·弗里德曼 (01:32:24) Yeah. When do you think we, you and us as humanity, will build AGI?
Sam Altman 萨姆·奥尔特曼 (01:32:31) I used to love to speculate on that question. I have realized since that I think it’s very poorly formed, and that people use extremely different definitions for what AGI is. So I think it makes more sense to talk about when we’ll build systems that can do capability X or Y or Z, rather than when we fuzzily cross this one mile marker. AGI is also not an ending. It’s closer to a beginning, but it’s much more of a mile marker than either of those things. But what I would say, in the interest of not trying to dodge a question, is I expect that by the end of this decade and possibly somewhat sooner than that, we will have quite capable systems that we look at and say, “Wow, that’s really remarkable.” If we could look at it now. Maybe we’ve adjusted by the time we get there.
Lex Fridman 莱克斯·弗里德曼 (01:33:31) But if you look at ChatGPT, even 3.5, and you show that to Alan Turing, or not even Alan Turing, people in the ’90s, they would be like, “This is definitely AGI.” Well, not definitely, but there’s a lot of experts that would say, “This is AGI.”
Sam Altman 萨姆·奥尔特曼 (01:33:49) Yeah, but I don’t think 3.5 changed the world. It maybe changed the world’s expectations for the future, and that’s actually really important. And it did get more people to take this seriously and put us on this new trajectory. And that’s really important, too. So again, I don’t want to undersell it. I think I could retire after that accomplishment and be pretty happy with my career. But as an artifact, I don’t think we’re going to look back at that and say, “That was a threshold that really changed the world itself.”
Lex Fridman 莱克斯·弗里德曼 (01:34:20) So to you, you’re looking for some really major transition in how the world-
Sam Altman 萨姆·奥尔特曼 (01:34:24) For me, that’s part of what AGI implies.
Lex Fridman 莱克斯·弗里德曼 (01:34:29) Singularity-level transition?
Sam Altman 萨姆·奥尔特曼 (01:34:31) No, definitely not.
Lex Fridman 莱克斯·弗里德曼 (01:34:32) But just a major transition, like the internet appearing, like what Google search did, I guess. What would the transition point be now, do you think?
Sam Altman 萨姆·奥尔特曼 (01:34:39) Does the global economy feel any different to you now or materially different to you now than it did before we launched GPT-4? I think you would say no.
Lex Fridman 莱克斯·弗里德曼 (01:34:47) No, no. It might be just a really nice tool for a lot of people to use. Will help you with a lot of stuff, but doesn’t feel different. And you’re saying that-
Sam Altman 萨姆·奥尔特曼 (01:34:55) I mean, again, people define AGI all sorts of different ways. So maybe you have a different definition than I do. But for me, I think that should be part of it.
Lex Fridman 莱克斯·弗里德曼 (01:35:02) There could be major theatrical moments, also. What to you would be an impressive thing AGI would do? You are alone in a room with the system.
Sam Altman 萨姆·奥尔特曼 (01:35:16) This is personally important to me. I don’t know if this is the right definition. I think when a system can significantly increase the rate of scientific discovery in the world, that’s a huge deal. I believe that most real economic growth comes from scientific and technological progress.
Lex Fridman 莱克斯·弗里德曼 (01:35:35) I agree with you, which is why I don’t like the skepticism about science in recent years.
Sam Altman 萨姆·奥尔特曼 (01:35:42) Totally.
Lex Fridman 莱克斯·弗里德曼 (01:35:43) But actual, measurable rate of scientific discovery. But even just seeing a system have really novel intuitions, scientific intuitions, even that would be just incredible.
Sam Altman 萨姆·奥尔特曼 (01:36:01) Yeah.
Lex Fridman 莱克斯·弗里德曼 (01:36:02) You quite possibly would be the person who builds the AGI and gets to interact with it before anyone else does. What kind of stuff would you talk about?
Sam Altman 萨姆·奥尔特曼 (01:36:09) I mean, definitely the researchers here will do that before I do. But well, I’ve actually thought a lot about this question. As we talked about earlier, I think this is a bad framework, but if someone were like, “Okay, Sam, we’re finished. Here’s a laptop, this is the AGI. You can go talk to it,” I find it surprisingly difficult to say what I would ask that I would expect that first AGI to be able to answer. That first one is not going to be the one, I don’t think, that can answer, “Go explain to me the grand unified theory of physics, the theory of everything.” I’d love to ask that question. I’d love to know the answer to that question.
Lex Fridman 莱克斯·弗里德曼 (01:36:55) You can ask yes or no questions about “Does such a theory exist? Can it exist?”
Sam Altman 萨姆·奥尔特曼 (01:37:00) Well, then, those are the first questions I would ask.
Lex Fridman 莱克斯·弗里德曼 (01:37:02) Yes or no. And then based on that, “Are there other alien civilizations out there? Yes or no? What’s your intuition?” And then you just ask that.
Sam Altman 萨姆·奥尔特曼 (01:37:10) Yeah, I mean, well, so I don’t expect that this first AGI could answer any of those questions even as yes or nos. But if it could, those would be very high on my list.
Lex Fridman 莱克斯·弗里德曼 (01:37:20) Maybe you can start assigning probabilities?
Sam Altman 萨姆·奥尔特曼 (01:37:22) Maybe. Maybe we need to go invent more technology and measure more things first.
Lex Fridman 莱克斯·弗里德曼 (01:37:28) Oh, I see. It just doesn’t have enough data. It’s just if it keeps-
Sam Altman 萨姆·奥尔特曼 (01:37:31) I mean, maybe it says, “You want to know the answer to this question about physics, I need you to build this machine and make these five measurements, and tell me that.”
Lex Fridman 莱克斯·弗里德曼 (01:37:39) Yeah, “What the hell do you want from me? I need the machine first, and I’ll help you deal with the data from that machine.” Maybe it’ll help you build a machine.
Sam Altman 萨姆·奥尔特曼 (01:37:47) Maybe. Maybe.
Lex Fridman 莱克斯·弗里德曼 (01:37:49) And on the mathematical side, maybe prove some things. Are you interested in that side of things, too? The formalized exploration of ideas?
Sam Altman 萨姆·奥尔特曼 (01:37:56) Mm-hmm.
Lex Fridman 莱克斯·弗里德曼 (01:37:59) Whoever builds AGI first gets a lot of power. Do you trust yourself with that much power?
Sam Altman 萨姆·奥尔特曼 (01:38:14) Look, I’ll just be very honest with this answer. I was going to say, and I still believe this, that it is important that neither I nor any other one person has total control over OpenAI or over AGI. And I think you want a robust governance system. I can point out a whole bunch of things about all of our board drama from last year about how I didn’t fight it initially, and was just like, “Yeah. That’s the will of the board, even though I think it’s a really bad decision.” And then later, I clearly did fight it, and I can explain the nuance and why I think it was okay for me to fight it later. But as many people have observed, although the board had the legal ability to fire me, in practice, it didn’t quite work. And that is its own kind of governance failure.
(01:39:24) Now again, I feel like I can completely defend the specifics here, and I think most people would agree with that, but it does make it harder for me to look you in the eye and say, “Hey, the board can just fire me.” I continue to not want super-voting control over OpenAI. I never have. Never had it, never wanted it. Even after all this craziness, I still don’t want it. I continue to think that no company should be making these decisions, and that we really need governments to put rules of the road in place.
(01:40:12) And I realize that that means people like Marc Andreessen or whatever will claim I’m going for regulatory capture, and I’m just willing to be misunderstood there. It’s not true. And I think in the fullness of time, it’ll get proven out why this is important. But I think I have made plenty of bad decisions for OpenAI along the way, and a lot of good ones, and I’m proud of the track record overall. But I don’t think any one person should, and I don’t think any one person will. I think it’s just too big of a thing now, and it’s happening throughout society in a good and healthy way. But I don’t think any one person should be in control of an AGI, or this whole movement towards AGI. And I don’t think that’s what’s happening.
Lex Fridman 莱克斯·弗里德曼 (01:41:00) Thank you for saying that. That was really powerful, and it was really insightful, this idea that the board being able to fire you is legally true, but human beings can manipulate the masses into overriding the board, and so on. But I think there’s also a much more positive version of that, where the people still have power, so the board can’t be too powerful, either. There’s a balance of power in all of this.
Sam Altman 萨姆·奥尔特曼 (01:41:29) Balance of power is a good thing, for sure.
Lex Fridman 莱克斯·弗里德曼 (01:41:34) Are you afraid of losing control of the AGI itself? A lot of people who are worried about existential risk worry not because of state actors, not because of security concerns, but because of the AI itself.
Sam Altman 萨姆·奥尔特曼 (01:41:45) That is not my top worry as I currently see things. There have been times I worried about that more. There may be times again in the future where that’s my top worry. It’s not my top worry right now.
Lex Fridman 莱克斯·弗里德曼 (01:41:53) What’s your intuition about it not being your worry? Because there’s a lot of other stuff to worry about, essentially? You think you could be surprised? We-
Sam Altman 萨姆·奥尔特曼 (01:42:02) For sure.
Lex Fridman 莱克斯·弗里德曼 (01:42:02) … could be surprised?
Sam Altman 萨姆·奥尔特曼 (01:42:03) Of course. Saying it’s not my top worry doesn’t mean I don’t think we need to. I think we need to work on it. It’s super hard, and we have great people here who do work on that. I think there’s a lot of other things we also have to get right.
Lex Fridman 莱克斯·弗里德曼 (01:42:15) To you, it’s not super-easy to escape the box at this time, connect to the internet-
Sam Altman 萨姆·奥尔特曼 (01:42:21) We talked about theatrical risks earlier. That’s a theatrical risk. That is a thing that can really take over how people think about this problem. And there’s a big group of very smart, I think very well-meaning AI safety researchers that got super-hung up on this one problem, I’d argue without much progress, but super-hung up on this one problem. I’m actually happy that they do that, because I think we do need to think about this more. But I think it pushed out of the space of discourse a lot of the other very significant AI- related risks.
Lex Fridman 莱克斯·弗里德曼 (01:43:01) Let me ask you about you tweeting with no capitalization. Is the shift key broken on your keyboard?
Sam Altman 萨姆·奥尔特曼 (01:43:07) Why does anyone care about that?
Lex Fridman 莱克斯·弗里德曼 (01:43:09) I deeply care.
Sam Altman 萨姆·奥尔特曼 (01:43:10) But why? I mean, other people ask me about that, too. Any intuition?
Lex Fridman 莱克斯·弗里德曼 (01:43:17) I think it’s the same reason. There’s this poet, E.E. Cummings, who mostly doesn’t use capitalization, as a kind of “fuck you” to the system. And I think people are very paranoid, because they want you to follow the rules.
Sam Altman 萨姆·奥尔特曼 (01:43:29) You think that’s what it’s about?
Lex Fridman 莱克斯·弗里德曼 (01:43:30) I think it’s like this-
Sam Altman 萨姆·奥尔特曼 (01:43:33) It’s like, “This guy doesn’t follow the rules. He doesn’t capitalize his tweets.”
Lex Fridman 莱克斯·弗里德曼 (01:43:35) Yeah.
Sam Altman 萨姆·奥尔特曼 (01:43:36) “This seems really dangerous.”
Lex Fridman 莱克斯·弗里德曼 (01:43:37) “He seems like an anarchist.”
Sam Altman 萨姆·奥尔特曼 (01:43:39) That doesn’t-
Lex Fridman 莱克斯·弗里德曼 (01:43:40) Are you just being poetic, hipster? What’s the-
Sam Altman 萨姆·奥尔特曼 (01:43:44) I grew up as-
Lex Fridman 莱克斯·弗里德曼 (01:43:44) Follow the rules, Sam.
Sam Altman 萨姆·奥尔特曼 (01:43:45) I grew up as a very online kid. I’d spent a huge amount of time chatting with people back in the days where you did it on a computer, and you could log off instant messenger at some point. And I never capitalized there, as I think most internet kids didn’t, or maybe they still don’t. I don’t know. And actually, now I’m really trying to reach for something, but I think capitalization has gone down over time. If you read older English writing, they capitalized a lot of random words in the middle of sentences, nouns and stuff that we just don’t do anymore. I personally think it’s sort of a dumb construct that we capitalize the letter at the beginning of a sentence and of certain names and whatever, but that’s fine.
(01:44:33) And then I used to, I think, even capitalize my tweets because I was trying to sound professional or something. I haven’t capitalized my private DMs or whatever in a long time. And then slowly, stuff like shorter-form, less formal stuff has slowly drifted to closer and closer to how I would text my friends. If I pull up a Word document and I’m writing a strategy memo for the company or something, I always capitalize that. If I’m writing a long, more formal message, I always use capitalization there, too. So I still remember how to do it. But even that may fade out. I don’t know. But I never spend time thinking about this, so I don’t have a ready-made-
Lex Fridman 莱克斯·弗里德曼 (01:45:23) Well, it’s interesting. It’s good to, first of all, know the shift key is not broken.
Sam Altman 萨姆·奥尔特曼 (01:45:27) It works.
Lex Fridman 莱克斯·弗里德曼 (01:45:27) I was mostly concerned about your-
Sam Altman 萨姆·奥尔特曼 (01:45:27) No, it works.
Lex Fridman 莱克斯·弗里德曼 (01:45:29) … well-being on that front.
Sam Altman 萨姆·奥尔特曼 (01:45:30) I wonder if people still capitalize their Google searches, or their ChatGPT queries. If you’re writing something just to yourself, do some people still bother to capitalize?
Lex Fridman 莱克斯·弗里德曼 (01:45:40) Probably not. But yeah, there’s a percentage, but it’s a small one.
Sam Altman (01:45:44) The thing that would make me do it is if people were like, “It’s a sign of…” Because I’m sure I could force myself to use capital letters, obviously. If it felt like a sign of respect to people or something, then I could go do it. But I don’t know. I don’t think about this.
Lex Fridman (01:46:01) I don’t think there’s a disrespect, but I think it’s just the conventions of civility that have a momentum, and then you realize it’s not actually important for civility if it’s not a sign of respect or disrespect. But I think there’s a movement of people that just want you to have a philosophy around it so they can let go of this whole capitalization thing.
Sam Altman (01:46:19) I don’t think anybody else thinks about this as much. I mean, maybe some people. I know some people-
Lex Fridman (01:46:22) People think about it every day for many hours a day. So I’m really grateful we clarified it.
Sam Altman 萨姆·奥尔特曼 (01:46:28) I can’t be the only person that doesn’t capitalize tweets.
Lex Fridman 莱克斯·弗里德曼 (01:46:30) You’re the only CEO of a company that doesn’t capitalize tweets.
Sam Altman 萨姆·奥尔特曼 (01:46:34) I don’t even think that’s true, but maybe. I’d be very surprised.
Lex Fridman 莱克斯·弗里德曼 (01:46:37) All right. We’ll investigate further and return to this topic later. Given Sora’s ability to generate simulated worlds, let me ask you a pothead question. Does this increase your belief, if you ever had one, that we live in a simulation, maybe a simulated world generated by an AI system?
Sam Altman 萨姆·奥尔特曼 (01:47:05) Somewhat. I don’t think that’s the strongest piece of evidence. I think the fact that we can generate worlds should increase everyone’s probability somewhat, or at least openness to it somewhat. But I was certain we would be able to do something like Sora at some point. It happened faster than I thought, but I guess that was not a big update.
Lex Fridman 莱克斯·弗里德曼 (01:47:34) Yeah. But the fact that… and presumably, it’ll get better and better and better… you can generate worlds that are novel. They’re based in some aspect of the training data, but when you look at them, they’re novel. That makes you think how easy it is to do this thing, how easy it is to create universes, entire video game worlds that seem ultra-realistic and photo-realistic. And then how easy is it to get lost in that world, first with a VR headset, and then on the physics-based level?
Sam Altman 萨姆·奥尔特曼 (01:48:10) Someone said to me recently, and I thought it was a super-profound insight, that there are these very simple-sounding but very psychedelic insights that exist sometimes. So the square root function: square root of four, no problem. Square root of two, okay, now I have to think about this new kind of number. But once I come up with this easy idea of a square root function, which you can explain to a child and which exists just by looking at some simple geometry, then you can ask the question of “What is the square root of negative one?” And this is why it’s a psychedelic thing: that tips you into some whole other kind of reality.
(01:49:07) And you can come up with lots of other examples, but I think this idea that the lowly square root operator can offer such a profound insight and a new realm of knowledge applies in a lot of ways. And I think there are a lot of those operators for why people may think that any version that they like of the simulation hypothesis is maybe more likely than they thought before. But for me, the fact that Sora worked is not in the top five.
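To make the square-root example concrete: Python’s math module refuses the question on the reals, while cmath follows the same operator into the complex numbers, which is exactly the “tips you into some whole other kind of reality” step being described.

```python
import math
import cmath

print(math.sqrt(4))    # 2.0: no problem
print(math.sqrt(2))    # 1.4142135623730951: a new kind of number (irrational)
# math.sqrt(-1) raises ValueError: within the reals, the question has no answer
print(cmath.sqrt(-1))  # 1j: the same operator, extended, opens a new realm (complex numbers)
```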
Lex Fridman 莱克斯·弗里德曼 (01:49:46) I do think, broadly speaking, AI at its best will serve as those kinds of gateways: simple, psychedelic-like gateways to another way to see reality.
Sam Altman 萨姆·奥尔特曼 (01:49:57) That seems for certain.
Lex Fridman 莱克斯·弗里德曼 (01:49:59) That’s pretty exciting. I haven’t done ayahuasca before, but I will soon. I’m going to the aforementioned Amazon jungle in a few weeks.
Sam Altman 萨姆·奥尔特曼 (01:50:07) Excited?
Lex Fridman 莱克斯·弗里德曼 (01:50:08) Yeah, I’m excited for it. Not the ayahuasca part, but that’s great, whatever. But I’m going to spend several weeks in the jungle, deep in the jungle. And it’s exciting, but it’s terrifying.
Sam Altman 萨姆·奥尔特曼 (01:50:17) I’m excited for you.
Lex Fridman 莱克斯·弗里德曼 (01:50:18) There’s a lot of things that can eat you there, and kill you and poison you, but it’s also nature, and it’s the machine of nature. And you can’t help but appreciate the machinery of nature in the Amazon jungle. It’s just like this system that just exists and renews itself every second, every minute, every hour. It’s the machine. It makes you appreciate this thing we have here, this human thing came from somewhere. This evolutionary machine has created that, and it’s most clearly on display in the jungle. So hopefully, I’ll make it out alive. If not, this will be the last fun conversation we’ve had, so I really deeply appreciate it. Do you think, as I mentioned before, there’s other alien civilizations out there, intelligent ones, when you look up at the skies?

Aliens

Sam Altman 萨姆·奥尔特曼 (01:51:17) I deeply want to believe that the answer is yes. I find the Fermi paradox very puzzling.
Lex Fridman 莱克斯·弗里德曼 (01:51:28) I find it scary that intelligence is not good at handling-
Sam Altman 萨姆·奥尔特曼 (01:51:34) Very scary.
Lex Fridman 莱克斯·弗里德曼 (01:51:34) … powerful technologies. But at the same time, I think I’m pretty confident that there’s just a very large number of intelligent alien civilizations out there. It might just be really difficult to travel through space.
Sam Altman 萨姆·奥尔特曼 (01:51:47) Very possible.
Lex Fridman 莱克斯·弗里德曼 (01:51:50) And it also makes me think about the nature of intelligence. Maybe we’re really blind to what intelligence looks like, and maybe AI will help us see that. It’s not as simple as IQ tests and simple puzzle solving. There’s something bigger. What gives you hope about the future of humanity, this thing we’ve got going on, this human civilization?
Sam Altman 萨姆·奥尔特曼 (01:52:12) I think the past is a lot. I mean, we just look at what humanity has done in a not very long period of time, huge problems, deep flaws, lots to be super-ashamed of. But on the whole, very inspiring. Gives me a lot of hope.
Lex Fridman 莱克斯·弗里德曼 (01:52:29) Just the trajectory of it all.
Sam Altman 萨姆·奥尔特曼 (01:52:30) Yeah.
Lex Fridman 莱克斯·弗里德曼 (01:52:31) That we’re together pushing towards a better future.
Sam Altman 萨姆·奥尔特曼 (01:52:40) One thing that I wonder about: is AGI going to be more like some single brain, or is it more like the scaffolding in society between all of us? You have not had a great deal of genetic drift from your great-great-great grandparents, and yet what you’re capable of is dramatically different. What you know is dramatically different. And that’s not because of biological change. I mean, you got a little bit healthier, probably. You have modern medicine, you eat better, whatever. But what you have is this scaffolding that we all contributed to and build on top of. No one person is going to go build the iPhone. No one person is going to go discover all of science, and yet you get to use it. And that gives you incredible ability. And so in some sense, we all created that, and that fills me with hope for the future. That was a very collective thing.
Lex Fridman 莱克斯·弗里德曼 (01:53:40) Yeah, we really are standing on the shoulders of giants. You mentioned when we were talking about theatrical, dramatic AI risks that sometimes you might be afraid for your own life. Do you think about your death? Are you afraid of it?
Sam Altman 萨姆·奥尔特曼 (01:53:58) I mean, if I got shot tomorrow and I knew it today, I’d be like, “Oh, that’s sad. I want to see what’s going to happen. What a curious time. What an interesting time.” But I would mostly just feel very grateful for my life.
Lex Fridman 莱克斯·弗里德曼 (01:54:15) The moments that you did get. Yeah, me, too. It’s a pretty awesome life. I get to enjoy awesome creations of humans, which I believe ChatGPT is one of, and everything that OpenAI is doing. Sam, it’s really an honor and pleasure to talk to you again.
Sam Altman 萨姆·奥尔特曼 (01:54:35) Great to talk to you. Thank you for having me.
Lex Fridman 莱克斯·弗里德曼 (01:54:38) Thanks for listening to this conversation with Sam Altman. To support this podcast, please check out our sponsors in the description. And now let me leave you with some words from Arthur C. Clarke. “It may be that our role on this planet is not to worship God, but to create him.” Thank you for listening, and hope to see you next time.