Transcript for Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI | Lex Fridman Podcast #419

This is a transcript of Lex Fridman Podcast #419 with Sam Altman, his second appearance on the podcast. The timestamps in the transcript are clickable links that take you directly to that point in the main video. Please note that the transcript is human-generated and may have errors.

Table of Contents

Here are the loose “chapters” in the conversation. Click a link to jump approximately to that part in the transcript:

Introduction

Sam Altman (00:00:00) I think compute is going to be the currency of the future. I think it’ll be maybe the most precious commodity in the world. I expect that by the end of this decade, and possibly somewhat sooner than that, we will have quite capable systems that we look at and say, “Wow, that’s really remarkable.” The road to AGI should be a giant power struggle. I expect that to be the case.
Lex Fridman (00:00:26) Whoever builds AGI first gets a lot of power. Do you trust yourself with that much power?
(00:00:36) The following is a conversation with Sam Altman, his second time on the podcast. He is the CEO of OpenAI, the company behind GPT-4, ChatGPT, Sora, and perhaps one day the very company that will build AGI. This is The Lex Fridman Podcast. To support it, please check out our sponsors in the description. And now, dear friends, here’s Sam Altman.

OpenAI board saga

(00:01:05) Take me through the OpenAI board saga that started on Thursday, November 16th, maybe Friday, November 17th for you.
Sam Altman (00:01:13) That was definitely the most painful professional experience of my life, and chaotic and shameful and upsetting and a bunch of other negative things. There were great things about it too, and I wish it had not been in such an adrenaline rush that I wasn’t able to stop and appreciate them at the time. But I came across this old tweet of mine or this tweet of mine from that time period. It was like going to your own eulogy, watching people say all these great things about you, and just unbelievable support from people I love and care about. That was really nice, really nice. That whole weekend, with one big exception, I felt like a great deal of love and very little hate, even though it felt like I have no idea what’s happening and what’s going to happen here and this feels really bad. And there were definitely times I thought it was going to be one of the worst things to ever happen for AI safety. Well, I also think I’m happy that it happened relatively early. I thought at some point between when OpenAI started and when we created AGI, there was going to be something crazy and explosive that happened, but there may be more crazy and explosive things still to happen. It still, I think, helped us build up some resilience and be ready for more challenges in the future.
Lex Fridman (00:03:02) But the thing you had a sense that you would experience is some kind of power struggle?
Sam Altman (00:03:08) The road to AGI should be a giant power struggle. The world should… Well, not should. I expect that to be the case.
Lex Fridman (00:03:17) And so you have to go through that, like you said, iterate as often as possible in figuring out how to have a board structure, how to have organization, how to have the kind of people that you’re working with, how to communicate all that in order to deescalate the power struggle as much as possible.
Sam Altman (00:03:37) Yeah.
Lex Fridman (00:03:37) Pacify it.
Sam Altman (00:03:38) But at this point, it feels like something that was in the past that was really unpleasant and really difficult and painful, but we’re back to work and things are so busy and so intense that I don’t spend a lot of time thinking about it. There was a time after, there was this fugue state for the month after, maybe 45 days after, that I was just drifting through the days. I was so out of it. I was feeling so down.
Lex Fridman (00:04:17) Just on a personal, psychological level?
Sam Altman (00:04:20) Yeah. Really painful, and hard to have to keep running OpenAI in the middle of that. I just wanted to crawl into a cave and recover for a while. But now it’s like we’re just back to working on the mission.
Lex Fridman (00:04:38) Well, it’s still useful to go back there and reflect on board structures, on power dynamics, on how companies are run, the tension between research and product development and money and all this kind of stuff so that you, who have a very high potential of building AGI, would do so in a slightly more organized, less dramatic way in the future. So there’s value in going there, both to the personal psychological aspects of you as a leader and also to the board structure and all this messy stuff.
Sam Altman (00:05:18) I definitely learned a lot about structure and incentives and what we need out of a board. And I think that it is valuable that this happened now in some sense. I think this is probably not the last high-stress moment of OpenAI, but it was quite a high-stress moment. My company very nearly got destroyed. And we think a lot about many of the other things we’ve got to get right for AGI, but thinking about how to build a resilient org and how to build a structure that will stand up to a lot of pressure in the world, which I expect more and more as we get closer, I think that’s super important.
Lex Fridman (00:06:01) Do you have a sense of how deep and rigorous the deliberation process by the board was? Can you shine some light on just human dynamics involved in situations like this? Was it just a few conversations and all of a sudden it escalates and, “Why don’t we fire Sam?” kind of thing?
Sam Altman (00:06:22) I think the board members are well-meaning people on the whole, and I believe that in stressful situations where people feel time pressure or whatever, people understandably make suboptimal decisions. And I think one of the challenges for OpenAI will be we’re going to have to have a board and a team that are good at operating under pressure.
Lex Fridman (00:07:00) Do you think the board had too much power?
Sam Altman (00:07:03) I think boards are supposed to have a lot of power, but one of the things that we did see is in most corporate structures, boards are usually answerable to shareholders. Sometimes people have super voting shares or whatever. In this case, and I think one of the things with our structure that we maybe should have thought about more than we did is that the board of a nonprofit has, unless you put other rules in place, quite a lot of power. They don’t really answer to anyone but themselves. And there’s ways in which that’s good, but what we’d really like is for the board of OpenAI to answer to the world as a whole, as much as that’s a practical thing.
Lex Fridman (00:07:44) So there’s a new board announced.
Sam Altman (00:07:46) Yeah.
Lex Fridman (00:07:47) There’s I guess a new smaller board at first, and now there’s a new final board?
Sam Altman (00:07:53) Not a final board yet. We’ve added some. We’ll add more.
Lex Fridman (00:07:56) Added some. Okay. What is fixed in the new one that was perhaps broken in the previous one?
Sam Altman (00:08:05) The old board got smaller over the course of about a year. It was nine and then it went down to six, and then we couldn’t agree on who to add. And the board also I think didn’t have a lot of experienced board members, and a lot of the new board members at OpenAI just have more experience as board members. I think that’ll help.
Lex Fridman (00:08:31) Some of the people added to the board have been criticized. I heard a lot of people criticizing the addition of Larry Summers, for example. What’s the process of selecting the board? What’s involved in that?
Sam Altman (00:08:43) So Brett and Larry were decided in the heat of the moment over this very tense weekend, and that weekend was a real rollercoaster. It was a lot of ups and downs. And we were trying to agree on new board members that both the executive team here and the old board members felt would be reasonable. Larry was actually one of their suggestions, the old board members. Brett, I think I had even previous to that weekend suggested, but he was busy and didn’t want to do it, and then we really needed help in [inaudible 00:09:22]. We talked about a lot of other people too, but I felt like if I was going to come back, I needed new board members. I didn’t think I could work with the old board again in the same configuration, although we then decided, and I’m grateful that Adam would stay, but we considered various configurations, decided we wanted to get to a board of three and had to find two new board members over the course of a short period of time.
(00:09:57) So those were decided honestly without… You do that on the battlefield. You don’t have time to design a rigorous process then. For new board members since, and new board members we’ll add going forward, we have some criteria that we think are important for the board to have, different expertise that we want the board to have. Unlike hiring an executive where you need them to do one role well, the board needs to do a whole role of governance and thoughtfulness well, and so, one thing that Brett says which I really like is that we want to hire board members in slates, not as individuals one at a time. And thinking about a group of people that will bring nonprofit expertise, expertise at running companies, good legal and governance expertise, that’s what we’ve tried to optimize for.
Lex Fridman (00:10:49) So is technical savvy important for the individual board members?
Sam Altman (00:10:52) Not for every board member, but for certainly some you need that. That’s part of what the board needs to do.
Lex Fridman (00:10:57) The interesting thing that people probably don’t understand about OpenAI, I certainly don’t, is all the details of running the business. When they think about the board, given the drama, they think about you. They think about if you reach AGI or you reach some of these incredibly impactful products and you build them and deploy them, what’s the conversation with the board like? And they think, all right, what’s the right squad to have in that kind of situation to deliberate?
Sam Altman (00:11:25) Look, I think you definitely need some technical experts there. And then you need some people who are like, “How can we deploy this in a way that will help people in the world the most?” And people who have a very different perspective. I think a mistake that you or I might make is to think that only the technical understanding matters, and that’s definitely part of the conversation you want that board to have, but there’s a lot more about how that’s going to just impact society and people’s lives that you really want represented in there too.
Lex Fridman (00:11:56) Are you looking at the track record of people or you’re just having conversations?
Sam Altman (00:12:00) Track record is a big deal. You of course have a lot of conversations, but there are some roles where I totally ignore track record and just look at slope, ignore the Y-intercept.
Lex Fridman (00:12:18) Thank you. Thank you for making it mathematical for the audience.
Sam Altman (00:12:21) For a board member, I do care much more about the Y-intercept. I think there is something deep to say about track record there, and experience is something that’s very hard to replace.
Lex Fridman (00:12:32) Do you try to fit a polynomial function or exponential one to the track record?
Sam Altman (00:12:36) That analogy doesn’t carry that far.
Lex Fridman (00:12:39) All right. You mentioned some of the low points that weekend. What were some of the low points psychologically for you? Did you consider going to the Amazon jungle and just taking ayahuasca and disappearing forever?
Sam Altman (00:12:53) It was a very bad period of time. There were great high points too. My phone was just nonstop blowing up with nice messages from people I worked with every day, people I hadn’t talked to in a decade. I didn’t get to appreciate that as much as I should have because I was just in the middle of this firefight, but that was really nice. But on the whole, it was a very painful weekend. It was like a battle fought in public to a surprising degree, and that was extremely exhausting to me, much more than I expected. I think fights are generally exhausting, but this one really was. The board did this Friday afternoon. I really couldn’t get much in the way of answers, but I also was just like, well, the board gets to do this, so I’m going to think for a little bit about what I want to do, but I’ll try to find the blessing in disguise here.
(00:13:52) And I was like, well, my current job at OpenAI is, or it was, to run a decently sized company at this point. And the thing I’d always liked the most was just getting to work with the researchers. And I was like, yeah, I can just go do a very focused AGI research effort. And I got excited about that. Didn’t even occur to me at the time possibly that this was all going to get undone. This was Friday afternoon.
Lex Fridman (00:14:19) So you’ve accepted the death of this-
Sam Altman (00:14:22) Very quickly. Very quickly. I went through a little period of confusion and rage, but very quickly, quickly. And by Friday night, I was talking to people about what was going to be next, and I was excited about that. I think it was Friday evening for the first time that I heard from the exec team here, which is like, “Hey, we’re going to fight this.” And then I went to bed just still being like, okay, excited. Onward.
Lex Fridman (00:14:52) Were you able to sleep?
Sam Altman (00:14:54) Not a lot. One of the weird things was there was this period of four and a half days where I didn’t sleep much, didn’t eat much, and still had a surprising amount of energy. You learn a weird thing about adrenaline in wartime.
Lex Fridman (00:15:09) So you accepted the death of this baby, OpenAI.
Sam Altman (00:15:13) And I was excited for the new thing. I was just like, “Okay, this was crazy, but whatever.”
Lex Fridman (00:15:17) It’s a very good coping mechanism.
Sam Altman (00:15:18) And then Saturday morning, two of the board members called and said, “Hey, we didn’t mean to destabilize things. We don’t want to destroy a lot of value here. Can we talk about you coming back?” And I immediately didn’t want to do that, but I thought a little more and I was like, well, I really care about the people here, the partners, shareholders. I love this company. And so I thought about it and I was like, “Well, okay, but here’s the stuff I would need.” And then the most painful time of all was over the course of that weekend, I kept thinking and being told, and not just me, the whole team here kept thinking, well, we were trying to keep OpenAI stabilized while the whole world was trying to break it apart, people trying to recruit whatever.
(00:16:04) We kept being told, all right, we’re almost done. We’re almost done. We just need a little bit more time. And it was this very confusing state. And then Sunday evening when, again, every few hours I expected that we were going to be done and we’re going to figure out a way for me to return and things to go back to how they were. The board then appointed a new interim CEO, and then I was like, that feels really bad. That was the low point of the whole thing. I’ll tell you something. It felt very painful, but I felt a lot of love that whole weekend. Other than that one moment Sunday night, I would not characterize my emotions as anger or hate, but I felt a lot of love from people, towards people. It was painful, but the dominant emotion of the weekend was love, not hate.
Lex Fridman (00:17:04) You’ve spoken highly of Mira Murati, that she helped especially, as you put it in the tweet, in the quiet moments when it counts. Perhaps we could take a bit of a tangent. What do you admire about Mira?
Sam Altman (00:17:15) Well, she did a great job during that weekend in a lot of chaos, but people often see leaders in the crisis moments, good or bad. But a thing I really value in leaders is how people act on a boring Tuesday at 9:46 in the morning and in just the normal drudgery of the day-to-day. How someone shows up in a meeting, the quality of the decisions they make. That was what I meant about the quiet moments.
Lex Fridman (00:17:47) Meaning most of the work is done on a day-by-day, meeting-by-meeting basis. Just be present and make great decisions.
Sam Altman (00:17:58) Yeah. Look, what you have wanted to spend the last 20 minutes about, and I understand, is this one very dramatic weekend, but that’s not really what OpenAI is about. OpenAI is really about the other seven years.
Lex Fridman (00:18:10) Well, yeah. Human civilization is not about the invasion of the Soviet Union by Nazi Germany, but still that’s something people focus on.
Sam Altman (00:18:18) Very understandable.
Lex Fridman (00:18:19) It gives us an insight into human nature, the extremes of human nature, and perhaps some of the damage and some of the triumphs of human civilization can happen in those moments, so it’s illustrative. Let me ask you about Ilya. Is he being held hostage in a secret nuclear facility?

Ilya Sutskever

Sam Altman (00:18:36) No.
Lex Fridman (00:18:37) What about a regular secret facility?
Sam Altman (00:18:39) No.
Lex Fridman (00:18:40) What about a nuclear non-secret facility?
Sam Altman (00:18:41) Neither. Not that either.
Lex Fridman (00:18:44) This is becoming a meme at some point. You’ve known Ilya for a long time. He was obviously part of this drama with the board and all that kind of stuff. What’s your relationship with him now?
Sam Altman (00:18:57) I love Ilya. I have tremendous respect for Ilya. I don’t have anything I can say about his plans right now. That’s a question for him, but I really hope we work together for certainly the rest of my career. He’s a little bit younger than me. Maybe he works a little bit longer.
Lex Fridman (00:19:15) There’s a meme that he saw something, like he maybe saw AGI and that gave him a lot of worry internally. What did Ilya see?
Sam Altman (00:19:28) Ilya has not seen AGI. None of us have seen AGI. We’ve not built AGI. I do think one of the many things that I really love about Ilya is he takes AGI and the safety concerns, broadly speaking, including things like the impact this is going to have on society, very seriously. And as we continue to make significant progress, Ilya is one of the people that I’ve spent the most time over the last couple of years talking about what this is going to mean, what we need to do to ensure we get it right, to ensure that we succeed at the mission. So Ilya did not see AGI, but Ilya is a credit to humanity in terms of how much he thinks and worries about making sure we get this right.
Lex Fridman (00:20:30) I’ve had a bunch of conversation with him in the past. I think when he talks about technology, he’s always doing this long-term thinking type of thing. So he is not thinking about what this is going to be in a year. He’s thinking about in 10 years, just thinking from first principles like, “Okay, if this scales, what are the fundamentals here? Where’s this going?” And so that’s a foundation for them thinking about all the other safety concerns and all that kind of stuff, which makes him a really fascinating human to talk with. Do you have any idea why he’s been quiet? Is it he’s just doing some soul-searching?
Sam Altman (00:21:08) Again, I don’t want to speak for Ilya. I think that you should ask him that. He’s definitely a thoughtful guy. I think Ilya is always on a soul search in a really good way.
Lex Fridman (00:21:27) Yes. Yeah. Also, he appreciates the power of silence. Also, I’m told he can be a silly guy, which I’ve never seen that side of him.
Sam Altman (00:21:36) It’s very sweet when that happens.
Lex Fridman (00:21:39) I’ve never witnessed a silly Ilya, but I look forward to that as well.
Sam Altman (00:21:43) I was at a dinner party with him recently and he was playing with a puppy and he was in a very silly mood, very endearing. And I was thinking, oh man, this is not the side of Ilya that the world sees the most.
Lex Fridman (00:21:55) So just to wrap up this whole saga, are you feeling good about the board structure-
Sam Altman (00:21:55) Yes.
Lex Fridman (00:22:01) … about all of this and where it’s moving?
Sam Altman (00:22:04) I feel great about the new board. In terms of the structure of OpenAI, one of the board’s tasks is to look at that and see where we can make it more robust. We wanted to get new board members in place first, but we clearly learned a lesson about structure throughout this process. I don’t have, I think, super deep things to say. It was a crazy, very painful experience. I think it was a perfect storm of weirdness. It was a preview for me of what’s going to happen as the stakes get higher and higher and the need that we have robust governance structures and processes and people. I’m happy it happened when it did, but it was a shockingly painful thing to go through.
Lex Fridman (00:22:47) Did it make you more hesitant in trusting people?
Sam Altman (00:22:50) Yes.
Lex Fridman (00:22:51) Just on a personal level?
Sam Altman (00:22:52) Yes. I think I’m like an extremely trusting person. I’ve always had a life philosophy of don’t worry about all of the paranoia. Don’t worry about the edge cases. You get a little bit screwed in exchange for getting to live with your guard down. And this was so shocking to me. I was so caught off guard that it has definitely changed, and I really don’t like this, it’s definitely changed how I think about just default trust of people and planning for the bad scenarios.
Lex Fridman (00:23:21) You got to be careful with that. Are you worried about becoming a little too cynical?
Sam Altman (00:23:26) I’m not worried about becoming too cynical. I think I’m the extreme opposite of a cynical person, but I’m worried about just becoming less of a default trusting person.
Lex Fridman (00:23:36) I’m actually not sure which mode is best to operate in for a person who’s developing AGI, trusting or un-trusting. It’s an interesting journey you’re on. But in terms of structure, see, I’m more interested on the human level. How do you surround yourself with humans that are building cool shit, but also are making wise decisions? Because the more money you start making, the more power the thing has, the weirder people get.
Sam Altman (00:24:06) I think you could make all kinds of comments about the board members and the level of trust I should have had there, or how I should have done things differently. But in terms of the team here, I think you’d have to give me a very good grade on that one. And I have just enormous gratitude and trust and respect for the people that I work with every day, and I think being surrounded with people like that is really important.

Elon Musk lawsuit

Lex Fridman (00:24:39) Our mutual friend Elon sued OpenAI. What to you is the essence of what he’s criticizing? To what degree does he have a point? To what degree is he wrong?
Sam Altman (00:24:52) I don’t know what it’s really about. We started off just thinking we were going to be a research lab and having no idea about how this technology was going to go. Because it was only seven or eight years ago, it’s hard to go back and really remember what it was like then, but this is before language models were a big deal. This was before we had any idea about an API or selling access to a chatbot. It was before we had any idea we were going to productize at all. So we’re like, “We’re just going to try to do research and we don’t really know what we’re going to do with that.” I think with many fundamentally new things, you start fumbling through the dark and you make some assumptions, most of which turned out to be wrong.
(00:25:31) And then it became clear that we were going to need to do different things and also have huge amounts more capital. So we said, “Okay, well, the structure doesn’t quite work for that. How do we patch the structure?” And then you patch it again and patch it again and you end up with something that does look eyebrow-raising, to say the least. But we got here gradually with, I think, reasonable decisions at each point along the way. And it doesn’t mean I wouldn’t do it totally differently if we could go back now with an Oracle, but you don’t get the Oracle at the time. But anyway, in terms of what Elon’s real motivations here are, I don’t know.
Lex Fridman (00:26:12) To the degree you remember, what was the response that OpenAI gave in the blog post? Can you summarize it?
Sam Altman (00:26:21) Oh, we just said Elon said this set of things. Here’s our characterization, or here’s not our characterization. Here’s the characterization of how this went down. We tried to not make it emotional and just say, “Here’s the history.”
Lex Fridman (00:26:44) I do think there’s a degree of mischaracterization from Elon here about one of the points you just made, which is the degree of uncertainty you had at the time. You guys are a small group of researchers crazily talking about AGI when everybody’s laughing at that thought.
Sam Altman (00:27:09) It wasn’t that long ago Elon was crazily talking about launching rockets when people were laughing at that thought, so I think he’d have more empathy for this.
Lex Fridman (00:27:20) I do think that there’s personal stuff here, that there was a split that OpenAI and a lot of amazing people here chose to part ways with Elon, so there’s a personal-
Sam Altman (00:27:34) Elon chose to part ways.
Lex Fridman (00:27:37) Can you describe that exactly? The choosing to part ways?
Sam Altman (00:27:42) He thought OpenAI was going to fail. He wanted total control to turn it around. We wanted to keep going in the direction that now has become OpenAI. He also wanted Tesla to be able to build an AGI effort. At various times, he wanted to make OpenAI into a for-profit company that he could have control of or have it merge with Tesla. We didn’t want to do that, and he decided to leave, which that’s fine.
Lex Fridman (00:28:06) So you’re saying, and that’s one of the things that the blog post says, is that he wanted OpenAI to be basically acquired by Tesla in the same way that, or maybe something similar or maybe something more dramatic than the partnership with Microsoft.
Sam Altman (00:28:23) My memory is the proposal was just like, yeah, get acquired by Tesla and have Tesla have full control over it. I’m pretty sure that’s what it was.
Lex Fridman (00:28:29) So what does the word open in OpenAI mean to Elon at the time? Ilya has talked about this in the email exchanges and all this kind of stuff. What does it mean to you at the time? What does it mean to you now?
Sam Altman (00:28:44) Speaking of going back with an Oracle, I’d pick a different name. One of the things that I think OpenAI is doing that is the most important of everything that we’re doing is putting powerful technology in the hands of people for free, as a public good. We don’t run ads on our-
Sam Altman (00:29:01) … as a public good. We don’t run ads on our free version. We don’t monetize it in other ways. We just say it’s part of our mission. We want to put increasingly powerful tools in the hands of people for free and get them to use them. I think that kind of open is really important to our mission. I think if you give people great tools and teach them to use them or don’t even teach them, they’ll figure it out, and let them go build an incredible future for each other with that, that’s a big deal. So if we can keep putting free or low cost or free and low cost powerful AI tools out in the world, I think that’s a huge deal for how we fulfill the mission. Open source or not, yeah, I think we should open source some stuff and not other stuff. It does become this religious battle line where nuance is hard to have, but I think nuance is the right answer.
Lex Fridman (00:29:55) So he said, “Change your name to CloseAI and I’ll drop the lawsuit.” I mean is it going to become this battleground in the land of memes about the name?
Sam Altman (00:30:06) I think that speaks to the seriousness with which Elon means the lawsuit, and that’s like an astonishing thing to say, I think.
Lex Fridman (00:30:23) Maybe correct me if I’m wrong, but I don’t think the lawsuit is legally serious. It’s more to make a point about the future of AGI and the company that’s currently leading the way.
Sam Altman (00:30:37) Look, I mean Grok had not open sourced anything until people pointed out it was a little bit hypocritical and then he announced that Grok will open source things this week. I don’t think open source versus not is what this is really about for him.
Lex Fridman (00:30:48) Well, we will talk about open source and not. I do think maybe criticizing the competition is great. Just talking a little shit, that’s great. But friendly competition versus like, “I personally hate lawsuits.”
Sam Altman (00:31:01) Look, I think this whole thing is unbecoming of a builder. And I respect Elon as one of the great builders of our time. I know he knows what it’s like to have haters attack him and it makes me extra sad he’s doing it to us.
Lex Fridman (00:31:18) Yeah, he’s one of the greatest builders of all time, potentially the greatest builder of all time.
Sam Altman (00:31:22) It makes me sad. And I think it makes a lot of people sad. There’s a lot of people who’ve really looked up to him for a long time. I said in some interview or something that I missed the old Elon and the number of messages I got being like, “That exactly encapsulates how I feel.”
Lex Fridman (00:31:36) I think he should just win. He should just make X Grok beat GPT and then GPT beats Grok and it’s just the competition and it’s beautiful for everybody. But on the question of open source, do you think there’s a lot of companies playing with this idea? It’s quite interesting. I would say Meta surprisingly has led the way on this, or at least took the first step in the game of chess of really open sourcing the model. Of course it’s not the state-of-the-art model, but open sourcing Llama. Google is flirting with the idea of open sourcing a smaller version. What are the pros and cons of open sourcing? Have you played around with this idea?
Sam Altman (00:32:22) Yeah, I think there is definitely a place for open source models, particularly smaller models that people can run locally, I think there’s huge demand for. I think there will be some open source models, there will be some closed source models. It won’t be unlike other ecosystems in that way.
Lex Fridman (00:32:39) I listened to the All-In Podcast talking about this lawsuit and all that kind of stuff. They were more concerned about the precedent of going from nonprofit to this capped-profit. What precedent does that set for other startups? Is that something-
Sam Altman (00:32:56) I would heavily discourage any startup that was thinking about starting as a nonprofit and adding a for-profit arm later. I’d heavily discourage them from doing that. I don’t think we’ll set a precedent here.
Lex Fridman (00:33:05) Okay. So most startups should go just-
Sam Altman (00:33:08) For sure.
Lex Fridman (00:33:09) And again-
Sam Altman (00:33:09) If we knew what was going to happen, we would’ve done that too.
Lex Fridman (00:33:12) Well in theory, if you dance beautifully here, there’s some tax incentives or whatever, but…
Sam Altman (00:33:19) I don’t think that’s how most people think about these things.
Lex Fridman (00:33:22) It’s just not possible to save a lot of money for a startup if you do it this way.
Sam Altman (00:33:27) No, I think there’s laws that would make that pretty difficult.
Lex Fridman (00:33:30) Where do you hope this goes with Elon? This tension, this dance, what do you hope this? If we go 1, 2, 3 years from now, your relationship with him on a personal level too, like friendship, friendly competition, just all this kind of stuff.
Sam Altman (00:33:51) Yeah, I really respect Elon and I hope that years in the future we have an amicable relationship.
Lex Fridman (00:34:05) Yeah, I hope you guys have an amicable relationship this month and just compete and win and explore these ideas together. I do suppose there’s competition for talent or whatever, but it should be friendly competition. Just build cool shit. And Elon is pretty good at building cool shit. So are you.

Sora

(00:34:32) So speaking of cool shit, Sora. There’s like a million questions I could ask. First of all, it’s amazing. It truly is amazing on a product level but also just on a philosophical level. So let me just technical/philosophical ask, what do you think it understands about the world more or less than GPT-4 for example? The world model when you train on these patches versus language tokens.
Sam Altman (00:35:04) I think all of these models understand something more about the world model than most of us give them credit for. And because there are also very clear things they just don’t understand or don’t get right, it’s easy to look at the weaknesses, see through the veil and say, “Ah, this is all fake.” But it’s not all fake. It’s just some of it works and some of it doesn’t work.
(00:35:28) I remember when I started first watching Sora videos and I would see a person walk in front of something for a few seconds and occlude it and then walk away and the same thing was still there. I was like, “Oh, this is pretty good.” Or there’s examples where the underlying physics looks so well represented over a lot of steps in a sequence, it’s like, “Oh, this is quite impressive.” But fundamentally, these models are just getting better and that will keep happening. If you look at the trajectory from DALL·E 1 to 2 to 3 to Sora, there are a lot of people that dunked on each version saying it can’t do this, it can’t do that and look at it now.
Lex Fridman (00:36:04) Well, the thing you just mentioned, the occlusions, is basically modeling the three-dimensional physics of the world sufficiently well to capture those kinds of things.
Sam Altman (00:36:17) Well…
Lex Fridman (00:36:18) Or yeah, maybe you can tell me, in order to deal with occlusions, what does the world model need to do?
Sam Altman (00:36:24) Yeah. So what I would say is it’s doing something to deal with occlusions really well. To say that it has a great underlying 3D model of the world, that’s a little bit more of a stretch.
Lex Fridman (00:36:33) But can you get there through just these kinds of two-dimensional training data approaches?
Sam Altman (00:36:39) It looks like this approach is going to go surprisingly far. I don’t want to speculate too much about what limits it will surmount and which it won’t, but…
Lex Fridman (00:36:46) What are some interesting limitations of the system that you’ve seen? I mean there’s been some fun ones you’ve posted.
Sam Altman (00:36:52) There’s all kinds of fun. I mean, cats sprouting an extra limb at random points in a video. Pick what you want, but there’s still a lot of problems, there’s a lot of weaknesses.
Lex Fridman (00:37:02) Do you think it’s a fundamental flaw of the approach or is it just bigger model or better technical details or better data, more data is going to solve the cat sprouting [inaudible 00:37:19]?
Sam Altman (00:37:19) I would say yes to both. I think there is something about the approach which just seems to feel different from how we think and learn and whatever. And then also I think it’ll get better with scale.
Lex Fridman (00:37:30) Like I mentioned, LLMs have tokens, text tokens, and Sora has visual patches, so it converts all visual data, diverse kinds of visual data, videos and images, into patches. Is the training, to the degree you can say, fully self-supervised? Is there some manual labeling going on? What’s the involvement of humans in all this?
Sam Altman (00:37:49) I mean without saying anything specific about the Sora approach, we use lots of human data in our work.
Lex Fridman (00:38:00) But not internet scale data? So lots of humans. Lots is a complicated word, Sam.
Sam Altman (00:38:08) I think lots is a fair word in this case.
Lex Fridman (00:38:12) Because to me, “lots”… Listen, I’m an introvert and when I hang out with three people, that’s a lot of people. Four people, that’s a lot. But I suppose you mean more than…
Sam Altman (00:38:21) More than three people work on labeling the data for these models, yeah.
Lex Fridman (00:38:24) Okay. Right. But fundamentally, there’s a lot of self-supervised learning. Because what you mentioned in the technical report is internet scale data. That’s another beautiful… It’s like poetry. So it’s a lot of data that’s not human-labeled. It’s self-supervised in that way?
Sam Altman (00:38:44) Yeah.
Lex Fridman (00:38:45) And then the question is, how much data is there on the internet that could be used in this, that is conducive to this kind of self-supervised way, if only we knew the details of the self-supervised training. Have you considered opening up a little more of the details?
Sam Altman (00:39:02) We have. You mean for Sora specifically?
Lex Fridman (00:39:04) Sora specifically. Because it’s so interesting: can the same magic of LLMs now start moving towards visual data, and what does it take to do that?
Sam Altman (00:39:18) I mean it looks to me like yes, but we have more work to do.
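For context on the patch representation discussed here: the Sora technical report describes compressing videos into spacetime patches that play roughly the role tokens play in LLMs. What follows is a minimal sketch of that general idea only, not OpenAI’s implementation; the patch sizes and the helper name video_to_patches are illustrative assumptions.

import numpy as np

def video_to_patches(video, pt=4, ph=16, pw=16):
    # video: (T, H, W, C) array of frames; pt, ph, pw are assumed spacetime
    # patch sizes along time, height, and width.
    T, H, W, C = video.shape
    T, H, W = T - T % pt, H - H % ph, W - W % pw  # trim to whole patches
    v = video[:T, :H, :W]
    # Split into non-overlapping (pt, ph, pw) blocks, then flatten each block
    # into one row: one token-like vector per spacetime patch.
    v = v.reshape(T // pt, pt, H // ph, ph, W // pw, pw, C)
    v = v.transpose(0, 2, 4, 1, 3, 5, 6)
    return v.reshape(-1, pt * ph * pw * C)

# Example: a 16-frame 64x64 RGB clip yields 4*4*4 = 64 patches of dimension 3072.
clip = np.random.rand(16, 64, 64, 3)
print(video_to_patches(clip).shape)  # (64, 3072)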
Lex Fridman (00:39:22) Sure. What are the dangers? Why are you concerned about releasing the system? What are some possible dangers of this?
Sam Altman (00:39:29) I mean frankly speaking, one thing we have to do before releasing the system is just get it to work at a level of efficiency that will deliver the scale people are going to want from this, so I don’t want to downplay that. And there’s still a ton of work to do there. But you can imagine issues with deepfakes, misinformation. We try to be a thoughtful company about what we put out into the world and it doesn’t take much thought to think about the ways this can go badly.
Lex Fridman (00:40:05) There’s a lot of tough questions here, you’re dealing in a very tough space. Do you think training AI should be or is fair use under copyright law?
Sam Altman (00:40:14) I think the question behind that question is, do people who create valuable data deserve to have some way that they get compensated for use of it, and that I think the answer is yes. I don’t know yet what the answer is. People have proposed a lot of different things. We’ve tried some different models. But if I’m like an artist for example, A, I would like to be able to opt out of people generating art in my style. And B, if they do generate art in my style, I’d like to have some economic model associated with that.
Lex Fridman (00:40:46) Yeah, it’s that transition from CDs to Napster to Spotify. We have to figure out some kind of model.
Sam Altman (00:40:53) The model changes but people have got to get paid.
Lex Fridman (00:40:55) Well, there should be some kind of incentive if we zoom out even more for humans to keep doing cool shit.
Sam Altman (00:41:02) Of everything I worry about, humans are going to do cool shit and society is going to find some way to reward it. That seems pretty hardwired. We want to create, we want to be useful, we want to achieve status in whatever way. That’s not going anywhere I don’t think.
Lex Fridman (00:41:17) But the reward might not be monetary financially. It might be fame and celebration of other cool-
Sam Altman (00:41:25) Maybe financial in some other way. Again, I don’t think we’ve seen the last evolution of how the economic system’s going to work.
Lex Fridman (00:41:31) Yeah, but artists and creators are worried. When they see Sora, they’re like, “Holy shit.”
Sam Altman (00:41:36) Sure. Artists were also super worried when photography came out and then photography became a new art form and people made a lot of money taking pictures. I think things like that will keep happening. People will use the new tools in new ways.
Lex Fridman (00:41:50) If we just look on YouTube or something like this, how much of that will be using Sora-like AI-generated content, do you think, in the next five years?
Sam Altman (00:42:01) People talk about how many jobs is AI going to do in five years. The framework that people have is, what percentage of current jobs are just going to be totally replaced by some AI doing the job? The way I think about it is not what percent of jobs AI will do, but what percent of tasks will AI do on over one time horizon. So if you think of all of the five-second tasks in the economy, five minute tasks, the five-hour tasks, maybe even the five-day tasks, how many of those can AI do? I think that’s a way more interesting, impactful, important question than how many jobs AI can do because it is a tool that will work at increasing levels of sophistication and over longer and longer time horizons for more and more tasks and let people operate at a higher level of abstraction. So maybe people are way more efficient at the job they do. And at some point that’s not just a quantitative change, but it’s a qualitative one too about the kinds of problems you can keep in your head. I think that for videos on YouTube it’ll be the same. Many videos, maybe most of them, will use AI tools in the production, but they’ll still be fundamentally driven by a person thinking about it, putting it together, doing parts of it. Sort of directing and running it.
Lex Fridman (00:43:18) Yeah, it’s so interesting. I mean it’s scary, but it’s interesting to think about. I tend to believe that humans like to watch other humans or other human humans-
Sam Altman (00:43:27) Humans really care about other humans a lot.
Lex Fridman (00:43:29) Yeah. If there’s a cooler thing that’s better than a human, humans care about that for two days and then they go back to humans.
Sam Altman (00:43:39) That seems very deeply wired.
Lex Fridman (00:43:41) It’s the whole chess thing: “Oh, yeah,” but now everybody keeps playing chess, and we ignore the elephant in the room, that humans are really bad at chess relative to AI systems.
Sam Altman (00:43:52) We still run races, and cars are much faster. I mean, there are a lot of examples.
Lex Fridman (00:43:56) Yeah. And maybe it’ll just be tooling, in the Adobe-suite type of way, where it can just make videos much easier, and all that kind of stuff.
(00:44:07) Listen, I hate being in front of the camera. If I could figure out a way to not be in front of the camera, I would love it. Unfortunately, it’ll take a while. Generating faces is getting there, but generating faces in video format is tricky when it’s specific people versus generic people.

GPT-4

(00:44:24) Let me ask you about GPT-4. There are so many questions. First of all, it’s also amazing. Looking back, it’ll probably be seen as this kind of historic, pivotal moment, with 3.5 and 4 and ChatGPT.
Sam Altman (00:44:40) Maybe 5 will be the pivotal moment. I don’t know. Hard to say, looking forward.
Lex Fridman (00:44:44) We’ll never know. That’s the annoying thing about the future, it’s hard to predict. But for me, looking back, GPT-4 and ChatGPT are pretty damn impressive, historically impressive. So allow me to ask: what have been the most impressive capabilities of GPT-4 and GPT-4 Turbo, to you?
Sam Altman (00:45:06) I think it kind of sucks.
Lex Fridman (00:45:08) Typical human, also: gotten used to an awesome thing.
Sam Altman (00:45:11) No, I think it is an amazing thing, but relative to where we need to get to, and where I believe we will get to: at the time of GPT-3, people were like, “Oh, this is amazing. This is a marvel of technology.” And it is; it was. But now we have GPT-4, and you look at GPT-3 and you’re like, “That’s unimaginably horrible.” I expect that the delta between 5 and 4 will be the same as between 4 and 3, and I think it is our job to live a few years in the future and remember that the tools we have now are going to kind of suck looking backwards at them, and that’s how we make sure the future is better.
Lex Fridman (00:45:59) What are the most glorious ways in which GPT-4 sucks? Meaning-
Sam Altman (00:46:05) What are the best things it can do?
Lex Fridman (00:46:06) What are the best things it can do, and the limits of those best things, that allow you to say it sucks and therefore give you inspiration and hope for the future?
Sam Altman (00:46:16) One thing I’ve been using it for more recently is as a sort of brainstorming partner.
Lex Fridman (00:46:23) Yep, [inaudible 00:46:25] for that.
Sam Altman (00:46:25) There’s a glimmer of something amazing in there. When people talk about what it does, they’re like, “Oh, it helps me code more productively. It helps me write faster and better. It helps me translate from this language to another,” all these amazing things, but there’s something about the kind of creative brainstorming partner, “I need to come up with a name for this thing. I need to think about this problem in a different way. I’m not sure what to do here,” that I think gives a glimpse of something I hope to see more of.
(00:47:03) One of the other things where you can see a very small glimpse of this is when it can help on longer-horizon tasks: break something down into multiple steps, maybe execute some of those steps, search the internet, write code, whatever, and put that together. When that works, which is not very often, it’s very magical.
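A minimal, hypothetical sketch of that plan-execute-assemble pattern, in Python. Nothing here reflects OpenAI’s actual systems: call_model, the plan format, and the tool table are invented stand-ins for a real chat-completion API and real tools.

```python
# Hypothetical sketch of a longer-horizon loop: plan, run each step with a
# tool (or the model itself), then assemble the results. All names invented.
from typing import Callable, Dict, List

def call_model(prompt: str) -> str:
    # Stand-in for a real LLM call; returns a canned plan for illustration.
    if prompt.startswith("Break"):
        return "1. search the web\n2. write code\n3. summarize findings"
    return f"final answer assembled from: {prompt}"

def run_long_horizon_task(goal: str, tools: Dict[str, Callable[[str], str]]) -> str:
    # Step 1: ask the model to decompose the goal into numbered steps.
    plan = call_model(f"Break this goal into numbered steps: {goal}")
    results: List[str] = []
    # Step 2: execute each step, dispatching to a tool when one matches.
    for line in plan.splitlines():
        step = line.split(". ", 1)[-1]       # strip the "1." numbering
        tool = tools.get(step.split()[0])    # crude lookup by the step's verb
        results.append(tool(step) if tool else f"(model handles: {step})")
    # Step 3: ask the model to put the pieces back together.
    return call_model(f"Combine these results for '{goal}': {results}")

tools = {
    "search": lambda step: f"web results for '{step}'",
    "write": lambda step: f"code produced for '{step}'",
}
print(run_long_horizon_task("analyze a public dataset", tools))
```

The fragile part, and the reason it is “not very often” magical, is exactly the middle loop: each step’s failure compounds across the horizon.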
Lex Fridman (00:47:24) The iterative back and forth with a human, it works a lot for me. What do you mean it-
Sam Altman (00:47:29) Iterative back and forth with a human it can do more often; it’s when it can go do a 10-step problem on its own-
Lex Fridman (00:47:33) Oh.
Sam Altman (00:47:34) It doesn’t work for that too often, sometimes.
Lex Fridman (00:47:37) Across multiple layers of abstraction, or do you mean just sequential?
Sam Altman (00:47:40) Both: to break it down, and then do things at different layers of abstraction and put them together. Look, I don’t want to downplay the accomplishment of GPT-4, but I don’t want to overstate it either. And I think the point is that we are on an exponential curve; we’ll look back relatively soon at GPT-4 like we look back at GPT-3 now.
Lex Fridman (00:48:03) That said, I mean, ChatGPT was a transition to where people started to believe; there was an uptick of believing, not internally at OpenAI.
Sam Altman (00:48:04) For sure.
Lex Fridman (00:48:16) Perhaps there are believers here, but when you think of-
Sam Altman (00:48:19) And in that sense, I do think it’ll be remembered as a moment where a lot of the world went from not believing to believing. That was more about the ChatGPT interface. And by the interface and product, I also mean the post-training of the model, how we tune it to be helpful to you, and how to use it, more than the underlying model itself.
Lex Fridman (00:48:38) How important is each of those things: the underlying model, and the RLHF, or something of that nature, that tunes it to be more compelling to the human, more effective and productive for the human?
Sam Altman (00:48:55) I mean, they’re both super important, but the RLHF, the post-training step, the little wrapper of things (little from a compute perspective, even though it’s a huge amount of work) that we do on top of the base model, that’s really important, to say nothing of the product that we build around it. In some sense, we did have to do two things. We had to invent the underlying technology, and then we had to figure out how to make it into a product people would love, which is not just about the actual product work itself, but this whole other step of how you align it and make it useful.
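To make the “little wrapper, from a compute perspective” idea concrete, here is a toy, hypothetical sketch of an RLHF-style post-training loop in Python. The candidate replies, the reward lookup table, and the plain REINFORCE update are invented stand-ins, not OpenAI’s method; in real RLHF the reward model is itself a network trained on human preference comparisons, and the update is a policy-gradient algorithm such as PPO.

```python
# Toy RLHF-style post-training: nudge a copy of a "base model" distribution
# toward replies a reward model prefers. All names and numbers are invented.
import math
import random

CANDIDATES = ["helpful answer", "evasive answer", "rude answer"]

# Stand-in "reward model": here just a lookup table of preference scores.
reward = {"helpful answer": 1.0, "evasive answer": -0.2, "rude answer": -1.0}

def softmax(logits):
    m = max(logits.values())
    exps = {k: math.exp(v - m) for k, v in logits.items()}
    total = sum(exps.values())
    return {k: v / total for k, v in exps.items()}

# "Base model": uniform logits over candidate replies; post-training adjusts
# a copy of them, which is the small wrapper on top of the frozen base.
policy_logits = {c: 0.0 for c in CANDIDATES}
learning_rate = 0.1

for _ in range(500):
    probs = softmax(policy_logits)
    sample = random.choices(CANDIDATES, weights=[probs[c] for c in CANDIDATES])[0]
    # Advantage: how much better this reply scored than the current average.
    baseline = sum(probs[c] * reward[c] for c in CANDIDATES)
    advantage = reward[sample] - baseline
    # REINFORCE update: raise the log-probability of the sampled reply
    # in proportion to its advantage.
    for c in CANDIDATES:
        grad = (1.0 if c == sample else 0.0) - probs[c]
        policy_logits[c] += learning_rate * advantage * grad

print(softmax(policy_logits))  # probability mass shifts toward "helpful answer"
```

The sketch illustrates the point made above: the trainable wrapper is tiny next to the base model, yet it is what shifts probability mass toward replies humans actually prefer.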