Good morning,
On Monday’s Sharp Tech we discussed the TikTok situation, including a digression about how the nature of governing may be poised to change under the current administration (the flood of executive orders very much fit within the thesis I put forward), and how tech companies have become quasi-government entities in their own right.
On to the Update:
Stratechery Updates
A few notes about some Stratechery changes that happened over the weekend:
- First, the Stratechery website has been re-designed to better accommodate the additional content in the Stratechery bundle. Articles and Updates now include a podcast player (and YouTube link, if it exists), and the latest podcasts in the Stratechery Plus bundle are in the sidebar. There is also a new visual distinction between quotes from external sources versus from past Stratechery posts (this has not yet been added to the emails).
- Second, there were two bugs that I want to apologize for. First, feeds across the Stratechery bundle contained podcasts from other shows early Monday morning. That was rectified fairly quickly, and you can refresh your feed if they are still incorrect. Secondly, and more seriously, the SMS notification for yesterday’s Interview with Jon Yu was sent to every phone number, when it should have only been sent to those who have opted in to SMS notifications in their account settings. This is totally unacceptable on my part and I am very sorry about this.
This was, as you might have guessed, a very substantive Update to nearly every piece of Stratechery infrastructure, and frankly, we didn’t handle it as well as we should have (I haven’t even mentioned countless Updates over the last few years, because they usually go much more smoothly!). We did learn a lot, though, and fixed some very obscure bugs that will make Passport better in the long run.
With that out of the way, I am very excited about the content additions: I think that Jon Yu’s Asianometry is a tremendous resource; check out the introductory post if you missed it, as well as the Interview I did with Jon.
That brings me to the final point: if you want to receive emails from Jon or add the Asianometry podcast, you do need to go to the Asianometry Passport page to opt in and add the podcast. My final mistake in yesterday’s post was to include automatic “Add this podcast” links to the podcast show notes, but for reasons that are complicated to explain — and which will be fixed — you actually need to have signed in first. I’m sorry for the confusion on this point.
I know this weekend was a bit messy, but my goal is to give you even more compelling content for your subscription at no additional cost, and I thank you for your patience as we make this happen.
DeepSeek-R1
From VentureBeat
Chinese AI startup DeepSeek, known for challenging leading AI vendors with open-source technologies, just dropped another bombshell: a new open reasoning LLM called DeepSeek-R1. Based on the recently introduced DeepSeek V3 mixture-of-experts model, DeepSeek-R1 matches the performance of o1, OpenAI’s frontier reasoning LLM, across math, coding and reasoning tasks. The best part? It does this at a much more tempting cost, proving to be 90-95% more affordable than the latter.
The release marks a major leap forward in the open-source arena. It showcases that open models are further closing the gap with closed commercial models in the race to artificial general intelligence (AGI). To show the prowess of its work, DeepSeek also used R1 to distill six Llama and Qwen models, taking their performance to new levels. In one case, the distilled version of Qwen-1.5B outperformed much bigger models, GPT-4o and Claude 3.5 Sonnet, in select math benchmarks. These distilled models, along with the main R1, have been open-sourced and are available on Hugging Face under an MIT license.
As a quick aside, the distilled Llama models are almost certainly violating the Llama license; DeepSeek doesn’t have the right to unilaterally change the Llama license, which is “open”, but not unencumbered like the MIT license.
That noted, this model is a big deal, even if it’s not the first we’ve heard of it. Over Christmas break DeepSeek released a GPT-4 level model called V3 that was notable for how efficiently it was trained, using only 2788K H800 training hours, which cost around $5.6 million, a shockingly low figure (and easily covered through smuggled chips). V3 was also fine-tuned using a then-unreleased reasoning model called R1 to enhance its capabilities in areas like coding, mathematics, and logic; from the V3 Technical Report:
To establish our methodology, we begin by developing an expert model tailored to a specific domain, such as code, mathematics, or general reasoning, using a combined Supervised Fine-Tuning (SFT) and Reinforcement Learning (RL) training pipeline. This expert model serves as a data generator for the final model. The training process involves generating two distinct types of SFT samples for each instance: the first couples the problem with its original response in the format of <problem, original response>, while the second incorporates a system prompt alongside the problem and the R1 response in the format of <system prompt, problem, R1 response>.
The system prompt is meticulously designed to include instructions that guide the model toward producing responses enriched with mechanisms for reflection and verification. During the RL phase, the model leverages high-temperature sampling to generate responses that integrate patterns from both the R1-generated and original data, even in the absence of explicit system prompts. After hundreds of RL steps, the intermediate RL model learns to incorporate R1 patterns, thereby enhancing overall performance strategically.
The key part here is that R1 was used to generate synthetic data to make V3 better; in other words, one AI was training another AI. This is a critical capability in terms of the progress for these models.
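To make the report’s description concrete, here is a minimal, hypothetical sketch of how the two SFT sample types might be assembled; the function and example strings are illustrative, not DeepSeek’s actual pipeline:

```python
# Illustrative sketch of the synthetic-data step described above: an R1-style
# expert model supplies reasoning-rich responses, which are packaged into the
# two SFT sample types the V3 report describes. All names and strings here are
# hypothetical; the real prompts, filtering, and scale are not public.

def build_sft_samples(problem: str, original_response: str,
                      r1_response: str, system_prompt: str) -> list[dict]:
    """Return the two SFT sample types: <problem, original response> and
    <system prompt, problem, R1 response>."""
    return [
        # Type 1: the problem paired with its original (non-reasoning) response.
        {"prompt": problem, "completion": original_response},
        # Type 2: a reflection/verification system prompt plus the problem,
        # paired with the reasoning-rich R1 response.
        {"prompt": f"{system_prompt}\n\n{problem}", "completion": r1_response},
    ]

# Hypothetical usage, with hand-written stand-ins for model outputs:
samples = build_sft_samples(
    problem="Prove that the sum of two even integers is even.",
    original_response="Write a = 2m and b = 2n; then a + b = 2(m + n), which is even.",
    r1_response="<think>Use the definition of even; verify with a = 2, b = 4.</think> "
                "Since a = 2m and b = 2n, a + b = 2(m + n) is even.",
    system_prompt="Think step by step and verify your answer before responding.",
)
```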
What is even more interesting, though, is how R1 developed its reasoning in the first place. To that end, the most interesting revelation in the R1 Technical Report was that DeepSeek actually developed two R1 models: R1 and R1-Zero. R1 is the model that is publicly available; R1-Zero, though, is the bigger deal in my mind. From the paper:
In this paper, we take the first step toward improving language model reasoning capabilities using pure reinforcement learning (RL). Our goal is to explore the potential of LLMs to develop reasoning capabilities without any supervised data, focusing on their self-evolution through a pure RL process. Specifically, we use DeepSeek-V3-Base as the base model and employ GRPO as the RL framework to improve model performance in reasoning. During training, DeepSeek-R1-Zero naturally emerged with numerous powerful and interesting reasoning behaviors. After thousands of RL steps, DeepSeek-R1-Zero exhibits super performance on reasoning benchmarks. For instance, the pass@1 score on AIME 2024 increases from 15.6% to 71.0%, and with majority voting, the score further improves to 86.7%, matching the performance of OpenAI-o1-0912.
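For reference, the two scores quoted there, pass@1 and majority voting, are straightforward to compute once you have k sampled answers per problem. This is a generic sketch, not DeepSeek’s evaluation harness:

```python
from collections import Counter

def pass_at_1(samples: list[list[str]], answers: list[str]) -> float:
    """Average per-sample accuracy when k answers are drawn per problem."""
    per_problem = [
        sum(s == a for s in group) / len(group)
        for group, a in zip(samples, answers)
    ]
    return sum(per_problem) / len(per_problem)

def majority_vote(samples: list[list[str]], answers: list[str]) -> float:
    """Accuracy when each problem is answered by its most common sample."""
    hits = 0
    for group, a in zip(samples, answers):
        voted, _ = Counter(group).most_common(1)[0]
        hits += voted == a
    return hits / len(answers)

# Toy usage: three samples for each of two problems.
samples = [["42", "42", "41"], ["7", "7", "8"]]
answers = ["42", "7"]
print(pass_at_1(samples, answers))    # ~0.67: average per-sample accuracy
print(majority_vote(samples, answers))  # 1.0: voting recovers both answers
```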
Reinforcement learning is a technique where a machine learning model is given a bunch of data and a reward function. The classic example is AlphaGo, where DeepMind gave the model the rules of Go with the reward function of winning the game, and then let the model figure everything else out on its own. This famously ended up working better than other more human-guided techniques.
LLMs to date, however, have relied on reinforcement learning with human feedback; humans are in the loop to help guide the model, navigate difficult choices where rewards aren’t obvious, etc. RLHF was the key innovation in transforming GPT-3 into ChatGPT, with well-formed paragraphs, answers that were concise and didn’t trail off into gibberish, etc.
R1-Zero, however, drops the HF part — it’s just reinforcement learning. DeepSeek gave the model a set of math, code, and logic questions, and set two reward functions: one for the right answer, and one for the right format that utilized a thinking process. Moreover, the technique was a simple one: instead of trying to evaluate step-by-step (process supervision), or doing a search of all possible answers (a la AlphaGo), DeepSeek encouraged the model to try several different answers at a time and then graded them according to the two reward functions.
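A minimal sketch of that recipe, assuming a think-then-answer tag format and group-relative (GRPO-style) normalization; the tags, answer matching, and toy data are my assumptions, not DeepSeek’s code:

```python
import re
import statistics

THINK_ANSWER = re.compile(r"<think>.+?</think>\s*<answer>.+?</answer>", re.DOTALL)

def accuracy_reward(completion: str, reference: str) -> float:
    """Reward 1 if the text inside <answer> tags matches the reference answer."""
    m = re.search(r"<answer>(.*?)</answer>", completion, re.DOTALL)
    return 1.0 if m and m.group(1).strip() == reference.strip() else 0.0

def format_reward(completion: str) -> float:
    """Reward 1 if the completion follows the think-then-answer format."""
    return 1.0 if THINK_ANSWER.search(completion) else 0.0

def group_advantages(completions: list[str], reference: str) -> list[float]:
    """Grade a group of sampled answers relative to each other (GRPO-style):
    reward each one, then normalize by the group mean and standard deviation."""
    rewards = [accuracy_reward(c, reference) + format_reward(c) for c in completions]
    mean = statistics.mean(rewards)
    std = statistics.pstdev(rewards) or 1.0  # guard against identical rewards
    return [(r - mean) / std for r in rewards]

# Toy usage: three sampled completions for one math question.
group = [
    "<think>2+2=4</think> <answer>4</answer>",  # right answer, right format
    "<think>guess</think> <answer>5</answer>",  # wrong answer, right format
    "The answer is 4",                          # right idea, wrong format
]
print(group_advantages(group, reference="4"))
```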
What emerged is a model that developed reasoning and chains-of-thought on its own, including what DeepSeek called “Aha Moments”:
A particularly intriguing phenomenon observed during the training of DeepSeek-R1-Zero is the occurrence of an “aha moment”. This moment, as illustrated in Table 3, occurs in an intermediate version of the model. During this phase, DeepSeek-R1-Zero learns to allocate more thinking time to a problem by reevaluating its initial approach. This behavior is not only a testament to the model’s growing reasoning abilities but also a captivating example of how reinforcement learning can lead to unexpected and sophisticated outcomes.
This moment is not only an “aha moment” for the model but also for the researchers observing its behavior. It underscores the power and beauty of reinforcement learning: rather than explicitly teaching the model on how to solve a problem, we simply provide it with the right incentives, and it autonomously develops advanced problem-solving strategies. The “aha moment” serves as a powerful reminder of the potential of RL to unlock new levels of intelligence in artificial systems, paving the way for more autonomous and adaptive models in the future.
This is one of the most powerful affirmations yet of The Bitter Lesson: you don’t need to teach the AI how to reason, you can just give it enough compute and data and it will teach itself!
Well, almost: R1-Zero reasons, but in a way that humans have trouble understanding. Back to the introduction:
However, DeepSeek-R1-Zero encounters challenges such as poor readability, and language mixing. To address these issues and further enhance reasoning performance, we introduce DeepSeek-R1, which incorporates a small amount of cold-start data and a multi-stage training pipeline. Specifically, we begin by collecting thousands of cold-start data to fine-tune the DeepSeek-V3-Base model. Following this, we perform reasoning-oriented RL like DeepSeek-R1-Zero. Upon nearing convergence in the RL process, we create new SFT data through rejection sampling on the RL checkpoint, combined with supervised data from DeepSeek-V3 in domains such as writing, factual QA, and self-cognition, and then retrain the DeepSeek-V3-Base model. After fine-tuning with the new data, the checkpoint undergoes an additional RL process, taking into account prompts from all scenarios. After these steps, we obtained a checkpoint referred to as DeepSeek-R1, which achieves performance on par with OpenAI-o1-1217.
This sounds a lot like what OpenAI did for o1: DeepSeek started the model out with a bunch of examples of chain-of-thought thinking so it could learn the proper format for human consumption, and then did the reinforcement learning to enhance its reasoning, along with a number of editing and refinement steps; the output is a model that appears to be very competitive with o1.
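Laid out as code, that staged recipe looks roughly like the following; every helper is a hypothetical stub for a process the paper describes but does not release, so only the ordering of the stages carries meaning:

```python
# Structural sketch only: the stubs below just thread labels through the
# pipeline so the stage ordering from the quote is explicit and runnable.

def sft(base_model: str, dataset: list) -> str:            # supervised fine-tuning (stub)
    return f"{base_model}+sft[{len(dataset)}]"

def reasoning_rl(model: str, prompts: list) -> str:        # rule-rewarded RL (stub)
    return f"{model}+rl[{len(prompts)}]"

def rejection_sample(model: str, prompts: list) -> list:   # keep only verified outputs (stub)
    return [f"{model} -> {p}" for p in prompts]

def train_r1(v3_base, cold_start_data, reasoning_prompts, general_prompts, v3_sft_data):
    # 1. Cold start: SFT on a few thousand curated chain-of-thought examples.
    model = sft(v3_base, cold_start_data)
    # 2. Reasoning-oriented RL, as in R1-Zero.
    model = reasoning_rl(model, reasoning_prompts)
    # 3. Rejection-sample new SFT data from the RL checkpoint, combine it with
    #    supervised data from V3 (writing, factual QA, self-cognition), and
    #    retrain the base model on the result.
    model = sft(v3_base, rejection_sample(model, reasoning_prompts) + v3_sft_data)
    # 4. A final RL pass over prompts from all scenarios yields DeepSeek-R1.
    return reasoning_rl(model, reasoning_prompts + general_prompts)
```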
The third thing DeepSeek did was fine-tune various sized models from Qwen (Alibaba’s family of models) and Llama with R1; this imbued those models with R1’s reasoning capability, dramatically increasing their performance. Moreover, this increase in performance exceeded the gains from simply training the smaller models directly to reason; in other words, effectively having a large model train a small model provides better results than improving the smaller model directly.
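The distillation step itself is conceptually simple: have the big model write out reasoning traces, then fine-tune the smaller model on them with ordinary supervised fine-tuning. A hedged sketch, with hypothetical teacher_generate and finetune callables standing in for real training code:

```python
# Sketch of teacher -> student distillation as described above. The callables
# are hypothetical; any SFT framework could fill them in.

from typing import Callable

def distill(teacher_generate: Callable[[str], str],
            finetune: Callable[[str, list[tuple[str, str]]], str],
            student_model: str,
            prompts: list[str]) -> str:
    # 1. The teacher (e.g. R1) writes out full chain-of-thought answers.
    traces = [(p, teacher_generate(p)) for p in prompts]
    # 2. The student (e.g. a Qwen or Llama checkpoint) is fine-tuned on those
    #    traces with plain SFT; no RL is applied to the student in this sketch.
    return finetune(student_model, traces)
```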
DeepSeek Implications
All of this is quite remarkable, even if it might not be entirely new; after all, the o1 and o3 models exist. We don’t, however, know anything about those models, given that OpenAI is anything but; the fact these technical papers exist — and the fact that R1 actually writes out its thinking process — is particularly exciting.
What is confounding about this reality is the provenance of these models: China. Somehow we’ve ended up in a situation where the leading U.S. labs, including OpenAI and Anthropic (which hasn’t yet released a reasoning model), are keeping everything close to their vest, while this Chinese lab — itself a spin-off from a quantitative hedge fund called High-Flyer — is probably the world leader in open source models, and certainly in open-source reasoning models.
They also have no choice but to lead in efficiency, thanks to the challenge in acquiring chips; that efficiency, moreover, means that DeepSeek has caused a price war in China amongst LLM inference providers. Notice, however, the long-term path that points to: all U.S. AI model providers would like more chips, but at the end of the day access to chips is not their number one constraint; that means that it is unlikely they will achieve similar levels of efficiency. That’s a problem, however, in a world where models are a commodity: absent differentiation, the only way to have sustainable profits is through a sustainable cost advantage, and DeepSeek appears to be further down the road towards delivering exactly that, even as the U.S. model makers basically benefit from protectionism, which invariably leads to less competitive entities in a commoditized world.
In other words, you could make the case that the China Chip Ban hasn’t just failed, but actually backfired; Tyler Cowen suggested as much in Bloomberg:
I have in the past supported these trade restrictions, as AI technology is a vital matter of national security. But I now think the ban was too ambitious to work. It may have delayed Chinese progress in AI by a few years, but it also induced a major Chinese innovation — namely, DeepSeek.
Now the world knows that a very high-quality AI system can be trained for a relatively small sum of money. That could bring comparable AI systems into realistic purview for nations such as Russia, Iran, Pakistan and others. It is possible to imagine a foreign billionaire initiating a similar program, although personnel would be a constraint. Whatever the dangers of the Chinese system and its potential uses, DeepSeek-inspired offshoots in other nations could be more worrying yet.
Finding cheaper ways to build AI systems was almost certainly going to happen anyway. But consider the tradeoff here: US policy succeeded in hampering China’s ability to deploy high-quality chips in AI systems, with the accompanying national-security benefits, but it also accelerated the development of effective AI systems that do not rely on the highest-quality chips.
Of course now the argument will be that the chip ban is even more important, now that China has its own leading models; one wonders, however, if at some point it might be worth asking if (1) AI capability can truly be contained and (2) we might be overly confident in our ability to predict exactly how the landscape will evolve, which ought to increase the discount rate of perceived future benefits of controls as compared to very real losses today.
This Update will be available as a podcast later today. To receive it in your podcast player, visit Stratechery.
The Stratechery Update is intended for a single recipient, but occasional forwarding is totally fine! If you would like to order multiple subscriptions for your team with a group discount (minimum 5), please contact me directly.
Thanks for being a subscriber, and have a great day!