DeepSeek-V3 Technical Report
Abstract
We present DeepSeek-V3, a strong Mixture-of-Experts (MoE) language model with 671B total parameters, of which 37B are activated for each token.
To achieve efficient inference and cost-effective training, DeepSeek-V3 adopts Multi-head Latent Attention (MLA) and DeepSeekMoE architectures, which were thoroughly validated in DeepSeek-V2.
Furthermore, DeepSeek-V3 pioneers an auxiliary-loss-free strategy for load balancing and sets a multi-token prediction training objective for stronger performance.
We pre-train DeepSeek-V3 on 14.8 trillion diverse and high-quality tokens, followed by Supervised Fine-Tuning and Reinforcement Learning stages to fully harness its capabilities.
Comprehensive evaluations reveal that DeepSeek-V3 outperforms other open-source models and achieves performance comparable to leading closed-source models.
Despite its excellent performance, DeepSeek-V3 requires only 2.788M H800 GPU hours for its full training.
In addition, its training process is remarkably stable.
Throughout the entire training process, we did not experience any irrecoverable loss spikes or perform any rollbacks.
The model checkpoints are available at https://github.com/deepseek-ai/DeepSeek-V3.
1 Introduction
In recent years, Large Language Models (LLMs) have been undergoing rapid iteration and evolution (OpenAI, 2024a; Anthropic, 2024; Google, 2024), progressively diminishing the gap towards Artificial General Intelligence (AGI).
Beyond closed-source models, open-source models, including DeepSeek series (DeepSeek-AI, 2024b, c; Guo et al., 2024; DeepSeek-AI, 2024a), LLaMA series (Touvron et al., 2023a, b; AI@Meta, 2024a, b), Qwen series (Qwen, 2023, 2024a, 2024b), and Mistral series (Jiang et al., 2023; Mistral, 2024), are also making significant strides, endeavoring to close the gap with their closed-source counterparts.
To further push the boundaries of open-source model capabilities, we scale up our models and introduce DeepSeek-V3, a large Mixture-of-Experts (MoE) model with 671B parameters, of which 37B are activated for each token.
With a forward-looking perspective, we consistently strive for strong model performance and economical costs.
Therefore, in terms of architecture, DeepSeek-V3 still adopts Multi-head Latent Attention (MLA) (DeepSeek-AI, 2024c) for efficient inference and DeepSeekMoE (Dai et al., 2024) for cost-effective training.
These two architectures have been validated in DeepSeek-V2 (DeepSeek-AI, 2024c), demonstrating their capability to maintain robust model performance while achieving efficient training and inference.
Beyond the basic architecture, we implement two additional strategies to further enhance the model capabilities.
Firstly, DeepSeek-V3 pioneers an auxiliary-loss-free strategy (Wang et al., 2024a) for load balancing, with the aim of minimizing the adverse impact on model performance that arises from the effort to encourage load balancing.
Secondly, DeepSeek-V3 employs a multi-token prediction training objective, which we have observed to enhance the overall performance on evaluation benchmarks.
In order to achieve efficient training, we support the FP8 mixed precision training and implement comprehensive optimizations for the training framework.
Low-precision training has emerged as a promising solution for efficient training (Kalamkar et al., 2019; Narang et al., 2017; Peng et al., 2023b; Dettmers et al., 2022), its evolution being closely tied to advancements in hardware capabilities (Micikevicius et al., 2022; Luo et al., 2024; Rouhani et al., 2023a).
In this work, we introduce an FP8 mixed precision training framework and, for the first time, validate its effectiveness on an extremely large-scale model.
Through the support for FP8 computation and storage, we achieve both accelerated training and reduced GPU memory usage.
As for the training framework, we design the DualPipe algorithm for efficient pipeline parallelism, which has fewer pipeline bubbles and hides most of the communication during training through computation-communication overlap.
This overlap ensures that, as the model further scales up, as long as we maintain a constant computation-to-communication ratio, we can still employ fine-grained experts across nodes while achieving a near-zero all-to-all communication overhead.
In addition, we also develop efficient cross-node all-to-all communication kernels to fully utilize InfiniBand (IB) and NVLink bandwidths.
Furthermore, we meticulously optimize the memory footprint, making it possible to train DeepSeek-V3 without using costly tensor parallelism.
Combining these efforts, we achieve high training efficiency.
During pre-training, we train DeepSeek-V3 on 14.8T high-quality and diverse tokens.
The pre-training process is remarkably stable.
Throughout the entire training process, we did not encounter any irrecoverable loss spikes or have to roll back.
Next, we conduct a two-stage context length extension for DeepSeek-V3. In the first stage, the maximum context length is extended to 32K, and in the second stage, it is further extended to 128K.
Following this, we conduct post-training, including Supervised Fine-Tuning (SFT) and Reinforcement Learning (RL) on the base model of DeepSeek-V3, to align it with human preferences and further unlock its potential.
During the post-training stage, we distill the reasoning capability from the DeepSeek-R1 series of models, and meanwhile carefully maintain the balance between model accuracy and generation length.
We evaluate DeepSeek-V3 on a comprehensive array of benchmarks.
Despite its economical training costs, comprehensive evaluations reveal that DeepSeek-V3-Base has emerged as the strongest open-source base model currently available, especially in code and math.
Its chat version also outperforms other open-source models and achieves performance comparable to leading closed-source models, including GPT-4o and Claude-3.5-Sonnet, on a series of standard and open-ended benchmarks.
| Training Costs | Pre-Training | Context Extension | Post-Training | Total |
|---|---|---|---|---|
| in H800 GPU Hours | 2664K | 119K | 5K | 2788K |
| in USD | $5.328M | $0.238M | $0.01M | $5.576M |

Table 1: Training costs of DeepSeek-V3, assuming the rental price of the H800 GPU is $2 per GPU hour.
Lastly, we emphasize again the economical training costs of DeepSeek-V3, summarized in Table 1, achieved through our optimized co-design of algorithms, frameworks, and hardware.
During the pre-training stage, training DeepSeek-V3 on each trillion tokens requires only 180K H800 GPU hours, i.e., 3.7 days on our cluster with 2048 H800 GPUs.
Consequently, our pre-training stage is completed in less than two months and costs 2664K GPU hours.
Combined with 119K GPU hours for the context length extension and 5K GPU hours for post-training, DeepSeek-V3 costs only 2.788M GPU hours for its full training.
Assuming the rental price of the H800 GPU is $2 per GPU hour, our total training costs amount to only $5.576M.
Note that the aforementioned costs include only the official training of DeepSeek-V3, excluding the costs associated with prior research and ablation experiments on architectures, algorithms, or data.
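As a quick arithmetic cross-check of these figures (a restatement of Table 1 and the numbers above, not new measurements):

$$
\begin{aligned}
\text{Pre-training:}\quad & 14.8\ \text{T tokens} \times 180\text{K GPU hours / T} = 2664\text{K GPU hours}, \\
& 180{,}000 \div 2048\ \text{GPUs} \approx 88\ \text{hours} \approx 3.7\ \text{days per trillion tokens}, \\
\text{Total:}\quad & 2664\text{K} + 119\text{K} + 5\text{K} = 2788\text{K GPU hours}, \\
\text{Cost:}\quad & 2788\text{K GPU hours} \times \$2/\text{GPU hour} = \$5.576\text{M}.
\end{aligned}
$$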
Our main contributions include:
Architecture: Innovative Load Balancing Strategy and Training Objective
- On top of the efficient architecture of DeepSeek-V2, we pioneer an auxiliary-loss-free strategy for load balancing, which minimizes the performance degradation that arises from encouraging load balancing.
- We investigate a Multi-Token Prediction (MTP) objective and prove it beneficial to model performance. It can also be used for speculative decoding for inference acceleration.
Pre-Training: Towards Ultimate Training Efficiency
- We design an FP8 mixed precision training framework and, for the first time, validate the feasibility and effectiveness of FP8 training on an extremely large-scale model.
- Through the co-design of algorithms, frameworks, and hardware, we overcome the communication bottleneck in cross-node MoE training, achieving near-full computation-communication overlap. This significantly enhances our training efficiency and reduces the training costs, enabling us to further scale up the model size without additional overhead.
- At an economical cost of only 2.664M H800 GPU hours, we complete the pre-training of DeepSeek-V3 on 14.8T tokens, producing the currently strongest open-source base model. The subsequent training stages after pre-training require only 0.1M GPU hours.
Post-Training: Knowledge Distillation from DeepSeek-R1
- We introduce an innovative methodology to distill reasoning capabilities from the long-Chain-of-Thought (CoT) model, specifically from one of the DeepSeek R1 series models, into standard LLMs, particularly DeepSeek-V3. Our pipeline elegantly incorporates the verification and reflection patterns of R1 into DeepSeek-V3 and notably improves its reasoning performance. Meanwhile, we also maintain control over the output style and length of DeepSeek-V3.
Summary of Core Evaluation Results
- Knowledge: (1) On educational benchmarks such as MMLU, MMLU-Pro, and GPQA, DeepSeek-V3 outperforms all other open-source models, achieving 88.5 on MMLU, 75.9 on MMLU-Pro, and 59.1 on GPQA. Its performance is comparable to leading closed-source models like GPT-4o and Claude-Sonnet-3.5, narrowing the gap between open-source and closed-source models in this domain. (2) For factuality benchmarks, DeepSeek-V3 demonstrates superior performance among open-source models on both SimpleQA and Chinese SimpleQA. While it trails behind GPT-4o and Claude-Sonnet-3.5 in English factual knowledge (SimpleQA), it surpasses these models in Chinese factual knowledge (Chinese SimpleQA), highlighting its strength in Chinese factual knowledge.
- Code, Math, and Reasoning: (1) DeepSeek-V3 achieves state-of-the-art performance on math-related benchmarks among all non-long-CoT open-source and closed-source models. Notably, it even outperforms o1-preview on specific benchmarks, such as MATH-500, demonstrating its robust mathematical reasoning capabilities. (2) On coding-related tasks, DeepSeek-V3 emerges as the top-performing model for coding competition benchmarks, such as LiveCodeBench, solidifying its position as the leading model in this domain. For engineering-related tasks, while DeepSeek-V3 performs slightly below Claude-Sonnet-3.5, it still outpaces all other models by a significant margin, demonstrating its competitiveness across diverse technical benchmarks.
In the remainder of this paper, we first present a detailed exposition of our DeepSeek-V3 model architecture (Section 2).
Subsequently, we introduce our infrastructures, encompassing our compute clusters, the training framework, the support for FP8 training, the inference deployment strategy, and our suggestions on future hardware design (Section 3).
Next, we describe our pre-training process, including the construction of training data, hyper-parameter settings, long-context extension techniques, the associated evaluations, as well as some discussions (Section 4).
Thereafter, we discuss our efforts on post-training, which include Supervised Fine-Tuning (SFT), Reinforcement Learning (RL), the corresponding evaluations, and discussions (Section 5).
Lastly, we conclude this work, discuss existing limitations of DeepSeek-V3, and propose potential directions for future research (Section 6).
2 Architecture
We first introduce the basic architecture of DeepSeek-V3, featured by Multi-head Latent Attention (MLA) (DeepSeek-AI, 2024c) for efficient inference and DeepSeekMoE (Dai et al., 2024) for economical training.
Then, we present a Multi-Token Prediction (MTP) training objective, which we have observed to enhance the overall performance on evaluation benchmarks.
For other minor details not explicitly mentioned, DeepSeek-V3 adheres to the settings of DeepSeek-V2 (DeepSeek-AI, 2024c).
2.1 Basic Architecture
The basic architecture of DeepSeek-V3 is still within the Transformer (Vaswani et al., 2017) framework.
For efficient inference and economical training, DeepSeek-V3 also adopts MLA and DeepSeekMoE, which have been thoroughly validated by DeepSeek-V2.
Compared with DeepSeek-V2, an exception is that we additionally introduce an auxiliary-loss-free load balancing strategy (Wang et al., 2024a) for DeepSeekMoE to mitigate the performance degradation induced by the effort to ensure load balance.
Figure 2 illustrates the basic architecture of DeepSeek-V3, and we will briefly review the details of MLA and DeepSeekMoE in this section.
2.1.1 Multi-Head Latent Attention
For attention, DeepSeek-V3 adopts the MLA architecture.
Let $d$ denote the embedding dimension, $n_h$ denote the number of attention heads, $d_h$ denote the dimension per head, and $\mathbf{h}_t \in \mathbb{R}^{d}$ denote the attention input for the $t$-th token at a given attention layer.
The core of MLA is the low-rank joint compression for attention keys and values to reduce Key-Value (KV) cache during inference:
$$
\begin{aligned}
\mathbf{c}_t^{KV} &= W^{DKV} \mathbf{h}_t, && (1) \\
[\mathbf{k}_{t,1}^{C};\, \mathbf{k}_{t,2}^{C};\, \ldots;\, \mathbf{k}_{t,n_h}^{C}] = \mathbf{k}_t^{C} &= W^{UK} \mathbf{c}_t^{KV}, && (2) \\
\mathbf{k}_t^{R} &= \operatorname{RoPE}(W^{KR} \mathbf{h}_t), && (3) \\
\mathbf{k}_{t,i} &= [\mathbf{k}_{t,i}^{C};\, \mathbf{k}_t^{R}], && (4) \\
[\mathbf{v}_{t,1}^{C};\, \mathbf{v}_{t,2}^{C};\, \ldots;\, \mathbf{v}_{t,n_h}^{C}] = \mathbf{v}_t^{C} &= W^{UV} \mathbf{c}_t^{KV}, && (5)
\end{aligned}
$$
where $\mathbf{c}_t^{KV} \in \mathbb{R}^{d_c}$ is the compressed latent vector for keys and values; $d_c\,(\ll d_h n_h)$ indicates the KV compression dimension; $W^{DKV} \in \mathbb{R}^{d_c \times d}$ denotes the down-projection matrix; $W^{UK}, W^{UV} \in \mathbb{R}^{d_h n_h \times d_c}$ are the up-projection matrices for keys and values, respectively; $W^{KR} \in \mathbb{R}^{d_h^R \times d}$ is the matrix used to produce the decoupled key that carries Rotary Positional Embedding (RoPE) (Su et al., 2024); $\operatorname{RoPE}(\cdot)$ denotes the operation that applies RoPE matrices; and $[\cdot ; \cdot]$ denotes concatenation. Note that for MLA, only the blue-boxed vectors (i.e., $\mathbf{c}_t^{KV}$ and $\mathbf{k}_t^{R}$) need to be cached during generation, which results in significantly reduced KV cache while maintaining performance comparable to standard Multi-Head Attention (MHA) (Vaswani et al., 2017).
For the attention queries, we also perform a low-rank compression, which can reduce the activation memory during training:
$$
\begin{aligned}
\mathbf{c}_t^{Q} &= W^{DQ} \mathbf{h}_t, && (6) \\
[\mathbf{q}_{t,1}^{C};\, \mathbf{q}_{t,2}^{C};\, \ldots;\, \mathbf{q}_{t,n_h}^{C}] = \mathbf{q}_t^{C} &= W^{UQ} \mathbf{c}_t^{Q}, && (7) \\
[\mathbf{q}_{t,1}^{R};\, \mathbf{q}_{t,2}^{R};\, \ldots;\, \mathbf{q}_{t,n_h}^{R}] = \mathbf{q}_t^{R} &= \operatorname{RoPE}(W^{QR} \mathbf{c}_t^{Q}), && (8) \\
\mathbf{q}_{t,i} &= [\mathbf{q}_{t,i}^{C};\, \mathbf{q}_{t,i}^{R}], && (9)
\end{aligned}
$$
where $\mathbf{c}_t^{Q} \in \mathbb{R}^{d_c'}$ is the compressed latent vector for queries; $d_c'\,(\ll d_h n_h)$ denotes the query compression dimension; $W^{DQ} \in \mathbb{R}^{d_c' \times d}$ and $W^{UQ} \in \mathbb{R}^{d_h n_h \times d_c'}$ are the down-projection and up-projection matrices for queries, respectively; and $W^{QR} \in \mathbb{R}^{d_h^R n_h \times d_c'}$ is the matrix to produce the decoupled queries that carry RoPE.
Ultimately, the attention queries ($\mathbf{q}_{t,i}$), keys ($\mathbf{k}_{j,i}$), and values ($\mathbf{v}_{j,i}^{C}$) are combined to yield the final attention output $\mathbf{u}_t$:
$$
\begin{aligned}
\mathbf{o}_{t,i} &= \sum_{j=1}^{t} \operatorname{Softmax}_j\!\left(\frac{\mathbf{q}_{t,i}^{T} \mathbf{k}_{j,i}}{\sqrt{d_h + d_h^R}}\right) \mathbf{v}_{j,i}^{C}, && (10) \\
\mathbf{u}_t &= W^{O} [\mathbf{o}_{t,1};\, \mathbf{o}_{t,2};\, \ldots;\, \mathbf{o}_{t,n_h}], && (11)
\end{aligned}
$$
where $W^{O} \in \mathbb{R}^{d \times d_h n_h}$ denotes the output projection matrix.
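To make the data flow above concrete, the following PyTorch-style sketch mirrors the structure of Equations (1)-(11): a compressed KV latent plus a decoupled RoPE key, a compressed query path, and standard attention over the concatenated parts. The dimension values and the `apply_rope` placeholder are illustrative assumptions, not the exact production implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def apply_rope(x):
    """Placeholder for rotary positional embedding; the real RoPE mixes channel
    pairs with position-dependent sines and cosines."""
    return x  # identity stand-in to keep the sketch runnable

class MLASketch(nn.Module):
    def __init__(self, d_model=1024, n_heads=8, d_h=128, d_c=64, d_c_q=192, d_rope=64):
        super().__init__()
        self.n_heads, self.d_h, self.d_rope = n_heads, d_h, d_rope
        # KV path: down-projection to the compressed latent c^KV (Eq. 1),
        # then up-projections for keys (Eq. 2) and values (Eq. 5).
        self.W_dkv = nn.Linear(d_model, d_c, bias=False)
        self.W_uk = nn.Linear(d_c, n_heads * d_h, bias=False)
        self.W_uv = nn.Linear(d_c, n_heads * d_h, bias=False)
        # Decoupled key carrying RoPE, shared across heads (Eq. 3).
        self.W_kr = nn.Linear(d_model, d_rope, bias=False)
        # Query path: low-rank compression (Eqs. 6-8).
        self.W_dq = nn.Linear(d_model, d_c_q, bias=False)
        self.W_uq = nn.Linear(d_c_q, n_heads * d_h, bias=False)
        self.W_qr = nn.Linear(d_c_q, n_heads * d_rope, bias=False)
        # Output projection (Eq. 11).
        self.W_o = nn.Linear(n_heads * d_h, d_model, bias=False)

    def forward(self, h):                      # h: [batch, seq, d_model]
        b, t, _ = h.shape
        # Only c_kv and k_r would need to be cached during generation.
        c_kv = self.W_dkv(h)                                           # [b, t, d_c]
        k_c = self.W_uk(c_kv).view(b, t, self.n_heads, self.d_h)
        v = self.W_uv(c_kv).view(b, t, self.n_heads, self.d_h)
        k_r = apply_rope(self.W_kr(h)).unsqueeze(2).expand(b, t, self.n_heads, self.d_rope)

        c_q = self.W_dq(h)
        q_c = self.W_uq(c_q).view(b, t, self.n_heads, self.d_h)
        q_r = apply_rope(self.W_qr(c_q)).view(b, t, self.n_heads, self.d_rope)

        # Concatenate the compressed and RoPE-carrying parts (Eqs. 4 and 9).
        q = torch.cat([q_c, q_r], dim=-1).transpose(1, 2)    # [b, heads, t, d_h + d_rope]
        k = torch.cat([k_c, k_r], dim=-1).transpose(1, 2)
        v = v.transpose(1, 2)

        o = F.scaled_dot_product_attention(q, k, v, is_causal=True)    # Eq. 10
        return self.W_o(o.transpose(1, 2).reshape(b, t, -1))           # Eq. 11
```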
2.1.2 DeepSeekMoE with Auxiliary-Loss-Free Load Balancing
Basic Architecture of DeepSeekMoE.
For Feed-Forward Networks (FFNs), DeepSeek-V3 employs the DeepSeekMoE architecture (Dai et al., 2024).
Compared with traditional MoE architectures like GShard (Lepikhin et al., 2021), DeepSeekMoE uses finer-grained experts and isolates some experts as shared ones.
Let $\mathbf{u}_t$ denote the FFN input of the $t$-th token; we compute the FFN output $\mathbf{h}_t'$ as follows:
$$
\begin{aligned}
\mathbf{h}_t' &= \mathbf{u}_t + \sum_{i=1}^{N_s} \operatorname{FFN}_i^{(s)}(\mathbf{u}_t) + \sum_{i=1}^{N_r} g_{i,t}\, \operatorname{FFN}_i^{(r)}(\mathbf{u}_t), && (12) \\
g_{i,t} &= \frac{g_{i,t}'}{\sum_{j=1}^{N_r} g_{j,t}'}, && (13) \\
g_{i,t}' &= \begin{cases} s_{i,t}, & s_{i,t} \in \operatorname{Topk}(\{s_{j,t} \mid 1 \leq j \leq N_r\}, K_r), \\ 0, & \text{otherwise}, \end{cases} && (14) \\
s_{i,t} &= \operatorname{Sigmoid}\left(\mathbf{u}_t^{T} \mathbf{e}_i\right), && (15)
\end{aligned}
$$
where $N_s$ and $N_r$ denote the numbers of shared experts and routed experts, respectively; $\operatorname{FFN}_i^{(s)}(\cdot)$ and $\operatorname{FFN}_i^{(r)}(\cdot)$ denote the $i$-th shared expert and the $i$-th routed expert, respectively; $K_r$ denotes the number of activated routed experts; $g_{i,t}$ is the gating value for the $i$-th expert; $s_{i,t}$ is the token-to-expert affinity; $\mathbf{e}_i$ is the centroid vector of the $i$-th routed expert; and $\operatorname{Topk}(\cdot, K)$ denotes the set comprising the $K$ highest scores among the affinity scores calculated for the $t$-th token and all routed experts. Slightly different from DeepSeek-V2, DeepSeek-V3 uses the sigmoid function to compute the affinity scores, and applies a normalization among all selected affinity scores to produce the gating values.
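As a minimal illustration of Equations (13)-(15), the sketch below computes sigmoid affinities against hypothetical expert centroids, selects the top routed experts, and normalizes only the selected scores to obtain the gating values; names such as `expert_centroids` and `k_routed` are placeholders.

```python
import torch

def deepseekmoe_gating_sketch(u, expert_centroids, k_routed):
    """Illustrative gating for Eqs. (13)-(15): sigmoid token-to-expert affinities,
    Top-K selection, and normalization over the selected scores only.
    u: [tokens, d_model]; expert_centroids: [n_routed, d_model] (the e_i vectors)."""
    s = torch.sigmoid(u @ expert_centroids.T)                     # s_{i,t}: affinity scores
    topk_scores, topk_idx = torch.topk(s, k_routed, dim=-1)       # Top-K_r routed experts
    gates = topk_scores / topk_scores.sum(dim=-1, keepdim=True)   # normalize among selected
    return gates, topk_idx                                        # g_{i,t} and expert indices

# Eq. (12): the layer output adds u, the always-active shared experts, and the
# selected routed experts weighted by these gating values.
```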
Auxiliary-Loss-Free Load Balancing.
For MoE models, an unbalanced expert load will lead to routing collapse (Shazeer et al., 2017) and diminish computational efficiency in scenarios with expert parallelism.
Conventional solutions usually rely on the auxiliary loss (Fedus et al., 2021; Lepikhin et al., 2021) to avoid unbalanced load.
However, too large an auxiliary loss will impair the model performance (Wang et al., 2024a).
To achieve a better trade-off between load balance and model performance, we pioneer an auxiliary-loss-free load balancing strategy (Wang et al., 2024a) to ensure load balance.
To be specific, we introduce a bias term $b_i$ for each expert and add it to the corresponding affinity scores $s_{i,t}$ to determine the top-K routing:
$$
g_{i,t}' = \begin{cases} s_{i,t}, & s_{i,t} + b_i \in \operatorname{Topk}(\{s_{j,t} + b_j \mid 1 \leq j \leq N_r\}, K_r), \\ 0, & \text{otherwise}. \end{cases} \qquad (16)
$$
Note that the bias term is only used for routing.
The gating value, which will be multiplied with the FFN output, is still derived from the original affinity score $s_{i,t}$.
During training, we keep monitoring the expert load on the whole batch of each training step.
At the end of each step, we will decrease the bias term by $\gamma$ if its corresponding expert is overloaded, and increase it by $\gamma$ if its corresponding expert is underloaded, where $\gamma$ is a hyper-parameter called the bias update speed.
Through the dynamic adjustment, DeepSeek-V3 keeps balanced expert load during training, and achieves better performance than models that encourage load balance through pure auxiliary losses.
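A minimal sketch of this update rule is given below; treating an expert as overloaded when its token count exceeds the batch mean is an assumption made for illustration, and `gamma` corresponds to the bias update speed $\gamma$.

```python
import torch

def update_routing_bias(bias, tokens_per_expert, gamma):
    """Auxiliary-loss-free balancing sketch: after each step, decrease the per-expert
    bias b_i for overloaded experts and increase it for underloaded ones by the bias
    update speed gamma. The bias is added to s_{i,t} only for the Top-K routing in
    Eq. (16); the gating value itself still comes from the original affinity score."""
    mean_load = tokens_per_expert.float().mean()
    overloaded = tokens_per_expert.float() > mean_load   # illustrative overload criterion
    return torch.where(overloaded, bias - gamma, bias + gamma)
```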
Complementary Sequence-Wise Auxiliary Loss.
Although DeepSeek-V3 mainly relies on the auxiliary-loss-free strategy for load balance, to prevent extreme imbalance within any single sequence, we also employ a complementary sequence-wise balance loss:
$$
\begin{aligned}
\mathcal{L}_{\mathrm{Bal}} &= \alpha \sum_{i=1}^{N_r} f_i P_i, && (17) \\
f_i &= \frac{N_r}{K_r T} \sum_{t=1}^{T} \mathbb{1}\left(s_{i,t} \in \operatorname{Topk}(\{s_{j,t} \mid 1 \leq j \leq N_r\}, K_r)\right), && (18) \\
s_{i,t}' &= \frac{s_{i,t}}{\sum_{j=1}^{N_r} s_{j,t}}, && (19) \\
P_i &= \frac{1}{T} \sum_{t=1}^{T} s_{i,t}', && (20)
\end{aligned}
$$
where the balance factor $\alpha$ is a hyper-parameter, which will be assigned an extremely small value for DeepSeek-V3; $\mathbb{1}(\cdot)$ denotes the indicator function; and $T$ denotes the number of tokens in a sequence. The sequence-wise balance loss encourages the expert load on each sequence to be balanced.
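The following sketch evaluates Equations (17)-(20) for a single sequence; the tensor names and the assumption that the selected Top-K indices are passed in directly are illustrative.

```python
import torch

def sequence_balance_loss(s, topk_idx, alpha):
    """Sketch of the sequence-wise balance loss (Eqs. 17-20) for one sequence.
    s: [T, n_routed] affinity scores; topk_idx: [T, K_r] routed experts selected
    per token; alpha: the balance factor (assigned an extremely small value)."""
    T, n_routed = s.shape
    k_r = topk_idx.shape[-1]
    # f_i: fraction of tokens routed to expert i, scaled by N_r / (K_r * T)   (Eq. 18)
    counts = torch.zeros(n_routed).scatter_add_(
        0, topk_idx.reshape(-1), torch.ones(T * k_r))
    f = counts * n_routed / (k_r * T)
    # P_i: mean normalized affinity of expert i over the sequence             (Eqs. 19-20)
    s_norm = s / s.sum(dim=-1, keepdim=True)
    P = s_norm.mean(dim=0)
    return alpha * (f * P).sum()                                              # Eq. 17
```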
Node-Limited Routing.
Like the device-limited routing used by DeepSeek-V2, DeepSeek-V3 also uses a restricted routing mechanism to limit communication costs during training.
In short, we ensure that each token will be sent to at most $M$ nodes, which are selected according to the sum of the highest $\frac{K_r}{M}$ affinity scores of the experts distributed on each node.
Under this constraint, our MoE training framework can nearly achieve full computation-communication overlap.
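As a rough sketch of this selection rule (with `experts_per_node`, `m_nodes`, and `top_per_node` as illustrative stand-ins for the expert layout, $M$, and $\frac{K_r}{M}$):

```python
import torch

def select_nodes(s, experts_per_node, m_nodes, top_per_node):
    """Node-limited routing sketch: score each node by the sum of its highest
    per-node affinities and keep at most `m_nodes` nodes for each token.
    s: [tokens, n_routed] affinity scores; experts are laid out contiguously per node."""
    tokens, n_routed = s.shape
    n_nodes = n_routed // experts_per_node
    s_by_node = s.view(tokens, n_nodes, experts_per_node)
    node_scores = s_by_node.topk(top_per_node, dim=-1).values.sum(dim=-1)
    allowed_nodes = node_scores.topk(m_nodes, dim=-1).indices     # [tokens, m_nodes]
    return allowed_nodes  # experts outside these nodes are masked before the Top-K of Eq. (16)
```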
No Token-Dropping.
Due to the effective load balancing strategy, DeepSeek-V3 keeps a good load balance during its full training.
Therefore, DeepSeek-V3 does not drop any tokens during training.
In addition, we also implement specific deployment strategies to ensure inference load balance, so DeepSeek-V3 also does not drop tokens during inference.
2.2 Multi-Token Prediction
Inspired by Gloeckle et al. (2024), we investigate and set a Multi-Token Prediction (MTP) objective for DeepSeek-V3, which extends the prediction scope to multiple future tokens at each position.
On the one hand, an MTP objective densifies the training signals and may improve data efficiency.
On the other hand, MTP may enable the model to pre-plan its representations for better prediction of future tokens.
Figure 3 illustrates our implementation of MTP.
Different from Gloeckle et al. (2024), which predicts $D$ additional tokens in parallel using independent output heads, we sequentially predict additional tokens and keep the complete causal chain at each prediction depth.
We introduce the details of our MTP implementation in this section.
MTP Modules.
To be specific, our MTP implementation uses $D$ sequential modules to predict $D$ additional tokens.
The $k$-th MTP module consists of a shared embedding layer $\operatorname{Emb}(\cdot)$, a shared output head $\operatorname{OutHead}(\cdot)$, a Transformer block $\operatorname{TRM}_k(\cdot)$, and a projection matrix $M_k \in \mathbb{R}^{d \times 2d}$.
For the $i$-th input token $t_i$, at the $k$-th prediction depth, we first combine the representation of the $i$-th token at the $(k-1)$-th depth $\mathbf{h}_i^{k-1} \in \mathbb{R}^{d}$ and the embedding of the $(i+k)$-th token $\operatorname{Emb}(t_{i+k}) \in \mathbb{R}^{d}$ with the linear projection:
$$
\mathbf{h}_i'^{\,k} = M_k [\operatorname{RMSNorm}(\mathbf{h}_i^{k-1});\, \operatorname{RMSNorm}(\operatorname{Emb}(t_{i+k}))], \qquad (21)
$$
where $[\cdot ; \cdot]$ denotes concatenation.
Especially, when $k = 1$, $\mathbf{h}_i^{k-1}$ refers to the representation given by the main model.
Note that for each MTP module, its embedding layer is shared with the main model.
The combined $\mathbf{h}_i'^{\,k}$ serves as the input of the Transformer block at the $k$-th depth to produce the output representation at the current depth $\mathbf{h}_i^{k}$:
$$
\mathbf{h}_{1:T-k}^{k} = \operatorname{TRM}_k(\mathbf{h}_{1:T-k}'^{\,k}), \qquad (22)
$$
where $T$ represents the input sequence length and $i\!:\!j$ denotes the slicing operation (inclusive of both the left and right boundaries).
Finally, taking $\mathbf{h}_i^{k}$ as the input, the shared output head will compute the probability distribution for the $k$-th additional prediction token $P_{i+1+k}^{k} \in \mathbb{R}^{V}$, where $V$ is the vocabulary size:
$$
P_{i+1+k}^{k} = \operatorname{OutHead}(\mathbf{h}_i^{k}). \qquad (23)
$$
The output head $\operatorname{OutHead}(\cdot)$ linearly maps the representation to logits and subsequently applies the $\operatorname{Softmax}(\cdot)$ function to compute the prediction probabilities of the $k$-th additional token.
Also, for each MTP module, its output head is shared with the main model.
Our principle of maintaining the causal chain of predictions is similar to that of EAGLE (Li et al., 2024b), but its primary objective is speculative decoding (Xia et al., 2023; Leviathan et al., 2023), whereas we utilize MTP to improve training.
MTP Training Objective.
For each prediction depth, we compute a cross-entropy loss $\mathcal{L}_{\mathrm{MTP}}^{k}$:
$$
\mathcal{L}_{\mathrm{MTP}}^{k} = \operatorname{CrossEntropy}(P_{2+k:T+1}^{k},\, t_{2+k:T+1}) = -\frac{1}{T} \sum_{i=2+k}^{T+1} \log P_i^{k}[t_i], \qquad (24)
$$
where $T$ denotes the input sequence length, $t_i$ denotes the ground-truth token at the $i$-th position, and $P_i^{k}[t_i]$ denotes the corresponding prediction probability of $t_i$, given by the $k$-th MTP module.
Finally, we compute the average of the MTP losses across all depths and multiply it by a weighting factor $\lambda$ to obtain the overall MTP loss $\mathcal{L}_{\mathrm{MTP}}$, which serves as an additional training objective for DeepSeek-V3:
$$
\mathcal{L}_{\mathrm{MTP}} = \frac{\lambda}{D} \sum_{k=1}^{D} \mathcal{L}_{\mathrm{MTP}}^{k}. \qquad (25)
$$
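The sketch below traces one MTP depth end to end (Equations 21-24); the module arguments (`embed`, `rmsnorm`, `proj`, `trm_block`, `out_head`) are stand-ins for the shared embedding, RMSNorm, the projection $M_k$, the depth-$k$ Transformer block, and the shared output head, and the indexing convention is an illustrative simplification.

```python
import torch
import torch.nn.functional as F

def mtp_depth_loss(h_prev, tokens, k, embed, rmsnorm, proj, trm_block, out_head):
    """One MTP depth k (Eqs. 21-24). h_prev: [L, d] representations from depth k-1
    (the main model when k = 1); tokens: [L + 1] token ids (the inputs plus one
    extra future token), so position i can look up t_{i+k} and t_{i+k+1}."""
    T = h_prev.shape[0] - k                  # positions with a valid (i+k+1)-th target
    # Eq. 21: combine RMSNorm(h_i^{k-1}) with RMSNorm(Emb(t_{i+k})) via the projection M_k
    h_comb = proj(torch.cat(
        [rmsnorm(h_prev[:T]), rmsnorm(embed(tokens[k:k + T]))], dim=-1))
    # Eq. 22: run the depth-k Transformer block over the combined representations
    h_k = trm_block(h_comb)
    # Eq. 23: the shared output head produces logits for the (i+1+k)-th token
    logits = out_head(h_k)
    # Eq. 24: cross-entropy against the ground-truth future tokens
    return F.cross_entropy(logits, tokens[k + 1:k + 1 + T]), h_k

# Eq. 25: the overall MTP loss averages the per-depth losses over the D depths
# and scales the result by the weighting factor lambda.
```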
MTP in Inference.
Our MTP strategy mainly aims to improve the performance of the main model, so during inference, we can directly discard the MTP modules and the main model can function independently and normally.
Additionally, we can also repurpose these MTP modules for speculative decoding to further improve the generation latency.
3 Infrastructures
3.1 Compute Clusters
DeepSeek-V3 is trained on a cluster equipped with 2048 NVIDIA H800 GPUs.
Each node in the H800 cluster contains 8 GPUs connected by NVLink and NVSwitch within nodes.
Across different nodes, InfiniBand (IB) interconnects are utilized to facilitate communications.
3.2 Training Framework
The training of DeepSeek-V3 is supported by the HAI-LLM framework, an efficient and lightweight training framework crafted by our engineers from the ground up.
On the whole, DeepSeek-V3 applies 16-way Pipeline Parallelism (PP) (Qi et al., 2023a), 64-way Expert Parallelism (EP) (Lepikhin et al., 2021) spanning 8 nodes, and ZeRO-1 Data Parallelism (DP) (Rajbhandari et al., 2020).
In order to facilitate efficient training of DeepSeek-V3, we implement meticulous engineering optimizations.
Firstly, we design the DualPipe algorithm for efficient pipeline parallelism.
Compared with existing PP methods, DualPipe has fewer pipeline bubbles.
More importantly, it overlaps the computation and communication phases across forward and backward processes, thereby addressing the challenge of heavy communication overhead introduced by cross-node expert parallelism.
Secondly, we develop efficient cross-node all-to-all communication kernels to fully utilize IB and NVLink bandwidths and conserve Streaming Multiprocessors (SMs) dedicated to communication.
Finally, we meticulously optimize the memory footprint during training, thereby enabling us to train DeepSeek-V3 without using costly Tensor Parallelism (TP).
3.2.1 DualPipe and Computation-Communication Overlap
For DeepSeek-V3, the communication overhead introduced by cross-node expert parallelism results in an inefficient computation-to-communication ratio of approximately 1:1.
To tackle this challenge, we design an innovative pipeline parallelism algorithm called DualPipe, which not only accelerates model training by effectively overlapping forward and backward computation-communication phases, but also reduces the pipeline bubbles.
The key idea of DualPipe is to overlap the computation and communication within a pair of individual forward and backward chunks.
To be specific, we divide each chunk into four components: attention, all-to-all dispatch, MLP, and all-to-all combine.
In particular, for a backward chunk, both attention and MLP are further split into two parts, backward for input and backward for weights, as in ZeroBubble (Qi et al., 2023b).
In addition, we have a PP communication component.
As illustrated in Figure 4, for a pair of forward and backward chunks, we rearrange these components and manually adjust the ratio of GPU SMs dedicated to communication versus computation.
In this overlapping strategy, we can ensure that both all-to-all and PP communication can be fully hidden during execution.
Given the efficient overlapping strategy, the full DualPipe scheduling is illustrated in Figure 5.
It employs a bidirectional pipeline scheduling, which feeds micro-batches from both ends of the pipeline simultaneously and a significant portion of communications can be fully overlapped.
This overlap also ensures that, as the model further scales up, as long as we maintain a constant computation-to-communication ratio, we can still employ fine-grained experts across nodes while achieving a near-zero all-to-all communication overhead.
In addition, even in more general scenarios without a heavy communication burden, DualPipe still exhibits efficiency advantages.
In Table 2, we summarize the pipeline bubbles and memory usage across different PP methods.
As shown in the table, compared with ZB1P (Qi et al., 2023b) and 1F1B (Harlap et al., 2018), DualPipe significantly reduces the pipeline bubbles while only increasing the peak activation memory by $\frac{1}{PP}$ times.
Although DualPipe requires keeping two copies of the model parameters, this does not significantly increase the memory consumption since we use a large EP size during training.
Compared with Chimera (Li and Hoefler, 2021), DualPipe only requires that the pipeline stages and micro-batches be divisible by 2, without requiring micro-batches to be divisible by pipeline stages.
In addition, for DualPipe, neither the bubbles nor activation memory will increase as the number of micro-batches grows.
| Method | Bubble | Parameter | Activation |
|---|---|---|---|
| 1F1B | $(PP-1)(F+B)$ | $1\times$ | $PP$ |
| ZB1P | $(PP-1)(F+B-2W)$ | $1\times$ | $PP$ |
| DualPipe (Ours) | $(\frac{PP}{2}-1)(F{\&}B+B-3W)$ | $2\times$ | $PP+1$ |

Table 2: Comparison of pipeline bubbles and memory usage across different pipeline parallel methods. $F$ denotes the execution time of a forward chunk, $B$ denotes the execution time of a full backward chunk, $W$ denotes the execution time of a "backward for weights" chunk, and $F{\&}B$ denotes the execution time of two mutually overlapped forward and backward chunks.
3.2.2 Efficient Implementation of Cross-Node All-to-All Communication
In order to ensure sufficient computational performance for DualPipe, we customize efficient cross-node all-to-all communication kernels (including dispatching and combining) to conserve the number of SMs dedicated to communication.
The implementation of the kernels is co-designed with the MoE gating algorithm and the network topology of our cluster.
To be specific, in our cluster, cross-node GPUs are fully interconnected with IB, and intra-node communications are handled via NVLink.
NVLink offers a bandwidth of 160 GB/s, roughly 3.2 times that of IB (50 GB/s).
To effectively leverage the different bandwidths of IB and NVLink, we limit each token to be dispatched to at most 4 nodes, thereby reducing IB traffic.
For each token, when its routing decision is made, it will first be transmitted via IB to the GPUs with the same in-node index on its target nodes.
Once it reaches the target nodes, we will endeavor to ensure that it is instantaneously forwarded via NVLink to specific GPUs that host their target experts, without being blocked by subsequently arriving tokens.
In this way, communications via IB and NVLink are fully overlapped, and each token can efficiently select an average of 3.2 experts per node without incurring additional overhead from NVLink.
This implies that, although DeepSeek-V3 selects only 8 routed experts in practice, it can scale up this number to a maximum of 13 experts (4 nodes × 3.2 experts/node) while preserving the same communication cost.
Overall, under such a communication strategy, only 20 SMs are sufficient to fully utilize the bandwidths of IB and NVLink.
In detail, we employ the warp specialization technique (Bauer et al., 2014) and partition 20 SMs into 10 communication channels.
During the dispatching process, (1) IB sending, (2) IB-to-NVLink forwarding, and (3) NVLink receiving are handled by respective warps.
The number of warps allocated to each communication task is dynamically adjusted according to the actual workload across all SMs.
Similarly, during the combining process, (1) NVLink sending, (2) NVLink-to-IB forwarding and accumulation, and (3) IB receiving and accumulation are also handled by dynamically adjusted warps.
In addition, both dispatching and combining kernels overlap with the computation stream, so we also consider their impact on other SM computation kernels.
Specifically, we employ customized PTX (Parallel Thread Execution) instructions and auto-tune the communication chunk size, which significantly reduces the use of the L2 cache and the interference to other SMs.
3.2.3 Extremely Memory Saving with Minimal Overhead
In order to reduce the memory footprint during training, we employ the following techniques.
Recomputation of RMSNorm and MLA Up-Projection.
We recompute all RMSNorm operations and MLA up-projections during back-propagation, thereby eliminating the need to persistently store their output activations.
With a minor overhead, this strategy significantly reduces memory requirements for storing activations.
Exponential Moving Average in CPU.
During training, we preserve the Exponential Moving Average (EMA) of the model parameters for early estimation of the model performance after learning rate decay.
The EMA parameters are stored in CPU memory and are updated asynchronously after each training step.
This method allows us to maintain EMA parameters without incurring additional memory or time overhead.
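A minimal sketch of this bookkeeping is shown below; the decay value and the dictionary-of-tensors layout are illustrative assumptions (in practice the copy and the update are overlapped with the next training step).

```python
import torch

def update_cpu_ema(ema_params, model_params, decay=0.999):
    """Sketch of the CPU-side EMA: shadow copies live in host memory and are
    refreshed after each training step (asynchronously in practice)."""
    with torch.no_grad():
        for name, p in model_params.items():
            cpu_p = p.detach().to("cpu", non_blocking=True)
            # ema <- decay * ema + (1 - decay) * current parameter
            ema_params[name].mul_(decay).add_(cpu_p, alpha=1.0 - decay)
```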
Shared Embedding and Output Head for Multi-Token Prediction.
With the DualPipe strategy, we deploy the shallowest layers (including the embedding layer) and deepest layers (including the output head) of the model on the same PP rank.
This arrangement enables the physical sharing of the parameters and gradients of the shared embedding and output head between the MTP module and the main model.
This physical sharing mechanism further enhances our memory efficiency.
3.3 FP8 Training
Inspired by recent advances in low-precision training (Peng et al., 2023b; Dettmers et al., 2022; Noune et al., 2022), we propose a fine-grained mixed precision framework utilizing the FP8 data format for training DeepSeek-V3.
While low-precision training holds great promise, it is often limited by the presence of outliers in activations, weights, and gradients (Sun et al., 2024; He et al.; Fishman et al., 2024).
Although significant progress has been made in inference quantization (Xiao et al., 2023; Frantar et al., 2022), there are relatively few studies demonstrating successful application of low-precision techniques in large-scale language model pre-training (Fishman et al., 2024).
To address this challenge and effectively extend the dynamic range of the FP8 format, we introduce a fine-grained quantization strategy: tile-wise grouping with $1 \times N_c$ elements or block-wise grouping with $N_c \times N_c$ elements.
The associated dequantization overhead is largely mitigated under our increased-precision accumulation process, a critical aspect for achieving accurate FP8 General Matrix Multiplication (GEMM).
Moreover, to further reduce memory and communication overhead in MoE training, we cache and dispatch activations in FP8, while storing low-precision optimizer states in BF16.
We validate the proposed FP8 mixed precision framework on two model scales similar to DeepSeek-V2-Lite and DeepSeek-V2, training for approximately 1 trillion tokens (see more details in Appendix B.1).
Notably, compared with the BF16 baseline, the relative loss error of our FP8-training model remains consistently below 0.25%, a level well within the acceptable range of training randomness.
3.3.1 Mixed Precision Framework
Building upon widely adopted techniques in low-precision training (Kalamkar et al., 2019; Narang et al., 2017), we propose a mixed precision framework for FP8 training.
In this framework, most compute-density operations are conducted in FP8, while a few key operations are strategically maintained in their original data formats to balance training efficiency and numerical stability.
The overall framework is illustrated in Figure 6.
Firstly, in order to accelerate model training, the majority of core computation kernels, i.e., GEMM operations, are implemented in FP8 precision.
These GEMM operations accept FP8 tensors as inputs and produce outputs in BF16 or FP32.
As depicted in Figure 6, all three GEMMs associated with the Linear operator, namely Fprop (forward pass), Dgrad (activation backward pass), and Wgrad (weight backward pass), are executed in FP8.
This design theoretically doubles the computational speed compared with the original BF16 method.
Additionally, the FP8 Wgrad GEMM allows activations to be stored in FP8 for use in the backward pass.
This significantly reduces memory consumption.
Despite the efficiency advantage of the FP8 format, certain operators still require a higher precision due to their sensitivity to low-precision computations.
Besides, some low-cost operators can also utilize a higher precision with a negligible overhead to the overall training cost.
For this reason, after careful investigations, we maintain the original precision (e.g., BF16 or FP32) for the following components: the embedding module, the output head, MoE gating modules, normalization operators, and attention operators.
These targeted retentions of high precision ensure stable training dynamics for DeepSeek-V3.
To further guarantee numerical stability, we store the master weights, weight gradients, and optimizer states in higher precision.
While these high-precision components incur some memory overheads, their impact can be minimized through efficient sharding across multiple DP ranks in our distributed training system.
3.3.2 Improved Precision from Quantization and Multiplication
Based on our mixed precision FP8 framework, we introduce several strategies to enhance low-precision training accuracy, focusing on both the quantization method and the multiplication process.
Fine-Grained Quantization.
In low-precision training frameworks, overflows and underflows are common challenges due to the limited dynamic range of the FP8 format, which is constrained by its reduced exponent bits.
As a standard practice, the input distribution is aligned to the representable range of the FP8 format by scaling the maximum absolute value of the input tensor to the maximum representable value of FP8 (Narang et al., 2017).
This method makes low-precision training highly sensitive to activation outliers, which can heavily degrade quantization accuracy.
To solve this, we propose a fine-grained quantization method that applies scaling at a more granular level.
As illustrated in Figure 7 (a), (1) for activations, we group and scale elements on a 1x128 tile basis (i.e., per token per 128 channels); and (2) for weights, we group and scale elements on a 128x128 block basis (i.e., per 128 input channels per 128 output channels).
This approach ensures that the quantization process can better accommodate outliers by adapting the scale according to smaller groups of elements.
In Appendix B.2, we further discuss the training instability when we group and scale activations on a block basis in the same way as weights quantization.
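For reference, the sketch below shows what 1x128 tile-wise and 128x128 block-wise scaling look like in plain PyTorch, with each per-group scale computed online from the group's maximum absolute value; the use of `torch.float8_e4m3fn` and the clamp floor are illustrative stand-ins, as the real kernels perform this on Tensor Cores.

```python
import torch

FP8_E4M3_MAX = 448.0  # largest finite value representable in E4M3

def quantize_activation_tilewise(x, tile=128):
    """1x128 tile-wise activation quantization: each token's channels are split into
    groups of `tile` elements, and every group gets its own scaling factor derived
    online from its maximum absolute value."""
    t, c = x.shape
    x_tiles = x.view(t, c // tile, tile)
    scales = x_tiles.abs().amax(dim=-1, keepdim=True).clamp(min=1e-4) / FP8_E4M3_MAX
    x_q = (x_tiles / scales).to(torch.float8_e4m3fn)
    return x_q.view(t, c), scales.squeeze(-1)            # keep scales for dequantization

def quantize_weight_blockwise(w, block=128):
    """128x128 block-wise weight quantization: one scaling factor per block of
    128 input channels x 128 output channels."""
    o, i = w.shape
    w_blocks = w.view(o // block, block, i // block, block)
    scales = w_blocks.abs().amax(dim=(1, 3), keepdim=True).clamp(min=1e-4) / FP8_E4M3_MAX
    w_q = (w_blocks / scales).to(torch.float8_e4m3fn)
    return w_q.view(o, i), scales.squeeze(1).squeeze(-1)  # [o/block, i/block] scales
```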
One key modification in our method is the introduction of per-group scaling factors along the inner dimension of GEMM operations.
This functionality is not directly supported in the standard FP8 GEMM.
However, combined with our precise FP32 accumulation strategy, it can be efficiently implemented.
Notably, our fine-grained quantization strategy is highly consistent with the idea of microscaling formats (Rouhani et al., 2023b), while the Tensor Cores of NVIDIA next-generation GPUs (Blackwell series) have announced the support for microscaling formats with smaller quantization granularity (NVIDIA, 2024a).
We hope our design can serve as a reference for future work to keep pace with the latest GPU architectures.
Increasing Accumulation Precision.
Low-precision GEMM operations often suffer from underflow issues, and their accuracy largely depends on high-precision accumulation, which is commonly performed in an FP32 precision (Kalamkar et al., 2019; Narang et al., 2017).
However, we observe that the accumulation precision of FP8 GEMM on NVIDIA H800 GPUs is limited to retaining around 14 bits, which is significantly lower than FP32 accumulation precision.
This problem will become more pronounced when the inner dimension K is large (Wortsman et al., 2023), a typical scenario in large-scale model training where the batch size and model width are increased.
Taking GEMM operations of two random matrices with K = 4096 for example, in our preliminary test, the limited accumulation precision in Tensor Cores results in a maximum relative error of nearly 2%.
Despite these problems, the limited accumulation precision is still the default option in a few FP8 frameworks (NVIDIA, 2024b), severely constraining the training accuracy.
In order to address this issue, we adopt the strategy of promotion to CUDA Cores for higher precision (Thakkar et al., 2023).
The process is illustrated in Figure 7 (b).
To be specific, during MMA (Matrix Multiply-Accumulate) execution on Tensor Cores, intermediate results are accumulated using the limited bit width.
Once an interval of $N_C$ is reached, these partial results will be copied to FP32 registers on CUDA Cores, where full-precision FP32 accumulation is performed.
As mentioned before, our fine-grained quantization applies per-group scaling factors along the inner dimension K.
These scaling factors can be efficiently multiplied on the CUDA Cores as the dequantization process with minimal additional computational cost.
It is worth noting that this modification reduces the WGMMA (Warpgroup-level Matrix Multiply-Accumulate) instruction issue rate for a single warpgroup. However, on the H800 architecture, it is typical for two WGMMA to persist concurrently: while one warpgroup performs the promotion operation, the other is able to execute the MMA operation. This design enables overlapping of the two operations, maintaining high utilization of Tensor Cores.
Based on our experiments, setting $N_C = 128$ elements, equivalent to 4 WGMMAs, represents the minimal accumulation interval that can significantly improve precision without introducing substantial overhead.
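The sketch below imitates this promotion scheme outside the GPU kernel: partial sums over an inner-dimension interval are accumulated in a lower-precision buffer (BF16 stands in for the Tensor Core's limited-precision accumulator) and then promoted into an FP32 accumulator, where the per-group dequantization scales are applied. The scalar treatment of `scale_a` and `scale_b` is a simplifying assumption.

```python
import torch

def gemm_with_interval_promotion(a_q, b_q, scale_a, scale_b, n_c=128):
    """Illustrative GEMM with interval promotion: partial sums over n_c elements of
    the inner dimension are accumulated in a lower-precision buffer, then promoted
    into an FP32 accumulator where the dequantization scales are applied."""
    m, k = a_q.shape
    _, n = b_q.shape
    out = torch.zeros(m, n, dtype=torch.float32)
    for k0 in range(0, k, n_c):
        partial = (a_q[:, k0:k0 + n_c].to(torch.bfloat16)
                   @ b_q[k0:k0 + n_c, :].to(torch.bfloat16))   # limited-precision MMA
        out += partial.to(torch.float32) * scale_a * scale_b   # promotion + dequantization
    return out
```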
Mantissa over Exponents.
In contrast to the hybrid FP8 format adopted by prior work (NVIDIA, 2024b; Peng et al., 2023b; Sun et al., 2019b), which uses E4M3 (4-bit exponent and 3-bit mantissa) in Fprop and E5M2 (5-bit exponent and 2-bit mantissa) in Dgrad and Wgrad, we adopt the E4M3 format on all tensors for higher precision.
We attribute the feasibility of this approach to our fine-grained quantization strategy, i.e., tile and block-wise scaling.
By operating on smaller element groups, our methodology effectively shares exponent bits among these grouped elements, mitigating the impact of the limited dynamic range.
Online Quantization.
Delayed quantization is employed in tensor-wise quantization frameworks (NVIDIA, 2024b; Peng et al., 2023b), which maintains a history of the maximum absolute values across prior iterations to infer the current value.
In order to ensure accurate scales and simplify the framework, we calculate the maximum absolute value online for each 1x128 activation tile or 128x128 weight block.
Based on it, we derive the scaling factor and then quantize the activation or weight online into the FP8 format.
3.3.3 Low-Precision Storage and Communication
In conjunction with our FP8 training framework, we further reduce the memory consumption and communication overhead by compressing cached activations and optimizer states into lower-precision formats.
Low-Precision Optimizer States.
We adopt the BF16 data format instead of FP32 to track the first and second moments in the AdamW (Loshchilov and Hutter, 2017) optimizer, without incurring observable performance degradation.
However, the master weights (stored by the optimizer) and gradients (used for batch size accumulation) are still retained in FP32 to ensure numerical stability throughout training.
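The resulting state layout can be summarized by the small sketch below (tensor sizes and names are illustrative):

```python
import torch

# Illustrative layout of the mixed-precision optimizer state described above:
# AdamW moments in BF16; master weights and the gradient accumulator in FP32.
state = {
    "master_weight": torch.zeros(4096, dtype=torch.float32),
    "grad_accum":    torch.zeros(4096, dtype=torch.float32),
    "exp_avg":       torch.zeros(4096, dtype=torch.bfloat16),  # first moment
    "exp_avg_sq":    torch.zeros(4096, dtype=torch.bfloat16),  # second moment
}
```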
Low-Precision Activation.
As illustrated in Figure 6, the Wgrad operation is performed in FP8.
To reduce the memory consumption, it is a natural choice to cache activations in FP8 format for the backward pass of the Linear operator.
However, special considerations are taken on several operators for low-cost high-precision training:
(1) Inputs of the Linear after the attention operator. These activations are also used in the backward pass of the attention operator, which makes it sensitive to precision. We adopt a customized E5M6 data format exclusively for these activations. Additionally, these activations will be converted from a 1x128 quantization tile to a 128x1 tile in the backward pass. To avoid introducing extra quantization error, all the scaling factors are round-scaled, i.e., an integral power of 2.

(2) Inputs of the SwiGLU operator in MoE. To further reduce the memory cost, we cache the inputs of the SwiGLU operator and recompute its output in the backward pass. These activations are also stored in FP8 with our fine-grained quantization method, striking a balance between memory efficiency and computational accuracy.
Low-Precision Communication.
Communication bandwidth is a critical bottleneck in the training of MoE models.
To alleviate this challenge, we quantize the activation before MoE up-projections into FP8 and then apply dispatch components, which is compatible with FP8 Fprop in MoE up-projections. Like the inputs of the Linear after the attention operator, scaling factors for this activation are an integral power of 2.
A similar strategy is applied to the activation gradient before MoE down-projections.
For both the forward and backward combine components, we retain them in BF16 to preserve training precision in critical parts of the training pipeline.
3.4 Inference and Deployment
We deploy DeepSeek-V3 on the H800 cluster, where GPUs within each node are interconnected using NVLink, and all GPUs across the cluster are fully interconnected via IB.
To simultaneously ensure both the Service-Level Objective (SLO) for online services and high throughput, we employ the following deployment strategy that separates the prefilling and decoding stages.
3.4.1 Prefilling 3.4.1 预填充
The minimum deployment unit of the prefilling stage consists of 4 nodes with 32 GPUs.
The attention part employs 4-way Tensor Parallelism (TP4) with Sequence Parallelism (SP), combined with 8-way Data Parallelism (DP8).
Its small TP size of 4 limits the overhead of TP communication.
For the MoE part, we use 32-way Expert Parallelism (EP32), which ensures that each expert processes a sufficiently large batch size, thereby enhancing computational efficiency.
For the MoE all-to-all communication, we use the same method as in training: first transferring tokens across nodes via IB, and then forwarding among the intra-node GPUs via NVLink.
In particular, we use 1-way Tensor Parallelism for the dense MLPs in shallow layers to save TP communication.
预填充阶段的最小部署单位由 4 个节点和 32 个 GPU 组成。注意力部分采用 4 路张量并行(TP4)与序列并行(SP)结合 8 路数据并行(DP8)。其较小的 4 路 TP 设计限制了 TP 通信的开销。在 MoE 部分,我们采用 32 路专家并行(EP32),确保每位专家处理足够大的批次,进而提高计算效率。在 MoE 全互通信中,我们使用与训练阶段相同的方法:首先通过 IB 在节点间传输 Token,然后通过 NVLink 在节点内的 GPU 之间转发。特别是在浅层的密集 MLP 中,我们使用 1 路张量并行来减少 TP 通信。
To achieve load balancing among different experts in the MoE part, we need to ensure that each GPU processes approximately the same number of tokens.
To this end, we introduce a deployment strategy of redundant experts, which duplicates high-load experts and deploys them redundantly.
The high-load experts are detected based on statistics collected during the online deployment and are adjusted periodically (e.g., every 10 minutes).
After determining the set of redundant experts, we carefully rearrange experts among GPUs within a node based on the observed loads, striving to balance the load across GPUs as much as possible without increasing the cross-node all-to-all communication overhead.
For the deployment of DeepSeek-V3, we set 32 redundant experts for the prefilling stage.
Each GPU, besides the original 8 experts it hosts, will also host one additional redundant expert.
为实现 MoE 部分不同专家间的负载均衡,我们需要确保每个 GPU 处理的 Token 数量大致相同。为此,我们引入了冗余专家的部署策略,复制高负载的专家并冗余部署。高负载专家根据在线部署期间收集的统计数据检测,并定期调整(例如,每 10 分钟)。在确定冗余专家集合后,我们根据观察到的负载,仔细在节点内的 GPU 间重新安排专家,努力在不增加跨节点全互通信开销的情况下,尽可能平衡 GPU 间的负载。在 DeepSeek-V3 的部署中,我们为预填充阶段设置了 32 个冗余专家。对于每个 GPU,除了它原有的 8 个专家外,还将额外承载一个冗余专家。
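The following sketch illustrates how a redundant-expert plan of this kind might be derived from online load statistics; the greedy least-loaded placement and the even traffic split between copies are illustrative assumptions, and the production planner additionally avoids increasing cross-node all-to-all traffic.

```python
from collections import defaultdict


def plan_redundant_experts(expert_load, gpu_of_expert, num_gpus=32, num_redundant=32):
    """Greedy sketch of redundant-expert planning for the prefilling stage.

    expert_load   : dict expert_id -> token count observed online
    gpu_of_expert : dict expert_id -> GPU that originally hosts the expert
    Returns       : dict redundant expert_id -> GPU chosen to host its copy

    Each GPU receives exactly one redundant expert. The least-loaded placement
    below is an illustration, not the exact production algorithm.
    """
    gpu_load = defaultdict(float)
    for e, load in expert_load.items():
        gpu_load[gpu_of_expert[e]] += load
    for g in range(num_gpus):
        gpu_load[g] += 0.0  # make sure idle GPUs also appear

    heaviest = sorted(expert_load, key=expert_load.get, reverse=True)[:num_redundant]
    free_gpus = set(range(num_gpus))
    placement = {}
    for e in heaviest:
        candidates = [g for g in free_gpus if g != gpu_of_expert[e]] or list(free_gpus)
        target = min(candidates, key=lambda g: gpu_load[g])
        placement[e] = target
        free_gpus.remove(target)
        # Assume the duplicated expert's traffic is split evenly across copies.
        gpu_load[target] += expert_load[e] / 2
        gpu_load[gpu_of_expert[e]] -= expert_load[e] / 2
    return placement
```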
Furthermore, in the prefilling stage, to improve the throughput and hide the overhead of all-to-all and TP communication, we simultaneously process two micro-batches with similar computational workloads, overlapping the attention and MoE of one micro-batch with the dispatch and combine of another.
此外,在预填充阶段,为提高吞吐量并隐藏全互和 TP 通信的开销,我们同时处理两个计算负载相似的微批,重叠一个微批的注意力和 MoE 与另一个微批的分派和组合。
Finally, we are exploring a dynamic redundancy strategy for experts, where each GPU hosts more experts (e.g., 16 experts), but only 9 will be activated during each inference step.
Before the all-to-all operation at each layer begins, we compute the globally optimal routing scheme on the fly.
Given the substantial computation involved in the prefilling stage, the overhead of computing this routing scheme is almost negligible.
最后,我们正在探索一种专家动态冗余策略,其中每个 GPU 承载更多专家(例如,16 个专家),但每次推理步只激活 9 个。在每层的全互操作开始之前,我们实时计算全局最优路由方案。考虑到预填充阶段的计算量巨大,计算此路由方案的开销几乎可以忽略不计。
3.4.2 Decoding 3.4.2 解码
During decoding, we treat the shared expert as a routed one.
From this perspective, each token will select 9 experts during routing, where the shared expert is regarded as a heavy-load one that will always be selected.
The minimum deployment unit of the decoding stage consists of 40 nodes with 320 GPUs.
The attention part employs TP4 with SP, combined with DP80, while the MoE part uses EP320.
For the MoE part, each GPU hosts only one expert, and 64 GPUs are responsible for hosting redundant experts and shared experts.
All-to-all communication of the dispatch and combine parts is performed via direct point-to-point transfers over IB to achieve low latency.
Additionally, we leverage the IBGDA (NVIDIA, 2022) technology to further minimize latency and enhance communication efficiency.
在解码阶段,我们将共享专家视为一个路由专家。从这个角度看,每个 Token 在路由期间将选择 9 个专家,其中共享专家被视为一个总是被选择的高负载专家。解码阶段的最小部署单位由 40 个节点和 320 个 GPU 组成。注意力部分采用 TP4 与 SP 结合 DP80,而 MoE 部分使用 EP320。在 MoE 部分中,每个 GPU 只承载一个专家,64 个 GPU 负责承载冗余专家和共享专家。分派和组合部分的全互通信通过 IB 直接点到点传输实现以达到低延迟。此外,我们利用 IBGDA(NVIDIA, 2022)技术进一步减少延迟并提高通信效率。
Similar to prefilling, we periodically determine the set of redundant experts at a certain interval, based on the statistical expert load from our online service.
However, we do not need to rearrange experts since each GPU only hosts one expert.
We are also exploring the dynamic redundancy strategy for decoding.
However, this requires more careful optimization of the algorithm that computes the globally optimal routing scheme and the fusion with the dispatch kernel to reduce overhead.
与预填充类似,我们会定期根据在线服务的统计专家负载,在某个区间内确定多余专家的集合。然而,由于每个 GPU 只承载一个专家,我们不需要重新安排专家。同时,我们正在探索用于解码的动态冗余策略。然而,这需要更仔细地优化算法,以计算全局最优路由方案,并与派发内核融合以减少开销。
Additionally, to enhance throughput and hide the overhead of all-to-all communication, we are also exploring processing two micro-batches with similar computational workloads simultaneously in the decoding stage.
Unlike prefilling, attention consumes a larger portion of time in the decoding stage. Therefore, we overlap the attention of one micro-batch with the dispatch+MoE+combine of another.
In the decoding stage, the batch size per expert is relatively small (usually within 256 tokens), and the bottleneck is memory access rather than computation.
Since the MoE part only needs to load the parameters of one expert, the memory access overhead is minimal, so using fewer SMs will not significantly affect the overall performance.
Therefore, to avoid impacting the computation speed of the attention part, we can allocate only a small portion of SMs to dispatch+MoE+combine.
此外,为了提高吞吐量并隐藏全互通信的开销,我们还在探索在解码阶段同时处理两个计算工作负载相似的微批次。与预填充不同,注意力在解码阶段消耗较大比例的时间。因此,我们将一个微批次的注意力与另一个的派发+MoE+合并过程重叠。在解码阶段,每个专家的批次大小相对较小(通常在 256 个 tokens 以内),瓶颈在于内存访问而非计算。由于 MoE 部分只需加载一个专家的参数,因此内存访问开销很小,因此使用较少的 SM 不会显著影响整体性能。因此,为了避免影响注意力部分的计算速度,我们可以分配少量的 SM 给派发+MoE+合并。
3.5 Suggestions on Hardware Design
3.5 硬件设计建议
Based on our implementation of the all-to-all communication and FP8 training scheme, we propose the following suggestions on chip design to AI hardware vendors.
基于我们对全互联通信和 FP8 训练方案的实现,我们向 AI 硬件供应商提出以下芯片设计建议。
3.5.1 Communication Hardware
3.5.1 通信硬件
In DeepSeek-V3, we implement the overlap between computation and communication to hide the communication latency during computation.
This significantly reduces the dependency on communication bandwidth compared to serial computation and communication.
However, the current communication implementation relies on expensive SMs (e.g., we allocate 20 out of the 132 SMs available in the H800 GPU for this purpose), which will limit the computational throughput.
Moreover, using SMs for communication results in significant inefficiencies, as tensor cores remain entirely unutilized.
在 DeepSeek-V3 中,我们实现了计算和通信的重叠,以隐藏计算过程中的通信延迟。这相比于串行计算和通信显著减少了对通信带宽的依赖。然而,目前的通信实现依赖于昂贵的 SMs(例如,我们在 H800 GPU 中为此分配了 132 个 SM 中的 20 个),这将限制计算吞吐量。此外,使用 SMs 进行通信会导致显著的效率低下,因为张量核心完全未被利用。
Currently, the SMs primarily perform the following tasks for all-to-all communication:
目前,SMs 主要执行以下任务来进行全互联通信:
• Forwarding data between the IB (InfiniBand) and NVLink domain while aggregating IB traffic destined for multiple GPUs within the same node from a single GPU.
• 在 IB(InfiniBand)和 NVLink 域之间转发数据,同时汇聚单个 GPU 向同一节点内多个 GPU 发送的 IB 流量。
• Transporting data between RDMA buffers (registered GPU memory regions) and input/output buffers.
• 在 RDMA 缓冲区(已注册的 GPU 内存区域)和输入/输出缓冲区之间传输数据。
• Executing reduce operations for all-to-all combine.
• 执行全互联结合的化简操作。
• Managing fine-grained memory layout during chunked data transferring to multiple experts across the IB and NVLink domain.
• 在跨 IB 和 NVLink 域向多个专家传输分块数据时管理细粒度内存布局。
We aspire to see future vendors develop hardware that offloads these communication tasks from the valuable computation unit SM, serving as a GPU co-processor or a network co-processor like NVIDIA SHARP (Graham et al., 2016).
Furthermore, to reduce application programming complexity, we aim for this hardware to unify the IB (scale-out) and NVLink (scale-up) networks from the perspective of the computation units.
With this unified interface, computation units can easily accomplish operations such as read, write, multicast, and reduce across the entire IB-NVLink-unified domain via submitting communication requests based on simple primitives.
我们希望未来的供应商能开发硬件,将这些通信任务从宝贵的计算单元 SM 上卸载出来,作为 GPU 协处理器或类似于 NVIDIA SHARP Graham et al. (2016)的网络协处理器。此外,为了减少应用编程的复杂性,我们希望这类硬件能从计算单元的视角统一 IB(横向扩展)和 NVLink(纵向扩展)网络。有了这个统一的接口,计算单元可以轻松完成诸如读、写、多播和在整个 IB-NVLink 统一域内进行简化的通信请求的操作。
3.5.2 Compute Hardware 3.5.2 计算硬件
Higher FP8 GEMM Accumulation Precision in Tensor Cores.
张量核心中更高 FP8 GEMM 累积精度。
In the current Tensor Core implementation of the NVIDIA Hopper architecture, FP8 GEMM (General Matrix Multiply) employs fixed-point accumulation, aligning the mantissa products by right-shifting based on the maximum exponent before addition.
Our experiments reveal that it only uses the highest 14 bits of each mantissa product after sign-fill right shifting, and truncates bits exceeding this range.
However, to achieve precise FP32 results from the accumulation of 32 FP8×FP8 multiplications, for example, at least 34-bit precision is required.
Thus, we recommend that future chip designs increase accumulation precision in Tensor Cores to support full-precision accumulation, or select an appropriate accumulation bit-width according to the accuracy requirements of training and inference algorithms.
This approach ensures that errors remain within acceptable bounds while maintaining computational efficiency.
在 NVIDIA Hopper 架构的当前张量核心实现中,FP8 GEMM(通用矩阵乘法)采用定点累积,通过基于最大指数的右移来对齐尾数积再进行加法运算。我们的实验表明,它在符号填充右移后的每个尾数积中仅使用最高的 14 位,并截断超出此范围的位。然而,例如,要从 32 个 FP8 相乘的累积中获得精确的 FP32 结果,至少需要 34 位精度。因此,我们建议未来芯片设计增加张量核心中的累积精度,以支持全精度累积,或根据训练和推理算法的精度要求选取适当的累积位宽。这种方法确保在保持计算效率的同时,使误差在可接受范围内。
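A small numerical experiment can illustrate why limited-precision accumulation matters; the model below is only a caricature of the fixed-point datapath (it aligns every addend to the group's maximum exponent and drops bits below the top 14), yet it already shows the truncation error growing with the accumulation length K.

```python
import numpy as np


def truncated_accumulate(products: np.ndarray, kept_bits: int = 14) -> float:
    """Rough model of fixed-point accumulation: align all addends to the
    largest exponent in the group and drop mantissa bits below the top
    `kept_bits`. This is not the exact Tensor Core datapath; it only shows
    how truncation error grows with the accumulation length."""
    e_max = np.floor(np.log2(np.abs(products).max()))
    ulp = 2.0 ** (e_max - (kept_bits - 1))        # weight of the lowest kept bit
    return float((np.trunc(products / ulp) * ulp).sum())


rng = np.random.default_rng(0)
for k in (32, 512, 4096):
    products = rng.uniform(-1.0, 1.0, size=k)      # stand-ins for FP8xFP8 products
    exact = products.sum()
    approx = truncated_accumulate(products)
    print(f"K={k:5d}  exact={exact:+.6f}  truncated={approx:+.6f}  "
          f"error={abs(exact - approx):.6f}")
```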
Support for Tile- and Block-Wise Quantization.
支持矩阵块和块级量化。
Current GPUs only support per-tensor quantization, lacking the native support for fine-grained quantization like our tile- and block-wise quantization.
In the current implementation, when the accumulation interval is reached, the partial results will be copied from Tensor Cores to CUDA cores, multiplied by the scaling factors, and added to FP32 registers on CUDA cores.
Although the dequantization overhead is significantly mitigated combined with our precise FP32 accumulation strategy, the frequent data movements between Tensor Cores and CUDA cores still limit the computational efficiency.
Therefore, we recommend future chips to support fine-grained quantization by enabling Tensor Cores to receive scaling factors and implement MMA with group scaling.
In this way, the whole partial sum accumulation and dequantization can be completed directly inside Tensor Cores until the final result is produced, avoiding frequent data movements.
当前的 GPU 仅支持按张量量化,缺乏对我们所使用的块级和分块量化等细粒度量化的原生支持。在当前实现中,当达到 时间间隔时,部分结果会从张量核心复制到 CUDA 核心,乘以缩放因子后,添加到 CUDA 核心上的 FP32 寄存器。尽管结合我们的精确 FP32 累积策略,反量化开销显著减少,但张量核心与 CUDA 核心之间的频繁数据移动仍然限制了计算效率。因此,我们建议未来芯片支持细粒度量化,使张量核心能够接收缩放因子,并根据组缩放进行 MMA。这样,整个部分和累积及反量化可直接在张量核心内完成,直到最终结果生成,避免频繁的数据移动。
Support for Online Quantization.
支持在线量化。
The current implementations struggle to effectively support online quantization, despite its effectiveness demonstrated in our research.
In the existing process, we need to read 128 BF16 activation values (the output of the previous computation) from HBM (High Bandwidth Memory) for quantization, and the quantized FP8 values are then written back to HBM, only to be read again for MMA.
To address this inefficiency, we recommend that future chips integrate FP8 cast and TMA (Tensor Memory Accelerator) access into a single fused operation, so quantization can be completed during the transfer of activations from global memory to shared memory, avoiding frequent memory reads and writes.
We also recommend supporting a warp-level cast instruction for speedup, which further facilitates the better fusion of layer normalization and FP8 cast.
Alternatively, a near-memory computing approach can be adopted, where compute logic is placed near the HBM.
In this case, BF16 elements can be cast to FP8 directly as they are read from HBM into the GPU, reducing off-chip memory access by roughly 50%.
尽管我们的研究证明其有效性,但当前实现难以有效支持在线量化。在现有过程中,我们需要从 HBM(高带宽内存)中读取 128 个 BF16 激活值(先前计算的输出)用于量化,然后将量化后的 FP8 值写回 HBM,再次读取以进行 MMA。为了解决这一低效问题,我们建议未来的芯片将 FP8 转换和 TMA(张量内存加速器)访问整合为单一融合操作,从而可以在激活从全局内存传输到共享内存的过程中完成量化,避免频繁的内存读写。我们还建议支持 warp 级转换指令以加速,这进一步促进了层归一化和 FP8 转换的更好融合。或者,可以采用近内存计算的方法,将计算逻辑放在接近 HBM 的位置。在这种情况下,BF16 元素可以在从 HBM 读取到 GPU 的过程中直接转换为 FP8,从而减少大约 50%的片外内存访问。
Support for Transposed GEMM Operations.
支持转置矩阵乘法(GEMM)操作。
The current architecture makes it cumbersome to fuse matrix transposition with GEMM operations.
In our workflow, activations during the forward pass are quantized into 1x128 FP8 tiles and stored.
During the backward pass, the matrix needs to be read out, dequantized, transposed, re-quantized into 128x1 tiles, and stored in HBM.
To reduce memory operations, we recommend that future chips enable direct transposed reads of matrices from shared memory before the MMA operation, for the precisions required in both training and inference. Combined with the fusion of FP8 format conversion and TMA access, this enhancement will significantly streamline the quantization workflow.
当前的架构使得矩阵转置与 GEMM 操作的融合变得麻烦。在我们的工作流程中,前向传播时的激活被量化为 1x128 FP8 块并存储。在反向传播过程中,矩阵需要被读取、反量化、转置,重新量化为 128x1 块,并存储在 HBM 中。为了减少内存操作,我们建议未来的芯片在 MMA 操作前,能从共享内存中直接读取转置的矩阵,对于那些在训练和推理中都需要的精度而言。结合 FP8 格式转换和 TMA 访问的融合,这一增强将显著简化量化工作流程。
4 Pre-Training 4 预训练
4.1 Data Construction 4.1 数据构建
Compared with DeepSeek-V2, we optimize the pre-training corpus by enhancing the ratio of mathematical and programming samples, while expanding multilingual coverage beyond English and Chinese.
Also, our data processing pipeline is refined to minimize redundancy while maintaining corpus diversity.
Inspired by Ding et al. (2024), we implement the document packing method for data integrity but do not incorporate cross-sample attention masking during training.
Finally, the training corpus for DeepSeek-V3 consists of 14.8T high-quality and diverse tokens in our tokenizer.
与 DeepSeek-V2 相比,我们通过增加数学和编程样本的比重来优化预训练语料库,同时扩展多语言覆盖到英语和汉语之外。此外,我们的数据处理管道经过改进,以在保持语料库多样性的同时减少冗余。受 Ding 等人(2024)的启发,我们实施了文件打包方法以确保数据完整性,但在训练过程中没有引入跨样本注意力屏蔽。最终,DeepSeek-V3 的训练语料库由 14.8T 高质量和多样化的标记组成。
In the training process of DeepSeekCoder-V2 (DeepSeek-AI, 2024a), we observe that the Fill-in-Middle (FIM) strategy does not compromise the next-token prediction capability while enabling the model to accurately predict middle text based on contextual cues.
In alignment with DeepSeekCoder-V2, we also incorporate the FIM strategy in the pre-training of DeepSeek-V3.
To be specific, we employ the Prefix-Suffix-Middle (PSM) framework to structure data as follows:
在 DeepSeekCoder-V2(DeepSeek-AI,2024a)的训练过程中,我们观察到填中间(FIM)策略不会影响下一个标记的预测能力,同时使模型能够根据上下文线索准确预测中间文本。与 DeepSeekCoder-V2 一致,我们在 DeepSeek-V3 的预训练中也采用 FIM 策略。具体而言,我们使用前缀-后缀-中间(PSM)框架来构建数据如下:
This structure is applied at the document level as a part of the pre-packing process.
The FIM strategy is applied at a rate of 0.1, consistent with the PSM framework.
这种结构在文档级别应用,作为预打包过程的一部分。FIM 策略的应用率为 0.1,与 PSM 框架一致。
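A minimal sketch of document-level PSM-style FIM construction at the 0.1 rate is given below; the sentinel strings (<|fim_begin|>, <|fim_hole|>, <|fim_end|>) and the uniform choice of split points are illustrative placeholders rather than the exact tokens and sampling used in training.

```python
import random

FIM_RATE = 0.1  # fraction of documents restructured with FIM, as stated above


def apply_psm_fim(doc: str, rng: random.Random,
                  begin="<|fim_begin|>", hole="<|fim_hole|>", end="<|fim_end|>") -> str:
    """Restructure a document as Prefix-Suffix-Middle with probability FIM_RATE.

    The sentinel strings are illustrative placeholders; the actual special
    tokens are defined by the tokenizer.
    """
    if rng.random() >= FIM_RATE or len(doc) < 3:
        return doc  # most documents stay as plain next-token-prediction data
    # Split the document into prefix / middle / suffix at two random points.
    i, j = sorted(rng.sample(range(1, len(doc)), 2))
    prefix, middle, suffix = doc[:i], doc[i:j], doc[j:]
    # PSM layout: the model sees prefix and suffix, then predicts the middle.
    return f"{begin}{prefix}{hole}{suffix}{end}{middle}"


if __name__ == "__main__":
    rng = random.Random(0)
    print(apply_psm_fim("def add(a, b):\n    return a + b\n", rng))
```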
The tokenizer for DeepSeek-V3 employs Byte-level BPE (Shibata et al., 1999) with an extended vocabulary of 128K tokens.
The pretokenizer and training data for our tokenizer are modified to optimize multilingual compression efficiency.
In addition, compared with DeepSeek-V2, the new pretokenizer introduces tokens that combine punctuations and line breaks.
However, this trick may introduce the token boundary bias (Lundberg, 2023) when the model processes multi-line prompts without terminal line breaks, particularly for few-shot evaluation prompts.
To address this issue, we randomly split a certain proportion of such combined tokens during training, which exposes the model to a wider array of special cases and mitigates this bias.
DeepSeek-V3 的分词器采用字节级 BPE(Shibata 等人,1999),扩展词汇量为 128K 标记。我们的预分词和分词器的训练数据经过修改以优化多语言压缩效率。此外,与 DeepSeek-V2 相比,新的预分词器引入了结合标点和换行符的标记。然而,这种技巧可能在模型处理没有终端换行符的多行提示时引入标记边界偏差(Lundberg,2023),特别是对少样本评估提示。为了解决这个问题,我们在训练过程中随机拆分了这一部分合并的标记,这使模型接触到更广泛的特殊案例并减少这种偏差。
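The mitigation can be sketched as follows; the inventory of combined tokens and the 0.1 split probability are illustrative assumptions, since the report only states that a certain proportion of such tokens is randomly split during training.

```python
import random

# Illustrative inventory of combined punctuation+line-break tokens; the real
# vocabulary is defined by the DeepSeek-V3 tokenizer.
COMBINED = {".\n": [".", "\n"], "?\n": ["?", "\n"], "!\n": ["!", "\n"]}


def maybe_split_combined(tokens, rng, split_prob=0.1):
    """Randomly split combined tokens back into their components so the model
    also sees the un-merged boundary; split_prob is an assumed value."""
    out = []
    for tok in tokens:
        if tok in COMBINED and rng.random() < split_prob:
            out.extend(COMBINED[tok])
        else:
            out.append(tok)
    return out


print(maybe_split_combined(["Hello", ".\n", "World", "!\n"], random.Random(0)))
```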
4.2 Hyper-Parameters 4.2 超参数
Model Hyper-Parameters. 模型超参数。
We set the number of Transformer layers to 61 and the hidden dimension to 7168.
All learnable parameters are randomly initialized with a standard deviation of 0.006.
In MLA, we set the number of attention heads to 128 and the per-head dimension to 128.
The KV compression dimension is set to 512, and the query compression dimension is set to 1536.
For the decoupled queries and key, we set the per-head dimension to 64.
We substitute all FFNs except for the first three layers with MoE layers.
Each MoE layer consists of 1 shared expert and 256 routed experts, where the intermediate hidden dimension of each expert is 2048.
Among the routed experts, 8 experts will be activated for each token, and each token will be ensured to be sent to at most 4 nodes.
The multi-token prediction depth is set to 1, i.e., besides the exact next token, each token will predict one additional token.
As in DeepSeek-V2, DeepSeek-V3 also employs additional RMSNorm layers after the compressed latent vectors, and multiplies additional scaling factors at the width bottlenecks.
Under this configuration, DeepSeek-V3 comprises 671B total parameters, of which 37B are activated for each token.
我们将 Transformer 层数设定为 61,hidden dimension 设定为 7168。所有可学习参数都随机初始化,标准差为 0.006。在 MLA 中,我们将注意力头的数量 设为 128,每个头的维度 设为 128。KV 压缩维度 设为 512,query 压缩维度 设为 1536。对于解耦查询和键,我们将每个头的维度 设为 64。我们替换了前三层以外的所有 FFN 为 MoE 层。每个 MoE 层由 1 个共享专家和 256 个路由专家组成,其中每个专家的中间隐藏维度为 2048。在路由专家中,每个标记将激活 8 个专家,并确保每个标记最多被发送到 4 个节点。多标记预测深度 设为 1,即除了下一个确切的标记,每个标记还将预测一个附加的标记。与 DeepSeek-V2 一样,DeepSeek-V3 在压缩的潜在向量之后也采用附加的 RMSNorm 层,并在宽度瓶颈处乘以额外的缩放因子。在此配置下,DeepSeek-V3 包含 671B 个总参数,其中 37B 在每个标记激活。
Training Hyper-Parameters.
训练超参数。
We employ the AdamW optimizer (Loshchilov and Hutter, 2017) with hyper-parameters set to , , and .
We set the maximum sequence length to 4K during pre-training, and pre-train DeepSeek-V3 on 14.8T tokens.
As for the learning rate scheduling, we first linearly increase it from 0 to during the first 2K steps.
Then, we keep a constant learning rate of until the model consumes 10T training tokens.
Subsequently, we gradually decay the learning rate to in 4.3T tokens, following a cosine decay curve.
During the training of the final 500B tokens, we keep a constant learning rate of in the first 333B tokens, and switch to another constant learning rate of in the remaining 167B tokens.
The gradient clipping norm is set to 1.0.
We employ a batch size scheduling strategy, where the batch size is gradually increased from 3072 to 15360 in the training of the first 469B tokens, and then keeps 15360 in the remaining training.
We leverage pipeline parallelism to deploy different layers of a model on different GPUs, and for each layer, the routed experts will be uniformly deployed on 64 GPUs belonging to 8 nodes.
As for the node-limited routing, each token will be sent to at most 4 nodes (i.e., ).
For auxiliary-loss-free load balancing, we set the bias update speed to 0.001 for the first 14.3T tokens, and to 0.0 for the remaining 500B tokens.
For the balance loss, we set to 0.0001, just to avoid extreme imbalance within any single sequence.
The MTP loss weight is set to 0.3 for the first 10T tokens, and to 0.1 for the remaining 4.8T tokens.
我们采用 AdamW 优化器(Loshchilov 和 Hutter,2017),其超参数设置为 , 和 。在预训练期间,我们将最大序列长度设置为 4K,并在 14.8T 个 token 上预训练 DeepSeek-V3。关于学习率调度,我们首先在线性增加其从 0 到 ,持续 2K 步。然后,我们将学习率保持为常数 ,直到模型消耗掉 10T 个训练 token。随后,我们在 4.3T 个 token 内逐渐衰减学习率到 ,遵循一个余弦衰减曲线。在最后 5000 亿个 token 的训练过程中,我们在前 3330 亿个 token 时保持一个常数学习率 ,在接下来的 1670 亿个 token 切换到另一个常数学习率 。梯度裁剪范数设置为 1.0。我们采用批量大小调度策略,即在前 4690 亿个 token 的训练中,批量大小逐渐从 3072 增加到 15360,然后在剩余的训练中保持 15360。我们利用流水线并行在不同的 GPU 上部署模型的不同层,对于每一层,路由专家将会均匀部署在 8 个节点的 64 个 GPU 上。针对节点限制路由,每个 token 最多将被发送到 4 个节点(即 )。为了在无辅助损失的负载均衡中,我们将偏差更新速度 在前 14.3T 个 token 时设置为 0.001,在后 5000 亿个 token 设置为 0.0。对于平衡损失,我们将 设置为 0.0001,以避免任何单一序列中的极端不平衡。MTP 损失权重 在前 10T 个 token 时设置为 0.3,在剩余的 4.8T 个 token 时则为 0.1。
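As one concrete piece of this schedule, the batch-size ramp can be written as a simple function of consumed tokens; linear interpolation between 3072 and 15360 over the first 469B tokens is an assumption, since the text above only specifies the endpoints and the ramp length.

```python
def batch_size_at(tokens_consumed: float,
                  start=3072, end=15360, ramp_tokens=469e9) -> int:
    """Batch-size schedule: ramp from 3072 to 15360 over the first 469B
    tokens, then stay constant. Linear interpolation is an assumption."""
    if tokens_consumed >= ramp_tokens:
        return end
    frac = tokens_consumed / ramp_tokens
    return int(start + frac * (end - start))


for t in (0, 100e9, 469e9, 10e12):
    print(f"{t / 1e9:>7.0f}B tokens -> batch size {batch_size_at(t)}")
```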
4.3 Long Context Extension
4.3 长上下文扩展
We adopt a similar approach to DeepSeek-V2 (DeepSeek-AI, 2024c) to enable long context capabilities in DeepSeek-V3.
After the pre-training stage, we apply YaRN (Peng et al., 2023a) for context extension and perform two additional training phases, each comprising 1000 steps, to progressively expand the context window from 4K to 32K and then to 128K.
The YaRN configuration is consistent with that used in DeepSeek-V2, being applied exclusively to the decoupled shared key .
The hyper-parameters remain identical across both phases, with the scale , , , and the scaling factor .
In the first phase, the sequence length is set to 32K, and the batch size is 1920.
During the second phase, the sequence length is increased to 128K, and the batch size is reduced to 480.
The learning rate for both phases is set to , matching the final learning rate from the pre-training stage.
我们采取类似于 DeepSeek-V2(DeepSeek-AI,2024c)的方法,以在 DeepSeek-V3 中实现长上下文能力。在预训练阶段之后,我们应用 YaRN(Peng et al., 2023a)进行上下文扩展,并进行两个额外的训练阶段,每个阶段由 1000 步组成,以逐步将上下文窗口从 4K 扩展到 32K,再到 128K。YaRN 的配置与 DeepSeek-V2 中使用的一致,仅应用于解耦共享关键字 。这两个阶段的超参数保持相同,具有缩放 , , 和缩放因子 。在第一阶段,序列长度设置为 32K,批量大小为 1920。在第二阶段,序列长度增加到 128K,批量大小减少到 480。两个阶段的学习率都设置为 ,与预训练阶段的最终学习率一致。
Through this two-phase extension training, DeepSeek-V3 is capable of handling inputs up to 128K in length while maintaining strong performance.
Figure 8 illustrates that DeepSeek-V3, following supervised fine-tuning, achieves notable performance on the "Needle In A Haystack" (NIAH) test, demonstrating consistent robustness across context window lengths up to 128K.
通过这两个阶段的扩展训练,DeepSeek-V3 能够处理长达 128K 的输入,同时保持强大的性能。图 8 显示,DeepSeek-V3 在经过监督微调后,在“针尖中的干草堆”(NIAH)测试中取得了显著的性能,在上下文窗口长度高达 128K 的情况下表现出一致的稳健性。
4.4 Evaluations 4.4 评估
4.4.1 Evaluation Benchmarks
4.4.1 评估基准
The base model of DeepSeek-V3 is pretrained on a multilingual corpus with English and Chinese constituting the majority, so we evaluate its performance on a series of benchmarks primarily in English and Chinese, as well as on a multilingual benchmark.
Our evaluation is based on our internal evaluation framework integrated in our HAI-LLM framework.
Considered benchmarks are categorized and listed as follows, where underlined benchmarks are in Chinese and double-underlined benchmarks are multilingual ones:
DeepSeek-V3 的基础模型在英文和中文构成主体的多语种语料库上进行了预训练,因此我们在主要为英文和中文的一系列基准上,以及在一个多语种基准上评估其性能。我们的评估基于集成在我们 HAI-LLM 框架中的内部评估框架。所考虑的基准按如下类别列出,其中下划线基准为中文,双下划线基准为多语种:
Multi-subject multiple-choice datasets include MMLU (Hendrycks et al., 2020), MMLU-Redux (Gema et al., 2024), MMLU-Pro (Wang et al., 2024b), MMMLU (OpenAI, 2024b), C-Eval (Huang et al., 2023), and CMMLU (Li et al., 2023).
多学科多选数据集包括 MMLU(Hendrycks et al., 2020)、MMLU-Redux(Gema et al., 2024)、MMLU-Pro(Wang et al., 2024b)、MMMLU(OpenAI, 2024b)、C-Eval(Huang et al., 2023)和 CMMLU(Li et al., 2023)。
Language understanding and reasoning datasets include HellaSwag (Zellers et al., 2019), PIQA (Bisk et al., 2020), ARC (Clark et al., 2018), and BigBench Hard (BBH) (Suzgun et al., 2022).
语言理解和推理数据集包括 HellaSwag(Zellers et al., 2019),PIQA(Bisk et al., 2020),ARC(Clark et al., 2018),以及 BigBench Hard (BBH)(Suzgun et al., 2022)。
Closed-book question answering datasets include TriviaQA (Joshi et al., 2017) and NaturalQuestions (Kwiatkowski et al., 2019).
闭卷问答数据集包括 TriviaQA(Joshi et al., 2017)和 NaturalQuestions(Kwiatkowski et al., 2019)。
Reading comprehension datasets include RACE Lai et al. (2017), DROP (Dua et al., 2019), C3 (Sun et al., 2019a), and CMRC (Cui et al., 2019).
阅读理解数据集包括 RACE Lai et al. (2017),DROP(Dua et al., 2019),C3(Sun et al., 2019a),以及 CMRC(Cui et al., 2019)。
Reference disambiguation datasets include CLUEWSC (Xu et al., 2020) and WinoGrande Sakaguchi et al. (2019).
参照消歧数据集包括 CLUEWSC(Xu et al., 2020)和 WinoGrande Sakaguchi et al. (2019)。
Language modeling datasets include Pile (Gao et al., 2020).
语言建模数据集包括 Pile(Gao et al., 2020)。
Chinese understanding and culture datasets include CCPM (Li et al., 2021).
中文理解与文化数据集包括 CCPM(Li et al., 2021)。
Math datasets include GSM8K (Cobbe et al., 2021), MATH (Hendrycks et al., 2021), MGSM (Shi et al., 2023), and CMath (Wei et al., 2023).
数学数据集包括 GSM8K(Cobbe et al., 2021),MATH(Hendrycks et al., 2021),MGSM(Shi et al., 2023),以及 CMath(Wei et al., 2023)。
Code datasets include HumanEval (Chen et al., 2021), LiveCodeBench-Base (0801-1101) (Jain et al., 2024), MBPP (Austin et al., 2021), and CRUXEval (Gu et al., 2024).
代码数据集包括 HumanEval(Chen et al., 2021),LiveCodeBench-Base(0801-1101)(Jain et al., 2024),MBPP(Austin et al., 2021),以及 CRUXEval(Gu et al., 2024)。
Standardized exams include AGIEval (Zhong et al., 2023).
Note that AGIEval includes both English and Chinese subsets.
标准化考试包括 AGIEval(Zhong et al., 2023)。注意,AGIEval 包括英语和中文子集。
Following our previous work (DeepSeek-AI, 2024b, c), we adopt perplexity-based evaluation for datasets including HellaSwag, PIQA, WinoGrande, RACE-Middle, RACE-High, MMLU, MMLU-Redux, MMLU-Pro, MMMLU, ARC-Easy, ARC-Challenge, C-Eval, CMMLU, C3, and CCPM, and adopt generation-based evaluation for TriviaQA, NaturalQuestions, DROP, MATH, GSM8K, MGSM, HumanEval, MBPP, LiveCodeBench-Base, CRUXEval, BBH, AGIEval, CLUEWSC, CMRC, and CMath.
In addition, we perform language-modeling-based evaluation for Pile-test and use Bits-Per-Byte (BPB) as the metric to guarantee fair comparison among models using different tokenizers.
根据我们之前的工作(DeepSeek-AI, 2024b, c),我们采用基于困惑度的评估来评估包括 HellaSwag、PIQA、WinoGrande、RACE-Middle、RACE-High、MMLU、MMLU-Redux、MMLU-Pro、MMMLU、ARC-Easy、ARC-Challenge、C-Eval、CMMLU、C3 和 CCPM 的数据集,并对 TriviaQA、NaturalQuestions、DROP、MATH、GSM8K、MGSM、HumanEval、MBPP、LiveCodeBench-Base、CRUXEval、BBH、AGIEval、CLUEWSC、CMRC 和 CMath 采用基于生成的评估。另外,我们对 Pile-test 进行语言模型评估,并使用每字节比特数(BPB)作为指标,以保证使用不同分词器的模型之间的公平比较。
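For reference, BPB can be computed from the summed token negative log-likelihood and the raw byte count of the evaluated text, which is what makes it comparable across tokenizers; a minimal sketch follows (the numbers in the example are purely hypothetical).

```python
import math


def bits_per_byte(total_nll_nats: float, total_bytes: int) -> float:
    """BPB = total negative log-likelihood (converted from nats to bits)
    divided by the number of UTF-8 bytes in the evaluated text. Because the
    denominator is bytes rather than tokens, models with different
    tokenizers can be compared fairly."""
    return (total_nll_nats / math.log(2)) / total_bytes


# Hypothetical numbers purely for illustration.
print(bits_per_byte(total_nll_nats=1.25e6, total_bytes=3_000_000))
```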
Benchmark (Metric) 基准 (指标) | # Shots # 样本数 | DeepSeek-V2 | Qwen2.5 | LLaMA-3.1 | DeepSeek-V3 | |
Base 基础 | 72B Base 72B 基础 | 405B Base 405B 基础 | Base 基础 | |||
Architecture 架构 | - | MoE | Dense 密集 | Dense 密集 | MoE | |
# Activated Params # 活动参数数 | - | 21B | 72B | 405B | 37B | |
# Total Params # 总参数数 | - | 236B | 72B | 405B | 671B | |
English 英语 | Pile-test (BPB) Pile-test(BPB) | - | 0.606 | 0.638 | 0.542 | 0.548 |
BBH (EM) BBH(EM) | 3-shot 3 样本 | 78.8 | 79.8 | 82.9 | 87.5 | |
MMLU (EM) MMLU(EM) | 5-shot 5 样本 | 78.4 | 85.0 | 84.4 | 87.1 | |
MMLU-Redux (EM) MMLU-Redux(EM) | 5-shot 5 样本 | 75.6 | 83.2 | 81.3 | 86.2 | |
MMLU-Pro (EM) MMLU-Pro(EM) | 5-shot 5 样本 | 51.4 | 58.3 | 52.8 | 64.4 | |
DROP (F1) DROP(F1) | 3-shot 3 样本 | 80.4 | 80.6 | 86.0 | 89.0 | |
ARC-Easy (EM) ARC-Easy(EM) | 25-shot 25 样本 | 97.6 | 98.4 | 98.4 | 98.9 | |
ARC-Challenge (EM) ARC-Challenge(EM) | 25-shot 25 样本 | 92.2 | 94.5 | 95.3 | 95.3 | |
HellaSwag (EM) HellaSwag(EM) | 10-shot 10 样本 | 87.1 | 84.8 | 89.2 | 88.9 | |
PIQA (EM) PIQA(EM) | 0-shot 0 样本 | 83.9 | 82.6 | 85.9 | 84.7 | |
WinoGrande (EM) WinoGrande(EM) | 5-shot 5 样本 | 86.3 | 82.3 | 85.2 | 84.9 | |
RACE-Middle (EM) RACE-Middle(EM) | 5-shot 5 样本 | 73.1 | 68.1 | 74.2 | 67.1 | |
RACE-High (EM) RACE-High(EM) | 5-shot 5 样本 | 52.6 | 50.3 | 56.8 | 51.3 | |
TriviaQA (EM) TriviaQA(EM) | 5-shot 5 样本 | 80.0 | 71.9 | 82.7 | 82.9 | |
NaturalQuestions (EM) NaturalQuestions(EM) | 5-shot 5 样本 | 38.6 | 33.2 | 41.5 | 40.0 | |
AGIEval (EM) AGIEval(EM) | 0-shot 0 样本 | 57.5 | 75.8 | 60.6 | 79.6 | |
Code 代码 | HumanEval (Pass@1) HumanEval(Pass@1) | 0-shot 0 样本 | 43.3 | 53.0 | 54.9 | 65.2 |
MBPP (Pass@1) MBPP(Pass@1) | 3-shot 3 样本 | 65.0 | 72.6 | 68.4 | 75.4 | |
LiveCodeBench-Base (Pass@1) LiveCodeBench-Base(Pass@1) | 3-shot 3 样本 | 11.6 | 12.9 | 15.5 | 19.4 | |
CRUXEval-I (EM) CRUXEval-I(EM) | 2-shot 2 次示例 | 52.5 | 59.1 | 58.5 | 67.3 | |
CRUXEval-O (EM) CRUXEval-O(EM) | 2-shot 2 次示例 | 49.8 | 59.9 | 59.9 | 69.8 | |
Math 数学 | GSM8K (EM) GSM8K(EM) | 8-shot 8 次示例 | 81.6 | 88.3 | 83.5 | 89.3 |
MATH (EM) MATH(EM) | 4-shot 4 次示例 | 43.4 | 54.4 | 49.0 | 61.6 | |
MGSM (EM) MGSM(EM) | 8-shot 8 次示例 | 63.6 | 76.2 | 69.9 | 79.8 | |
CMath (EM) CMath(EM) | 3-shot 3 样本 | 78.7 | 84.5 | 77.3 | 90.7 | |
Chinese 中文 | CLUEWSC (EM) CLUEWSC(EM) | 5-shot 5 样本 | 82.0 | 82.5 | 83.0 | 82.7 |
C-Eval (EM) C-Eval(EM) | 5-shot 5 样本 | 81.4 | 89.2 | 72.5 | 90.1 | |
CMMLU (EM) CMMLU(EM) | 5-shot 5 样本 | 84.0 | 89.5 | 73.7 | 88.8 | |
CMRC (EM) CMRC(EM) | 1-shot 1 次示例 | 77.4 | 75.8 | 76.0 | 76.3 | |
C3 (EM) C3(EM) | 0-shot 0 样本 | 77.4 | 76.7 | 79.7 | 78.6 | |
CCPM (EM) CCPM(EM) | 0-shot 0 样本 | 93.0 | 88.5 | 78.6 | 92.0 | |
Multilingual 多语言 | MMMLU-non-English (EM) MMMLU-非英语(EM) | 5-shot 5 样本 | 64.0 | 74.8 | 73.8 | 79.4 |
Table 3: Comparison between DeepSeek-V3-Base and other representative open-source base models. All models are evaluated in our internal framework and share the same evaluation setting. Scores with a gap not exceeding 0.3 are considered to be at the same level. DeepSeek-V3-Base achieves the best performance on most benchmarks, especially on math and code tasks.
表 3: DeepSeek-V3-Base 与其他代表性开源基础模型的比较。所有模型都在我们的内部框架中进行评估,并共享相同的评估设置。得分差距不超过 0.3 的被视为在同一水平。DeepSeek-V3-Base 在大多数基准上取得了最佳表现,特别是在数学和代码任务上。
4.4.2 Evaluation Results 4.4.2 评估结果
In Table 3, we compare the base model of DeepSeek-V3 with the state-of-the-art open-source base models, including DeepSeek-V2-Base (DeepSeek-AI, 2024c) (our previous release), Qwen2.5 72B Base (Qwen, 2024b), and LLaMA-3.1 405B Base (AI@Meta, 2024b).
We evaluate all these models with our internal evaluation framework, and ensure that they share the same evaluation setting.
Note that due to the changes in our evaluation framework over the past months, the performance of DeepSeek-V2-Base exhibits a slight difference from our previously reported results.
Overall, DeepSeek-V3-Base comprehensively outperforms DeepSeek-V2-Base and Qwen2.5 72B Base, and surpasses LLaMA-3.1 405B Base in the majority of benchmarks, essentially becoming the strongest open-source model.
在表 3 中,我们将 DeepSeek-V3 的基础模型与最新的开源基础模型进行比较,包括 DeepSeek-V2-Base(DeepSeek-AI, 2024c)(我们之前的版本),Qwen2.5 72B Base(Qwen, 2024b),和 LLaMA-3.1 405B Base(AI@Meta, 2024b)。我们使用我们的内部评估框架评估所有这些模型,并确保它们共享相同的评估设置。注意,由于过去几个月我们评估框架的变化,DeepSeek-V2-Base 的表现与我们之前报告的结果略有不同。总体而言,DeepSeek-V3-Base 全面超越了 DeepSeek-V2-Base 和 Qwen2.5 72B Base,并在多数基准测试中超过了 LLaMA-3.1 405B Base,基本成为最强的开源模型。
From a more detailed perspective, we compare DeepSeek-V3-Base with the other open-source base models individually.
(1)
Compared with DeepSeek-V2-Base, due to the improvements in our model architecture, the scale-up of the model size and training tokens, and the enhancement of data quality, DeepSeek-V3-Base achieves significantly better performance as expected.
(2)
Compared with Qwen2.5 72B Base, the state-of-the-art Chinese open-source model, with only half of the activated parameters, DeepSeek-V3-Base also demonstrates remarkable advantages, especially on English, multilingual, code, and math benchmarks.
As for Chinese benchmarks, except for CMMLU, a Chinese multi-subject multiple-choice task, DeepSeek-V3-Base also shows better performance than Qwen2.5 72B.
(3)
Compared with LLaMA-3.1 405B Base, the largest open-source model with 11 times the activated parameters, DeepSeek-V3-Base also exhibits much better performance on multilingual, code, and math benchmarks.
As for English and Chinese language benchmarks, DeepSeek-V3-Base shows competitive or better performance, and is especially good on BBH, MMLU-series, DROP, C-Eval, CMMLU, and CCPM.
从更详细的角度来看,我们将 DeepSeek-V3-Base 与其他开源基础模型逐一进行比较。(1) 与 DeepSeek-V2-Base 相比,由于模型架构的改进、模型规模和训练数据量的扩大、数据质量的提升,DeepSeek-V3-Base 取得了显著的性能提升。(2) 与 Qwen2.5 72B Base 相比,该模型是最先进的中文开源模型,DeepSeek-V3-Base 仅激活了一半的参数,但在英语、多语言、代码和数学基准测试中同样展现出显著优势。对于中文基准,除了 CMMLU,一个中文多学科多选任务,DeepSeek-V3-Base 在其他方面也表现优于 Qwen2.5 72B。(3) 与 LLaMA-3.1 405B Base 相比,这是一个激活参数是其 11 倍的最大开源模型,DeepSeek-V3-Base 在多语言、代码和数学基准测试中同样表现出更高的性能。对于英语和中文语言基准,DeepSeek-V3-Base 显示出具有竞争力甚至更好的表现,特别是在 BBH、MMLU 系列、DROP、C-Eval、CMMLU 和 CCPM 上有突出表现。
Due to our efficient architectures and comprehensive engineering optimizations, DeepSeek-V3 achieves extremely high training efficiency.
Under our training framework and infrastructures, training DeepSeek-V3 on each trillion tokens requires only 180K H800 GPU hours, which is much cheaper than training 72B or 405B dense models.
由于我们高效的架构和全面的工程优化,DeepSeek-V3 实现了极高的训练效率。在我们的训练框架和基础设施下,训练 DeepSeek-V3 在每万亿个令牌上的时间仅需 180,000 小时的 H800 GPU 时间,比训练 72B 或 405B 密集模型便宜得多。
Benchmark (Metric) 基准 (指标) | # Shots # 样本数 | Small MoE 小型 MoE | Small MoE 小型 MoE | Large MoE 大型 MoE | Large MoE 大型 MoE |
---|---|---|---|---|---|
Baseline 基线 | w/ MTP 带 MTP | Baseline 基线 | w/ MTP 带 MTP | ||
# Activated Params (Inference) # 激活参数(推理) | - | 2.4B | 2.4B | 20.9B | 20.9B |
# Total Params (Inference) # 总参数(推理) | - | 15.7B | 15.7B | 228.7B | 228.7B |
# Training Tokens # 训练令牌 | - | 1.33T | 1.33T | 540B | 540B |
Pile-test (BPB) Pile-test(BPB) | - | 0.729 | 0.729 | 0.658 | 0.657 |
BBH (EM) BBH(EM) | 3-shot 3 样本 | 39.0 | 41.4 | 70.0 | 70.7 |
MMLU (EM) MMLU(EM) | 5-shot 5 样本 | 50.0 | 53.3 | 67.5 | 66.6 |
DROP (F1) DROP(F1) | 1-shot 1 次示例 | 39.2 | 41.3 | 68.5 | 70.6 |
TriviaQA (EM) TriviaQA(EM) | 5-shot 5 样本 | 56.9 | 57.7 | 67.0 | 67.3 |
NaturalQuestions (EM) NaturalQuestions(EM) | 5-shot 5 样本 | 22.7 | 22.3 | 27.2 | 28.5 |
HumanEval (Pass@1) HumanEval(Pass@1) | 0-shot 0 样本 | 20.7 | 26.8 | 44.5 | 53.7 |
MBPP (Pass@1) MBPP(Pass@1) | 3-shot 3 样本 | 35.8 | 36.8 | 61.6 | 62.2 |
GSM8K (EM) GSM8K(EM) | 8-shot 8 次示例 | 25.4 | 31.4 | 72.3 | 74.0 |
MATH (EM) MATH(EM) | 4-shot 4 次示例 | 10.7 | 12.6 | 38.6 | 39.8 |
Table 4: Ablation results for the MTP strategy. The MTP strategy consistently improves the model performance on most of the evaluation benchmarks.
表 4: MTP 策略的消融结果。MTP 策略在大多数评估基准上持续提升模型性能。
4.5 Discussion 4.5 讨论
4.5.1 Ablation Studies for Multi-Token Prediction
4.5.1 多 Token 预测的消融研究
In Table 4, we show the ablation results for the MTP strategy.
To be specific, we validate the MTP strategy on top of two baseline models across different scales.
At the small scale, we train a baseline MoE model comprising 15.7B total parameters on 1.33T tokens.
At the large scale, we train a baseline MoE model comprising 228.7B total parameters on 540B tokens.
On top of them, keeping the training data and the rest of the architecture the same, we append a 1-depth MTP module onto them and train two models with the MTP strategy for comparison.
Note that during inference, we directly discard the MTP module, so the inference costs of the compared models are exactly the same.
From the table, we can observe that the MTP strategy consistently enhances the model performance on most of the evaluation benchmarks.
在表 4 中,我们展示了 MTP 策略的消融结果。具体来说,我们在两个基准模型的基础上,在不同规模上验证了 MTP 策略。在小规模上,我们训练了一个包含 15.7B 总参数的基准 MoE 模型,使用 1.33T 的 tokens。在大规模上,我们训练了一个包含 228.7B 总参数的基准 MoE 模型,使用 540B 的 tokens。在这些基础上,保持训练数据和其他架构不变,我们在其上附加一个 1-depth 的 MTP 模块,并使用 MTP 策略训练了两个模型进行比较。注意在推理时,我们直接丢弃 MTP 模块,因此比较模型的推理成本完全相同。从表中可以看出,MTP 策略在大多数评估基准上始终提高了模型的性能。
4.5.2 Ablation Studies for the Auxiliary-Loss-Free Balancing Strategy
4.5.2 辅助损失自由平衡策略的消融研究
In Table 5, we show the ablation results for the auxiliary-loss-free balancing strategy.
We validate this strategy on top of two baseline models across different scales.
At the small scale, we train a baseline MoE model comprising 15.7B total parameters on 1.33T tokens.
At the large scale, we train a baseline MoE model comprising 228.7B total parameters on 578B tokens.
Both of the baseline models purely use auxiliary losses to encourage load balance, and use the sigmoid gating function with top-K affinity normalization.
Their hyper-parameters to control the strength of auxiliary losses are the same as DeepSeek-V2-Lite and DeepSeek-V2, respectively.
On top of these two baseline models, keeping the training data and the rest of the architecture the same, we remove all auxiliary losses and introduce the auxiliary-loss-free balancing strategy for comparison.
From the table, we can observe that the auxiliary-loss-free strategy consistently achieves better model performance on most of the evaluation benchmarks.
在表 5 中,我们展示了辅助损失自由平衡策略的消融结果。我们在两个基准模型的基础上,在不同规模上验证了这一策略。在小规模上,我们训练了一个包含 15.7B 总参数的基准 MoE 模型,使用 1.33T 的 tokens。在大规模上,我们训练了一个包含 228.7B 总参数的基准 MoE 模型,使用 578B 的 tokens。两个基准模型纯粹使用辅助损失来促进负载平衡,并使用带有 top-K 亲和归一化的 sigmoid 门控函数。其控制辅助损失强度的超参数与 DeepSeek-V2-Lite 和 DeepSeek-V2 相同。在这两个基准模型的基础上,保持训练数据和其他架构不变,我们移除所有辅助损失并引入辅助损失自由平衡策略进行比较。从表中可以看出,辅助损失自由策略在大多数评估基准上始终实现了更好的模型性能。
Benchmark (Metric) 基准 (指标) | # Shots # 样本数 | Small MoE 小型 MoE | Small MoE 小型 MoE | Large MoE 大型 MoE | Large MoE 大型 MoE |
---|---|---|---|---|---|
Aux-Loss-Based 基于辅助损失 | Aux-Loss-Free 无辅助损失 | Aux-Loss-Based 基于辅助损失 | Aux-Loss-Free 无辅助损失 | ||
# Activated Params # 活动参数数 | - | 2.4B | 2.4B | 20.9B | 20.9B |
# Total Params # 总参数数 | - | 15.7B | 15.7B | 228.7B | 228.7B |
# Training Tokens # 训练令牌 | - | 1.33T | 1.33T | 578B | 578B |
Pile-test (BPB) Pile 测试 (BPB) | - | 0.727 | 0.724 | 0.656 | 0.652 |
BBH (EM) | 3-shot 3 样本 | 37.3 | 39.3 | 66.7 | 67.9 |
MMLU (EM) | 5-shot 5 样本 | 51.0 | 51.8 | 68.3 | 67.2 |
DROP (F1) | 1-shot 1 次示例 | 38.1 | 39.0 | 67.1 | 67.1 |
TriviaQA (EM) | 5-shot 5 样本 | 58.3 | 58.5 | 66.7 | 67.7 |
NaturalQuestions (EM) | 5-shot 5 样本 | 23.2 | 23.4 | 27.1 | 28.1 |
HumanEval (Pass@1) | 0-shot 0 样本 | 22.0 | 22.6 | 40.2 | 46.3 |
MBPP (Pass@1) | 3-shot 3 样本 | 36.6 | 35.8 | 59.2 | 61.2 |
GSM8K (EM) | 8-shot 8 次示例 | 27.1 | 29.6 | 70.7 | 74.5 |
MATH (EM) | 4-shot 4 次示例 | 10.9 | 11.1 | 37.2 | 39.6 |
Table 5: Ablation results for the auxiliary-loss-free balancing strategy. Compared with the purely auxiliary-loss-based method, the auxiliary-loss-free strategy consistently achieves better model performance on most of the evaluation benchmarks.
表 5:辅助损失自由平衡策略的消融结果。与纯粹基于辅助损失的方法相比,辅助损失自由策略在大多数评估基准上始终实现了更好的模型性能。
4.5.3 Batch-Wise Load Balance VS. Sequence-Wise Load Balance
4.5.3 逐批负载平衡与逐序列负载平衡
The key distinction between auxiliary-loss-free balancing and sequence-wise auxiliary loss lies in their balancing scope: batch-wise versus sequence-wise.
Compared with the sequence-wise auxiliary loss, batch-wise balancing imposes a more flexible constraint, as it does not enforce in-domain balance on each sequence.
This flexibility allows experts to better specialize in different domains.
To validate this, we record and analyze the expert load of a 16B auxiliary-loss-based baseline and a 16B auxiliary-loss-free model on different domains in the Pile test set.
As illustrated in Figure 9, we observe that the auxiliary-loss-free model demonstrates greater expert specialization patterns as expected.
辅助损失自由平衡与序列级辅助损失之间的关键区别在于它们的平衡范围:逐批与逐序列。与序列级辅助损失相比,逐批平衡施加了更灵活的约束,因为它不强制在每个序列上进行域内平衡。这种灵活性使得专家能够更好地专注于不同的领域。为了验证这一点,我们记录并分析了一个 16B 基于辅助损失的基线模型和一个 16B 辅助损失自由模型在 Pile 测试集上不同领域的专家负载。如图 9 所示,我们观察到辅助损失自由模型表现出更明显的专家专用模式,正如预期的那样。
To further investigate the correlation between this flexibility and the advantage in model performance, we additionally design and validate a batch-wise auxiliary loss that encourages load balance on each training batch instead of on each sequence.
The experimental results show that, when achieving a similar level of batch-wise load balance, the batch-wise auxiliary loss can also achieve similar model performance to the auxiliary-loss-free method.
To be specific, in our experiments with 1B MoE models, the validation losses are: 2.258 (using a sequence-wise auxiliary loss), 2.253 (using the auxiliary-loss-free method), and 2.253 (using a batch-wise auxiliary loss).
We also observe similar results on 3B MoE models: the model using a sequence-wise auxiliary loss achieves a validation loss of 2.085, and the models using the auxiliary-loss-free method or a batch-wise auxiliary loss achieve the same validation loss of 2.080.
为了进一步探究这种灵活性与模型性能优越性之间的关系,我们另外设计并验证了一种批量辅助损失,该损失鼓励在每个训练批次上实现负载平衡,而不是在每个序列上。实验结果表明,当实现类似水平的批次负载平衡时,批量辅助损失也可以实现与辅助损失自由方法相似的模型性能。具体来说,在我们使用 1B MoE 模型进行的实验中,验证损失为:2.258(使用序列辅助损失),2.253(使用辅助损失自由方法),和 2.253(使用批量辅助损失)。我们还在 3B MoE 模型上观察到类似结果:使用序列辅助损失的模型实现了 2.085 的验证损失,而使用辅助损失自由方法或批量辅助损失的模型达到相同的验证损失 2.080。
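For illustration, a balance loss of the common form α·Σ_i f_i·P_i can be computed over either a whole batch or each sequence, differing only in the dimensions over which the expert load is averaged; the sketch below uses this generic formulation as an assumption and is not the exact loss used in these experiments.

```python
import torch


def balance_loss(gate_probs, expert_ids, num_experts, alpha=1e-4, per_sequence=False):
    """Auxiliary balance loss of the common f_i * P_i form.

    gate_probs : [batch, seq, num_experts] routing probabilities
    expert_ids : [batch, seq, top_k] selected expert indices
    per_sequence=False averages expert load over the whole batch (the more
    relaxed batch-wise constraint); per_sequence=True enforces balance inside
    every sequence. The exact loss used in the paper's experiments may differ.
    """
    one_hot = torch.zeros_like(gate_probs).scatter_(-1, expert_ids, 1.0)
    dims = (1,) if per_sequence else (0, 1)        # average over tokens (and batch)
    f = one_hot.mean(dim=dims) * num_experts       # scaled fraction of tokens per expert
    p = gate_probs.mean(dim=dims)                  # mean routing probability per expert
    loss = (f * p).sum(dim=-1)
    return alpha * loss.mean()


if __name__ == "__main__":
    B, S, E, K = 2, 8, 16, 2
    probs = torch.softmax(torch.randn(B, S, E), dim=-1)
    ids = torch.topk(probs, K, dim=-1).indices
    print(balance_loss(probs, ids, E, per_sequence=False),
          balance_loss(probs, ids, E, per_sequence=True))
```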
In addition, although the batch-wise load balancing methods show consistent performance advantages, they also face two potential challenges in efficiency:
(1) load imbalance within certain sequences or small batches, and
(2) domain-shift-induced load imbalance during inference.
The first challenge is naturally addressed by our training framework that uses large-scale expert parallelism and data parallelism, which guarantees a large size of each micro-batch.
For the second challenge, we also design and implement an efficient inference framework with redundant expert deployment, as described in Section 3.4, to overcome it.
此外,虽然逐批负载平衡方法显示出一致的性能优势,但它们在效率上也面临两个潜在挑战:(1) 某些序列或小批次内的负载不平衡,以及 (2) 推理过程中领域转移引起的负载不平衡。第一个挑战自然通过我们使用大规模专家并行和数据并行的训练框架解决,该框架保证了每个微批次的大规模。对于第二个挑战,我们还设计并实现了一个高效的推理框架,通过冗余专家部署来克服,如 3.4 节所述。
5 Post-Training 5 训练后
5.1 Supervised Fine-Tuning
5.1 监督微调
We curate our instruction-tuning datasets to include 1.5M instances spanning multiple domains, with each domain employing distinct data creation methods tailored to its specific requirements.
我们策划了我们的指令微调数据集,包含多个领域的 150 万实例,每个领域采用针对其特定要求量身定制的数据创建方法。
Reasoning Data. 推理数据。
For reasoning-related datasets, including those focused on mathematics, code competition problems, and logic puzzles, we generate the data by leveraging an internal DeepSeek-R1 model.
Specifically, while the R1-generated data demonstrates strong accuracy, it suffers from issues such as overthinking, poor formatting, and excessive length.
Our objective is to balance the high accuracy of R1-generated reasoning data and the clarity and conciseness of regularly formatted reasoning data.
对于涉及推理的数据集,包括数学、代码竞赛题目和逻辑谜题,我们通过内部 DeepSeek-R1 模型生成数据。具体而言,尽管 R1 生成的数据表现出很高的准确性,但它也存在深思熟虑、格式不佳和过长的问题。我们的目标是平衡 R1 生成推理数据的高准确性和常规格式推理数据的清晰性和简洁性。
To establish our methodology, we begin by developing an expert model tailored to a specific domain, such as code, mathematics, or general reasoning, using a combined Supervised Fine-Tuning (SFT) and Reinforcement Learning (RL) training pipeline.
This expert model serves as a data generator for the final model.
The training process involves generating two distinct types of SFT samples for each instance: the first couples the problem with its original response in the format of <problem, original response>, while the second incorporates a system prompt alongside the problem and the R1 response in the format of <system prompt, problem, R1 response>.
为了建立我们的方法论,我们首先开发了一个针对特定领域(如代码、数学或一般推理)的专家模型,使用综合的监督微调(SFT)和强化学习(RL)训练管道。该专家模型作为最终模型的数据生成器。训练过程涉及为每个实例生成两种不同的 SFT 样本:第一种将问题与其原始响应配对,格式为<问题,原始响应>,而第二种则在问题和 R1 响应的格式<系统提示,问题,R1 响应>中加入系统提示。
The system prompt is meticulously designed to include instructions that guide the model toward producing responses enriched with mechanisms for reflection and verification.
During the RL phase, the model leverages high-temperature sampling to generate responses that integrate patterns from both the R1-generated and original data, even in the absence of explicit system prompts.
After hundreds of RL steps, the intermediate RL model learns to incorporate R1 patterns, thereby enhancing overall performance strategically.
系统提示经过精心设计,以包含指导模型生成富有反思和验证机制的响应的指令。在 RL 阶段,模型利用高温采样生成综合了 R1 生成和原始数据模式的响应,即使缺乏明确的系统提示。经过数百个 RL 步骤后,中间 RL 模型学会融合 R1 模式,从而战略性地提高整体表现。
Upon completing the RL training phase, we implement rejection sampling to curate high-quality SFT data for the final model, where the expert models are used as data generation sources.
This method ensures that the final training data retains the strengths of DeepSeek-R1 while producing responses that are concise and effective.
完成 RL 训练阶段后,我们实施拒绝采样以精选优质的 SFT 数据用于最终模型,其中专家模型用作数据生成源。这种方法确保最终训练数据保留 DeepSeek-R1 的优势,同时生成简明有效的响应。
Non-Reasoning Data. 非推理数据。
For non-reasoning data, such as creative writing, role-play, and simple question answering, we utilize DeepSeek-V2.5 to generate responses and enlist human annotators to verify the accuracy and correctness of the data.
对于非推理数据,如创意写作、角色扮演和简单问答,我们使用 DeepSeek-V2.5 来生成响应,并请人工标注员验证数据的准确性和正确性。
SFT Settings. SFT 设置。
We fine-tune DeepSeek-V3-Base for two epochs using the SFT dataset, using the cosine decay learning rate scheduling that starts at and gradually decreases to .
During training, each single sequence is packed from multiple samples.
However, we adopt a sample masking strategy to ensure that these examples remain isolated and mutually invisible.
我们使用 SFT 数据集对 DeepSeek-V3-Base 进行两轮微调,采用余弦衰减学习率调度,从 开始逐渐降低至 。在训练过程中,每个序列都由多个样本打包而成。然而,我们采用样本屏蔽策略,确保这些示例保持独立和互不可见。
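A minimal sketch of the sample-masking idea: derive a block-diagonal attention mask from per-sample segment ids so that packed samples cannot attend to one another, on top of the usual causal mask. This illustrates the strategy rather than the exact training implementation.

```python
import torch


def packed_attention_mask(segment_ids: torch.Tensor) -> torch.Tensor:
    """segment_ids: [seq_len] integer id of the sample each token came from.
    Returns a [seq_len, seq_len] boolean mask that is True where attention is
    allowed: causal AND within the same packed sample."""
    seq_len = segment_ids.shape[0]
    causal = torch.tril(torch.ones(seq_len, seq_len, dtype=torch.bool))
    same_sample = segment_ids.unsqueeze(0) == segment_ids.unsqueeze(1)
    return causal & same_sample


# Example: three samples of lengths 3, 2, and 4 packed into one 9-token sequence.
seg = torch.tensor([0, 0, 0, 1, 1, 2, 2, 2, 2])
print(packed_attention_mask(seg).int())
```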
5.2 Reinforcement Learning
5.2 强化学习
5.2.1 Reward Model 5.2.1 奖励模型
We employ a rule-based Reward Model (RM) and a model-based RM in our RL process.
我们在 RL 过程中使用基于规则的奖励模型(RM)和基于模型的 RM。
Rule-Based RM. 基于规则的 RM。
For questions that can be validated using specific rules, we adopt a rule-based reward system to determine the feedback.
For instance, certain math problems have deterministic results, and we require the model to provide the final answer within a designated format (e.g., in a box), allowing us to apply rules to verify the correctness.
Similarly, for LeetCode problems, we can utilize a compiler to generate feedback based on test cases.
By leveraging rule-based validation wherever possible, we ensure a higher level of reliability, as this approach is resistant to manipulation or exploitation.
对于可以使用特定规则验证的问题,我们采用基于规则的奖励系统来确定反馈。例如,某些数学问题有确定的结果,我们要求模型在指定格式(如在框中)提供最终答案,允许我们应用规则来验证其正确性。类似地,对于 LeetCode 问题,我们可以利用编译器根据测试用例生成反馈。通过尽可能使用基于规则的验证,我们确保更高的可靠性,因为这种方法不易被操纵或利用。
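As a sketch of the rule-based path for math problems, the final answer can be extracted from the designated format and compared with the ground truth; the \boxed{} convention and exact-string matching below are illustrative simplifications (code problems would instead execute compiler-generated test cases).

```python
import re


def rule_based_math_reward(response: str, ground_truth: str) -> float:
    """Return 1.0 if the model's final boxed answer matches the ground truth,
    else 0.0. The \\boxed{} convention stands in for the 'designated format'
    mentioned above and is only an illustration."""
    matches = re.findall(r"\\boxed\{([^{}]*)\}", response)
    if not matches:
        return 0.0
    answer = matches[-1].strip()
    return 1.0 if answer == ground_truth.strip() else 0.0


print(rule_based_math_reward(r"... so the result is \boxed{42}.", "42"))  # 1.0
```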
Model-Based RM. 基于模型的 RM。
For questions with free-form ground-truth answers, we rely on the reward model to determine whether the response matches the expected ground-truth.
Conversely, for questions without a definitive ground-truth, such as those involving creative writing, the reward model is tasked with providing feedback based on the question and the corresponding answer as inputs.
The reward model is trained from the DeepSeek-V3 SFT checkpoints.
To enhance its reliability, we construct preference data that not only provides the final reward but also includes the chain-of-thought leading to the reward.
This approach helps mitigate the risk of reward hacking in specific tasks.
对于拥有自由形式真实答案的问题,我们依赖奖励模型来判断响应是否与预期的真实答案匹配。相反,对于没有确定真实答案的问题,如涉及创意写作的,奖励模型负责根据问题和相应的答案提供反馈。奖励模型是从 DeepSeek-V3 SFT 检查点训练得到的。为了增强其可靠性,我们构建了偏好数据,不仅提供最终奖励,还包括通向奖励的思路链。此方法有助于降低特定任务中奖励操控的风险。
5.2.2 Group Relative Policy Optimization
5.2.2 群体相对策略优化
Similar to DeepSeek-V2 (DeepSeek-AI, 2024c), we adopt Group Relative Policy Optimization (GRPO) (Shao et al., 2024), which foregoes the critic model that is typically with the same size as the policy model, and estimates the baseline from group scores instead.
Specifically, for each question $q$, GRPO samples a group of outputs $\{o_1, o_2, \cdots, o_G\}$ from the old policy model $\pi_{\theta_{old}}$ and then optimizes the policy model $\pi_{\theta}$ by maximizing the following objective:
类似于 DeepSeek-V2(DeepSeek-AI,2024c),我们采用群体相对策略优化(GRPO)(Shao et al., 2024),该方法放弃了通常与策略模型大小相同的评论模型,转而从群体分数中估算基线。具体来说,对于每个问题 ,GRPO 从旧策略模型 中抽取一组输出 ,然后通过最大化以下目标来优化策略模型 :
$$\mathcal{J}_{GRPO}(\theta) = \mathbb{E}\!\left[q \sim P(Q),\ \{o_i\}_{i=1}^{G} \sim \pi_{\theta_{old}}(O \mid q)\right]$$
$$\frac{1}{G}\sum_{i=1}^{G}\left(\min\!\left(\frac{\pi_{\theta}(o_i \mid q)}{\pi_{\theta_{old}}(o_i \mid q)} A_i,\ \mathrm{clip}\!\left(\frac{\pi_{\theta}(o_i \mid q)}{\pi_{\theta_{old}}(o_i \mid q)},\, 1-\varepsilon,\, 1+\varepsilon\right) A_i\right) - \beta\, \mathbb{D}_{KL}\!\left(\pi_{\theta} \,\|\, \pi_{ref}\right)\right), \tag{26}$$
$$\mathbb{D}_{KL}\!\left(\pi_{\theta} \,\|\, \pi_{ref}\right) = \frac{\pi_{ref}(o_i \mid q)}{\pi_{\theta}(o_i \mid q)} - \log\frac{\pi_{ref}(o_i \mid q)}{\pi_{\theta}(o_i \mid q)} - 1, \tag{27}$$
where $\varepsilon$ and $\beta$ are hyper-parameters;
$\pi_{ref}$ is the reference model;
and $A_i$ is the advantage, derived from the rewards $\{r_1, r_2, \cdots, r_G\}$ corresponding to the outputs within each group:
其中 和 是超参数; 是参考模型; 是优势,由对应于每个群体中输出的奖励 推导而来:
$$A_i = \frac{r_i - \mathrm{mean}(\{r_1, r_2, \cdots, r_G\})}{\mathrm{std}(\{r_1, r_2, \cdots, r_G\})}. \tag{28}$$
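The objective can be sketched compactly at the sequence level as follows; per-token ratios and the per-token KL penalty of the full implementation are collapsed to sequence-level quantities here, and the hyper-parameter values are illustrative.

```python
import torch


def grpo_loss(logp_new, logp_old, logp_ref, rewards, clip_eps=0.2, kl_coef=0.01):
    """Sequence-level GRPO sketch for one group of G sampled outputs.

    logp_new/old/ref : [G] summed log-probs of each output under the current,
                       old, and reference policies
    rewards          : [G] scalar rewards for the outputs
    clip_eps, kl_coef: illustrative values for the hyper-parameters eps, beta
    """
    # Group-relative advantages (Eq. 28): normalize rewards within the group.
    adv = (rewards - rewards.mean()) / (rewards.std() + 1e-8)
    # Clipped surrogate (Eq. 26).
    ratio = torch.exp(logp_new - logp_old)
    surrogate = torch.minimum(ratio * adv,
                              torch.clamp(ratio, 1 - clip_eps, 1 + clip_eps) * adv)
    # KL estimator of Eq. 27: r - log r - 1 with r = pi_ref / pi_theta.
    log_r = logp_ref - logp_new
    kl = torch.exp(log_r) - log_r - 1
    # Maximizing the objective equals minimizing its negation.
    return -(surrogate - kl_coef * kl).mean()
```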
We incorporate prompts from diverse domains, such as coding, math, writing, role-playing, and question answering, during the RL process.
This approach not only aligns the model more closely with human preferences but also enhances performance on benchmarks, especially in scenarios where available SFT data are limited.
我们在 RL 过程中结合了来自不同领域的提示,如编程、数学、写作、角色扮演和问答。这种方法不仅使模型更符合人类的偏好,还提高了在基准测试中的性能,尤其是在可用 SFT 数据有限的情况下。
5.3 Evaluations 5.3 评估
5.3.1 Evaluation Settings 5.3.1 评估设置
Evaluation Benchmarks. 评估基准。
Apart from the benchmarks we used for base model testing, we further evaluate instructed models on IFEval (Zhou et al., 2023), FRAMES (Krishna et al., 2024), LongBench v2 (Bai et al., 2024), GPQA (Rein et al., 2023), SimpleQA (OpenAI, 2024c), C-SimpleQA (He et al., 2024), SWE-Bench Verified (OpenAI, 2024d), Aider (https://aider.chat), LiveCodeBench (Jain et al., 2024) (questions from August 2024 to November 2024), Codeforces (https://codeforces.com), Chinese National High School Mathematics Olympiad (CNMO 2024) (https://www.cms.org.cn/Home/comp/comp/cid/12.html), and American Invitational Mathematics Examination 2024 (AIME 2024) (MAA, 2024).
除了用于基础模型测试的基准外,我们还在 IFEval(Zhou et al., 2023)、FRAMES(Krishna et al., 2024)、LongBench v2(Bai et al., 2024)、GPQA(Rein et al., 2023)、SimpleQA(OpenAI, 2024c)、C-SimpleQA(He et al., 2024)、SWE-Bench 认证(OpenAI, 2024d)、Aider 1 、LiveCodeBench(Jain et al., 2024)(问题来源于 2024 年 8 月至 2024 年 11 月)、Codeforces 2 、中国国家高中数学奥林匹克(CNMO 2024) 3 和美国邀请数学考试 2024(AIME 2024)(MAA,2024)上评估经过指导的模型。
Compared Baselines. 对比基线。
We conduct comprehensive evaluations of our chat model against several strong baselines, including DeepSeek-V2-0506, DeepSeek-V2.5-0905, Qwen2.5 72B Instruct, LLaMA-3.1 405B Instruct, Claude-Sonnet-3.5-1022, and GPT-4o-0513.
For the DeepSeek-V2 model series, we select the most representative variants for comparison.
For closed-source models, evaluations are performed through their respective APIs.
我们对我们的聊天模型进行了全面评估,与几个强大的基线模型进行了对比,包括 DeepSeek-V2-0506、DeepSeek-V2.5-0905、Qwen2.5 72B Instruct、LLaMA-3.1 405B Instruct、Claude-Sonnet-3.5-1022 和 GPT-4o-0513。对于 DeepSeek-V2 模型系列,我们选择了最具代表性的变体进行比较。对于闭源模型,通过各自的 API 进行评估。
Detailed Evaluation Configurations.
详细评估配置。
For standard benchmarks including MMLU, DROP, GPQA, and SimpleQA, we adopt the evaluation prompts from the simple-evals framework (https://github.com/openai/simple-evals).
We utilize the Zero-Eval prompt format (Lin, 2024) for MMLU-Redux in a zero-shot setting.
For other datasets, we follow their original evaluation protocols with default prompts as provided by the dataset creators.
For code and math benchmarks, the HumanEval-Mul dataset includes 8 mainstream programming languages (Python, Java, Cpp, C#, JavaScript, TypeScript, PHP, and Bash) in total.
We use CoT and non-CoT methods to evaluate model performance on LiveCodeBench, where the data are collected from August 2024 to November 2024.
The Codeforces dataset is measured using the percentage of competitors.
SWE-Bench Verified is evaluated using the agentless framework (Xia et al., 2024).
We use the “diff” format to evaluate the Aider-related benchmarks.
For mathematical assessments, AIME and CNMO 2024 are evaluated with a temperature of 0.7, and the results are averaged over 16 runs, while MATH-500 employs greedy decoding.
We allow all models to output a maximum of 8192 tokens for each benchmark.
对于标准基准,包括 MMLU,DROP,GPQA,和 SimpleQA,我们采用来自 simple-evals 框架 4 的评估提示。我们在零样本设置中使用 Zero-Eval 提示格式(Lin, 2024)进行 MMLU-Redux 评估。对于其他数据集,我们遵循原始评估协议,使用数据集创建者提供的默认提示。对于代码和数学基准,HumanEval-Mul 数据集总共包括 8 种主流编程语言(Python、Java、Cpp、C#、JavaScript、TypeScript、PHP 和 Bash)。我们使用 CoT 和非 CoT 方法来评估在 LiveCodeBench 上的模型性能,其中数据收集期为 2024 年 8 月至 2024 年 11 月。Codeforces 数据集通过选手百分比进行测量。SWE-Bench 认证使用无代理框架(Xia et al., 2024)进行评估。我们使用“差异”格式评估与 Aider 相关的基准。对于数学评估,AIME 和 CNMO 2024 以 0.7 的温度进行评估,结果平均 16 次运行,而 MATH-500 采用贪婪解码。我们允许所有模型为每个基准输出最多 8192 个标记。
Benchmark (Metric) 基准 (指标) | DeepSeek | DeepSeek | Qwen2.5 | LLaMA-3.1 | Claude-3.5- | GPT-4o | DeepSeek | |
V2-0506 | V2.5-0905 | 72B-Inst. | 405B-Inst. | Sonnet-1022 | 0513 | V3 | ||
Architecture 架构 | MoE | MoE | Dense 密集 | Dense 密集 | - | - | MoE | |
# Activated Params # 活动参数数 | 21B | 21B | 72B | 405B | - | - | 37B | |
# Total Params # 总参数数 | 236B | 236B | 72B | 405B | - | - | 671B | |
English 英语 | MMLU (EM) MMLU(EM) | 78.2 | 80.6 | 85.3 | 88.6 | 88.3 | 87.2 | 88.5 |
MMLU-Redux (EM) MMLU-Redux(EM) | 77.9 | 80.3 | 85.6 | 86.2 | 88.9 | 88.0 | 89.1 | |
MMLU-Pro (EM) MMLU-Pro(EM) | 58.5 | 66.2 | 71.6 | 73.3 | 78.0 | 72.6 | 75.9 | |
DROP (3-shot F1) DROP(3-shot F1) | 83.0 | 87.8 | 76.7 | 88.7 | 88.3 | 83.7 | 91.6 |
IF-Eval (Prompt Strict) IF-Eval(提示严格) | 57.7 | 80.6 | 84.1 | 86.0 | 86.5 | 84.3 | 86.1 | |
GPQA-Diamond (Pass@1) GPQA-Diamond(Pass@1) | 35.3 | 41.3 | 49.0 | 51.1 | 65.0 | 49.9 | 59.1 | |
SimpleQA (Correct) SimpleQA(正确) | 9.0 | 10.2 | 9.1 | 17.1 | 28.4 | 38.2 | 24.9 | |
FRAMES (Acc.) FRAMES(准确率) | 66.9 | 65.4 | 69.8 | 70.0 | 72.5 | 80.5 | 73.3 | |
LongBench v2 (Acc.) LongBench v2(准确率) | 31.6 | 35.4 | 39.4 | 36.1 | 41.0 | 48.1 | 48.7 | |
Code 代码 | HumanEval-Mul (Pass@1) HumanEval-Mul(Pass@1) | 69.3 | 77.4 | 77.3 | 77.2 | 81.7 | 80.5 | 82.6 |
LiveCodeBench (Pass@1-COT) LiveCodeBench(Pass@1-COT) | 18.8 | 29.2 | 31.1 | 28.4 | 36.3 | 33.4 | 40.5 | |
LiveCodeBench (Pass@1) LiveCodeBench(Pass@1) | 20.3 | 28.4 | 28.7 | 30.1 | 32.8 | 34.2 | 37.6 | |
Codeforces (Percentile) Codeforces(百分比) | 17.5 | 35.6 | 24.8 | 25.3 | 20.3 | 23.6 | 51.6 | |
SWE Verified (Resolved) SWE Verified(已解决) | - | 22.6 | 23.8 | 24.5 | 50.8 | 38.8 | 42.0 | |
Aider-Edit (Acc.) Aider-Edit(准确率) | 60.3 | 71.6 | 65.4 | 63.9 | 84.2 | 72.9 | 79.7 | |
Aider-Polyglot (Acc.) Aider-Polyglot(准确率) | - | 18.2 | 7.6 | 5.8 | 45.3 | 16.0 | 49.6 | |
Math 数学 | AIME 2024 (Pass@1) AIME 2024(Pass@1) | 4.6 | 16.7 | 23.3 | 23.3 | 16.0 | 9.3 | 39.2 |
MATH-500 (EM) MATH-500(EM) | 56.3 | 74.7 | 80.0 | 73.8 | 78.3 | 74.6 | 90.2 | |
CNMO 2024 (Pass@1) CNMO 2024(Pass@1) | 2.8 | 10.8 | 15.9 | 6.8 | 13.1 | 10.8 | 43.2 | |
Chinese 中文 | CLUEWSC (EM) CLUEWSC(EM) | 89.9 | 90.4 | 91.4 | 84.7 | 85.4 | 87.9 | 90.9 |
C-Eval (EM) C-Eval(EM) | 78.6 | 79.5 | 86.1 | 61.5 | 76.7 | 76.0 | 86.5 | |
C-SimpleQA (Correct) C-SimpleQA(正确) | 48.5 | 54.1 | 48.4 | 50.4 | 51.3 | 59.3 | 64.8 |
Table 6: Comparison between DeepSeek-V3 and other representative chat models. All models are evaluated with an output length limit of 8K. Benchmarks containing fewer than 1000 samples are tested multiple times using varying temperature settings to derive robust final results. DeepSeek-V3 stands as the best-performing open-source model, and is also competitive with frontier closed-source models.
表 6:DeepSeek-V3 与其他代表性聊天模型的比较。所有模型评估时都限制输出长度为 8K。包含不足 1000 个样本的基准测试通过多次使用不同温度设置进行测试,以获得稳健的最终结果。DeepSeek-V3 被认为是表现最佳的开源模型,同时在与最前沿的闭源模型对比中表现竞争力。
5.3.2 Standard Evaluation 5.3.2 标准评估
Table 6 presents the evaluation results, showcasing that DeepSeek-V3 stands as the best-performing open-source model.
Additionally, it is competitive against frontier closed-source models like GPT-4o and Claude-3.5-Sonnet.
表 6 展示了评估结果,显示 DeepSeek-V3 是表现最佳的开源模型。此外,它与诸如 GPT-4o 和 Claude-3.5-Sonnet 这样的最前沿闭源模型相比也具有竞争力。
English Benchmarks. 英文基准测试。
MMLU is a widely recognized benchmark designed to assess the performance of large language models across diverse knowledge domains and tasks.
DeepSeek-V3 demonstrates competitive performance, standing on par with top-tier models such as LLaMA-3.1-405B, GPT-4o, and Claude-Sonnet 3.5, while significantly outperforming Qwen2.5 72B.
Moreover, DeepSeek-V3 excels in MMLU-Pro, a more challenging educational knowledge benchmark, where it closely trails Claude-Sonnet 3.5.
On MMLU-Redux, a refined version of MMLU with corrected labels, DeepSeek-V3 surpasses its peers.
In addition, on GPQA-Diamond, a PhD-level evaluation testbed, DeepSeek-V3 achieves remarkable results, ranking just behind Claude 3.5 Sonnet and outperforming all other competitors by a substantial margin.
MMLU 是一个广为认知的基准,旨在评估大语言模型在不同知识领域和任务中的表现。DeepSeek-V3 展示了具有竞争力的性能,与顶级模型如 LLaMA-3.1-405B、GPT-4o 和 Claude-Sonnet 3.5 不相上下,同时显著超越 Qwen2.5 72B。此外,在具有更大挑战的教育知识基准 MMLU-Pro 中,DeepSeek-V3 紧随 Claude-Sonnet 3.5。在经过改进标签的 MMLU 精简版中,DeepSeek-V3 超越了其同行。此外,在博士水平的评估试验台 GPQA-Diamond 中,DeepSeek-V3 取得了出色的成绩,仅次于 Claude 3.5 Sonnet,并以较大优势超越其他所有竞争者。
In long-context understanding benchmarks such as DROP, LongBench v2, and FRAMES, DeepSeek-V3 continues to demonstrate its position as a top-tier model.
It achieves an impressive 91.6 F1 score in the 3-shot setting on DROP, outperforming all other models in this category.
On FRAMES, a benchmark requiring question-answering over 100k token contexts, DeepSeek-V3 closely trails GPT-4o while outperforming all other models by a significant margin.
This demonstrates the strong capability of DeepSeek-V3 in handling extremely long-context tasks.
The long-context capability of DeepSeek-V3 is further validated by its best-in-class performance on LongBench v2, a dataset that was released just a few weeks before the launch of DeepSeek V3.
On the factual knowledge benchmark, SimpleQA, DeepSeek-V3 falls behind GPT-4o and Claude-Sonnet, primarily due to its design focus and resource allocation.
DeepSeek-V3 assigns more training tokens to learn Chinese knowledge, leading to exceptional performance on the C-SimpleQA.
On the instruction-following benchmark, DeepSeek-V3 significantly outperforms its predecessor, DeepSeek-V2-series, highlighting its improved ability to understand and adhere to user-defined format constraints.
在长上下文的理解基准中,如 DROP、LongBench v2 和 FRAMES,DeepSeek-V3 继续展示其作为顶级模型的地位。在 DROP 的 3-shot 设置中,它实现了令人印象深刻的 91.6 F1 分数,超越了该类别中的所有其他模型。在 FRAMES 这一需要在 10 万符号上下文中进行问答的基准上,DeepSeek-V3 紧随 GPT-4o 之后,同时大幅超越其他所有模型。这表明 DeepSeek-V3 在处理极长上下文任务方面的强大能力。其在 LongBench v2 上的最佳表现进一步验证了 DeepSeek-V3 的长上下文能力,LongBench v2 是一个在 DeepSeek V3 发布前几周才推出的数据集。在事实知识基准 SimpleQA 上,由于设计焦点和资源分配,DeepSeek-V3 落后于 GPT-4o 和 Claude-Sonnet。DeepSeek-V3 分配了更多的训练语料来学习中文知识,在 C-SimpleQA 上表现突出。在指令遵循基准上,DeepSeek-V3 明显优于其前身 DeepSeek-V2 系列,展示了其在理解和遵循用户定义格式限制方面的改进能力。
Code and Math Benchmarks.
编码和数学基准。
Coding is a challenging and practical task for LLMs, encompassing engineering-focused tasks like SWE-Bench-Verified and Aider, as well as algorithmic tasks such as HumanEval and LiveCodeBench.
In engineering tasks, DeepSeek-V3 trails behind Claude-Sonnet-3.5-1022 but significantly outperforms open-source models.
The open-source DeepSeek-V3 is expected to foster advancements in coding-related engineering tasks.
By providing access to its robust capabilities, DeepSeek-V3 can drive innovation and improvement in areas such as software engineering and algorithm development, empowering developers and researchers to push the boundaries of what open-source models can achieve in coding tasks.
In algorithmic tasks, DeepSeek-V3 demonstrates superior performance, outperforming all baselines on benchmarks like HumanEval-Mul and LiveCodeBench.
This success can be attributed to its advanced knowledge distillation technique, which effectively enhances its code generation and problem-solving capabilities in algorithm-focused tasks.
编码对于大型语言模型(LLMs)来说是一项具有挑战性且实用的任务,涵盖如 SWE-Bench-Verified 和 Aider 等工程任务,以及如 HumanEval 和 LiveCodeBench 等算法任务。在工程任务中,DeepSeek-V3 落后于 Claude-Sonnet-3.5-1022,但远远超过开源模型。预计开源的 DeepSeek-V3 将在编码相关的工程任务中推动进步。通过提供其强大的能力,DeepSeek-V3 可以推动诸如软件工程和算法开发领域的创新和改进,支持开发人员和研究人员突破开源模型在编码任务中的可能性。在算法任务中,DeepSeek-V3 展示了卓越的性能,在 HumanEval-Mul 和 LiveCodeBench 等基准上超越了所有基线。这一成功可归因于其先进的知识蒸馏技术,有效提升了其在算法任务中代码生成和问题解决的能力。
On math benchmarks, DeepSeek-V3 demonstrates exceptional performance, significantly surpassing baselines and setting a new state-of-the-art for non-o1-like models.
Specifically, on AIME, MATH-500, and CNMO 2024, DeepSeek-V3 outperforms the second-best model, Qwen2.5 72B, by approximately 10% in absolute scores, which is a substantial margin for such challenging benchmarks.
This remarkable capability highlights the effectiveness of the distillation technique from DeepSeek-R1, which has been proven highly beneficial for non-o1-like models.
在数学基准上,DeepSeek-V3 展现出了卓越的性能,大幅超越了基线,为非 o1 类模型设定了新的行业标准。具体而言,在 AIME、MATH-500 和 CNMO 2024 上,DeepSeek-V3 比第二名模型 Qwen2.5 72B 高出约 10%的绝对分数,这在如此具有挑战性的基准上是一个巨大的差距。这一显著的能力突显了 DeepSeek-R1 的蒸馏技术的有效性,这对非 o1 类模型特别有利。
Chinese Benchmarks. 中文基准。
Qwen and DeepSeek are two representative model series with robust support for both Chinese and English.
On the factual benchmark Chinese SimpleQA, DeepSeek-V3 surpasses Qwen2.5-72B by 16.4 points, despite Qwen2.5 being trained on a larger corpus comprising 18T tokens, about 20% more than the 14.8T tokens that DeepSeek-V3 is pre-trained on.
Qwen 和 DeepSeek 是两个具有中英文双重强大支持的代表性模型系列。在事实知识基准中文 SimpleQA 上,DeepSeek-V3 超越了 Qwen2.5-72B 16.4 分,尽管 Qwen2.5 训练的语料库包含 18T tokens,比 DeepSeek-V3 的预训练语料库多出 20%。
On C-Eval, a representative benchmark for Chinese educational knowledge evaluation, and CLUEWSC (Chinese Winograd Schema Challenge), DeepSeek-V3 and Qwen2.5-72B exhibit similar performance levels, indicating that both models are well-optimized for challenging Chinese-language reasoning and educational tasks.
在 C-Eval(一个中文教育知识评估的代表性基准)和 CLUEWSC(中文 Winograd Schema Challenge)上,DeepSeek-V3 和 Qwen2.5-72B 展现了相似的性能水平,这表明这两个模型在具有挑战性的中文语言推理和教育任务中都表现得很好。
Model | Arena-Hard | AlpacaEval 2.0 |
---|---|---|
DeepSeek-V2.5-0905 | 76.2 | 50.5 |
Qwen2.5-72B-Instruct | 81.2 | 49.1 |
LLaMA-3.1 405B | 69.3 | 40.5 |
GPT-4o-0513 | 80.4 | 51.1 |
Claude-Sonnet-3.5-1022 | 85.2 | 52.0 |
DeepSeek-V3 | 85.5 | 70.0 |
Table 7: English open-ended conversation evaluations. For AlpacaEval 2.0, we use the length-controlled win rate as the metric.
表 7:英语开放式对话评估。对于 AlpacaEval 2.0,我们使用长度控制的获胜率作为指标。
5.3.3 Open-Ended Evaluation
5.3.3 开放式评估
In addition to standard benchmarks, we also evaluate our models on open-ended generation tasks using LLMs as judges, with the results shown in Table 7.
Specifically, we adhere to the original configurations of AlpacaEval 2.0 (Dubois et al., 2024) and Arena-Hard (Li et al., 2024a), which leverage GPT-4-Turbo-1106 as judges for pairwise comparisons.
On Arena-Hard, DeepSeek-V3 achieves an impressive win rate of over 86% against the baseline GPT-4-0314, performing on par with top-tier models like Claude-Sonnet-3.5-1022.
This underscores the robust capabilities of DeepSeek-V3, especially in dealing with complex prompts, including coding and debugging tasks.
Furthermore, DeepSeek-V3 achieves a groundbreaking milestone as the first open-source model to surpass 85% on the Arena-Hard benchmark.
This achievement significantly bridges the performance gap between open-source and closed-source models, setting a new standard for what open-source models can accomplish in challenging domains.
除了标准基准测试外,我们还使用大型语言模型作为评判,对模型进行开放式生成任务的评估,结果如表 7 所示。具体来说,我们遵循 AlpacaEval 2.0(Dubois et al., 2024)和 Arena-Hard(Li et al., 2024a)的原始配置,这些配置使用 GPT-4-Turbo-1106 作为评判进行成对比较。在 Arena-Hard 上,DeepSeek-V3 实现了超过 86%的惊人获胜率,对比基线 GPT-4-0314,与顶级模型 Claude-Sonnet-3.5-1022 表现相当。这凸显了 DeepSeek-V3 在应对复杂提示(包括编码和调试任务)方面的强大能力。此外,DeepSeek-V3 成为首个在 Arena-Hard 基准测试中超过 85%的开源模型,这一成就极大地缩小了开源与闭源模型之间的性能差距,为在挑战性领域中开源模型能够实现的能力设定了新的标准。
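To make the pairwise protocol concrete, the following is a minimal sketch of LLM-as-judge win-rate computation. The `candidate`, `baseline`, and `call_judge` callables are hypothetical placeholders (with GPT-4-Turbo-1106 as the judge and GPT-4-0314 answers as the baseline under the original configurations); this illustrates the general recipe, not the official AlpacaEval 2.0 or Arena-Hard scoring code.

```python
# Illustrative sketch of pairwise LLM-as-judge evaluation (not the official
# AlpacaEval 2.0 / Arena-Hard implementation). `call_judge` is a hypothetical
# helper that sends a prompt to the judge model and returns "A", "B", or "tie".
from typing import Callable, Dict, List

JUDGE_TEMPLATE = """You are an impartial judge. Given the user prompt and two
answers, decide which answer is better overall. Reply with exactly "A", "B", or "tie".

Prompt: {prompt}

Answer A: {answer_a}

Answer B: {answer_b}
"""

def pairwise_win_rate(
    prompts: List[str],
    candidate: Callable[[str], str],   # model under evaluation
    baseline: Callable[[str], str],    # fixed baseline answers, e.g. GPT-4-0314
    call_judge: Callable[[str], str],  # judge model, e.g. GPT-4-Turbo-1106
) -> Dict[str, float]:
    wins = ties = 0
    for prompt in prompts:
        verdict = call_judge(JUDGE_TEMPLATE.format(
            prompt=prompt,
            answer_a=candidate(prompt),
            answer_b=baseline(prompt),
        ))
        if verdict == "A":
            wins += 1
        elif verdict == "tie":
            ties += 1
    # Counting ties as half a win is a common convention for pairwise win rates.
    return {"win_rate": (wins + 0.5 * ties) / len(prompts)}
```

In practice, both harnesses also swap the A/B positions to mitigate position bias, and AlpacaEval 2.0 additionally applies a length-controlled correction to the raw win rate; both refinements are omitted in this sketch.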
Similarly, DeepSeek-V3 showcases exceptional performance on AlpacaEval 2.0, outperforming both closed-source and open-source models.
This demonstrates its outstanding proficiency in writing tasks and handling straightforward question-answering scenarios.
Notably, it surpasses DeepSeek-V2.5-0905 by a significant margin of 20%, highlighting substantial improvements in tackling simple tasks and showcasing the effectiveness of its advancements.
同样,DeepSeek-V3 在 AlpacaEval 2.0 上展示了出色的性能,超越了闭源和开源模型。这展示了其在写作任务和处理简单问答场景中的卓越能力。值得注意的是,它超越了 DeepSeek-V2.5-0905 20%的显著差距,突显了其在处理简单任务方面的巨大改进,并展现了其进步的有效性。
5.3.4 DeepSeek-V3 as a Generative Reward Model
5.3.4 DeepSeek-V3 作为生成性奖励模型
We compare the judgment ability of DeepSeek-V3 with state-of-the-art models, namely GPT-4o and Claude-3.5.
Table 8 presents the performance of these models in RewardBench (Lambert et al., 2024).
DeepSeek-V3 achieves performance on par with the best versions of GPT-4o-0806 and Claude-3.5-Sonnet-1022, while surpassing other versions.
Additionally, the judgment ability of DeepSeek-V3 can also be enhanced by the voting technique.
Therefore, we employ DeepSeek-V3 along with voting to offer self-feedback on open-ended questions, thereby improving the effectiveness and robustness of the alignment process.
我们将 DeepSeek-V3 的判断能力与最先进的模型(即 GPT-4o 和 Claude-3.5)进行比较。表 8 展示了这些模型在 RewardBench(Lambert et al., 2024)上的表现。DeepSeek-V3 在性能上与 GPT-4o-0806 和 Claude-3.5-Sonnet-1022 的最佳版本相当,同时超越了其他版本。此外,DeepSeek-V3 的判断能力也可以通过投票技术得到增强。因此,我们结合投票使用 DeepSeek-V3 对开放式问题提供自我反馈,从而提高对齐过程的有效性和稳健性。
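As a rough illustration of how voting strengthens the judgments (the maj@6 row in Table 8), the sketch below aggregates several independent verdicts by majority vote. The `judge_once` callable is a hypothetical wrapper around the model acting as a grader, not an actual DeepSeek interface.

```python
# Sketch of majority voting over repeated judgments (cf. the maj@6 setting in
# Table 8, which aggregates 6 samples per item). `judge_once` is assumed to
# return the index of the preferred response for a single sampled judgment.
from collections import Counter
from typing import Callable, Sequence

def majority_vote_judgment(
    judge_once: Callable[[str, Sequence[str]], int],
    prompt: str,
    responses: Sequence[str],
    num_votes: int = 6,
) -> int:
    """Sample `num_votes` judgments and return the most frequent choice."""
    votes = Counter(judge_once(prompt, responses) for _ in range(num_votes))
    choice, _ = votes.most_common(1)[0]
    return choice
```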
Model | Chat 聊天 | Chat-Hard 困难聊天 | Safety 安全 | Reasoning 推理 | Average 平均 |
---|---|---|---|---|---|
GPT-4o-0513 | 96.6 | 70.4 | 86.7 | 84.9 | 84.7 |
GPT-4o-0806 | 96.1 | 76.1 | 88.1 | 86.6 | 86.7 |
GPT-4o-1120 | 95.8 | 71.3 | 86.2 | 85.2 | 84.6 |
Claude-3.5-sonnet-0620 | 96.4 | 74.0 | 81.6 | 84.7 | 84.2 |
Claude-3.5-sonnet-1022 | 96.4 | 79.7 | 91.1 | 87.6 | 88.7 |
DeepSeek-V3 | 96.9 | 79.8 | 87.0 | 84.3 | 87.0 |
DeepSeek-V3 (maj@6) | 96.9 | 82.6 | 89.5 | 89.2 | 89.6 |
Table 8: Performances of GPT-4o, Claude-3.5-sonnet, and DeepSeek-V3 on RewardBench.
表 8:GPT-4o、Claude-3.5-sonnet 和 DeepSeek-V3 在 RewardBench 上的表现。
5.4 Discussion 5.4 讨论
5.4.1 Distillation from DeepSeek-R1
5.4.1 从 DeepSeek-R1 的蒸馏
We ablate the contribution of distillation from DeepSeek-R1 based on DeepSeek-V2.5.
The baseline is trained on short CoT data, whereas its competitor uses data generated by the expert checkpoints described above.
我们在 DeepSeek-V2.5 基础上消融分析了 DeepSeek-R1 的蒸馏贡献。基线模型训练于短 CoT 数据,而其对手使用上文描述的专家检查点生成的数据。
Table 9 demonstrates the effectiveness of the distillation data, showing significant improvements in both LiveCodeBench and MATH-500 benchmarks.
Our experiments reveal an interesting trade-off: the distillation leads to better performance but also substantially increases the average response length.
To maintain a balance between model accuracy and computational efficiency, we carefully selected optimal settings for DeepSeek-V3 in distillation.
表 9 展示了蒸馏数据的效果,显示在 LiveCodeBench 和 MATH-500 基准测试中有显著的改进。我们的实验揭示了一个有趣的权衡:蒸馏提高了性能,但也显著增加了平均响应长度。为了在模型准确性和计算效率之间保持平衡,我们为 DeepSeek-V3 在蒸馏中精心选择了最佳设置。
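For illustration, the accuracy-versus-length trade-off can be read directly off Table 9. The scoring used below (accuracy gain per additional response token) is our own simplification for exposition, not the selection criterion actually used for DeepSeek-V3.

```python
# Illustrative comparison of distillation settings using the Table 9 numbers.
# The "points per extra token" ratio is a simplified lens on the trade-off
# between accuracy and response length discussed above.
settings = {
    "short-CoT baseline": {"math500_pass1": 74.6, "avg_length": 769},
    "+ R1 distillation":  {"math500_pass1": 83.2, "avg_length": 1510},
}

base = settings["short-CoT baseline"]
for name, s in settings.items():
    if name == "short-CoT baseline":
        continue
    gain = s["math500_pass1"] - base["math500_pass1"]      # +8.6 points on MATH-500
    extra_tokens = s["avg_length"] - base["avg_length"]    # +741 tokens per response
    print(f"{name}: +{gain:.1f} points at +{extra_tokens} tokens "
          f"({gain / extra_tokens:.4f} points per extra token)")
```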
Our research suggests that knowledge distillation from reasoning models presents a promising direction for post-training optimization.
While our current work focuses on distilling data from mathematics and coding domains, this approach shows potential for broader applications across various task domains.
The effectiveness demonstrated in these specific areas indicates that long-CoT distillation could be valuable for enhancing model performance in other cognitive tasks requiring complex reasoning.
Further exploration of this approach across different domains remains an important direction for future research.
我们的研究表明,来自推理模型的知识蒸馏为训练后优化提供了一个有前景的方向。虽然我们目前的工作集中在数学和编程领域的数据蒸馏,但这一方法在各种任务领域显示了广泛应用的潜力。这些特定领域中展示的有效性表明,长 CoT 蒸馏可能对提高模型在其他需要复杂推理的认知任务中的表现有价值。对这一方法在不同领域的进一步探索仍是未来研究的重要方向。
Model | LiveCodeBench-CoT Pass@1 | LiveCodeBench-CoT Length | MATH-500 Pass@1 | MATH-500 Length |
---|---|---|---|---|
DeepSeek-V2.5 Baseline DeepSeek-V2.5 基线 | 31.1 | 718 | 74.6 | 769 |
DeepSeek-V2.5 + R1 Distill DeepSeek-V2.5 + R1 蒸馏 | 37.4 | 783 | 83.2 | 1510 |
Table 9: The contribution of distillation from DeepSeek-R1. The evaluation settings of LiveCodeBench and MATH-500 are the same as in Table 6.
表 9:DeepSeek-R1 蒸馏的贡献。LiveCodeBench 和 MATH-500 的评估设置与表 6 相同。
5.4.2 Self-Rewarding 5.4.2 自我奖励
Rewards play a pivotal role in RL, steering the optimization process.
In domains where verification through external tools is straightforward, such as some coding or mathematics scenarios, RL demonstrates exceptional efficacy.
However, in more general scenarios, constructing a feedback mechanism through hard coding is impractical.
During the development of DeepSeek-V3, for these broader contexts, we employ the constitutional AI approach (Bai et al., 2022), leveraging the voting evaluation results of DeepSeek-V3 itself as a feedback source.
This method has produced notable alignment effects, significantly enhancing the performance of DeepSeek-V3 in subjective evaluations.
By integrating additional constitutional inputs, DeepSeek-V3 can optimize towards the constitutional direction.
We believe that this paradigm, which combines supplementary information with LLMs as a feedback source, is of paramount importance.
The LLM serves as a versatile processor capable of transforming unstructured information from diverse scenarios into rewards, ultimately facilitating the self-improvement of LLMs.
Beyond self-rewarding, we are also dedicated to uncovering other general and scalable rewarding methods to consistently advance the model capabilities in general scenarios.
奖励在强化学习中发挥着至关重要的作用,引导优化过程。在通过外部工具验证简单的领域中,例如一些编码或数学场景,强化学习显示出卓越的效果。然而,在更一般的场景中,通过硬编码构建反馈机制是不切实际的。在开发 DeepSeek-V3 的过程中,对于这些更广泛的上下文,我们采用宪法 AI 方法(Bai 等,2022),利用 DeepSeek-V3 自身的投票评估结果作为反馈来源。该方法产生了显著的对齐效果,显著提升了 DeepSeek-V3 在主观评估中的表现。通过整合额外的宪法输入,DeepSeek-V3 可以优化朝向宪法方向。我们相信,这种结合补充信息与大型语言模型作为反馈来源的范式至关重要。大型语言模型充当着一个多功能的处理器,能够将来自各种场景的非结构化信息转化为奖励,最终促进大型语言模型的自我改进。除了自我奖励外,我们还致力于探索其他通用且可扩展的奖励方法,以在一般场景中持续提升模型能力。
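A high-level sketch of this self-rewarding loop is given below: the policy model's own constitution-conditioned judgments, averaged over several votes, serve as the scalar reward for RL in domains without rule-based verifiers. The `judge` callable and the [0, 1] scoring scale are assumptions for illustration, not DeepSeek's internal interfaces.

```python
# Sketch of self-rewarding with constitutional feedback (illustrative only).
# `judge(prompt, response, constitution)` is a hypothetical call in which the
# policy model itself grades a response against a set of written principles
# and returns a score in [0, 1].
from statistics import mean
from typing import Callable, List

def self_reward(
    prompt: str,
    response: str,
    constitution: List[str],
    judge: Callable[[str, str, List[str]], float],
    num_votes: int = 6,
) -> float:
    """Average several voted judgments into a single scalar reward for RL."""
    scores = [judge(prompt, response, constitution) for _ in range(num_votes)]
    return mean(scores)
```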
5.4.3 Multi-Token Prediction Evaluation
5.4.3 多标记预测评估
Instead of predicting just the next single token, DeepSeek-V3 predicts the next 2 tokens through the MTP technique.
Combined with the framework of speculative decoding (Leviathan et al., 2023; Xia et al., 2023), it can significantly accelerate the decoding speed of the model.
A natural question arises concerning the acceptance rate of the additionally predicted token.
Based on our evaluation, the acceptance rate of the second token prediction ranges between 85% and 90% across various generation topics, demonstrating consistent reliability.
This high acceptance rate enables DeepSeek-V3 to achieve a significantly improved decoding speed, delivering roughly 1.8 times the tokens per second (TPS) of standard next-token decoding.
DeepSeek-V3 通过 MTP 技术预测下两个标记,而不是仅预测下一个单一标记。结合推测解码框架(Leviathan 等,2023;Xia 等,2023),它可以显著加快模型的解码速度。一个自然的问题是关于额外预测标记的接受率。根据我们的评估,第二个标记预测的接受率在 85%到 90%之间,跨多个生成主题表现出一致的可靠性。这种高接受率使得 DeepSeek-V3 可以实现显著提高的解码速度,达到 1.8 倍的每秒标记数(TPS)。
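As a back-of-the-envelope check on the reported speedup: if every decoding step emits the guaranteed next token plus the second MTP token with probability equal to the acceptance rate, the expected tokens per step fall between 1.85 and 1.90 at the observed 85-90% acceptance, broadly consistent with the ~1.8x TPS once the extra cost of the MTP module is accounted for. The sketch below encodes this simplified model; it is an estimate, not the actual serving implementation.

```python
# Simplified relationship between second-token acceptance rate and decoding
# speedup under MTP-based speculative decoding. The MTP-head overhead and
# verification details are ignored, so this is an upper-bound estimate.
def expected_tokens_per_step(acceptance_rate: float) -> float:
    # Each step always emits the first token; the second draft token is kept
    # with probability `acceptance_rate`.
    return 1.0 + acceptance_rate

for p in (0.85, 0.90):
    print(f"acceptance rate {p:.2f} -> ~{expected_tokens_per_step(p):.2f}x tokens per step")
```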
6 Conclusion, Limitations, and Future Directions
6 结论、局限性与未来方向
In this paper, we introduce DeepSeek-V3, a large MoE language model with 671B total parameters and 37B activated parameters, trained on 14.8T tokens.
In addition to the MLA and DeepSeekMoE architectures, it also pioneers an auxiliary-loss-free strategy for load balancing and sets a multi-token prediction training objective for stronger performance.
The training of DeepSeek-V3 is cost-effective thanks to its support for FP8 training and meticulous engineering optimizations.
The post-training stage also succeeds in distilling the reasoning capability from the DeepSeek-R1 series of models.
Comprehensive evaluations demonstrate that DeepSeek-V3 has emerged as the strongest open-source model currently available, and achieves performance comparable to leading closed-source models like GPT-4o and Claude-3.5-Sonnet.
Despite its strong performance, it also maintains economical training costs.
It requires only 2.788M H800 GPU hours for its full training, including pre-training, context length extension, and post-training.
在本文中,我们介绍了 DeepSeek-V3,这是一种具备 6710 亿总参数和 370 亿激活参数的大型 MoE 语言模型,训练于 14.8 万亿标记上。除了 MLA 和 DeepSeekMoE 架构外,它还开创了一种无辅助损失的负载平衡策略,并设定了多标记预测的训练目标以获得更强的性能。由于 FP8 训练的支持和精心设计的工程优化,DeepSeek-V3 的训练成本高效。后期训练在从 DeepSeek-R1 系列模型中提取推理能力方面也取得了成功。全面的评估表明,DeepSeek-V3 已成为当前最强的开源模型,性能可与 GPT-4o 和 Claude-3.5-Sonnet 等领先的闭源模型相媲美。尽管其性能强大,训练成本却很经济。其完整训练(包括预训练、上下文长度扩展和后期训练)仅需 278.8 万 H800 GPU 小时。
While acknowledging its strong performance and cost-effectiveness, we also recognize that DeepSeek-V3 has some limitations, especially in deployment.
Firstly, to ensure efficient inference, the recommended deployment unit for DeepSeek-V3 is relatively large, which might pose a burden for small-sized teams.
Secondly, although our deployment strategy for DeepSeek-V3 has achieved an end-to-end generation speed of more than two times that of DeepSeek-V2, there still remains potential for further enhancement.
Fortunately, these limitations are expected to be naturally addressed with the development of more advanced hardware.
在承认其强大性能和成本效益的同时,我们也认识到 DeepSeek-V3 存在一些局限性,特别是在部署方面。首先,为了确保高效推理,推荐的 DeepSeek-V3 部署单元相对较大,这可能给小型团队带来负担。其次,尽管我们的 DeepSeek-V3 部署策略使其端到端生成速度超过 DeepSeek-V2 的两倍,但仍然有进一步提升的空间。幸运的是,随着更先进硬件的发展,这些局限预计会自然得到解决。
DeepSeek consistently adheres to the route of open-source models with longtermism, aiming to steadily approach the ultimate goal of AGI (Artificial General Intelligence).
In the future, we plan to strategically invest in research across the following directions.
DeepSeek 始终坚持长期主义的开源模型路线,旨在稳步接近 AGI(通用人工智能)的最终目标。未来,我们计划在以下几个方向进行战略性研究投资。
• We will consistently study and refine our model architectures, aiming to further improve both the training and inference efficiency, striving to approach efficient support for infinite context length. Additionally, we will try to break through the architectural limitations of Transformer, thereby pushing the boundaries of its modeling capabilities.
• 我们将持续研究和优化我们的模型架构,致力于进一步提高训练和推理效率,努力实现对无限上下文长度的高效支持。此外,我们将尝试突破 Transformer 的架构限制,从而提升其建模能力的边界。
• We will continuously iterate on the quantity and quality of our training data, and explore the incorporation of additional training signal sources, aiming to drive data scaling across a more comprehensive range of dimensions.
• 我们将持续改进训练数据的数量和质量,并探索整合更多的训练信号源,旨在推动数据规模在更全面的维度上扩展。
• We will consistently explore and iterate on the deep thinking capabilities of our models, aiming to enhance their intelligence and problem-solving abilities by expanding their reasoning length and depth.
• 我们将持续探索和迭代模型的深度思维能力,旨在通过扩展推理的长度和深度来提升模型的智能和问题解决能力。
• We will explore more comprehensive and multi-dimensional model evaluation methods to prevent the tendency towards optimizing a fixed set of benchmarks during research, which may create a misleading impression of the model capabilities and affect our foundational assessment.
• 我们将探索更全面和多维度的模型评估方法,以防止在研究过程中倾向于优化一组固定的基准测试,这可能会对模型的能力产生误导印象,并影响我们的基础评估。
References 参考文献
-
AI@Meta (2024a)
AI@Meta.
Llama 3 model card, 2024a.
URL https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md.
AI@Meta。Llama 3 模型卡,2024a。网址 https://github.com/meta-llama/llama3/blob/main/MODEL_CARD.md。 -
AI@Meta (2024b)
AI@Meta.
Llama 3.1 model card, 2024b.
URL https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/MODEL_CARD.md.
AI@Meta。Llama 3.1 模型卡,2024b。网址 https://github.com/meta-llama/llama-models/blob/main/models/llama3_1/MODEL_CARD.md。 -
Anthropic (2024)
Anthropic.
Claude 3.5 sonnet, 2024.
URL https://www.anthropic.com/news/claude-3-5-sonnet.
Anthropic。Claude 3.5 十四行诗,2024。网址 https://www.anthropic.com/news/claude-3-5-sonnet。 -
Austin et al. (2021) Austin 等(2021)
J. Austin, A. Odena, M. Nye, M. Bosma, H. Michalewski, D. Dohan, E. Jiang, C. Cai, M. Terry, Q. Le, et al.
Program synthesis with large language models.
arXiv preprint arXiv:2108.07732, 2021.
J. Austin, A. Odena, M. Nye, M. Bosma, H. Michalewski, D. Dohan, E. Jiang, C. Cai, M. Terry, Q. Le 等。使用大型语言模型进行程序合成。arXiv 预印本 arXiv:2108.07732,2021。 -
Bai et al. (2022) Bai 等(2022)
Y. Bai, S. Kadavath, S. Kundu, A. Askell, J. Kernion, A. Jones, A. Chen, A. Goldie, A. Mirhoseini, C. McKinnon, et al.
Constitutional AI: Harmlessness from AI feedback.
arXiv preprint arXiv:2212.08073, 2022.
Y. Bai, S. Kadavath, S. Kundu, A. Askell, J. Kernion, A. Jones, A. Chen, A. Goldie, A. Mirhoseini, C. McKinnon 等。宪法 AI:通过 AI 反馈实现无害性。arXiv 预印本 arXiv:2212.08073,2022。 -
Bai et al. (2024) Bai 等(2024)
Y. Bai, S. Tu, J. Zhang, H. Peng, X. Wang, X. Lv, S. Cao, J. Xu, L. Hou, Y. Dong, J. Tang, and J. Li.
LongBench v2: Towards deeper understanding and reasoning on realistic long-context multitasks.
arXiv preprint arXiv:2412.15204, 2024.
Y. Bai, S. Tu, J. Zhang, H. Peng, X. Wang, X. Lv, S. Cao, J. Xu, L. Hou, Y. Dong, J. Tang 和 J. Li。LongBench v2:迈向更深层次的理解和长上下文多任务推理。arXiv 预印本 arXiv:2412.15204,2024。 -
Bauer et al. (2014) Bauer 等(2014)
M. Bauer, S. Treichler, and A. Aiken.
Singe: leveraging warp specialization for high performance on GPUs.
In Proceedings of the 19th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, PPoPP ’14, page 119–130, New York, NY, USA, 2014. Association for Computing Machinery.
ISBN 9781450326568.
10.1145/2555243.2555258.
URL https://doi.org/10.1145/2555243.2555258.
M. Bauer, S. Treichler 和 A. Aiken。Singe:利用包专门化实现高性能 GPU。在第 19 届 ACM SIGPLAN 并行编程原则与实践研讨会,PPoPP '14,页面 119-130,美国纽约,2014。美国计算机协会。ISBN 9781450326568。10.1145/2555243.2555258。网址 https://doi.org/10.1145/2555243.2555258。 -
Bisk et al. (2020) Bisk 等(2020)
Y. Bisk, R. Zellers, R. L. Bras, J. Gao, and Y. Choi.
PIQA: reasoning about physical commonsense in natural language.
In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 7432–7439. AAAI Press, 2020.
10.1609/aaai.v34i05.6239.
URL https://doi.org/10.1609/aaai.v34i05.6239.
Y. Bisk, R. Zellers, R. L. Bras, J. Gao 和 Y. Choi。PIQA:关于自然语言中物理常识的推理。在第三十四届 AAAI 人工智能会议,AAAI 2020,第三十二届创新应用人工智能会议,IAAI 2020,第十届 AAAI 教育进展人工智能研讨会,EAAI 2020,美国纽约,2020 年 2 月 7-12 日,页面 7432-7439。AAAI 出版社,2020。10.1609/aaai.v34i05.6239。网址 https://doi.org/10.1609/aaai.v34i05.6239。 -
Chen et al. (2021) Chen 等人 (2021)
M. Chen, J. Tworek, H. Jun, Q. Yuan, H. P. de Oliveira Pinto, J. Kaplan, H. Edwards, Y. Burda, N. Joseph, G. Brockman, A. Ray, R. Puri, G. Krueger, M. Petrov, H. Khlaaf, G. Sastry, P. Mishkin, B. Chan, S. Gray, N. Ryder, M. Pavlov, A. Power, L. Kaiser, M. Bavarian, C. Winter, P. Tillet, F. P. Such, D. Cummings, M. Plappert, F. Chantzis, E. Barnes, A. Herbert-Voss, W. H. Guss, A. Nichol, A. Paino, N. Tezak, J. Tang, I. Babuschkin, S. Balaji, S. Jain, W. Saunders, C. Hesse, A. N. Carr, J. Leike, J. Achiam, V. Misra, E. Morikawa, A. Radford, M. Knight, M. Brundage, M. Murati, K. Mayer, P. Welinder, B. McGrew, D. Amodei, S. McCandlish, I. Sutskever, and W. Zaremba.
Evaluating large language models trained on code.
CoRR, abs/2107.03374, 2021.
URL https://arxiv.org/abs/2107.03374.
M. Chen, J. Tworek, H. Jun, Q. Yuan, H. P. de Oliveira Pinto, J. Kaplan, H. Edwards, Y. Burda, N. Joseph, G. Brockman, A. Ray, R. Puri, G. Krueger, M. Petrov, H. Khlaaf, G. Sastry, P. Mishkin, B. Chan, S. Gray, N. Ryder, M. Pavlov, A. Power, L. Kaiser, M. Bavarian, C. Winter, P. Tillet, F. P. Such, D. Cummings, M. Plappert, F. Chantzis, E. Barnes, A. Herbert-Voss, W. H. Guss, A. Nichol, A. Paino, N. Tezak, J. Tang, I. Babuschkin, S. Balaji, S. Jain, W. Saunders, C. Hesse, A. N. Carr, J. Leike, J. Achiam, V. Misra, E. Morikawa, A. Radford, M. Knight, M. Brundage, M. Murati, K. Mayer, P. Welinder, B. McGrew, D. Amodei, S. McCandlish, I. Sutskever, 和 W. Zaremba. 评估在代码上训练的大型语言模型. CoRR, abs/2107.03374, 2021. URL https://arxiv.org/abs/2107.03374. -
Clark et al. (2018) Clark 等人 (2018)
P. Clark, I. Cowhey, O. Etzioni, T. Khot, A. Sabharwal, C. Schoenick, and O. Tafjord.
Think you have solved question answering? try arc, the AI2 reasoning challenge.
CoRR, abs/1803.05457, 2018.
URL http://arxiv.org/abs/1803.05457.
P. Clark, I. Cowhey, O. Etzioni, T. Khot, A. Sabharwal, C. Schoenick, 和 O. Tafjord. 认为你已经解决了问答问题?试试 ARC,AI2 推理挑战. CoRR, abs/1803.05457, 2018. URL http://arxiv.org/abs/1803.05457. -
Cobbe et al. (2021) Cobbe 等人 (2021)
K. Cobbe, V. Kosaraju, M. Bavarian, M. Chen, H. Jun, L. Kaiser, M. Plappert, J. Tworek, J. Hilton, R. Nakano, et al.
Training verifiers to solve math word problems.
arXiv preprint arXiv:2110.14168, 2021.
K. Cobbe, V. Kosaraju, M. Bavarian, M. Chen, H. Jun, L. Kaiser, M. Plappert, J. Tworek, J. Hilton, R. Nakano, 等人. 培训验证者解决数学文字问题. arXiv 预印本 arXiv:2110.14168, 2021. -
Cui et al. (2019) Cui 等人 (2019)
Y. Cui, T. Liu, W. Che, L. Xiao, Z. Chen, W. Ma, S. Wang, and G. Hu.
A span-extraction dataset for Chinese machine reading comprehension.
In K. Inui, J. Jiang, V. Ng, and X. Wan, editors, Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 5883–5889, Hong Kong, China, Nov. 2019. Association for Computational Linguistics.
10.18653/v1/D19-1600.
URL https://aclanthology.org/D19-1600.
Y. Cui, T. Liu, W. Che, L. Xiao, Z. Chen, W. Ma, S. Wang, 和 G. Hu. 一份用于中文机器阅读理解的跨度抽取数据集. 由 K. Inui, J. Jiang, V. Ng, 和 X. Wan 编辑, 2019 年实证方法自然语言处理会议和第 9 届国际联合会议 (EMNLP-IJCNLP) 论文集,5883–5889 页, 香港,中国, 2019 年 11 月. 计算语言学协会. 10.18653/v1/D19-1600. URL https://aclanthology.org/D19-1600. -
Dai et al. (2024) Dai 等人 (2024)
D. Dai, C. Deng, C. Zhao, R. X. Xu, H. Gao, D. Chen, J. Li, W. Zeng, X. Yu, Y. Wu, Z. Xie, Y. K. Li, P. Huang, F. Luo, C. Ruan, Z. Sui, and W. Liang.
Deepseekmoe: Towards ultimate expert specialization in mixture-of-experts language models.
CoRR, abs/2401.06066, 2024.
URL https://doi.org/10.48550/arXiv.2401.06066.
D. Dai, C. Deng, C. Zhao, R. X. Xu, H. Gao, D. Chen, J. Li, W. Zeng, X. Yu, Y. Wu, Z. Xie, Y. K. Li, P. Huang, F. Luo, C. Ruan, Z. Sui, 和 W. Liang. Deepseekmoe: 朝向专家专业化的混合专家语言模型. CoRR, abs/2401.06066, 2024. URL https://doi.org/10.48550/arXiv.2401.06066. -
DeepSeek-AI (2024a)
DeepSeek-AI.
Deepseek-coder-v2: Breaking the barrier of closed-source models in code intelligence.
CoRR, abs/2406.11931, 2024a.
URL https://doi.org/10.48550/arXiv.2406.11931.
DeepSeek-AI. Deepseek-coder-v2: 打破闭源模型在代码智能中的障碍. CoRR, abs/2406.11931, 2024a. URL https://doi.org/10.48550/arXiv.2406.11931. -
DeepSeek-AI (2024b)
DeepSeek-AI.
Deepseek LLM: scaling open-source language models with longtermism.
CoRR, abs/2401.02954, 2024b.
URL https://doi.org/10.48550/arXiv.2401.02954.
DeepSeek-AI. Deepseek LLM: 以长期主义扩展开源语言模型. CoRR, abs/2401.02954, 2024b. URL https://doi.org/10.48550/arXiv.2401.02954. -
DeepSeek-AI (2024c)
DeepSeek-AI.
Deepseek-v2: A strong, economical, and efficient mixture-of-experts language model.
CoRR, abs/2405.04434, 2024c.
URL https://doi.org/10.48550/arXiv.2405.04434.
DeepSeek-AI. Deepseek-v2: 一种强大、经济且高效的混合专家语言模型. CoRR, abs/2405.04434, 2024c. URL https://doi.org/10.48550/arXiv.2405.04434. -
Dettmers et al. (2022) Dettmers 等人 (2022)
T. Dettmers, M. Lewis, Y. Belkada, and L. Zettlemoyer.
Gpt3. int8 (): 8-bit matrix multiplication for transformers at scale.
Advances in Neural Information Processing Systems, 35:30318–30332, 2022.
T. Dettmers, M. Lewis, Y. Belkada, 和 L. Zettlemoyer. GPT3.int8(): 用于大规模 Transformer 的 8 位矩阵乘法. 神经信息处理系统会议,35:30318–30332, 2022. -
Ding et al. (2024) Ding 等人 (2024)
H. Ding, Z. Wang, G. Paolini, V. Kumar, A. Deoras, D. Roth, and S. Soatto.
Fewer truncations improve language modeling.
arXiv preprint arXiv:2404.10830, 2024.
H. Ding, Z. Wang, G. Paolini, V. Kumar, A. Deoras, D. Roth, 和 S. Soatto. 更少的截断提高了语言建模. arXiv 预印本 arXiv:2404.10830, 2024. -
Dua et al. (2019) Dua 等人 (2019)
D. Dua, Y. Wang, P. Dasigi, G. Stanovsky, S. Singh, and M. Gardner.
DROP: A reading comprehension benchmark requiring discrete reasoning over paragraphs.
In J. Burstein, C. Doran, and T. Solorio, editors, Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, Minneapolis, MN, USA, June 2-7, 2019, Volume 1 (Long and Short Papers), pages 2368–2378. Association for Computational Linguistics, 2019.
10.18653/V1/N19-1246.
URL https://doi.org/10.18653/v1/n19-1246.
D. Dua, Y. Wang, P. Dasigi, G. Stanovsky, S. Singh, 和 M. Gardner. DROP: 一项需要在段落中进行离散推理的阅读理解基准. 编辑者:J. Burstein, C. Doran, 和 T. Solorio,2019 年北美计算语言学协会人类语言技术会议(NAACL-HLT 2019)论文集, 美国明尼阿波利斯,2019 年 6 月 2-7 日, 第 1 卷(长篇和短篇论文),2368–2378 页. 计算语言学协会, 2019. 10.18653/V1/N19-1246. URL https://doi.org/10.18653/v1/n19-1246. -
Dubois et al. (2024) Dubois 等人 (2024)
Y. Dubois, B. Galambosi, P. Liang, and T. B. Hashimoto.
Length-controlled alpacaeval: A simple way to debias automatic evaluators.
arXiv preprint arXiv:2404.04475, 2024.
Y. Dubois, B. Galambosi, P. Liang, 和 T. B. Hashimoto. 长度控制的 alpacaeval: 一种消除自动评估器偏差的简单方法. arXiv 预印本 arXiv:2404.04475, 2024. -
Fedus et al. (2021) Fedus 等人 (2021)
W. Fedus, B. Zoph, and N. Shazeer.
Switch transformers: Scaling to trillion parameter models with simple and efficient sparsity.
CoRR, abs/2101.03961, 2021.
URL https://arxiv.org/abs/2101.03961.
W. Fedus, B. Zoph 和 N. Shazeer. Switch transformers: 使用简单而高效的稀疏性扩展至万亿参数模型. CoRR, abs/2101.03961, 2021. URL https://arxiv.org/abs/2101.03961. -
Fishman et al. (2024) Fishman 等人 (2024)
M. Fishman, B. Chmiel, R. Banner, and D. Soudry.
Scaling FP8 training to trillion-token llms.
arXiv preprint arXiv:2409.12517, 2024.
M. Fishman, B. Chmiel, R. Banner 和 D. Soudry. 将 FP8 训练扩展至万亿令牌的 llms. arXiv 预印本 arXiv:2409.12517, 2024. -
Frantar et al. (2022) Frantar 等人 (2022)
E. Frantar, S. Ashkboos, T. Hoefler, and D. Alistarh.
Gptq: Accurate post-training quantization for generative pre-trained transformers.
arXiv preprint arXiv:2210.17323, 2022.
E. Frantar, S. Ashkboos, T. Hoefler 和 D. Alistarh. Gptq: 面向生成预训练 Transformer 的精确后训练量化. arXiv 预印本 arXiv:2210.17323, 2022. -
Gao et al. (2020) Gao 等人 (2020)
L. Gao, S. Biderman, S. Black, L. Golding, T. Hoppe, C. Foster, J. Phang, H. He, A. Thite, N. Nabeshima, et al.
The Pile: An 800GB dataset of diverse text for language modeling.
arXiv preprint arXiv:2101.00027, 2020.
L. Gao, S. Biderman, S. Black, L. Golding, T. Hoppe, C. Foster, J. Phang, H. He, A. Thite, N. Nabeshima 等人. The Pile: 用于语言建模的 800GB 多样文本数据集. arXiv 预印本 arXiv:2101.00027, 2020. -
Gema et al. (2024) Gema 等人 (2024)
A. P. Gema, J. O. J. Leang, G. Hong, A. Devoto, A. C. M. Mancino, R. Saxena, X. He, Y. Zhao, X. Du, M. R. G. Madani, C. Barale, R. McHardy, J. Harris, J. Kaddour, E. van Krieken, and P. Minervini.
Are we done with mmlu?
CoRR, abs/2406.04127, 2024.
URL https://doi.org/10.48550/arXiv.2406.04127.
A. P. Gema, J. O. J. Leang, G. Hong, A. Devoto, A. C. M. Mancino, R. Saxena, X. He, Y. Zhao, X. Du, M. R. G. Madani, C. Barale, R. McHardy, J. Harris, J. Kaddour, E. van Krieken 和 P. Minervini. 我们完成了 mmlu 吗? CoRR, abs/2406.04127, 2024. URL https://doi.org/10.48550/arXiv.2406.04127. -
Gloeckle et al. (2024) Gloeckle 等人 (2024)
F. Gloeckle, B. Y. Idrissi, B. Rozière, D. Lopez-Paz, and G. Synnaeve.
Better & faster large language models via multi-token prediction.
In Forty-first International Conference on Machine Learning, ICML 2024, Vienna, Austria, July 21-27, 2024. OpenReview.net, 2024.
URL https://openreview.net/forum?id=pEWAcejiU2.
F. Gloeckle, B. Y. Idrissi, B. Rozière, D. Lopez-Paz 和 G. Synnaeve. 通过多令牌预测提高大型语言模型的性能与速度. 在第四十一届国际机器学习会议, ICML 2024, 奥地利维也纳, 2024 年 7 月 21-27 日. OpenReview.net, 2024. URL https://openreview.net/forum?id=pEWAcejiU2. -
Google (2024) 谷歌(2024)
Google.
Our next-generation model: Gemini 1.5, 2024.
URL https://blog.google/technology/ai/google-gemini-next-generation-model-february-2024.
Google. 我们的下一代模型: Gemini 1.5, 2024. URL https://blog.google/technology/ai/google-gemini-next-generation-model-february-2024. -
Graham et al. (2016) Graham 等人 (2016)
R. L. Graham, D. Bureddy, P. Lui, H. Rosenstock, G. Shainer, G. Bloch, D. Goldenerg, M. Dubman, S. Kotchubievsky, V. Koushnir, et al.
Scalable hierarchical aggregation protocol (SHArP): A hardware architecture for efficient data reduction.
In 2016 First International Workshop on Communication Optimizations in HPC (COMHPC), pages 1–10. IEEE, 2016.
R. L. Graham, D. Bureddy, P. Lui, H. Rosenstock, G. Shainer, G. Bloch, D. Goldenberg, M. Dubman, S. Kotchubievsky, V. Koushnir 等人. 可扩展的分层聚合协议 (SHArP): 用于高效数据缩减的硬件架构. 在 2016 年第一次高性能计算通信优化国际研讨会 (COMHPC), 第 1-10 页. IEEE, 2016. -
Gu et al. (2024) Gu 等人 (2024)
A. Gu, B. Rozière, H. Leather, A. Solar-Lezama, G. Synnaeve, and S. I. Wang.
Cruxeval: A benchmark for code reasoning, understanding and execution, 2024.
A. Gu, B. Rozière, H. Leather, A. Solar-Lezama, G. Synnaeve 和 S. I. Wang. Cruxeval:用于代码推理、理解与执行的基准测试, 2024. -
Guo et al. (2024) Guo 等人 (2024)
D. Guo, Q. Zhu, D. Yang, Z. Xie, K. Dong, W. Zhang, G. Chen, X. Bi, Y. Wu, Y. K. Li, F. Luo, Y. Xiong, and W. Liang.
Deepseek-coder: When the large language model meets programming - the rise of code intelligence.
CoRR, abs/2401.14196, 2024.
URL https://doi.org/10.48550/arXiv.2401.14196.
D. Guo, Q. Zhu, D. Yang, Z. Xie, K. Dong, W. Zhang, G. Chen, X. Bi, Y. Wu, Y. K. Li, F. Luo, Y. Xiong 和 W. Liang. Deepseek-coder: 当大型语言模型遇到编程 - 代码智能的兴起. CoRR, abs/2401.14196, 2024. URL https://doi.org/10.48550/arXiv.2401.14196. -
Harlap et al. (2018) Harlap 等人 (2018)
A. Harlap, D. Narayanan, A. Phanishayee, V. Seshadri, N. Devanur, G. Ganger, and P. Gibbons.
Pipedream: Fast and efficient pipeline parallel dnn training, 2018.
URL https://arxiv.org/abs/1806.03377.
A. Harlap, D. Narayanan, A. Phanishayee, V. Seshadri, N. Devanur, G. Ganger 和 P. Gibbons. Pipedream: 快速高效的管道并行深度神经网络训练, 2018. URL https://arxiv.org/abs/1806.03377. -
(32)
B. He, L. Noci, D. Paliotta, I. Schlag, and T. Hofmann.
Understanding and minimising outlier features in transformer training.
In The Thirty-eighth Annual Conference on Neural Information Processing Systems.
B. He, L. Noci, D. Paliotta, I. Schlag 和 T. Hofmann. 理解并最小化 Transformer 训练中的离群特征. 在第三十八届神经信息处理系统年度会议. -
He et al. (2024) He 等人 (2024)
Y. He, S. Li, J. Liu, Y. Tan, W. Wang, H. Huang, X. Bu, H. Guo, C. Hu, B. Zheng, et al.
Chinese simpleqa: A chinese factuality evaluation for large language models.
arXiv preprint arXiv:2411.07140, 2024.
Y. He, S. Li, J. Liu, Y. Tan, W. Wang, H. Huang, X. Bu, H. Guo, C. Hu, B. Zheng 等人. 中文 SimpleQA: 大型语言模型的中文事实性评估. arXiv 预印本 arXiv:2411.07140, 2024. -
Hendrycks et al. (2020) Hendrycks 等人 (2020)
D. Hendrycks, C. Burns, S. Basart, A. Zou, M. Mazeika, D. Song, and J. Steinhardt.
Measuring massive multitask language understanding.
arXiv preprint arXiv:2009.03300, 2020.
D. Hendrycks, C. Burns, S. Basart, A. Zou, M. Mazeika, D. Song 和 J. Steinhardt. 大规模多任务语言理解的测量. arXiv 预印本 arXiv:2009.03300, 2020. -
Hendrycks et al. (2021) Hendrycks 等人 (2021)
D. Hendrycks, C. Burns, S. Kadavath, A. Arora, S. Basart, E. Tang, D. Song, and J. Steinhardt.
Measuring mathematical problem solving with the math dataset.
arXiv preprint arXiv:2103.03874, 2021.
D. Hendrycks, C. Burns, S. Kadavath, A. Arora, S. Basart, E. Tang, D. Song 和 J. Steinhardt. 使用数学数据集测量数学问题解决能力. arXiv 预印本 arXiv:2103.03874, 2021. -
Huang et al. (2023) Huang 等人 (2023)
Y. Huang, Y. Bai, Z. Zhu, J. Zhang, J. Zhang, T. Su, J. Liu, C. Lv, Y. Zhang, J. Lei, et al.
C-Eval: A multi-level multi-discipline chinese evaluation suite for foundation models.
arXiv preprint arXiv:2305.08322, 2023.
Y. Huang, Y. Bai, Z. Zhu, J. Zhang, J. Zhang, T. Su, J. Liu, C. Lv, Y. Zhang, J. Lei 等人. C-Eval: 核心模型的多层次多学科中文评估套件. arXiv 预印本 arXiv:2305.08322, 2023. -
Jain et al. (2024) Jain 等人 (2024)
N. Jain, K. Han, A. Gu, W. Li, F. Yan, T. Zhang, S. Wang, A. Solar-Lezama, K. Sen, and I. Stoica.
Livecodebench: Holistic and contamination free evaluation of large language models for code.
CoRR, abs/2403.07974, 2024.
URL https://doi.org/10.48550/arXiv.2403.07974.
N. Jain, K. Han, A. Gu, W. Li, F. Yan, T. Zhang, S. Wang, A. Solar-Lezama, K. Sen 和 I. Stoica. Livecodebench: 大型语言模型代码的整体与无污染评估. CoRR, abs/2403.07974, 2024. URL https://doi.org/10.48550/arXiv.2403.07974. -
Jiang et al. (2023) 姜等人(2023)
A. Q. Jiang, A. Sablayrolles, A. Mensch, C. Bamford, D. S. Chaplot, D. d. l. Casas, F. Bressand, G. Lengyel, G. Lample, L. Saulnier, et al.
Mistral 7b.
arXiv preprint arXiv:2310.06825, 2023.
A. Q. Jiang, A. Sablayrolles, A. Mensch, C. Bamford, D. S. Chaplot, D. d. l. Casas, F. Bressand, G. Lengyel, G. Lample, L. Saulnier 等人。Mistral 7b。arXiv 预印本 arXiv:2310.06825, 2023。 -
Joshi et al. (2017) Joshi 等人(2017)
M. Joshi, E. Choi, D. Weld, and L. Zettlemoyer.
TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension.
In R. Barzilay and M.-Y. Kan, editors, Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1601–1611, Vancouver, Canada, July 2017. Association for Computational Linguistics.
10.18653/v1/P17-1147.
URL https://aclanthology.org/P17-1147.
M. Joshi, E. Choi, D. Weld 和 L. Zettlemoyer。TriviaQA:面向阅读理解的大规模远程监督挑战数据集。在 R. Barzilay 和 M.-Y. Kan 编辑的第 55 届计算语言学协会年会(卷 1:长论文)论文集,中,页面 1601-1611,加拿大温哥华,2017 年 7 月。计算语言学协会。10.18653/v1/P17-1147。URL https://aclanthology.org/P17-1147。 -
Kalamkar et al. (2019) Kalamkar 等人(2019)
D. Kalamkar, D. Mudigere, N. Mellempudi, D. Das, K. Banerjee, S. Avancha, D. T. Vooturi, N. Jammalamadaka, J. Huang, H. Yuen, et al.
A study of bfloat16 for deep learning training.
arXiv preprint arXiv:1905.12322, 2019.
D. Kalamkar, D. Mudigere, N. Mellempudi, D. Das, K. Banerjee, S. Avancha, D. T. Vooturi, N. Jammalamadaka, J. Huang, H. Yuen 等人。关于 bfloat16 用于深度学习训练的研究。arXiv 预印本 arXiv:1905.12322, 2019。 -
Krishna et al. (2024) Krishna 等人(2024)
S. Krishna, K. Krishna, A. Mohananey, S. Schwarcz, A. Stambler, S. Upadhyay, and M. Faruqui.
Fact, fetch, and reason: A unified evaluation of retrieval-augmented generation.
CoRR, abs/2409.12941, 2024.
10.48550/ARXIV.2409.12941.
URL https://doi.org/10.48550/arXiv.2409.12941.
S. Krishna, K. Krishna, A. Mohananey, S. Schwarcz, A. Stambler, S. Upadhyay 和 M. Faruqui。Fact, fetch, and reason: 统一的检索增强生成评估。CoRR, abs/2409.12941, 2024。10.48550/ARXIV.2409.12941。URL https://doi.org/10.48550/arXiv.2409.12941。 -
Kwiatkowski et al. (2019) Kwiatkowski 等人(2019)
T. Kwiatkowski, J. Palomaki, O. Redfield, M. Collins, A. P. Parikh, C. Alberti, D. Epstein, I. Polosukhin, J. Devlin, K. Lee, K. Toutanova, L. Jones, M. Kelcey, M. Chang, A. M. Dai, J. Uszkoreit, Q. Le, and S. Petrov.
Natural questions: a benchmark for question answering research.
Trans. Assoc. Comput. Linguistics, 7:452–466, 2019.
10.1162/tacl_a_00276.
URL https://doi.org/10.1162/tacl_a_00276.
T. Kwiatkowski, J. Palomaki, O. Redfield, M. Collins, A. P. Parikh, C. Alberti, D. Epstein, I. Polosukhin, J. Devlin, K. Lee, K. Toutanova, L. Jones, M. Kelcey, M. Chang, A. M. Dai, J. Uszkoreit, Q. Le 和 S. Petrov。《自然问题:问答研究的基准》。Trans. Assoc. Comput. Linguistics, 7:452–466, 2019。10.1162/tacl_a_00276。URL https://doi.org/10.1162/tacl_a_00276。 -
Lai et al. (2017) Lai 等人(2017)
G. Lai, Q. Xie, H. Liu, Y. Yang, and E. H. Hovy.
RACE: large-scale reading comprehension dataset from examinations.
In M. Palmer, R. Hwa, and S. Riedel, editors, Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, EMNLP 2017, Copenhagen, Denmark, September 9-11, 2017, pages 785–794. Association for Computational Linguistics, 2017.
10.18653/V1/D17-1082.
URL https://doi.org/10.18653/v1/d17-1082.
G. Lai, Q. Xie, H. Liu, Y. Yang 和 E. H. Hovy。RACE:来自考试的大规模阅读理解数据集。在 M. Palmer, R. Hwa 和 S. Riedel 编辑的 2017 年实证方法自然语言处理会议(EMNLP 2017)论文集中,哥本哈根,丹麦,2017 年 9 月 9-11 日,页面 785-794。计算语言学协会,2017。10.18653/V1/D17-1082。URL https://doi.org/10.18653/v1/d17-1082。 -
Lambert et al. (2024) Lambert 等人(2024)
N. Lambert, V. Pyatkin, J. Morrison, L. Miranda, B. Y. Lin, K. Chandu, N. Dziri, S. Kumar, T. Zick, Y. Choi, et al.
Rewardbench: Evaluating reward models for language modeling.
arXiv preprint arXiv:2403.13787, 2024.
N. Lambert, V. Pyatkin, J. Morrison, L. Miranda, B. Y. Lin, K. Chandu, N. Dziri, S. Kumar, T. Zick, Y. Choi 等人。Rewardbench: 用于语言建模的奖励模型评估。arXiv 预印本 arXiv:2403.13787, 2024。 -
Lepikhin et al. (2021) Lepikhin 等人(2021)
D. Lepikhin, H. Lee, Y. Xu, D. Chen, O. Firat, Y. Huang, M. Krikun, N. Shazeer, and Z. Chen.
Gshard: Scaling giant models with conditional computation and automatic sharding.
In 9th International Conference on Learning Representations, ICLR 2021. OpenReview.net, 2021.
URL https://openreview.net/forum?id=qrwe7XHTmYb.
D. Lepikhin, H. Lee, Y. Xu, D. Chen, O. Firat, Y. Huang, M. Krikun, N. Shazeer 和 Z. Chen。Gshard: 通过条件计算和自动分片扩展巨大模型。在第 9 届国际学习表征会议(ICLR 2021)。OpenReview.net, 2021。URL https://openreview.net/forum?id=qrwe7XHTmYb。 -
Leviathan et al. (2023) Leviathan 等人(2023)
Y. Leviathan, M. Kalman, and Y. Matias.
Fast inference from transformers via speculative decoding.
In International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA, volume 202 of Proceedings of Machine Learning Research, pages 19274–19286. PMLR, 2023.
URL https://proceedings.mlr.press/v202/leviathan23a.html.
Y. Leviathan, M. Kalman 和 Y. Matias。《通过推测性解码从 Transformer 进行快速推理》。在 2023 年国际机器学习会议(ICML 2023),2023 年 7 月 23-29 日,夏威夷火奴鲁鲁,美国,机器学习研究论文集,第 202 卷,页面 19274-19286。PMLR, 2023。URL https://proceedings.mlr.press/v202/leviathan23a.html。 -
Li et al. (2023) Li 等人(2023)
H. Li, Y. Zhang, F. Koto, Y. Yang, H. Zhao, Y. Gong, N. Duan, and T. Baldwin.
CMMLU: Measuring massive multitask language understanding in Chinese.
arXiv preprint arXiv:2306.09212, 2023.
H. Li, Y. Zhang, F. Koto, Y. Yang, H. Zhao, Y. Gong, N. Duan 和 T. Baldwin。CMMLU: 在中文中测量大规模多任务语言理解。arXiv 预印本 arXiv:2306.09212, 2023。 -
Li and Hoefler (2021) Li 和 Hoefler (2021)
S. Li and T. Hoefler.
Chimera: efficiently training large-scale neural networks with bidirectional pipelines.
In Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis, SC ’21, page 1–14. ACM, Nov. 2021.
10.1145/3458817.3476145.
URL http://dx.doi.org/10.1145/3458817.3476145.
S. Li 和 T. Hoefler。Chimera: 使用双向流水线高效训练大规模神经网络。在高性能计算、网络、存储和分析国际会议(SC ’21)论文集中,页面 1–14。ACM, 2021 年 11 月。10.1145/3458817.3476145。URL http://dx.doi.org/10.1145/3458817.3476145。 -
Li et al. (2024a) Li 等人(2024a)
T. Li, W.-L. Chiang, E. Frick, L. Dunlap, T. Wu, B. Zhu, J. E. Gonzalez, and I. Stoica.
From crowdsourced data to high-quality benchmarks: Arena-hard and benchbuilder pipeline.
arXiv preprint arXiv:2406.11939, 2024a.
T. Li, W.-L. Chiang, E. Frick, L. Dunlap, T. Wu, B. Zhu, J. E. Gonzalez 和 I. Stoica。从众包数据到高质量基准:Arena-hard 和 benchbuilder 流水线。arXiv 预印本 arXiv:2406.11939, 2024a。 -
Li et al. (2021) Li 等人(2021)
W. Li, F. Qi, M. Sun, X. Yi, and J. Zhang.
Ccpm: A chinese classical poetry matching dataset, 2021.
W. Li, F. Qi, M. Sun, X. Yi 和 J. Zhang。CCPM: 一种中国古典诗歌匹配数据集,2021。 -
Li et al. (2024b) Li 等人(2024b)
Y. Li, F. Wei, C. Zhang, and H. Zhang.
EAGLE: speculative sampling requires rethinking feature uncertainty.
In Forty-first International Conference on Machine Learning, ICML 2024, Vienna, Austria, July 21-27, 2024. OpenReview.net, 2024b.
URL https://openreview.net/forum?id=1NdN7eXyb4.
Y. Li,F. Wei,C. Zhang 和 H. Zhang。EAGLE: 投机采样需要重新思考特征不确定性。发表于第四十一届国际机器学习会议,ICML 2024,奥地利维也纳,2024 年 7 月 21 日至 27 日。OpenReview.net,2024b。网址:https://openreview.net/forum?id=1NdN7eXyb4。 -
Lin (2024)
B. Y. Lin.
ZeroEval: A Unified Framework for Evaluating Language Models, July 2024.
URL https://github.com/WildEval/ZeroEval.
B. Y. Lin。ZeroEval: 一个评估语言模型的统一框架,2024 年 7 月。网址:https://github.com/WildEval/ZeroEval。 -
Loshchilov and Hutter (2017) Loshchilov 和 Hutter (2017)
I. Loshchilov and F. Hutter.
Decoupled weight decay regularization.
arXiv preprint arXiv:1711.05101, 2017.
I. Loshchilov 和 F. Hutter。解耦权重衰减正则化。arXiv 预印本 arXiv:1711.05101,2017。 -
Lundberg (2023)
S. Lundberg.
The art of prompt design: Prompt boundaries and token healing, 2023.
URL https://towardsdatascience.com/the-art-of-prompt-design-prompt-boundaries-and-token-healing-3b2448b0be38.
S. Lundberg。提示设计的艺术:提示边界与标记修复,2023。网址:https://towardsdatascience.com/the-art-of-prompt-design-prompt-boundaries-and-token-healing-3b2448b0be38。 -
Luo et al. (2024) Luo 等人 (2024)
Y. Luo, Z. Zhang, R. Wu, H. Liu, Y. Jin, K. Zheng, M. Wang, Z. He, G. Hu, L. Chen, et al.
Ascend HiFloat8 format for deep learning.
arXiv preprint arXiv:2409.16626, 2024.
Y. Luo,Z. Zhang,R. Wu,H. Liu,Y. Jin,K. Zheng,M. Wang,Z. He,G. Hu,L. Chen 等人。Ascend HiFloat8 格式用于深度学习。arXiv 预印本 arXiv:2409.16626,2024。 -
MAA (2024)
MAA.
American invitational mathematics examination - aime.
In American Invitational Mathematics Examination - AIME 2024, February 2024.
URL https://maa.org/math-competitions/american-invitational-mathematics-examination-aime.
MAA。美国邀请数学考试 - AIME。发表于美国邀请数学考试 - AIME 2024,2024 年 2 月。网址:https://maa.org/math-competitions/american-invitational-mathematics-examination-aime。 -
Micikevicius et al. (2022) Micikevicius 等人 (2022)
P. Micikevicius, D. Stosic, N. Burgess, M. Cornea, P. Dubey, R. Grisenthwaite, S. Ha, A. Heinecke, P. Judd, J. Kamalu, et al.
FP8 formats for deep learning.
arXiv preprint arXiv:2209.05433, 2022.
P. Micikevicius,D. Stosic,N. Burgess,M. Cornea,P. Dubey,R. Grisenthwaite,S. Ha,A. Heinecke,P. Judd,J. Kamalu 等人。FP8 格式用于深度学习。arXiv 预印本 arXiv:2209.05433,2022。 -
Mistral (2024)
Mistral.
Cheaper, better, faster, stronger: Continuing to push the frontier of ai and making it accessible to all, 2024.
URL https://mistral.ai/news/mixtral-8x22b.
Mistral。更便宜,更好,更快,更强:继续推动人工智能前沿并使其对所有人可及,2024。网址:https://mistral.ai/news/mixtral-8x22b。 -
Narang et al. (2017) Narang 等人 (2017)
S. Narang, G. Diamos, E. Elsen, P. Micikevicius, J. Alben, D. Garcia, B. Ginsburg, M. Houston, O. Kuchaiev, G. Venkatesh, et al.
Mixed precision training.
In Int. Conf. on Learning Representation, 2017.
S. Narang,G. Diamos,E. Elsen,P. Micikevicius,J. Alben,D. Garcia,B. Ginsburg,M. Houston,O. Kuchaiev,G. Venkatesh 等人。混合精度训练。发表于学习表征国际会议,2017。 -
Noune et al. (2022) Noune 等人 (2022)
B. Noune, P. Jones, D. Justus, D. Masters, and C. Luschi.
8-bit numerical formats for deep neural networks.
arXiv preprint arXiv:2206.02915, 2022.
B. Noune,P. Jones,D. Justus,D. Masters 和 C. Luschi。8 位数值格式用于深度神经网络。arXiv 预印本 arXiv:2206.02915,2022。 -
NVIDIA (2022) NVIDIA(2022 年)
NVIDIA.
Improving network performance of HPC systems using NVIDIA Magnum IO NVSHMEM and GPUDirect Async.
https://developer.nvidia.com/blog/improving-network-performance-of-hpc-systems-using-nvidia-magnum-io-nvshmem-and-gpudirect-async, 2022.
NVIDIA。利用 NVIDIA Magnum IO NVSHMEM 和 GPUDirect Async 改善 HPC 系统的网络性能。https://developer.nvidia.com/blog/improving-network-performance-of-hpc-systems-using-nvidia-magnum-io-nvshmem-and-gpudirect-async,2022。 -
NVIDIA (2024a) NVIDIA(2024a)
NVIDIA.
Blackwell architecture.
https://www.nvidia.com/en-us/data-center/technologies/blackwell-architecture/, 2024a.
NVIDIA。Blackwell 架构。https://www.nvidia.com/en-us/data-center/technologies/blackwell-architecture/,2024a。 -
NVIDIA (2024b) NVIDIA(2024b)
NVIDIA.
TransformerEngine, 2024b.
URL https://github.com/NVIDIA/TransformerEngine.
Accessed: 2024-11-19.
NVIDIA。TransformerEngine,2024b。网址:https://github.com/NVIDIA/TransformerEngine。访问时间:2024-11-19。 -
OpenAI (2024a) OpenAI(2024a)
OpenAI.
Hello GPT-4o, 2024a.
URL https://openai.com/index/hello-gpt-4o/.
OpenAI。Hello GPT-4o,2024a。网址:https://openai.com/index/hello-gpt-4o/。 -
OpenAI (2024b) OpenAI(2024b)
OpenAI.
Multilingual massive multitask language understanding (mmmlu), 2024b.
URL https://huggingface.co/datasets/openai/MMMLU.
OpenAI。多语言多任务大规模语言理解 (mmmlu),2024b。网址:https://huggingface.co/datasets/openai/MMMLU。 -
OpenAI (2024c) OpenAI(2024c)
OpenAI.
Introducing SimpleQA, 2024c.
URL https://openai.com/index/introducing-simpleqa/.
OpenAI。介绍 SimpleQA,2024c。网址:https://openai.com/index/introducing-simpleqa/。 -
OpenAI (2024d) OpenAI(2024d)
OpenAI.
Introducing SWE-bench verified we’re releasing a human-validated subset of swe-bench that more, 2024d.
URL https://openai.com/index/introducing-swe-bench-verified/.
OpenAI。介绍 SWE-bench 验证版:我们发布了经过人类验证的 SWE-bench 子集以及更多,2024d。网址:https://openai.com/index/introducing-swe-bench-verified/。 -
Peng et al. (2023a) Peng 等人 (2023a)
B. Peng, J. Quesnelle, H. Fan, and E. Shippole.
Yarn: Efficient context window extension of large language models.
arXiv preprint arXiv:2309.00071, 2023a.
B. Peng,J. Quesnelle,H. Fan 和 E. Shippole。Yarn: 大型语言模型的高效上下文窗口扩展。arXiv 预印本 arXiv:2309.00071,2023a。 -
Peng et al. (2023b) Peng 等人 (2023b)
H. Peng, K. Wu, Y. Wei, G. Zhao, Y. Yang, Z. Liu, Y. Xiong, Z. Yang, B. Ni, J. Hu, et al.
FP8-LM: Training FP8 large language models.
arXiv preprint arXiv:2310.18313, 2023b.
H. Peng,K. Wu,Y. Wei,G. Zhao,Y. Yang,Z. Liu,Y. Xiong,Z. Yang,B. Ni,J. Hu 等人。FP8-LM: 训练 FP8 大型语言模型。arXiv 预印本 arXiv:2310.18313,2023b。 -
Qi et al. (2023a) Qi 等人 (2023a)
P. Qi, X. Wan, G. Huang, and M. Lin.
Zero bubble pipeline parallelism.
arXiv preprint arXiv:2401.10241, 2023a.
P. Qi,X. Wan,G. Huang 和 M. Lin。零气泡流水线并行化。arXiv 预印本 arXiv:2401.10241,2023a。 -
Qi et al. (2023b) Qi 等人 (2023b)
P. Qi, X. Wan, G. Huang, and M. Lin.
Zero bubble pipeline parallelism, 2023b.
URL https://arxiv.org/abs/2401.10241.
P. Qi,X. Wan,G. Huang 和 M. Lin。零气泡流水线并行化,2023b。网址:https://arxiv.org/abs/2401.10241。 -
Qwen (2023) Qwen(2023 年)
Qwen.
Qwen technical report.
arXiv preprint arXiv:2309.16609, 2023.
Qwen。Qwen 技术报告。arXiv 预印本 arXiv:2309.16609,2023。 -
Qwen (2024a) Qwen(2024a 年)
Qwen.
Introducing Qwen1.5, 2024a.
URL https://qwenlm.github.io/blog/qwen1.5.
Qwen。介绍 Qwen1.5,2024a。网址:https://qwenlm.github.io/blog/qwen1.5。 -
Qwen (2024b) Qwen(2024b 年)
Qwen.
Qwen2.5: A party of foundation models, 2024b.
URL https://qwenlm.github.io/blog/qwen2.5.
Qwen。Qwen2.5:基础模型的盛宴,2024b。网址:https://qwenlm.github.io/blog/qwen2.5。 -
Rajbhandari et al. (2020) Rajbhandari 等人 (2020)
S. Rajbhandari, J. Rasley, O. Ruwase, and Y. He.
Zero: Memory optimizations toward training trillion parameter models.
In SC20: International Conference for High Performance Computing, Networking, Storage and Analysis, pages 1–16. IEEE, 2020.
S. Rajbhandari,J. Rasley,O. Ruwase 和 Y. He。Zero: 面向训练万亿参数模型的内存优化。发表于 SC20:高性能计算、网络、存储和分析国际会议,页码 1–16。IEEE,2020。 -
Rein et al. (2023) Rein 等人 (2023)
D. Rein, B. L. Hou, A. C. Stickland, J. Petty, R. Y. Pang, J. Dirani, J. Michael, and S. R. Bowman.
GPQA: A graduate-level google-proof q&a benchmark.
arXiv preprint arXiv:2311.12022, 2023.
D. Rein, B. L. Hou, A. C. Stickland, J. Petty, R. Y. Pang, J. Dirani, J. Michael 和 S. R. Bowman. GPQA: 研究生水平的防谷歌问答基准。arXiv 预印本 arXiv:2311.12022, 2023。 -
Rouhani et al. (2023a) Rouhani 等人 (2023a)
B. D. Rouhani, R. Zhao, A. More, M. Hall, A. Khodamoradi, S. Deng, D. Choudhary, M. Cornea, E. Dellinger, K. Denolf, et al.
Microscaling data formats for deep learning.
arXiv preprint arXiv:2310.10537, 2023a.
B. D. Rouhani, R. Zhao, A. More, M. Hall, A. Khodamoradi, S. Deng, D. Choudhary, M. Cornea, E. Dellinger, K. Denolf, 等人。用于深度学习的微缩数据格式。arXiv 预印本 arXiv:2310.10537, 2023a。 -
Rouhani et al. (2023b) Rouhani 等人 (2023b)
B. D. Rouhani, R. Zhao, A. More, M. Hall, A. Khodamoradi, S. Deng, D. Choudhary, M. Cornea, E. Dellinger, K. Denolf, et al.
Microscaling data formats for deep learning.
arXiv preprint arXiv:2310.10537, 2023b.
B. D. Rouhani, R. Zhao, A. More, M. Hall, A. Khodamoradi, S. Deng, D. Choudhary, M. Cornea, E. Dellinger, K. Denolf, 等人。用于深度学习的微缩数据格式。arXiv 预印本 arXiv:2310.10537, 2023b。 -
Sakaguchi et al. (2019) Sakaguchi 等人 (2019)
K. Sakaguchi, R. L. Bras, C. Bhagavatula, and Y. Choi.
Winogrande: An adversarial winograd schema challenge at scale, 2019.
K. Sakaguchi, R. L. Bras, C. Bhagavatula 和 Y. Choi. Winogrande: 大规模对抗性 Winograd 模式挑战, 2019. -
Shao et al. (2024) Shao 等人 (2024)
Z. Shao, P. Wang, Q. Zhu, R. Xu, J. Song, M. Zhang, Y. Li, Y. Wu, and D. Guo.
Deepseekmath: Pushing the limits of mathematical reasoning in open language models.
arXiv preprint arXiv:2402.03300, 2024.
Z. Shao, P. Wang, Q. Zhu, R. Xu, J. Song, M. Zhang, Y. Li, Y. Wu 和 D. Guo. Deepseekmath: 推动开放语言模型中的数学推理极限。arXiv 预印本 arXiv:2402.03300, 2024。 -
Shazeer et al. (2017) Shazeer 等人 (2017)
N. Shazeer, A. Mirhoseini, K. Maziarz, A. Davis, Q. V. Le, G. E. Hinton, and J. Dean.
Outrageously large neural networks: The sparsely-gated mixture-of-experts layer.
In 5th International Conference on Learning Representations, ICLR 2017. OpenReview.net, 2017.
URL https://openreview.net/forum?id=B1ckMDqlg.
N. Shazeer, A. Mirhoseini, K. Maziarz, A. Davis, Q. V. Le, G. E. Hinton 和 J. Dean. 超大规模神经网络:稀疏门控的专家混合层。在第五届国际学习表征会议 ICLR 2017 上。OpenReview.net, 2017. URL https://openreview.net/forum?id=B1ckMDqlg. -
Shi et al. (2023) Shi 等人 (2023)
F. Shi, M. Suzgun, M. Freitag, X. Wang, S. Srivats, S. Vosoughi, H. W. Chung, Y. Tay, S. Ruder, D. Zhou, D. Das, and J. Wei.
Language models are multilingual chain-of-thought reasoners.
In The Eleventh International Conference on Learning Representations, ICLR 2023, Kigali, Rwanda, May 1-5, 2023. OpenReview.net, 2023.
URL https://openreview.net/forum?id=fR3wGCk-IXp.
F. Shi, M. Suzgun, M. Freitag, X. Wang, S. Srivats, S. Vosoughi, H. W. Chung, Y. Tay, S. Ruder, D. Zhou, D. Das 和 J. Wei. 语言模型是多语言链式推理者。在第十一届国际学习表征会议 ICLR 2023 上,卢旺达基加利,2023 年 5 月 1-5 日。OpenReview.net, 2023. URL https://openreview.net/forum?id=fR3wGCk-IXp. -
Shibata et al. (1999) Shibata 等人 (1999)
Y. Shibata, T. Kida, S. Fukamachi, M. Takeda, A. Shinohara, T. Shinohara, and S. Arikawa.
Byte pair encoding: A text compression scheme that accelerates pattern matching.
1999.
Y. Shibata, T. Kida, S. Fukamachi, M. Takeda, A. Shinohara, T. Shinohara 和 S. Arikawa. 字节对编码:加速模式匹配的文本压缩方案。1999 年。 -
Su et al. (2024) Su 等人 (2024)
J. Su, M. Ahmed, Y. Lu, S. Pan, W. Bo, and Y. Liu.
Roformer: Enhanced transformer with rotary position embedding.
Neurocomputing, 568:127063, 2024.
J. Su, M. Ahmed, Y. Lu, S. Pan, W. Bo 和 Y. Liu. Roformer: 具有旋转位置嵌入的增强型 Transformer。神经计算, 568:127063, 2024。 -
Sun et al. (2019a) Sun 等人 (2019a)
K. Sun, D. Yu, D. Yu, and C. Cardie.
Investigating prior knowledge for challenging chinese machine reading comprehension, 2019a.
K. Sun, D. Yu, D. Yu 和 C. Cardie. 探索挑战性中文机器阅读理解的先验知识, 2019a. -
Sun et al. (2024) Sun 等人 (2024)
M. Sun, X. Chen, J. Z. Kolter, and Z. Liu.
Massive activations in large language models.
arXiv preprint arXiv:2402.17762, 2024.
M. Sun, X. Chen, J. Z. Kolter 和 Z. Liu. 大型语言模型中的大规模激活。arXiv 预印本 arXiv:2402.17762, 2024。 -
Sun et al. (2019b) Sun 等人 (2019b)
X. Sun, J. Choi, C.-Y. Chen, N. Wang, S. Venkataramani, V. V. Srinivasan, X. Cui, W. Zhang, and K. Gopalakrishnan.
Hybrid 8-bit floating point (HFP8) training and inference for deep neural networks.
Advances in neural information processing systems, 32, 2019b.
X. Sun, J. Choi, C.-Y. Chen, N. Wang, S. Venkataramani, V. V. Srinivasan, X. Cui, W. Zhang 和 K. Gopalakrishnan. 深度神经网络的混合 8 位浮点 (HFP8) 训练和推理。神经信息处理系统进展, 32, 2019b。 -
Suzgun et al. (2022) Suzgun 等人 (2022)
M. Suzgun, N. Scales, N. Schärli, S. Gehrmann, Y. Tay, H. W. Chung, A. Chowdhery, Q. V. Le, E. H. Chi, D. Zhou, et al.
Challenging big-bench tasks and whether chain-of-thought can solve them.
arXiv preprint arXiv:2210.09261, 2022.
M. Suzgun, N. Scales, N. Schärli, S. Gehrmann, Y. Tay, H. W. Chung, A. Chowdhery, Q. V. Le, E. H. Chi, D. Zhou, 等人。挑战大基准任务以及链式推理能否解决这些问题。arXiv 预印本 arXiv:2210.09261, 2022。 -
Thakkar et al. (2023) Thakkar 等人 (2023)
V. Thakkar, P. Ramani, C. Cecka, A. Shivam, H. Lu, E. Yan, J. Kosaian, M. Hoemmen, H. Wu, A. Kerr, M. Nicely, D. Merrill, D. Blasig, F. Qiao, P. Majcher, P. Springer, M. Hohnerbach, J. Wang, and M. Gupta.
CUTLASS, Jan. 2023.
URL https://github.com/NVIDIA/cutlass.
V. Thakkar, P. Ramani, C. Cecka, A. Shivam, H. Lu, E. Yan, J. Kosaian, M. Hoemmen, H. Wu, A. Kerr, M. Nicely, D. Merrill, D. Blasig, F. Qiao, P. Majcher, P. Springer, M. Hohnerbach, J. Wang 和 M. Gupta. CUTLASS, 2023 年 1 月。URL https://github.com/NVIDIA/cutlass. -
Touvron et al. (2023a) Touvron 等人 (2023a)
H. Touvron, T. Lavril, G. Izacard, X. Martinet, M.-A. Lachaux, T. Lacroix, B. Rozière, N. Goyal, E. Hambro, F. Azhar, et al.
LLaMA: Open and efficient foundation language models.
arXiv preprint arXiv:2302.13971, 2023a.
H. Touvron, T. Lavril, G. Izacard, X. Martinet, M.-A. Lachaux, T. Lacroix, B. Rozière, N. Goyal, E. Hambro, F. Azhar, 等人。LLaMA:开放高效的基础语言模型。arXiv 预印本 arXiv:2302.13971, 2023a。 -
Touvron et al. (2023b) Touvron 等人 (2023b)
H. Touvron, L. Martin, K. Stone, P. Albert, A. Almahairi, Y. Babaei, N. Bashlykov, S. Batra, P. Bhargava, S. Bhosale, D. Bikel, L. Blecher, C. Canton-Ferrer, M. Chen, G. Cucurull, D. Esiobu, J. Fernandes, J. Fu, W. Fu, B. Fuller, C. Gao, V. Goswami, N. Goyal, A. Hartshorn, S. Hosseini, R. Hou, H. Inan, M. Kardas, V. Kerkez, M. Khabsa, I. Kloumann, A. Korenev, P. S. Koura, M. Lachaux, T. Lavril, J. Lee, D. Liskovich, Y. Lu, Y. Mao, X. Martinet, T. Mihaylov, P. Mishra, I. Molybog, Y. Nie, A. Poulton, J. Reizenstein, R. Rungta, K. Saladi, A. Schelten, R. Silva, E. M. Smith, R. Subramanian, X. E. Tan, B. Tang, R. Taylor, A. Williams, J. X. Kuan, P. Xu, Z. Yan, I. Zarov, Y. Zhang, A. Fan, M. Kambadur, S. Narang, A. Rodriguez, R. Stojnic, S. Edunov, and T. Scialom.
Llama 2: Open foundation and fine-tuned chat models.
CoRR, abs/2307.09288, 2023b.
10.48550/arXiv.2307.09288.
URL https://doi.org/10.48550/arXiv.2307.09288.
H. Touvron、L. Martin、K. Stone、P. Albert、A. Almahairi、Y. Babaei、N. Bashlykov、S. Batra、P. Bhargava、S. Bhosale、D. Bikel、L. Blecher、C. Canton-Ferrer、M. Chen、G. Cucurull、D. Esiobu、J. Fernandes、J. Fu、W. Fu、B. Fuller、C. Gao、V. Goswami、N. Goyal、A. Hartshorn、S. Hosseini、R. Hou、H. Inan、M. Kardas、V. Kerkez、M. Khabsa、I. Kloumann、A. Korenev、P. S. Koura、M. Lachaux、T. Lavril、J. Lee、D. Liskovich、Y. Lu、Y. Mao、X. Martinet、T. Mihaylov、P. Mishra、I. Molybog、Y. Nie、A. Poulton、J. Reizenstein、R. Rungta、K. Saladi、A. Schelten、R. Silva、E. M. Smith、R. Subramanian、X. E. Tan、B. Tang、R. Taylor、A. Williams、J. X. Kuan、P. Xu、Z. Yan、I. Zarov、Y. Zhang、A. Fan、M. Kambadur、S. Narang、A. Rodriguez、R. Stojnic、S. Edunov、和 T. Scialom。Llama 2: 开放基础和微调聊天模型。CoRR, abs/2307.09288, 2023b。 10.48550/arXiv.2307.09288。 URL https://doi.org/10.48550/arXiv.2307.09288。 -
Vaswani et al. (2017) Vaswani 等 (2017)
A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, Ł. Kaiser, and I. Polosukhin.
Attention is all you need.
Advances in neural information processing systems, 30, 2017.
A. Vaswani、N. Shazeer、N. Parmar、J. Uszkoreit、L. Jones、A. N. Gomez、Ł. Kaiser、和 I. Polosukhin。注意力机制就是你所需要的。一项神经信息处理系统的进展, 30, 2017。 -
Wang et al. (2024a) Wang 等 (2024a)
L. Wang, H. Gao, C. Zhao, X. Sun, and D. Dai.
Auxiliary-loss-free load balancing strategy for mixture-of-experts.
CoRR, abs/2408.15664, 2024a.
URL https://doi.org/10.48550/arXiv.2408.15664.
L. Wang、H. Gao、C. Zhao、X. Sun、和 D. Dai。无辅助损失的专家混合负载平衡策略。CoRR, abs/2408.15664, 2024a。URL https://doi.org/10.48550/arXiv.2408.15664。 -
Wang et al. (2024b) Wang 等 (2024b)
Y. Wang, X. Ma, G. Zhang, Y. Ni, A. Chandra, S. Guo, W. Ren, A. Arulraj, X. He, Z. Jiang, T. Li, M. Ku, K. Wang, A. Zhuang, R. Fan, X. Yue, and W. Chen.
Mmlu-pro: A more robust and challenging multi-task language understanding benchmark.
CoRR, abs/2406.01574, 2024b.
URL https://doi.org/10.48550/arXiv.2406.01574.
Y. Wang、X. Ma、G. Zhang、Y. Ni、A. Chandra、S. Guo、W. Ren、A. Arulraj、X. He、Z. Jiang、T. Li、M. Ku、K. Wang、A. Zhuang、R. Fan、X. Yue、和 W. Chen。Mmlu-pro: 更具鲁棒性和挑战性的多任务语言理解基准。CoRR, abs/2406.01574, 2024b。URL https://doi.org/10.48550/arXiv.2406.01574。 -
Wei et al. (2023) Wei 等 (2023)
T. Wei, J. Luan, W. Liu, S. Dong, and B. Wang.
Cmath: Can your language model pass chinese elementary school math test?, 2023.
T. Wei、J. Luan、W. Liu、S. Dong、和 B. Wang。Cmath: 您的语言模型能通过中国小学数学测试吗?, 2023。 -
Wortsman et al. (2023) Wortsman 等 (2023)
M. Wortsman, T. Dettmers, L. Zettlemoyer, A. Morcos, A. Farhadi, and L. Schmidt.
Stable and low-precision training for large-scale vision-language models.
Advances in Neural Information Processing Systems, 36:10271–10298, 2023.
M. Wortsman、T. Dettmers、L. Zettlemoyer、A. Morcos、A. Farhadi、和 L. Schmidt。 大规模视觉语言模型的稳定和低精度训练。神经信息处理系统的进展, 36:10271–10298, 2023。 -
Xi et al. (2023) Xi 等 (2023)
H. Xi, C. Li, J. Chen, and J. Zhu.
Training transformers with 4-bit integers.
Advances in Neural Information Processing Systems, 36:49146–49168, 2023.
H. Xi、C. Li、J. Chen、和 J. Zhu。使用 4 位整数训练 Transformer。神经信息处理系统的进展, 36:49146–49168, 2023。 -
Xia et al. (2024) Xia 等 (2024)
C. S. Xia, Y. Deng, S. Dunn, and L. Zhang.
Agentless: Demystifying llm-based software engineering agents.
arXiv preprint, 2024.
C. S. Xia、Y. Deng、S. Dunn、和 L. Zhang。Agentless: 解密基于大型语言模型的软件工程代理。arXiv 预印本, 2024。 -
Xia et al. (2023) Xia 等 (2023)
H. Xia, T. Ge, P. Wang, S. Chen, F. Wei, and Z. Sui.
Speculative decoding: Exploiting speculative execution for accelerating seq2seq generation.
In Findings of the Association for Computational Linguistics: EMNLP 2023, Singapore, December 6-10, 2023, pages 3909–3925. Association for Computational Linguistics, 2023.
URL https://doi.org/10.18653/v1/2023.findings-emnlp.257.
Xiao et al. (2023)
G. Xiao, J. Lin, M. Seznec, H. Wu, J. Demouth, and S. Han.
Smoothquant: Accurate and efficient post-training quantization for large language models.
In International Conference on Machine Learning, pages 38087–38099. PMLR, 2023.
Xu et al. (2020)
L. Xu, H. Hu, X. Zhang, L. Li, C. Cao, Y. Li, Y. Xu, K. Sun, D. Yu, C. Yu, Y. Tian, Q. Dong, W. Liu, B. Shi, Y. Cui, J. Li, J. Zeng, R. Wang, W. Xie, Y. Li, Y. Patterson, Z. Tian, Y. Zhang, H. Zhou, S. Liu, Z. Zhao, Q. Zhao, C. Yue, X. Zhang, Z. Yang, K. Richardson, and Z. Lan.
CLUE: A chinese language understanding evaluation benchmark.
In D. Scott, N. Bel, and C. Zong, editors, Proceedings of the 28th International Conference on Computational Linguistics, COLING 2020, Barcelona, Spain (Online), December 8-13, 2020, pages 4762–4772. International Committee on Computational Linguistics, 2020.
10.18653/v1/2020.coling-main.419.
URL https://doi.org/10.18653/v1/2020.coling-main.419.
Zellers et al. (2019)
R. Zellers, A. Holtzman, Y. Bisk, A. Farhadi, and Y. Choi.
HellaSwag: Can a machine really finish your sentence?
In A. Korhonen, D. R. Traum, and L. Màrquez, editors, Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28- August 2, 2019, Volume 1: Long Papers, pages 4791–4800. Association for Computational Linguistics, 2019.
10.18653/v1/p19-1472.
URL https://doi.org/10.18653/v1/p19-1472.
Zhong et al. (2023)
W. Zhong, R. Cui, Y. Guo, Y. Liang, S. Lu, Y. Wang, A. Saied, W. Chen, and N. Duan.
AGIEval: A human-centric benchmark for evaluating foundation models.
CoRR, abs/2304.06364, 2023.
10.48550/arXiv.2304.06364.
URL https://doi.org/10.48550/arXiv.2304.06364.
Zhou et al. (2023)
J. Zhou, T. Lu, S. Mishra, S. Brahma, S. Basu, Y. Luan, D. Zhou, and L. Hou.
Instruction-following evaluation for large language models.
arXiv preprint arXiv:2311.07911, 2023.
Appendix
Appendix A Contributions and Acknowledgments
Research & Engineering
Aixin Liu
刘艾欣
Bing Xue
薛冰
Bingxuan Wang
王炳轩
Bochao Wu
吴博超
Chengda Lu
卢承达
Chenggang Zhao
赵成刚
Chengqi Deng
邓成琦
Chenyu Zhang*
张晨宇*
Chong Ruan
阮冲
Damai Dai
戴达迈
Daya Guo
郭达雅
Dejian Yang
杨德建
Deli Chen
陈德立
Erhang Li
李二航
Fangyun Lin
林方昀
Fucong Dai
戴富聪
Fuli Luo*
罗福立*
Guangbo Hao
郝广博
Guanting Chen
陈冠廷
Guowei Li
李国炜
H. Zhang
Han Bao*
包晗*
Hanwei Xu
徐汉玮
Haocheng Wang*
王浩成*
Haowei Zhang
张浩伟
Honghui Ding
丁宏辉
Huajian Xin*
辛华剑*
Huazuo Gao
高华佐
Hui Qu
瞿辉
Jianzhong Guo
郭建中
Jiashi Li
李嘉实
Jiawei Wang*
王家维*
Jingchang Chen
陈靖昌
Jingyang Yuan
袁静阳
Junjie Qiu
邱俊杰
Junlong Li
李俊龙
Junxiao Song
宋俊潇
Kai Dong
董凯
Kai Hu*
胡凯*
Kaige Gao
高锴戈
Kang Guan
关康
Kexin Huang
黄克欣
Kuai Yu
余快
Lean Wang
王连
Lecong Zhang
张乐聪
Liang Zhao
赵亮
Litong Wang
王力通
Liyue Zhang
张丽月
Mingchuan Zhang
张明川
Minghua Zhang
张明华
Minghui Tang
唐明辉
Panpan Huang
黄盼盼
Peiyi Wang
王沛怡
Qiancheng Wang
王乾程
Qihao Zhu
朱祁豪
Qinyu Chen
陈钦宇
Qiushi Du
杜秋实
Ruiqi Ge
葛睿奇
Ruisong Zhang
张瑞松
Ruizhe Pan
潘睿哲
Runji Wang
王润基
Runxin Xu
徐润新
Ruoyu Zhang
张若愚
Shanghao Lu
路尚浩
Shangyan Zhou
周尚言
Shanhuang Chen
陈山煌
Shengfeng Ye
叶圣风
Shirong Ma
马士荣
Shiyu Wang
王世玉
Shuiping Yu
于水平
Shunfeng Zhou
周顺锋
Shuting Pan
潘淑婷
Tao Yun
云涛
Tian Pei
裴天
Wangding Zeng
曾望鼎
Wanjia Zhao*
赵婉佳*
Wen Liu
刘文
Wenfeng Liang
梁文峰
Wenjun Gao
高文俊
Wenqin Yu
于文琴
Wentao Zhang
张文韬
Xiao Bi
毕晓
Xiaodong Liu
刘晓东
Xiaohan Wang
王晓涵
Xiaokang Chen
陈晓康
Xiaokang Zhang
张晓康
Xiaotao Nie
聂晓涛
Xin Cheng
程鑫
Xin Liu
刘鑫
Xin Xie
解鑫
Xingchao Liu
刘星超
Xingkai Yu
于星凯
Xinyu Yang
杨新宇
Xinyuan Li
李新源
Xuecheng Su
苏学成
Xuheng Lin
林旭恒
Y.K. Li
李 Y.K.
Y.Q. Wang
王 Y.Q.
Y.X. Wei
魏 Y.X.
Yang Zhang
张扬
Yanhong Xu
徐艳红
Yao Li
李瑶
Yao Zhao
赵瑶
Yaofeng Sun
孙耀锋
Yaohui Wang
王耀辉
Yi Yu
余易
Yichao Zhang
张奕超
Yifan Shi
石一帆
Yiliang Xiong
熊亦良
Ying He
何颖
Yishi Piao
朴一诗
Yisong Wang
王奕松
Yixuan Tan
谭艺轩
Yiyang Ma*
马亿阳*
Yiyuan Liu
刘逸源
Yongqiang Guo
郭永强
Yu Wu
吴昱
Yuan Ou
欧元
Yuduan Wang
王玉端
Yue Gong
龚悦
Yuheng Zou
邹宇恒
Yujia He
何宇佳
Yunfan Xiong
熊云帆
Yuxiang Luo
罗玉祥
Yuxiang You
游玉翔
Yuxuan Liu
刘宇轩
Yuyang Zhou
周宇扬
Z.F. Wu
Z.Z. Ren
Zehui Ren
任泽辉
Zhangli Sha
沙张力
Zhe Fu
傅哲
Zhean Xu
许哲安
Zhenda Xie
谢震达
Zhengyan Zhang
张正言
Zhewen Hao
郝哲文
Zhibin Gou
苟志斌
Zhicheng Ma
马志成
Zhigang Yan
闫志刚
Zhihong Shao
邵志鸿
Zhiyu Wu
吴志宇
Zhuoshu Li
李琢书
Zihui Gu
顾子辉
Zijia Zhu
朱子佳
Zijun Liu*
刘子浔*
Zilin Li
李子霖
Ziwei Xie
谢子威
Ziyang Song
宋子扬
Ziyi Gao
高子逸
Zizheng Pan
潘子政
Data Annotation
Bei Feng
冯贝
Hui Li
李慧
J.L. Cai
Jiaqi Ni
倪佳祺
Lei Xu
许磊
Meng Li
李萌
Ning Tian
田宁
R.J. Chen
R.L. Jin
Ruyi Chen
陈如意
S.S. Li
Shuang Zhou
周爽
Tianyu Sun
孙天宇
X.Q. Li
Xiangyue Jin
金湘月
Xiaojin Shen
沈晓瑾
Xiaosha Chen
陈晓莎
Xiaowen Sun
孙晓文
Xiaoxiang Wang
王晓翔
Xinnan Song
宋新南
Xinyi Zhou
周欣怡
Y.X. Zhu
Yanhong Xu
许艳红
Yanping Huang
黄艳萍
Yaohui Li
李耀辉
Yi Zheng
郑易
Yuchen Zhu
朱宇辰
Yunxian Ma
马云仙
Zhen Huang
黄震
Zhipeng Xu
徐志鹏
Zhongyu Zhang
张仲宇
Business & Compliance
Dongjie Ji
季冬杰
Jian Liang
梁健
Jin Chen
陈瑾
Leyi Xia
夏乐义
Miaojun Wang
王淼君
Mingming Li
李明明
Peng Zhang
张鹏
Shaoqing Wu
吴少青
Shengfeng Ye
叶生峰
T. Wang
W.L. Xiao
Wei An
魏安
Xianzu Wang
王贤祖
Xinxia Shan
单新霞
Ying Tang
唐英
Yukun Zha
查宇坤
Yuting Yan
阎语婷
Zhen Zhang
张震
Within each role, authors are listed alphabetically by first name.
Names marked with * denote individuals who have departed from our team.
Appendix B Ablation Studies for Low-Precision Training
B.1 FP8 vs. BF16 Training
We validate our FP8 mixed-precision framework by comparing it with BF16 training on top of two baseline models across different scales. At the small scale, we train a baseline MoE model comprising approximately 16B total parameters on 1.33T tokens.
At the large scale, we train a baseline MoE model comprising approximately 230B total parameters on around 0.9T tokens. The training curves in Figure 10 show that the relative error remains below 0.25% with our high-precision accumulation and fine-grained quantization strategies.
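As a concrete illustration of the comparison metric, the following minimal Python sketch computes the per-step relative error between an FP8 loss curve and its BF16 baseline. The curves below are synthetic placeholders rather than the actual training logs; only the metric itself reflects the comparison described above.

import numpy as np

def relative_loss_error(bf16_loss, fp8_loss):
    # Per-step relative deviation of the FP8 loss curve from the BF16 baseline.
    bf16_loss = np.asarray(bf16_loss, dtype=np.float64)
    fp8_loss = np.asarray(fp8_loss, dtype=np.float64)
    return np.abs(fp8_loss - bf16_loss) / np.abs(bf16_loss)

# Synthetic example: an FP8 curve that tracks the BF16 curve to within 0.2%.
steps = np.arange(1, 1001)
bf16 = 10.0 / np.sqrt(steps)
fp8 = bf16 * 1.002
assert relative_loss_error(bf16, fp8).max() < 0.0025  # below the reported 0.25% threshold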
B.2 Discussion About Block-Wise Quantization
Although our tile-wise fine-grained quantization effectively mitigates the error introduced by feature outliers, it requires different groupings for activation quantization, i.e., 1x128 in the forward pass and 128x1 in the backward pass.
A similar process is also required for the activation gradient.
A straightforward strategy is to apply block-wise quantization per 128x128 elements, in the same way we quantize the model weights.
In this way, only a transposition is required for the backward pass.
Therefore, we conduct an experiment in which all tensors associated with Dgrad are quantized on a block-wise basis.
The results reveal that the Dgrad operation, which computes the activation gradients and back-propagates them to shallow layers in a chain-like manner, is highly sensitive to precision.
Specifically, block-wise quantization of activation gradients leads to model divergence on an MoE model comprising approximately 16B total parameters, trained for around 300B tokens.
We hypothesize that this sensitivity arises because activation gradients are highly imbalanced among tokens, resulting in token-correlated outliers (Xi et al., 2023).
These outliers cannot be effectively managed by a block-wise quantization approach.
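To make the grouping trade-off concrete, the following numpy sketch contrasts 1x128 tile-wise quantization with 128x128 block-wise quantization on an activation-gradient-like tensor containing a single outlier row. The rounding scheme, tensor shapes, and outlier magnitude are illustrative assumptions rather than the training implementation; the point is only that a shared 128x128 scale lets one token's outliers dominate the quantization step for the whole block.

import numpy as np

def quantize_dequantize(x, tile_rows, tile_cols):
    # Scale each (tile_rows x tile_cols) group by its absolute maximum,
    # round to a coarse integer grid (a crude stand-in for a low-precision
    # format), then rescale back to the original range.
    out = np.empty_like(x)
    rows, cols = x.shape
    for r in range(0, rows, tile_rows):
        for c in range(0, cols, tile_cols):
            tile = x[r:r + tile_rows, c:c + tile_cols]
            scale = np.abs(tile).max() / 127.0 + 1e-12
            out[r:r + tile_rows, c:c + tile_cols] = np.round(tile / scale) * scale
    return out

rng = np.random.default_rng(0)
grad = rng.normal(size=(128, 128)).astype(np.float32)
grad[5] *= 100.0  # a token-correlated outlier row, as hypothesized above

tile_err = np.abs(quantize_dequantize(grad, 1, 128) - grad).mean()
block_err = np.abs(quantize_dequantize(grad, 128, 128) - grad).mean()
print(f"1x128 tile-wise error:    {tile_err:.5f}")
print(f"128x128 block-wise error: {block_err:.5f}")  # much larger: the outlier row sets the shared scale

Under these assumptions, the per-tile scales confine the outlier's influence to its own 1x128 group, whereas the block-wise scale coarsens the quantization grid for every element in the 128x128 block.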
Appendix C Expert Specialization Patterns of the 16B Aux-Loss-Based and Aux-Loss-Free Models
We record the expert load of the 16B auxiliary-loss-based baseline and the auxiliary-loss-free model on the Pile test set.
The auxiliary-loss-free model tends to have greater expert specialization across all layers, as demonstrated in Figure 11.
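For reference, per-layer expert load of the kind compared in Figure 11 can be measured by counting how often each routed expert appears among the top-K selections over an evaluation set. The sketch below uses random gate scores and assumed sizes (4096 tokens, 64 routed experts, K=8) purely for illustration; it is not the evaluation code used for the Pile measurements.

import numpy as np

def expert_load(gate_scores, top_k):
    # Fraction of routed token slots assigned to each expert in one layer.
    num_tokens, num_experts = gate_scores.shape
    top_idx = np.argsort(-gate_scores, axis=-1)[:, :top_k]  # top-K experts per token
    counts = np.bincount(top_idx.ravel(), minlength=num_experts)
    return counts / (num_tokens * top_k)

rng = np.random.default_rng(0)
scores = rng.random((4096, 64))  # hypothetical gate outputs: 4096 tokens, 64 routed experts
load = expert_load(scores, top_k=8)
print(load.max() / load.mean())  # relative load; values far from 1 indicate uneven routing across experts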