s1: Simple test-time scaling
Abstract
Test-time scaling is a promising new approach to language modeling that uses extra test-time compute to improve performance. Recently, OpenAI’s o1 model showed this capability but did not publicly share its methodology, leading to many replication efforts. We seek the simplest approach to achieve test-time scaling and strong reasoning performance. First, we curate a small dataset s1K of 1,000 questions paired with reasoning traces relying on three criteria we validate through ablations: difficulty, diversity, and quality. Second, we develop budget forcing to control test-time compute by forcefully terminating the model’s thinking process or lengthening it by appending “Wait” multiple times to the model’s generation when it tries to end. This can lead the model to double-check its answer, often fixing incorrect reasoning steps. After supervised finetuning the Qwen2.5-32B-Instruct language model on s1K and equipping it with budget forcing, our model s1-32B exceeds o1-preview on competition math questions by up to 27% (MATH and AIME24). Further, scaling s1-32B with budget forcing allows extrapolating beyond its performance without test-time intervention: from 50% to 57% on AIME24. Our model, data, and code are open-source at https://github.com/simplescaling/s1.
Machine Learning, ICML, Large Language Models, Test-Time Scaling, Test-Time Compute
1 Introduction

Figure 1: Test-time scaling with s1-32B. We benchmark s1-32B on reasoning-intensive tasks and vary test-time compute.
Performance improvements of language models (LMs) over the past years have largely relied on scaling up train-time compute using large-scale self-supervised pretraining (Kaplan et al., 2020; Hoffmann et al., 2022). The creation of these powerful models has set the stage for a new scaling paradigm built on top of them: test-time scaling. The aim of this approach is to increase the compute at test time to get better results. There has been much work exploring this idea (Snell et al., 2024; Welleck et al., 2024), and the viability of this paradigm was recently validated by OpenAI o1 (OpenAI, 2024). o1 has demonstrated strong reasoning performance with consistent gains from scaling test-time compute. OpenAI describes their approach as using large-scale reinforcement learning (RL) implying the use of sizable amounts of data (OpenAI, 2024). This has led to various attempts to replicate their models relying on techniques like Monte Carlo Tree Search (Gao et al., 2024b; Zhang et al., 2024a), multi-agent approaches (Qin et al., 2024), and others (Wang et al., 2024a; Huang et al., 2024b, 2025). Among these approaches, DeepSeek R1 (DeepSeek-AI et al., 2025) has successfully replicated o1-level performance, also employing reinforcement learning via millions of samples and multiple training stages. However, despite the large number of o1 replication attempts, none have openly replicated a clear test-time scaling behavior. Thus, we ask: what is the simplest approach to achieve both test-time scaling and strong reasoning performance?
We show that training on only 1,000 samples with next-token prediction and controlling thinking duration via a simple test-time technique we refer to as budget forcing leads to a strong reasoning model that scales in performance with more test-time compute. Specifically, we construct s1K, which consists of 1,000 carefully curated questions paired with reasoning traces and answers distilled from Gemini Thinking Experimental (Google, 2024). We perform supervised fine-tuning (SFT) of an off-the-shelf pretrained model on our small dataset requiring just 26 minutes of training on 16 H100 GPUs. After training, we control the amount of test-time compute our model spends using budget forcing: (I) If the model generates more thinking tokens than a desired limit, we forcefully end the thinking process by appending an end-of-thinking token delimiter. Ending the thinking this way makes the model transition to generating its answer. (II) If we want the model to spend more test-time compute on a problem, we suppress the generation of the end-of-thinking token delimiter and instead append “Wait” to the model’s current reasoning trace to encourage more exploration. Equipped with this simple recipe – SFT on 1,000 samples and test-time budget forcing – our model s1-32B exhibits test-time scaling (Figure 1). Further, s1-32B is the most sample-efficient reasoning model and outperforms closed-source models like OpenAI’s o1-preview (Figure 2).
We conduct extensive ablation experiments targeting (a) our selection of 1,000 (1K) reasoning samples and (b) our test-time scaling. For (a), we find that jointly incorporating difficulty, diversity, and quality measures into our selection algorithm is important. Random selection, selecting samples with the longest reasoning traces, or only selecting maximally diverse samples all lead to significantly worse performance (around 30% on AIME24 on average). Training on our full data pool of 59K examples, a superset of s1K, does not offer substantial gains over our 1K selection. This highlights the importance of careful data selection and echoes prior findings for instruction tuning (Zhou et al., 2023). For (b), we define desiderata for test-time scaling methods to compare different approaches. Budget forcing leads to the best scaling as it has perfect controllability with a clear positive slope leading to strong performance.
In summary, our contributions are: We develop simple methods for creating a sample-efficient reasoning dataset (§2) and test-time scaling (§3); based on these we build s1-32B, which is competitive with o1-preview (§4); and we ablate subtleties of data (§5.1) and test-time scaling (§5.2). We end with a discussion to motivate future work on simple reasoning (§6). Our code, model, and data are open-source at https://github.com/simplescaling/s1.


Figure 2: s1K and s1-32B. (left) s1K is a dataset of 1,000 high-quality, diverse, and difficult questions with reasoning traces. (right) s1-32B, a 32B-parameter model finetuned on s1K, lies on the sample-efficiency frontier. See Table 1 for details on other models.
2 Reasoning data curation to create s1K
In this section, we describe our process for creating a large dataset first in §2.1 and then filtering it down to s1K in §2.2.
2.1 Initial collection of 59K samples
We collect an initial 59,029 questions from 16 sources following three guiding principles. Quality: Datasets should be high-quality; we always inspect samples and ignore datasets with, e.g., poor formatting; Difficulty: Datasets should be challenging and require significant reasoning effort; Diversity: Datasets should stem from various fields to cover different reasoning tasks. We collect datasets of two categories:
Curation of existing datasets
Our largest source is NuminaMATH (LI et al., 2024) with 30,660 mathematical problems from online websites. We also include historical AIME problems (1983-2021). To enhance diversity, we add OlympicArena (Huang et al., 2024a) with 4,250 questions spanning Astronomy, Biology, Chemistry, Computer Science, Geography, Mathematics, and Physics from various Olympiads. OmniMath (Gao et al., 2024a) adds 4,238 competition-level mathematics problems. We also include 2,385 problems from AGIEval (Zhong et al., 2023), which features questions from standardized tests like SAT and LSAT, covering English, Law, and Logic. We refer to Table 7 in §C for our other sources.
New datasets in quantitative reasoning
To complement these existing datasets, we create two original datasets. s1-prob consists of 182 questions from the probability section of Stanford University’s Statistics Department’s PhD Qualifying Exams (https://statistics.stanford.edu), accompanied by handwritten solutions that cover difficult proofs. The probability qualifying exam is held yearly and requires professional-level mathematical problem-solving. s1-teasers comprises 23 challenging brain-teasers commonly used in interviews for quantitative trading positions. Each sample consists of a problem and solution taken from PuzzledQuant (https://www.puzzledquant.com/). We only take examples with the highest difficulty level ("Hard").
For each question, we generate a reasoning trace and solution using the Google Gemini Flash Thinking API (Google, 2024) extracting its reasoning trace and response. This yields 59K triplets of a question, generated reasoning trace, and generated solution. Examples from our dataset are in §D.2. We decontaminate all samples against our evaluation questions (MATH500, GPQA Diamond, AIME24; §C.5) using 8-grams and deduplicate the data.
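To make the decontamination and deduplication steps concrete, here is a minimal sketch of 8-gram overlap checking against the evaluation questions; the whitespace tokenization, the `question` field name, and the removal rule (any shared 8-gram drops the sample) are our own assumptions rather than the paper's exact procedure.

```python
from typing import Iterable, List, Set, Tuple

def ngrams(text: str, n: int = 8) -> Set[Tuple[str, ...]]:
    """Word-level n-grams of a text (simple lowercase whitespace tokenization)."""
    tokens = text.lower().split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def decontaminate(samples: List[dict], eval_questions: Iterable[str], n: int = 8) -> List[dict]:
    """Drop any sample whose question shares an n-gram with any evaluation question."""
    eval_grams: Set[Tuple[str, ...]] = set()
    for q in eval_questions:
        eval_grams |= ngrams(q, n)
    return [s for s in samples if not (ngrams(s["question"], n) & eval_grams)]

def deduplicate(samples: List[dict]) -> List[dict]:
    """Keep the first occurrence of each exact question string."""
    seen, unique = set(), []
    for s in samples:
        if s["question"] not in seen:
            seen.add(s["question"])
            unique.append(s)
    return unique
```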
2.2 Final selection of 1K samples
We could directly train on our pool of 59K questions; however, our goal is to find the simplest approach with minimal resources. Thus, we go through three stages of filtering to arrive at a minimal set of 1,000 samples, relying on our three guiding data principles: Quality, Difficulty, and Diversity.
Quality
We first remove any questions where we ran into any API errors, reducing our dataset to 54,116 samples. Next, we filter out low-quality examples by checking whether they contain string patterns with formatting issues, such as ASCII art diagrams, non-existent image references, or inconsistent question numbering, reducing our dataset to 51,581 examples. From this pool, we identify 384 samples for our final 1,000 from datasets that we perceive as high-quality and not in need of further filtering (see §C.4 for details).
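A sketch of the string-pattern check described above; the specific regular expressions are illustrative guesses at the kinds of formatting issues mentioned (ASCII-art remnants, unresolved image references, stray question numbering), not the authors' exact rules.

```python
import re

# Illustrative patterns for formatting issues; the real filter rules are not published verbatim.
BAD_PATTERNS = [
    re.compile(r"(.)\1{20,}"),                    # long runs of one character, typical of ASCII art
    re.compile(r"!\[[^\]]*\]\([^)]*\)"),          # markdown image references with no actual image
    re.compile(r"^\s*Problem\s+\d+\s*$", re.M),   # stray, inconsistent question numbering
]

def is_well_formatted(question: str) -> bool:
    """Return False if the question matches any known formatting-issue pattern."""
    return not any(p.search(question) for p in BAD_PATTERNS)

# hypothetical usage on the 54,116 samples remaining after API-error removal:
# pool = [s for s in pool if is_well_formatted(s["question"])]
```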
Difficulty
For difficulty, we use two indicators: model performance and reasoning trace length. We evaluate two models on each question: Qwen2.5-7B-Instruct and Qwen2.5-32B-Instruct (Qwen et al., 2024), with correctness assessed by Claude 3.5 Sonnet comparing each attempt against the reference solution (see §C.3 for the grading protocol). We measure the token length of each reasoning trace to indicate problem difficulty using the Qwen2.5 tokenizer. This relies on the assumption that more difficult problems require more thinking tokens. Based on the grading, we remove questions that either Qwen2.5-7B-Instruct or Qwen2.5-32B-Instruct can solve correctly and thus may be too easy. By using two models we reduce the likelihood of an easy sample slipping through our filtering due to a rare mistake by one of the models on an easy question. This brings our total samples down to 24,496, setting the stage for the next round of subsampling based on diversity. While filtering with these two models may be optimized for our setup as we will also use Qwen2.5-32B-Instruct as our model to finetune, the idea of model-based filtering generalizes to other setups.
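As an illustration of this difficulty filter, the sketch below drops every question that either reference model already solves and records the Gemini trace length as a difficulty proxy. The field names and the grading results (assumed to come from the Claude 3.5 Sonnet grader) are illustrative assumptions, not the authors' exact code.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-32B-Instruct")

def is_hard(sample: dict) -> bool:
    """Keep a question only if neither Qwen2.5-7B-Instruct nor Qwen2.5-32B-Instruct solved it.

    `sample["solved_by"]` is assumed to hold the grader's verdicts, e.g.
    {"qwen2.5-7b-instruct": False, "qwen2.5-32b-instruct": False}.
    """
    return not any(sample["solved_by"].values())

def trace_length(sample: dict) -> int:
    """Token length of the Gemini reasoning trace, used as a difficulty indicator."""
    return len(tokenizer(sample["thinking_trajectory"])["input_ids"])

samples: list = []  # the 51,581 quality-filtered examples would be loaded here
pool = [dict(s, n_tokens=trace_length(s)) for s in samples if is_hard(s)]
```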
Diversity
To quantify diversity, we classify questions into domains using Claude 3.5 Sonnet based on the Mathematics Subject Classification (MSC) system (e.g., geometry, combinatorics, etc.) from the American Mathematical Society (https://mathscinet.ams.org/mathscinet/msc/msc2020.html). The taxonomy focuses on topics in mathematics but also includes other sciences such as biology, physics, and economics. To select our final examples from the pool of 24,496 questions, we first choose one domain uniformly at random. Then, we sample one problem from this domain according to a distribution that favors longer reasoning traces (see §C.4 for details) as motivated in Difficulty. We repeat this process until we have 1,000 total samples spanning 50 domains.
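The two-stage selection can be sketched as follows: pick a domain uniformly at random, then sample one question from that domain with probability weighted toward longer reasoning traces, and repeat until 1,000 samples are drawn. Weighting directly by token count is an illustrative simplification; the paper's exact distribution is described in §C.4.

```python
import random
from collections import defaultdict

def select_s1k(pool: list[dict], k: int = 1000, seed: int = 0) -> list[dict]:
    """Two-stage sampling: uniform over domains, length-weighted within a domain.

    Each sample is assumed to carry a "domain" label (MSC class assigned by Claude 3.5 Sonnet)
    and an "n_tokens" reasoning-trace length.
    """
    rng = random.Random(seed)
    by_domain = defaultdict(list)
    for s in pool:
        by_domain[s["domain"]].append(s)

    selected = []
    while len(selected) < k and by_domain:
        domain = rng.choice(list(by_domain))            # stage 1: uniform over domains
        candidates = by_domain[domain]
        weights = [c["n_tokens"] for c in candidates]   # stage 2: favor longer traces
        choice = rng.choices(candidates, weights=weights, k=1)[0]
        candidates.remove(choice)
        if not candidates:
            del by_domain[domain]
        selected.append(choice)
    return selected
```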
In §5.1, we will show that using our three criteria in combination is important, as only relying on quality, diversity, or difficulty in isolation leads to worse datasets. Some distilled generations are incorrect, which we allow in our data as we focus on capturing the reasoning process rather than entirely correct solutions. Our grader (§C.3) deems 53.6% correct in s1K and 63.0% in our follow-up s1K-1.1 (see §A).
3 Test-time scaling

Figure 3: Budget forcing with s1-32B. The model tries to stop after "...is 2.", but we suppress the end-of-thinking token delimiter and instead append "Wait", leading s1-32B to self-correct its answer.
3.1 Method
We classify test-time scaling methods into 1) Sequential, where later computations depend on earlier ones (e.g., a long reasoning trace), and 2) Parallel, where computations run independently (e.g., majority voting) (Snell et al., 2024; Brown et al., 2024). We focus on sequential scaling as intuitively we believe it should scale better, since later computations can build on intermediate results, allowing for deeper reasoning and iterative refinement. We propose new sequential scaling methods and ways to benchmark them.
Budget forcing
We propose a simple decoding-time intervention by forcing a maximum and/or minimum number of thinking tokens. Specifically, we enforce a maximum token count by simply appending the end-of-thinking token delimiter and optionally “Final Answer:” to early exit the thinking stage and make the model provide its current best answer. To enforce a minimum, we suppress the generation of the end-of-thinking token delimiter and optionally append the string “Wait” to the model’s current reasoning trace to encourage the model to reflect on its current generation. Figure 3 contains an example of how this simple approach can lead the model to arrive at a better answer.
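A minimal sketch of budget forcing at decode time is shown below, assuming a generic `generate(text, max_tokens, stop)` helper that returns newly generated text and stops either at the token limit or at a stop string; the delimiter string, the "Final Answer:" suffix, and the whitespace token count are simplifications of whatever the serving stack actually provides.

```python
THINK_END = "<|im_start|>answer"  # placeholder end-of-thinking delimiter; the real one is model-specific

def budget_forced_generate(generate, prompt: str, max_thinking_tokens: int = 8000, num_waits: int = 2):
    """Cap thinking at a token budget and append "Wait" when the model tries to stop early."""
    # Let the model think until it emits the end-of-thinking delimiter or hits the budget.
    trace = generate(prompt, max_tokens=max_thinking_tokens, stop=[THINK_END])
    for _ in range(num_waits):
        budget_left = max(0, max_thinking_tokens - len(trace.split()))
        if budget_left == 0:
            break  # maximum enforced: we end the thinking ourselves below
        trace += "Wait"  # minimum enforced: suppress end-of-thinking and encourage more exploration
        trace += generate(prompt + trace, max_tokens=budget_left, stop=[THINK_END])
    # Append the delimiter to force the transition to answering with the current best answer.
    answer = generate(prompt + trace + THINK_END + "Final Answer:", max_tokens=512, stop=None)
    return trace, answer
```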
Baselines
We benchmark budget forcing with: (I) Conditional length-control methods, which rely on telling the model in the prompt how long it should generate for. We group them by granularity into (a) Token-conditional control: We specify an upper bound of thinking tokens in the prompt; (b) Step-conditional control: We specify an upper bound of thinking steps, where each step is around 100 tokens; (c) Class-conditional control: We write two generic prompts that tell the model to either think for a short or long amount of time (see §E.1 for details). (II) Rejection sampling, which samples until a generation fits a predetermined compute budget. This oracle captures the posterior over responses conditioned on its length.
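For the rejection sampling baseline, one straightforward sketch is to resample at temperature 1 until the reasoning trace fits the target budget; the `sample_with_trace` helper and its return values are assumptions for illustration.

```python
def rejection_sample(sample_with_trace, prompt: str, max_thinking_tokens: int, max_tries: int = 1000):
    """Resample until the reasoning trace fits the budget; return the accepted answer and the try count.

    `sample_with_trace(prompt)` is an assumed helper that samples one generation (temperature 1)
    and returns (number_of_thinking_tokens, answer_text).
    """
    for attempt in range(1, max_tries + 1):
        n_tokens, answer = sample_with_trace(prompt)
        if n_tokens <= max_thinking_tokens:
            return answer, attempt
    return answer, max_tries  # give up and keep the last sample
```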
3.2 Metrics
We establish a set of desiderata as evaluation metrics to measure test-time scaling across methods. Importantly, we do not only care about the accuracy a method can achieve but also its controllability and test-time scaling slope. For each method we consider, we run a set of evaluations varying test-time compute on a fixed benchmark, e.g., AIME24. This produces a piece-wise linear function with compute, measured in thinking tokens, on the x-axis and accuracy on the y-axis (see Figure 1). We measure three metrics:
$$\mathrm{Control} = \frac{1}{|\mathcal{A}|}\sum_{a \in \mathcal{A}} \mathbb{1}\left(a_{\min} \le a \le a_{\max}\right) \qquad (1)$$
where $a_{\min}$ and $a_{\max}$ refer to a pre-specified minimum and maximum amount of test-time compute (in our case, thinking tokens) and $\mathcal{A}$ is the set of thinking-token amounts observed across the evaluation runs. We usually only constrain $a_{\max}$. As tokens generated correspond to the amount of test-time compute spent, this metric measures the extent to which a method allows controllability over the use of that test-time compute. We report it as a percentage with 100% being perfect control.
$$\mathrm{Scaling} = \frac{1}{\binom{|\mathcal{A}|}{2}}\sum_{\substack{a,b \in \mathcal{A}\\ b > a}} \frac{f(b)-f(a)}{b-a} \qquad (2)$$
Scaling is the average slope of the piece-wise linear function, where $f(a)$ denotes the accuracy achieved with $a$ thinking tokens. It must be positive for useful methods and larger is better.
$$\mathrm{Performance} = \max_{a \in \mathcal{A}} f(a) \qquad (3)$$
Performance is simply the maximum performance the method achieves on the benchmark. A method with monotonically increasing scaling achieves 100% performance on any benchmark in the limit. However, the methods we investigate eventually flatten out, or further scaling fails due to control or context window limitations.
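The three metrics can be computed directly from a set of evaluation runs. A small sketch, assuming each run is recorded as a pair of (average thinking tokens, accuracy); the example numbers below are hypothetical:

```python
from itertools import combinations

def control(runs, a_min=0.0, a_max=float("inf")):
    """Fraction of runs whose thinking-token amount falls inside [a_min, a_max] (Eq. 1)."""
    return sum(a_min <= a <= a_max for a, _ in runs) / len(runs)

def scaling(runs):
    """Average slope of accuracy over thinking tokens across all pairs of runs (Eq. 2)."""
    slopes = [(fb - fa) / (b - a) for (a, fa), (b, fb) in combinations(sorted(runs), 2) if b > a]
    return sum(slopes) / len(slopes)

def performance(runs):
    """Maximum accuracy achieved across runs (Eq. 3)."""
    return max(f for _, f in runs)

# hypothetical (thinking tokens, accuracy) pairs from runs with increasing compute budgets
runs = [(1000, 0.30), (2000, 0.40), (4000, 0.50), (8000, 0.57)]
print(control(runs, a_max=8000), scaling(runs), performance(runs))
```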

Figure 4: Sequential and parallel test-time scaling. (a) Sequential scaling via budget forcing: budget forcing shows clear scaling trends and extrapolates to some extent. For the three rightmost dots, we prevent the model from stopping its thinking 2/4/6 times, each time appending "Wait" to its current reasoning trace. (b) Parallel scaling via majority voting: for Qwen2.5-32B-Instruct, we evaluate each sample 64 times at temperature 1 and visualize performance when taking the majority vote across 2, 4, 8, 16, 32, and 64 of these samples.
4 Results
4.1 Setup
Training
We perform supervised finetuning on Qwen2.5-32B-Instruct using s1K to obtain our model s1-32B using basic hyperparameters outlined in §D. Finetuning took 26 minutes on 16 NVIDIA H100 GPUs with PyTorch FSDP.
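A minimal sketch of the finetuning stage: standard next-token-prediction (cross-entropy) training of the instruct model on the 1,000 s1K traces. The hyperparameters, dataset field names, and the plain PyTorch loop here are illustrative assumptions; the actual run used PyTorch FSDP on 16 H100s with the hyperparameters given in §D.

```python
import torch
from torch.utils.data import DataLoader
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "Qwen/Qwen2.5-32B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
model = AutoModelForCausalLM.from_pretrained(MODEL, torch_dtype=torch.bfloat16)

def collate(batch):
    # each example is assumed to carry "question", "thinking", and "answer" strings
    texts = [ex["question"] + ex["thinking"] + ex["answer"] for ex in batch]
    enc = tokenizer(texts, return_tensors="pt", padding=True, truncation=True, max_length=32768)
    enc["labels"] = enc["input_ids"].clone()          # plain next-token prediction over the full sequence
    enc["labels"][enc["attention_mask"] == 0] = -100  # ignore padding positions in the loss
    return enc

def train(dataset, epochs=5, lr=1e-5):
    loader = DataLoader(dataset, batch_size=1, shuffle=True, collate_fn=collate)
    optim = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for batch in loader:
            loss = model(**batch).loss  # causal-LM cross-entropy loss
            loss.backward()
            optim.step()
            optim.zero_grad()
```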
Evaluation
We select three representative reasoning benchmarks widely used in the field: AIME24 (Mathematical Association of America, 2024) has 30 problems that were used in the 2024 American Invitational Mathematics Examination (AIME) held from January 31 – February 1, 2024. AIME tests mathematical problem-solving with arithmetic, algebra, counting, geometry, number theory, probability, and other secondary school math topics. High-scoring high school students in the test are invited to participate in the United States of America Mathematics Olympiad (USAMO). All AIME answers are integers ranging from 000 to 999, inclusive. Some AIME problems rely on figures that we provide to our model using the vector graphics language Asymptote, as it cannot take image inputs. MATH500 (Hendrycks et al., 2021) is a benchmark of competition math problems of varying difficulty. We evaluate on the same 500 samples selected by OpenAI in prior work (Lightman et al., 2023). GPQA Diamond (Rein et al., 2023) consists of 198 PhD-level science questions from Biology, Chemistry and Physics. Experts with PhDs in the corresponding domains only achieved 69.7% on GPQA Diamond (OpenAI, 2024). When we write “GPQA” in the context of evaluation in this work, we always refer to the Diamond subset. We build on the “lm-evaluation-harness” framework (Gao et al., 2021; Biderman et al., 2024). Unless otherwise specified, we evaluate with a temperature of 0 (greedy) and measure accuracy (equivalent to pass@1).
Other models
We benchmark s1-32B against: OpenAI o1 series (OpenAI, 2024), closed-source models that popularized test-time scaling; DeepSeek r1 series (DeepSeek-AI et al., 2025), open-weight reasoning models with up to o1-level performance; Qwen’s QwQ-32B-preview (Team, 2024), a 32B open-weight reasoning model without disclosed methodology; Sky-T1-32B-Preview (Team, 2025) and Bespoke-32B (Labs, 2025), open models with open reasoning data distilled from QwQ-32B-preview and r1; Google Gemini 2.0 Flash Thinking Experimental (Google, 2024), the API that we distill from. As it has no official evaluation scores, we use the Gemini API to benchmark it ourselves. However, the “recitation error” of the Gemini API makes evaluation challenging (https://github.com/google/generative-ai-docs/issues/257). We circumvent this by manually inserting all 30 AIME24 questions in its web interface, where the error does not appear. However, we leave out MATH500 (500 questions) and GPQA Diamond (198 questions), thus they are N.A. in Table 1. Our model, s1-32B, is fully open including weights, reasoning data, and code.
Table 1: s1-32B is a strong open reasoning model. We evaluate s1-32B, Qwen, and Gemini (some entries are unknown (N.A.), see §4). Other results are from the respective reports (Qwen et al., 2024; Team, 2024; OpenAI, 2024; DeepSeek-AI et al., 2025; Labs, 2025; Team, 2025). # ex. = number of examples used for reasoning finetuning; BF = budget forcing. See §A for our better s1.1 model.
Model | # ex. | AIME 2024 | MATH 500 | GPQA Diamond
API only | | | |
o1-preview | N.A. | 44.6 | 85.5 | 73.3
o1-mini | N.A. | 70.0 | 90.0 | 60.0
o1 | N.A. | 74.4 | 94.8 | 77.3
Gemini 2.0 Flash Think. | N.A. | 60.0 | N.A. | N.A.
Open Weights | | | |
Qwen2.5-32B-Instruct | N.A. | 26.7 | 84.0 | 49.0
QwQ-32B | N.A. | 50.0 | 90.6 | 54.5
r1 | 800K | 79.8 | 97.3 | 71.5
r1-distill | 800K | 72.6 | 94.3 | 62.1
Open Weights and Open Data | | | |
Sky-T1 | 17K | 43.3 | 82.4 | 56.8
Bespoke-32B | 17K | 63.3 | 93.0 | 58.1
s1 w/o BF | 1K | 50.0 | 92.6 | 56.6
s1-32B | 1K | 56.7 | 93.0 | 59.6
4.2 Performance
Test-time scaling
Figure 1 shows that the performance of s1-32B with budget forcing scales with more test-time compute. In Figure 4 (left), we expand the plot from Figure 1 (middle), showing that while we can improve AIME24 performance using our budget forcing technique (§3) and more test-time compute, it eventually flattens out after suppressing the end-of-thinking token six times. Suppressing the end-of-thinking token delimiter too often can lead the model into repetitive loops instead of continued reasoning. In Figure 4 (right), we show that after training Qwen2.5-32B-Instruct on our 1,000 samples to produce s1-32B and equipping it with the simple budget forcing technique, it operates in a different scaling paradigm. Scaling test-time compute on the base model via majority voting cannot catch up with the performance of s1-32B, which validates our intuition from §3 that sequential scaling is more effective than parallel. We provide example generations of s1-32B in Figure 5.
Sample-efficiency
In Figure 2 (right) and Table 1 we compare s1-32B with other models. We find that s1-32B is the most sample-efficient open data reasoning model. It performs significantly better than our base model (Qwen2.5-32B-Instruct) despite just training it on an additional 1,000 samples. The concurrently released r1-32B shows stronger performance than s1-32B while also only using SFT (DeepSeek-AI et al., 2025). However, it is trained on 800× more reasoning samples. It is an open question whether one can achieve their performance with just 1,000 samples. Finally, our model nearly matches Gemini 2.0 Thinking on AIME24. As the data for s1-32B is distilled from Gemini 2.0, this shows our distillation procedure was likely effective.
[Figure 5 shows one full example generation per benchmark: an AIME24 combinatorial game problem (Alice and Bob alternately removing tokens from a stack; the model tabulates winning and losing positions, finds the pattern of losing positions, and counts them), a MATH500 problem computing the projection of a vector a onto a vector b via proj_b(a) = (a·b / ||b||^2) b, and a GPQA problem deriving the energy spectrum of a quantum particle in the 2D potential V(r, θ) = (1/2)kr^2 + (3/2)kr^2 cos^2(θ) by rewriting it in Cartesian coordinates.]

Figure 5: Example model outputs. We pick one question each from AIME24 (left), MATH500 (middle), and GPQA (right) for which our model generates the correct answer. Black text is the prompt, light blue text is the reasoning trace, and blue text is the answer of s1-32B. The gray ellipses [...] indicate that text was cut to fit the page; the actual generations are longer.
5 Ablations
5.1 Data Quantity, Diversity, and Difficulty
Table 2: s1K data ablations. We budget force (BF) a maximum of approximately 30,000 thinking tokens for all scores in this table. This performs slightly better than scores without BF, as it allows the model to finish with a best guess when stuck in an infinite loop. We report 95% paired bootstrap confidence intervals for the difference relative to the s1K model using 10,000 bootstrap samples. For example, the interval [-13%, 20%] means that, with 95% confidence, the true difference between 59K-full and s1K lies between -13% and +20%. If the entire interval is negative, e.g. [-27%, -3%], one can confidently say that performance is worse than s1K.
Model | AIME 2024 | MATH 500 | GPQA Diamond
1K-random | 36.7 [-26.7%, -3.3%] | 90.6 [-4.8%, 0.0%] | 52.0 [-12.6%, 2.5%]
1K-diverse | 26.7 [-40.0%, -10.0%] | 91.2 [-4.0%, 0.2%] | 54.6 [-10.1%, 5.1%]
1K-longest | 33.3 [-36.7%, 0.0%] | 90.4 [-5.0%, -0.2%] | 59.6 [-5.1%, 10.1%]
59K-full | 53.3 [-13.3%, 20.0%] | 92.8 [-2.6%, 2.2%] | 58.1 [-6.6%, 8.6%]
s1K | 50.0 | 93.0 | 57.6
In §2 we outlined our three guiding principles in curating s1K: Quality, Difficulty, and Diversity. Here we test the importance of combining them and the overall efficacy of our selection. Only Quality (1K-random): After obtaining our high-quality reasoning chains from Gemini, we select 1,000 samples at random; not relying on our difficulty and diversity filtering at all. Table 2 shows this approach performs much worse than s1K across all benchmarks. Only Diversity (1K-diverse): For this dataset, we sample uniformly across domains to maximize diversity disregarding any notion of difficulty. This approach also leads to poor performance similar to 1K-random. Only Difficulty (1K-longest): Here we rely on one of our difficulty indicators introduced in §2 by selecting the 1,000 samples with the longest reasoning traces. This approach significantly boosts GPQA performance but overall still falls short of using s1K. Maximize Quantity: Finally, we compare with just training on all of our 59K samples, a superset of all the 1K-sample versions. This leads to a strong model but uses much more resources. To finetune on 59K samples, we use 394 H100 GPU hours while s1-32B only required 7 H100 GPU hours. Moreover, relying only on s1K is extremely competitive as shown in §2. Overall, combining all three criteria – Quality, Difficulty, Diversity – via our methodology in §2 is key for sample-efficient reasoning training.
5.2 Test-time scaling methods
Table 3: Ablations on methods to scale test-time compute on AIME24. The last column gives the number of evaluation runs used to estimate the metrics; a higher value thus indicates more robustness. Bold indicates our chosen method and the best values. BF = budget forcing, TCC/SCC/CCC = token/step/class-conditional control, RS = rejection sampling.
Method | Control | Scaling | Performance | # runs
BF | 100% | 15 | 56.7 | 5
TCC | 40% | -24 | 40.0 | 5
TCC + BF | 100% | 13 | 40.0 | 5
SCC | 60% | 3 | 36.7 | 5
SCC + BF | 100% | 6 | 36.7 | 5
CCC | 50% | 25 | 36.7 | 2
RS | 100% | -35 | 40.0 | 5
Budget forcing
In Table 3 we compare the test-time scaling methods we have introduced in §3. Overall, we find that budget forcing provides perfect control, good scaling, and leads to our best AIME24 score. Thus, this is the method we use for s1-32B in Figure 1 and in §4. In Table 4, we compare different strings for extrapolating performance. We find that “Wait” generally gives the best performance.
Class-conditional control
We provide benchmark scores for this method in §E.1 and summarize three findings here: (1) Token-conditional control fails without budget forcing, as our model cannot reliably count tokens, even when trained to do so. (2) Under step-conditional control, the model generates a similar total number of tokens when given different step targets, as the model goes from few steps with many tokens per step to many steps with few tokens per step. Thus, the model learns to hack its way around the compute constraint, making the controllability of this method mediocre. (3) Class-conditional control can work: telling a model to simply think longer can increase its test-time compute and performance, which leads to the good scaling in Table 3.
Table 4: Budget forcing extrapolation ablations. We compare ignoring the end-of-thinking delimiter twice and appending either no string or various strings.
Model | AIME 2024 | MATH 500 | GPQA Diamond
No extrapolation | 50.0 | 93.0 | 57.6
2x without string | 50.0 | 90.2 | 55.1
2x “Alternatively” | 50.0 | 92.2 | 59.6
2x “Hmm” | 50.0 | 93.0 | 59.6
2x “Wait” | 53.3 | 93.0 | 59.6

Figure 6: Rejection sampling on AIME24 with s1-32B. We sample with a temperature of 1 until all generations contain fewer than (from left to right) 3500, 4000, 5000, 8000, and 16000 thinking tokens, requiring an average of 655, 97, 8, 3, 2, and 1 tries per sample.
Rejection sampling
Surprisingly, we find that simply sampling until the generation fits a specific length leads to an inverse scaling trend, as depicted in Figure 6. In §E.2 we inspect a question which was answered correctly by the model when rejection sampling with the shorter token budget, but not with the longer one. With the shorter budget the model directly jumps to the correct approach, while with the longer budget it backtracks a lot. We hypothesize that there is a correlation such that shorter generations tend to be the ones where the model was on the right track from the start, whereas longer ones tend to be ones where the model made mistakes and thus backtracks or questions itself. This leads to longer samples often being wrong when rejection sampling and thus the inverse scaling trend.
6 Discussion and related work
6.1 Sample-efficient reasoning
Models
There are a number of concurrent efforts to build models that replicate the performance of o1 (OpenAI, 2024). For example, DeepSeek-r1 and k1.5 (DeepSeek-AI et al., 2025; Team et al., 2025) are built with reinforcement learning methods, while others rely on SFT using tens of thousands of distilled examples (Team, 2025; Xu et al., 2025; Labs, 2025). We show that SFT on only 1,000 examples suffices to build a competitive reasoning model matching o1-preview and produces a model that lies on the pareto frontier (Figure 2). Further, we introduce budget forcing which combined with our reasoning model leads to the first reproduction of OpenAI’s test-time scaling curves (OpenAI, 2024). Why does supervised finetuning on just 1,000 samples lead to such performance gains? We hypothesize that the model is already exposed to large amounts of reasoning data during pretraining which spans trillions of tokens. Thus, the ability to perform reasoning is already present in our model. Our sample-efficient finetuning stage just activates it and we scale it further at test time with budget forcing. This is similar to the ”Superficial Alignment Hypothesis” presented in LIMA (Zhou et al., 2023), where the authors find that 1,000 examples can be sufficient to align a model to adhere to user preferences.
Benchmarks and methods
To evaluate and push the limits of these models, increasingly challenging benchmarks have been introduced, such as Olympiad-level science competitions (He et al., 2024; Jain et al., 2024; Zhong et al., 2023) and others (Srivastava et al., 2023; Glazer et al., 2024; Su et al., 2024; Kim et al., 2024; Phan et al., 2025). To enhance models’ performance on reasoning-related tasks, researchers have pursued several strategies: Prior works have explored continuing training language models on specialized corpora related to mathematics and science (Azerbayev et al., 2023; Yang et al., 2024), sometimes even synthetically generated data (Yu et al., 2024). Others have developed training methodologies specifically aimed at reasoning performance (Zelikman et al., 2022, 2024; Luo et al., 2025; Yuan et al., 2025; Wu et al., 2024a). Another significant line of work focuses on prompting-based methods to elicit and improve reasoning abilities, including methods like Chain-of-Thought prompting (Wei et al., 2023; Yao et al., 2023a, b; Bi et al., 2023; Fu et al., 2023; Zhang et al., 2024b; Xiang et al., 2025; Hu et al., 2024; Diao et al., 2024). These combined efforts aim to advance the reasoning ability of language models, enabling them to handle more complex and abstract tasks effectively.
6.2 Test-time scaling
Methods
As we introduce in §3, we differentiate two methods to scale test-time compute: parallel and sequential. The former relies on multiple solution attempts generated in parallel and selecting the best outcome via specific criteria. These criteria include choosing the most frequent response for majority voting or the best response based on an external reward for Best-of-N (Brown et al., 2024; Irvine et al., 2023; Levi, 2024). Unlike repeated sampling, previous sequential scaling methods let the model generate solution attempts sequentially based on previous attempts, allowing it to refine each attempt based on previous outcomes (Snell et al., 2024; Hou et al., 2025; Lee et al., 2025). Tree-based search methods (Gandhi et al., 2024; Wu et al., 2024b) offer a hybrid approach between sequential and parallel scaling, such as Monte-Carlo Tree Search (MCTS) (Liu et al., 2024; Zhang et al., 2023; Zhou et al., 2024; Choi et al., 2023) and guided beam search (Xie et al., 2023). REBASE (Wu et al., 2024b) employs a process reward model to balance exploitation and pruning during tree search. Empirically, REBASE has been shown to outperform sampling-based methods and MCTS (Wu et al., 2024b). Reward models (Lightman et al., 2023; Wang et al., 2024b, c) play a key role in these methods. They come in two variants: outcome reward models and process reward models. Outcome reward models (Xin et al., 2024; Ankner et al., 2024) assign a score to complete solutions and are particularly useful in Best-of-N selection, while process reward models (Lightman et al., 2023; Wang et al., 2024b; Wu et al., 2024b) assess individual reasoning steps and are effective in guiding tree-based search methods.
Limits to further test-time scaling
We have shown that budget forcing allows extrapolating test-time compute in §4, e.g., improving AIME24 performance from 50% to 57%. However, it has two key limitations when scaling further: it eventually flattens out (Figure 4), and the context window of the underlying language model constrains it. Despite these constraints, our work shows test-time scaling across a wide range of accuracies (Figure 1), partly because scaling down test-time compute behaves predictably and does not suffer from these constraints.
Continuing test-time scaling will require approaches that can further extrapolate test-time compute. How can we get such extrapolation? There may be improvements to budget forcing such as rotating through different strings, not only “Wait”, or combining it with frequency penalties or higher temperature to avoid repetitive loops. An exciting direction for future work is also researching whether applying budget forcing to a reasoning model trained with reinforcement learning yields better extrapolation; or if RL allows for new ways of test-time scaling beyond budget forcing. Our work defines the right metrics (§3.2) – Control, Scaling, and Performance – to enable future research and progress on extrapolating test-time compute.

Figure 7: Scaling further with parallel scaling methods. All metrics are averaged over the 30 questions in AIME24. Average thinking tokens for REBASE do not account for the additional compute from the reward model. For sequential scaling, we prompt the model to use up to (from left to right) 32, 64, 256, and 512 steps. For REBASE and majority voting, we generate 16 parallel trajectories to aggregate across.
Parallel scaling as a solution
Parallel scaling offers one solution to the limits of sequential scaling, thus we augment our sequentially scaled model with two methods: (I) Majority voting: After generating multiple solutions, the final solution is the most frequent one across generations; (II) Tree search via REBASE: We use the REBASE process reward model, which is initialized from LLaMA-34B and further finetuned on a synthetic process reward modeling dataset (Wu et al., 2024b). We then aggregate the solutions generated by REBASE via majority voting. As shown in Figure 7, augmenting our model with REBASE scales better than majority voting, and even sequential scaling in this scenario. However, REBASE requires an additional forward pass at each step for the reward model, adding some computation overhead. For sequential scaling, when prompted to use up to 512 steps, for 12 out of the 30 evaluation questions the model generates a response that exceeds the context window, leading to a large performance drop. Overall, we find that these parallel scaling methods complement sequential scaling, thus offering an avenue for scaling test-time compute even further, beyond fixed context windows.
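As a concrete picture of the parallel aggregation used here, the sketch below implements majority voting over independently sampled final answers; REBASE additionally scores intermediate steps with a process reward model, which is omitted. The `sample_answer` helper is an assumption for illustration.

```python
from collections import Counter

def majority_vote(sample_answer, prompt: str, k: int = 16) -> str:
    """Sample k answers independently (e.g., at temperature 1) and return the most frequent one.

    `sample_answer(prompt)` is an assumed helper that returns the model's final answer string
    for one sampled generation.
    """
    answers = [sample_answer(prompt) for _ in range(k)]
    return Counter(answers).most_common(1)[0][0]
```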
Impact Statement
Language models with strong reasoning capabilities have the potential to greatly enhance human productivity, from assisting in complex decision-making to driving scientific breakthroughs. However, recent advances in reasoning, such as OpenAI’s o1 and DeepSeek’s r1, lack transparency, limiting broader research progress. Our work aims to push the frontier of reasoning in a fully open manner, fostering innovation and collaboration to accelerate advancements that ultimately benefit society.
Acknowledgements
We thank Ryan Marten for generating traces from DeepSeek r1 for s1.1 using Bespoke Curator (Marten et al., 2025). This work was partly conducted using the Stanford Marlowe GPU cluster (Kapfer et al., 2025), made possible by financial support from Stanford University. We thank Alexander M. Rush, Andrew Ilyas, Banghua Zhu, Chenglei Si, Chunting Zhou, John Yang, Ludwig Schmidt, Samy Jelassi, Suhas Kotha, Tengyu Ma, Xuechen Li, Yu Sun, and Yue Zhang for very constructive discussions.
References
-
Ankner et al. (2024) 安克纳等(2024 年)
Ankner, Z., Paul, M., Cui, B., Chang, J. D., and Ammanabrolu, P.
Critique-out-loud reward models, 2024.
URL https://arxiv.org/abs/2408.11791.
Ankner, Z., Paul, M., Cui, B., Chang, J. D., and Ammanabrolu, P. 大声批评奖励模型,2024。URL https://arxiv.org/abs/2408.11791。 -
Arora et al. (2023) 阿罗拉等人(2023 年)
Arora, D., Singh, H. G., and Mausam.
Have llms advanced enough? a challenging problem solving benchmark for large language models, 2023.
URL https://arxiv.org/abs/2305.15074.
阿罗拉,D.,辛格,H.G.,和马乌萨姆。llms是否足够先进?大型语言模型的一个具有挑战性的问题解决基准,2023。URL https://arxiv.org/abs/2305.15074。 -
Azerbayev et al. (2023) 阿泽尔拜夫等人(2023 年)
Azerbayev, Z., Schoelkopf, H., Paster, K., Santos, M. D., McAleer, S., Jiang, A. Q., Deng, J., Biderman, S., and Welleck, S.
Llemma: An open language model for mathematics, 2023.
阿泽尔拜夫,Z.,舍尔科普夫,H.,帕斯特,K.,桑托斯,M. D.,麦克利尔,S.,江,A. Q.,邓,J.,比德曼,S.,韦莱克,S. L.:Llemma:一个用于数学的开放语言模型,2023 年。 -
Bi et al. (2023) Bi 等(2023 年)
Bi, Z., Zhang, N., Jiang, Y., Deng, S., Zheng, G., and Chen, H.
When do program-of-thoughts work for reasoning?, 2023.
URL https://arxiv.org/abs/2308.15452.
思维程序何时适用于推理?2023。URL https://arxiv.org/abs/2308.15452 -
Biderman et al. (2024) 比德曼等人(2024 年)
Biderman, S., Schoelkopf, H., Sutawika, L., Gao, L., Tow, J., Abbasi, B., Aji, A. F., Ammanamanchi, P. S., Black, S., Clive, J., DiPofi, A., Etxaniz, J., Fattori, B., Forde, J. Z., Foster, C., Hsu, J., Jaiswal, M., Lee, W. Y., Li, H., Lovering, C., Muennighoff, N., Pavlick, E., Phang, J., Skowron, A., Tan, S., Tang, X., Wang, K. A., Winata, G. I., Yvon, F., and Zou, A.
Lessons from the trenches on reproducible evaluation of language models, 2024.
比德曼,S.,舍尔科普夫,H.,苏塔维卡,L.,高,L.,托,J.,阿巴斯,B.,阿吉,A. F.,阿曼纳曼奇,P. S.,布莱克,S.,克莱夫,J.,迪波菲,A.,埃查尼兹,J.,法托里,B.,福德,J. Z.,福斯特,C.,胡斯,J.,贾斯瓦尔,M.,李,W. Y.,李,H.,洛弗林,C.,穆尼希霍夫,N.,帕夫利克,E.,方,J.,斯科罗恩,A.,谭,S.,唐,X.,王,K. A.,温纳塔,G. I.,于冯,F.,和邹,A. 从战场上的经验谈语言模型的可重复性评估,2024。 -
Brown et al. (2024) Brown 等人(2024 年)
Brown, B., Juravsky, J., Ehrlich, R., Clark, R., Le, Q. V., Ré, C., and Mirhoseini, A.
Large language monkeys: Scaling inference compute with repeated sampling, 2024.
URL https://arxiv.org/abs/2407.21787.
布朗,B.,尤拉夫斯基,J.,厄利希,R.,克拉克,R.,黎,Q. V.,雷,C.,和米尔霍塞尼,A. 大语言猴子:通过重复采样扩展推理计算,2024。URL https://arxiv.org/abs/2407.21787。 -
Cesista (2024)
Cesista, F. L.
Multimodal structured generation: Cvpr’s 2nd mmfm challenge technical report, 2024.
URL https://arxiv.org/abs/2406.11403.
Cesista, F. L. 多模态结构化生成:Cvpr 的第二届 mmfm 挑战技术报告,2024。URL https://arxiv.org/abs/2406.11403。 -
Chen et al. (2023) 陈等(2023)
Chen, W., Yin, M., Ku, M., Lu, P., Wan, Y., Ma, X., Xu, J., Wang, X., and Xia, T.
Theoremqa: A theorem-driven question answering dataset, 2023.
URL https://arxiv.org/abs/2305.12524.
陈,W.,尹,M.,库,M.,卢,P.,万,Y.,马,X.,徐,J.,王,X.,夏,T. 《Theoremqa:一个以定理驱动的问答数据集》,2023。URL https://arxiv.org/abs/2305.12524。 -
Choi et al. (2023) 崔等. (2023)
Choi, S., Fang, T., Wang, Z., and Song, Y.
Kcts: Knowledge-constrained tree search decoding with token-level hallucination detection, 2023.
URL https://arxiv.org/abs/2310.09044.
Choi, S., Fang, T., Wang, Z. 和 Song, Y. KCTS:基于知识约束的树搜索解码与 token 级幻觉检测,2023。URL https://arxiv.org/abs/2310.09044。 -
DeepSeek-AI et al. (2025)
DeepSeek-AI 等人(2025 年) DeepSeek-AI, Guo, D., Yang, D., Zhang, H., Song, J., Zhang, R., Xu, R., Zhu, Q., Ma, S., Wang, P., Bi, X., Zhang, X., Yu, X., Wu, Y., Wu, Z. F., Gou, Z., Shao, Z., Li, Z., Gao, Z., Liu, A., Xue, B., Wang, B., Wu, B., Feng, B., Lu, C., Zhao, C., Deng, C., Zhang, C., Ruan, C., Dai, D., Chen, D., Ji, D., Li, E., Lin, F., Dai, F., Luo, F., Hao, G., Chen, G., Li, G., Zhang, H., Bao, H., Xu, H., Wang, H., Ding, H., Xin, H., Gao, H., Qu, H., Li, H., Guo, J., Li, J., Wang, J., Chen, J., Yuan, J., Qiu, J., Li, J., Cai, J. L., Ni, J., Liang, J., Chen, J., Dong, K., Hu, K., Gao, K., Guan, K., Huang, K., Yu, K., Wang, L., Zhang, L., Zhao, L., Wang, L., Zhang, L., Xu, L., Xia, L., Zhang, M., Zhang, M., Tang, M., Li, M., Wang, M., Li, M., Tian, N., Huang, P., Zhang, P., Wang, Q., Chen, Q., Du, Q., Ge, R., Zhang, R., Pan, R., Wang, R., Chen, R. J., Jin, R. L., Chen, R., Lu, S., Zhou, S., Chen, S., Ye, S., Wang, S., Yu, S., Zhou, S., Pan, S., Li, S. S., Zhou, S., Wu, S., Ye, S., Yun, T., Pei, T., Sun, T., Wang, T., Zeng, W., Zhao, W., Liu, W., Liang, W., Gao, W., Yu, W., Zhang, W., Xiao, W. L., An, W., Liu, X., Wang, X., Chen, X., Nie, X., Cheng, X., Liu, X., Xie, X., Liu, X., Yang, X., Li, X., Su, X., Lin, X., Li, X. Q., Jin, X., Shen, X., Chen, X., Sun, X., Wang, X., Song, X., Zhou, X., Wang, X., Shan, X., Li, Y. K., Wang, Y. Q., Wei, Y. X., Zhang, Y., Xu, Y., Li, Y., Zhao, Y., Sun, Y., Wang, Y., Yu, Y., Zhang, Y., Shi, Y., Xiong, Y., He, Y., Piao, Y., Wang, Y., Tan, Y., Ma, Y., Liu, Y., Guo, Y., Ou, Y., Wang, Y., Gong, Y., Zou, Y., He, Y., Xiong, Y., Luo, Y., You, Y., Liu, Y., Zhou, Y., Zhu, Y. X., Xu, Y., Huang, Y., Li, Y., Zheng, Y., Zhu, Y., Ma, Y., Tang, Y., Zha, Y., Yan, Y., Ren, Z. Z., Ren, Z., Sha, Z., Fu, Z., Xu, Z., Xie, Z., Zhang, Z., Hao, Z., Ma, Z., Yan, Z., Wu, Z., Gu, Z., Zhu, Z., Liu, Z., Li, Z., Xie, Z., Song, Z., Pan, Z., Huang, Z., Xu, Z., Zhang, Z., and Zhang, Z. Deepseek-r1: Incentivizing reasoning capability in llms via reinforcement learning, 2025. URL https://arxiv.org/abs/2501.12948.
DeepSeek-AI, 国,D.,杨,D.,张,H.,宋,J.,张,R.,徐,R.,朱,Q.,马,S.,王,P.,毕,X.,张,X.,余,X.,吴,Y.,吴,Z. F.,顾,Z.,邵,Z.,李,Z.,高,Z.,刘,A.,薛,B.,王,B.,吴,B.,冯,B.,陆,C.,赵,C.,邓,C.,张,C.,阮,C.,戴,D.,陈,D.,季,D.,李,E.,林,F.,戴,F.,罗,F.,郝,G.,陈,G.,李,G.,张,H.,包,H.,徐,H.,王,H.,丁,H.,辛,H.,高,H.,曲,H.,李,H.,郭,J.,李,J.,王,J.,陈,J.,袁,J.,邱,J.,李,J.,蔡,J. L.,倪,J.,梁,J.,陈,J.,董,K.,胡,K.,高,K.,关,K.,黄,K.,余,K.,王,L.,张,L.,赵,L.,王,L.,张,L.,徐,L.,夏,L.,张,M.,张,M.,唐,M.,李,M.,王,M.,李,M.,田,N.,黄,P.,张,P.,王,Q.,陈,Q.,杜,Q.,葛,R.,张,R.,潘,R.,王,R.,陈,R. J.,金,R. L.,陈,R.,卢,S.,周,S.,陈,S.,叶,S.,王,S.,余,S.,周,S.,潘,S.,李,S. S.,周,S.,吴,S.,叶,S.,云,T.,裴,T.,孙,T.,王,T.,曾,W.,赵,W.,刘,W.,梁,W.,高,W.,余,W.,张,W.,肖,W. L., An, W., 刘, X., 王晓, X., 陈, X., 邢, X., 成, X., 刘, X., 谢晓, X., 刘, X., 杨, X., 李, X., 苏, X., 林, X., 李, X. Q., 金, X., 沈晓, X., 陈, X., 孙, X., 王晓, X., 宋, X., 周晓, X., 王晓, X., 山晓, X., 李, Y. K., 王晓, Y. Q., 魏晓, Y. X., 张晓, Y., 徐晓, Y., 李, Y., 赵晓, Y., 孙晓, Y., 王晓, Y., 余晓, Y., 张晓, Y., 石晓, Y., 胸晓, Y., 何晓, Y., 蔡晓, Y., 王晓, Y., 谭晓, Y., 马晓, Y., 刘晓, Y., 郭晓, Y., 欧晓, Y., 王晓, Y., 龚晓, Y., 邹晓, Y., 何晓, Y., 胸晓, Y., 罗晓, Y., 有晓, Y., 刘晓, Y., 周晓, Y., 朱晓, Y. X., 徐晓, Y., 黄晓, Y., 李晓, Y., 郑晓, Y., 朱晓, Y., 马晓, Y., 唐晓, Y., 赵晓, Y., 颜晓, Y., 任志, Z. Z., 任, Z., 沙晓, Z., 付晓, Z., 徐晓, Z., 谢晓, Z., 张晓, Z., 郝晓, Z., 马晓, Z., 艳晓, Z., 吴晓, Z., 郭晓, Z., 朱晓, Z., 刘晓, Z., 李晓, Z., 谢晓, Z., 宋晓, Z., 潘晓, Z., 黄晓, Z., 徐晓, Z., 张晓, Z., 和晓, Z. Deepseek-r1:通过强化学习激励llms中的推理能力,2025。URL https://arxiv.org/abs/2501.12948. -
- Diao et al. (2024) Diao, S., Wang, P., Lin, Y., Pan, R., Liu, X., and Zhang, T. Active prompting with chain-of-thought for large language models, 2024. URL https://arxiv.org/abs/2302.12246.
- Dubey et al. (2024) Dubey, A., Jauhri, A., Pandey, A., Kadian, A., Al-Dahle, A., Letman, A., Mathur, A., Schelten, A., Yang, A., Fan, A., Goyal, A., Hartshorn, A., Yang, A., Mitra, A., Sravankumar, A., Korenev, A., Hinsvark, A., Rao, A., Zhang, A., Rodriguez, A., Gregerson, A., et al. The llama 3 herd of models, 2024. URL https://arxiv.org/abs/2407.21783.
- Fu et al. (2023) Fu, Y., Peng, H., Sabharwal, A., Clark, P., and Khot, T. Complexity-based prompting for multi-step reasoning, 2023. URL https://arxiv.org/abs/2210.00720.
- Gandhi et al. (2024) Gandhi, K., Lee, D., Grand, G., Liu, M., Cheng, W., Sharma, A., and Goodman, N. D. Stream of search (sos): Learning to search in language, 2024. URL https://arxiv.org/abs/2404.03683.
- Gao et al. (2024a) Gao, B., Song, F., Yang, Z., Cai, Z., Miao, Y., Dong, Q., Li, L., Ma, C., Chen, L., Xu, R., Tang, Z., Wang, B., Zan, D., Quan, S., Zhang, G., Sha, L., Zhang, Y., Ren, X., Liu, T., and Chang, B. Omni-math: A universal olympiad level mathematic benchmark for large language models, 2024a. URL https://arxiv.org/abs/2410.07985.
- Gao et al. (2021) Gao, L., Tow, J., Biderman, S., Black, S., DiPofi, A., Foster, C., Golding, L., Hsu, J., McDonell, K., Muennighoff, N., Phang, J., Reynolds, L., Tang, E., Thite, A., Wang, B., Wang, K., and Zou, A. A framework for few-shot language model evaluation, September 2021. URL https://doi.org/10.5281/zenodo.5371628.
- Gao et al. (2024b) Gao, Z., Niu, B., He, X., Xu, H., Liu, H., Liu, A., Hu, X., and Wen, L. Interpretable contrastive monte carlo tree search reasoning, 2024b. URL https://arxiv.org/abs/2410.01707.
- Glazer et al. (2024) Glazer, E., Erdil, E., Besiroglu, T., Chicharro, D., Chen, E., Gunning, A., Olsson, C. F., Denain, J.-S., Ho, A., de Oliveira Santos, E., Järviniemi, O., Barnett, M., Sandler, R., Vrzala, M., Sevilla, J., Ren, Q., Pratt, E., Levine, L., Barkley, G., Stewart, N., Grechuk, B., Grechuk, T., Enugandla, S. V., and Wildon, M. Frontiermath: A benchmark for evaluating advanced mathematical reasoning in ai, 2024. URL https://arxiv.org/abs/2411.04872.
- Google (2024) Google. Gemini 2.0 flash thinking mode (gemini-2.0-flash-thinking-exp-1219), December 2024. URL https://cloud.google.com/vertex-ai/generative-ai/docs/thinking-mode.
- Groeneveld et al. (2024) Groeneveld, D., Beltagy, I., Walsh, P., Bhagia, A., Kinney, R., Tafjord, O., Jha, A. H., Ivison, H., Magnusson, I., Wang, Y., Arora, S., Atkinson, D., Authur, R., Chandu, K. R., Cohan, A., Dumas, J., Elazar, Y., Gu, Y., Hessel, J., Khot, T., Merrill, W., Morrison, J., Muennighoff, N., Naik, A., Nam, C., Peters, M. E., Pyatkin, V., Ravichander, A., Schwenk, D., Shah, S., Smith, W., Strubell, E., Subramani, N., Wortsman, M., Dasigi, P., Lambert, N., Richardson, K., Zettlemoyer, L., Dodge, J., Lo, K., Soldaini, L., Smith, N. A., and Hajishirzi, H. Olmo: Accelerating the science of language models, 2024.
- He et al. (2024) He, C., Luo, R., Bai, Y., Hu, S., Thai, Z. L., Shen, J., Hu, J., Han, X., Huang, Y., Zhang, Y., Liu, J., Qi, L., Liu, Z., and Sun, M. Olympiadbench: A challenging benchmark for promoting agi with olympiad-level bilingual multimodal scientific problems, 2024. URL https://arxiv.org/abs/2402.14008.
- Hendrycks et al. (2021) Hendrycks, D., Burns, C., Kadavath, S., Arora, A., Basart, S., Tang, E., Song, D., and Steinhardt, J. Measuring mathematical problem solving with the math dataset, 2021. URL https://arxiv.org/abs/2103.03874.
- Hoffmann et al. (2022) Hoffmann, J., Borgeaud, S., Mensch, A., Buchatskaya, E., Cai, T., Rutherford, E., de Las Casas, D., Hendricks, L. A., Welbl, J., Clark, A., Hennigan, T., Noland, E., Millican, K., van den Driessche, G., Damoc, B., Guy, A., Osindero, S., Simonyan, K., Elsen, E., Rae, J. W., Vinyals, O., and Sifre, L. Training compute-optimal large language models, 2022. URL https://arxiv.org/abs/2203.15556.
- Hou et al. (2025) Hou, Z., Lv, X., Lu, R., Zhang, J., Li, Y., Yao, Z., Li, J., Tang, J., and Dong, Y. Advancing language model reasoning through reinforcement learning and inference scaling, 2025. URL https://arxiv.org/abs/2501.11651.
- Hu et al. (2024) Hu, Y., Shi, W., Fu, X., Roth, D., Ostendorf, M., Zettlemoyer, L., Smith, N. A., and Krishna, R. Visual sketchpad: Sketching as a visual chain of thought for multimodal language models, 2024. URL https://arxiv.org/abs/2406.09403.
- Huang et al. (2024a) Huang, Z., Wang, Z., Xia, S., Li, X., Zou, H., Xu, R., Fan, R.-Z., Ye, L., Chern, E., Ye, Y., Zhang, Y., Yang, Y., Wu, T., Wang, B., Sun, S., Xiao, Y., Li, Y., Zhou, F., Chern, S., Qin, Y., Ma, Y., Su, J., Liu, Y., Zheng, Y., Zhang, S., Lin, D., Qiao, Y., and Liu, P. Olympicarena: Benchmarking multi-discipline cognitive reasoning for superintelligent ai, 2024a. URL https://arxiv.org/abs/2406.12753.
- Huang et al. (2024b) Huang, Z., Zou, H., Li, X., Liu, Y., Zheng, Y., Chern, E., Xia, S., Qin, Y., Yuan, W., and Liu, P. O1 replication journey – part 2: Surpassing o1-preview through simple distillation, big progress or bitter lesson?, 2024b. URL https://arxiv.org/abs/2411.16489.
- Huang et al. (2025) Huang, Z., Geng, G., Hua, S., Huang, Z., Zou, H., Zhang, S., Liu, P., and Zhang, X. O1 replication journey – part 3: Inference-time scaling for medical reasoning, 2025. URL https://arxiv.org/abs/2501.06458.
- Irvine et al. (2023) Irvine, R., Boubert, D., Raina, V., Liusie, A., Zhu, Z., Mudupalli, V., Korshuk, A., Liu, Z., Cremer, F., Assassi, V., Beauchamp, C.-C., Lu, X., Rialan, T., and Beauchamp, W. Rewarding chatbots for real-world engagement with millions of users, 2023. URL https://arxiv.org/abs/2303.06135.
- Jain et al. (2024) Jain, N., Han, K., Gu, A., Li, W.-D., Yan, F., Zhang, T., Wang, S., Solar-Lezama, A., Sen, K., and Stoica, I. Livecodebench: Holistic and contamination free evaluation of large language models for code, 2024. URL https://arxiv.org/abs/2403.07974.
- Kapfer et al. (2025) Kapfer, C., Stine, K., Narasimhan, B., Mentzel, C., and Candes, E. Marlowe: Stanford’s gpu-based computational instrument, January 2025. URL https://doi.org/10.5281/zenodo.14751899.
- Kaplan et al. (2020) Kaplan, J., McCandlish, S., Henighan, T., Brown, T. B., Chess, B., Child, R., Gray, S., Radford, A., Wu, J., and Amodei, D. Scaling laws for neural language models, 2020. URL https://arxiv.org/abs/2001.08361.
- Kim et al. (2024) Kim, E., Suk, J., Kim, S., Muennighoff, N., Kim, D., and Oh, A. Llm-as-an-interviewer: Beyond static testing through dynamic llm evaluation, 2024. URL https://arxiv.org/abs/2412.10424.
- Kwon et al. (2023) Kwon, W., Li, Z., Zhuang, S., Sheng, Y., Zheng, L., Yu, C. H., Gonzalez, J. E., Zhang, H., and Stoica, I. Efficient memory management for large language model serving with pagedattention, 2023. URL https://arxiv.org/abs/2309.06180.
- Labs (2025) Labs, B. Bespoke-stratos: The unreasonable effectiveness of reasoning distillation, 2025. URL https://hf.co/bespokelabs/Bespoke-Stratos-32B. Accessed: 2025-01-22.
- Lee et al. (2025) Lee, K.-H., Fischer, I., Wu, Y.-H., Marwood, D., Baluja, S., Schuurmans, D., and Chen, X. Evolving deeper llm thinking, 2025. URL https://arxiv.org/abs/2501.09891.
- Levi (2024) Levi, N. A simple model of inference scaling laws, 2024. URL https://arxiv.org/abs/2410.16377.
- LI et al. (2024) LI, J., Beeching, E., Tunstall, L., Lipkin, B., Soletskyi, R., Huang, S. C., Rasul, K., Yu, L., Jiang, A., Shen, Z., Qin, Z., Dong, B., Zhou, L., Fleureau, Y., Lample, G., and Polu, S. Numinamath, 2024. URL https://github.com/project-numina/aimo-progress-prize/blob/main/report/numina_dataset.pdf.
- Lightman et al. (2023) Lightman, H., Kosaraju, V., Burda, Y., Edwards, H., Baker, B., Lee, T., Leike, J., Schulman, J., Sutskever, I., and Cobbe, K. Let’s verify step by step, 2023. URL https://arxiv.org/abs/2305.20050.
- Ling et al. (2017) Ling, W., Yogatama, D., Dyer, C., and Blunsom, P. Program induction by rationale generation : Learning to solve and explain algebraic word problems, 2017. URL https://arxiv.org/abs/1705.04146.
- Liu et al. (2020) Liu, J., Cui, L., Liu, H., Huang, D., Wang, Y., and Zhang, Y. Logiqa: A challenge dataset for machine reading comprehension with logical reasoning, 2020. URL https://arxiv.org/abs/2007.08124.
- Liu et al. (2024) Liu, J., Cohen, A., Pasunuru, R., Choi, Y., Hajishirzi, H., and Celikyilmaz, A. Don’t throw away your value model! generating more preferable text with value-guided monte-carlo tree search decoding, 2024. URL https://arxiv.org/abs/2309.15028.
- Loshchilov & Hutter (2019) Loshchilov, I. and Hutter, F. Decoupled weight decay regularization, 2019.
- Luo et al. (2025) Luo, H., Sun, Q., Xu, C., Zhao, P., Lou, J., Tao, C., Geng, X., Lin, Q., Chen, S., Tang, Y., and Zhang, D. Wizardmath: Empowering mathematical reasoning for large language models via reinforced evol-instruct, 2025. URL https://arxiv.org/abs/2308.09583.
- Marten et al. (2025) Marten, R., Vu, T., Ji, C. C.-J., Sharma, K., Pimpalgaonkar, S., Dimakis, A., and Sathiamoorthy, M. Curator: A tool for synthetic data creation. https://github.com/bespokelabsai/curator, January 2025.
- Muennighoff et al. (2024) Muennighoff, N., Soldaini, L., Groeneveld, D., Lo, K., Morrison, J., Min, S., Shi, W., Walsh, P., Tafjord, O., Lambert, N., Gu, Y., Arora, S., Bhagia, A., Schwenk, D., Wadden, D., Wettig, A., Hui, B., Dettmers, T., Kiela, D., Farhadi, A., Smith, N. A., Koh, P. W., Singh, A., and Hajishirzi, H. Olmoe: Open mixture-of-experts language models, 2024. URL https://arxiv.org/abs/2409.02060.
- of America (2024) of America, M. A. Aime, February 2024. URL https://artofproblemsolving.com/wiki/index.php/AIME_Problems_and_Solutions/.
- OpenAI (2024) OpenAI. Learning to reason with llms, September 2024. URL https://openai.com/index/learning-to-reason-with-llms/.
- OpenAI (2025) OpenAI. Openai o3-mini, 2025. URL https://openai.com/index/openai-o3-mini/. Accessed: 2025-02-24.
- Phan et al. (2025) Phan, L., Gatti, A., Han, Z., Li, N., Hu, J., Zhang, H., Shi, S., Choi, M., Agrawal, A., Chopra, A., et al. Humanity’s last exam, 2025. URL https://arxiv.org/abs/2501.14249.
- Qin et al. (2024) Qin, Y., Li, X., Zou, H., Liu, Y., Xia, S., Huang, Z., Ye, Y., Yuan, W., Liu, H., Li, Y., and Liu, P. O1 replication journey: A strategic progress report – part 1, 2024. URL https://arxiv.org/abs/2410.18982.
- Qwen et al. (2024) Qwen, :, Yang, A., Yang, B., Zhang, B., Hui, B., Zheng, B., Yu, B., Li, C., Liu, D., Huang, F., Wei, H., Lin, H., Yang, J., Tu, J., Zhang, J., Yang, J., Yang, J., Zhou, J., Lin, J., Dang, K., Lu, K., Bao, K., Yang, K., Yu, L., Li, M., Xue, M., Zhang, P., Zhu, Q., Men, R., Lin, R., Li, T., Xia, T., Ren, X., Ren, X., Fan, Y., Su, Y., Zhang, Y., Wan, Y., Liu, Y., Cui, Z., Zhang, Z., and Qiu, Z. Qwen2.5 technical report, 2024. URL https://arxiv.org/abs/2412.15115.
- Rein et al. (2023) Rein, D., Hou, B. L., Stickland, A. C., Petty, J., Pang, R. Y., Dirani, J., Michael, J., and Bowman, S. R. Gpqa: A graduate-level google-proof q&a benchmark, 2023. URL https://arxiv.org/abs/2311.12022.
- Shi et al. (2024) Shi, Q., Tang, M., Narasimhan, K., and Yao, S. Can language models solve olympiad programming?, 2024. URL https://arxiv.org/abs/2404.10952.
- Snell et al. (2024) Snell, C., Lee, J., Xu, K., and Kumar, A. Scaling llm test-time compute optimally can be more effective than scaling model parameters, 2024. URL https://arxiv.org/abs/2408.03314.
- Srivastava et al. (2023) Srivastava, A., Rastogi, A., Rao, A., Shoeb, A. A. M., Abid, A., Fisch, A., Brown, A. R., Santoro, A., Gupta, A., Garriga-Alonso, A., et al. Beyond the imitation game: Quantifying and extrapolating the capabilities of language models, 2023.
- Su et al. (2024) Su, H., Yen, H., Xia, M., Shi, W., Muennighoff, N., yu Wang, H., Liu, H., Shi, Q., Siegel, Z. S., Tang, M., Sun, R., Yoon, J., Arik, S. O., Chen, D., and Yu, T. Bright: A realistic and challenging benchmark for reasoning-intensive retrieval, 2024. URL https://arxiv.org/abs/2407.12883.
- Sun et al. (2024) Sun, L., Han, Y., Zhao, Z., Ma, D., Shen, Z., Chen, B., Chen, L., and Yu, K. Scieval: A multi-level large language model evaluation benchmark for scientific research, 2024. URL https://arxiv.org/abs/2308.13149.
- Team et al. (2025) Team, K., Du, A., Gao, B., Xing, B., Jiang, C., Chen, C., Li, C., Xiao, C., Du, C., Liao, C., Tang, C., Wang, C., Zhang, D., Yuan, E., Lu, E., Tang, F., Sung, F., Wei, G., Lai, G., Guo, H., Zhu, H., Ding, H., Hu, H., Yang, H., Zhang, H., Yao, H., Zhao, H., Lu, H., Li, H., Yu, H., Gao, H., Zheng, H., Yuan, H., Chen, J., Guo, J., Su, J., Wang, J., Zhao, J., Zhang, J., Liu, J., Yan, J., Wu, J., Shi, L., Ye, L., Yu, L., Dong, M., Zhang, N., Ma, N., Pan, Q., Gong, Q., Liu, S., Ma, S., Wei, S., Cao, S., Huang, S., Jiang, T., Gao, W., Xiong, W., He, W., Huang, W., Wu, W., He, W., Wei, X., Jia, X., Wu, X., Xu, X., Zu, X., Zhou, X., Pan, X., Charles, Y., Li, Y., Hu, Y., Liu, Y., Chen, Y., Wang, Y., Liu, Y., Qin, Y., Liu, Y., Yang, Y., Bao, Y., Du, Y., Wu, Y., Wang, Y., Zhou, Z., Wang, Z., Li, Z., Zhu, Z., Zhang, Z., Wang, Z., Yang, Z., Huang, Z., Huang, Z., Xu, Z., and Yang, Z. Kimi k1.5: Scaling reinforcement learning with llms, 2025. URL https://arxiv.org/abs/2501.12599.
- Team (2025) Team, N. Sky-t1: Fully open-source reasoning model with o1-preview performance in $450 budget, 2025. URL https://novasky-ai.github.io/posts/sky-t1. Accessed: 2025-01-09.
- Team (2024) Team, Q. Qwq: Reflect deeply on the boundaries of the unknown, November 2024. URL https://qwenlm.github.io/blog/qwq-32b-preview/.
- Wang et al. (2024a) Wang, J., Meng, F., Liang, Y., and Zhou, J. Drt-o1: Optimized deep reasoning translation via long chain-of-thought, 2024a. URL https://arxiv.org/abs/2412.17498.
- Wang et al. (2024b) Wang, P., Li, L., Shao, Z., Xu, R. X., Dai, D., Li, Y., Chen, D., Wu, Y., and Sui, Z. Math-shepherd: Verify and reinforce llms step-by-step without human annotations, 2024b. URL https://arxiv.org/abs/2312.08935.
- Wang et al. (2021) Wang, S., Liu, Z., Zhong, W., Zhou, M., Wei, Z., Chen, Z., and Duan, N. From lsat: The progress and challenges of complex reasoning, 2021. URL https://arxiv.org/abs/2108.00648.
- Wang et al. (2024c) Wang, Z., Dong, Y., Delalleau, O., Zeng, J., Shen, G., Egert, D., Zhang, J. J., Sreedhar, M. N., and Kuchaiev, O. Helpsteer2: Open-source dataset for training top-performing reward models, 2024c. URL https://arxiv.org/abs/2406.08673.
- Wei et al. (2023) Wei, J., Wang, X., Schuurmans, D., Bosma, M., Ichter, B., Xia, F., Chi, E., Le, Q., and Zhou, D. Chain-of-thought prompting elicits reasoning in large language models, 2023. URL https://arxiv.org/abs/2201.11903.
- Welleck et al. (2024) Welleck, S., Bertsch, A., Finlayson, M., Schoelkopf, H., Xie, A., Neubig, G., Kulikov, I., and Harchaoui, Z. From decoding to meta-generation: Inference-time algorithms for large language models, 2024. URL https://arxiv.org/abs/2406.16838.
- Wu et al. (2024a) Wu, T., Lan, J., Yuan, W., Jiao, J., Weston, J., and Sukhbaatar, S. Thinking llms: General instruction following with thought generation, 2024a. URL https://arxiv.org/abs/2410.10630.
- Wu et al. (2024b) Wu, Y., Sun, Z., Li, S., Welleck, S., and Yang, Y. Inference scaling laws: An empirical analysis of compute-optimal inference for problem-solving with language models, 2024b. URL https://arxiv.org/abs/2408.00724.
- Xiang et al. (2025) Xiang, V., Snell, C., Gandhi, K., Albalak, A., Singh, A., Blagden, C., Phung, D., Rafailov, R., Lile, N., Mahan, D., Castricato, L., Franken, J.-P., Haber, N., and Finn, C. Towards system 2 reasoning in llms: Learning how to think with meta chain-of-thought, 2025. URL https://arxiv.org/abs/2501.04682.
- Xie et al. (2023) Xie, Y., Kawaguchi, K., Zhao, Y., Zhao, X., Kan, M.-Y., He, J., and Xie, Q. Self-evaluation guided beam search for reasoning, 2023. URL https://arxiv.org/abs/2305.00633.
- Xin et al. (2024) Xin, H., Guo, D., Shao, Z., Ren, Z., Zhu, Q., Liu, B., Ruan, C., Li, W., and Liang, X. Deepseek-prover: Advancing theorem proving in llms through large-scale synthetic data, 2024. URL https://arxiv.org/abs/2405.14333.
- Xu et al. (2025) Xu, H., Wu, X., Wang, W., Li, Z., Zheng, D., Chen, B., Hu, Y., Kang, S., Ji, J., Zhang, Y., Guo, Z., Yang, Y., Zhang, M., and Zhang, D. Redstar: Does scaling long-cot data unlock better slow-reasoning systems?, 2025. URL https://arxiv.org/abs/2501.11284.
- Yang et al. (2024) Yang, Z., Band, N., Li, S., Candès, E., and Hashimoto, T. Synthetic continued pretraining, 2024. URL https://arxiv.org/abs/2409.07431.
- Yao et al. (2023a) Yao, S., Yu, D., Zhao, J., Shafran, I., Griffiths, T. L., Cao, Y., and Narasimhan, K. Tree of thoughts: Deliberate problem solving with large language models, 2023a. URL https://arxiv.org/abs/2305.10601.
- Yao et al. (2023b) Yao, S., Zhao, J., Yu, D., Du, N., Shafran, I., Narasimhan, K., and Cao, Y. React: Synergizing reasoning and acting in language models, 2023b. URL https://arxiv.org/abs/2210.03629.
- Ye et al. (2025a) Ye, Y., Huang, Z., Xiao, Y., Chern, E., Xia, S., and Liu, P. Limo: Less is more for reasoning, 2025a. URL https://arxiv.org/abs/2502.03387.
- Ye et al. (2025b) Ye, Y., Xiao, Y., Mi, T., and Liu, P. Aime-preview: A rigorous and immediate evaluation framework for advanced mathematical reasoning. https://github.com/GAIR-NLP/AIME-Preview, 2025b. GitHub repository.
- Yu et al. (2024) Yu, L., Jiang, W., Shi, H., Yu, J., Liu, Z., Zhang, Y., Kwok, J. T., Li, Z., Weller, A., and Liu, W. Metamath: Bootstrap your own mathematical questions for large language models, 2024. URL https://arxiv.org/abs/2309.12284.
- Yuan et al. (2025) Yuan, S., Chen, Z., Xi, Z., Ye, J., Du, Z., and Chen, J. Agent-r: Training language model agents to reflect via iterative self-training, 2025. URL https://arxiv.org/abs/2501.11425.
- Zelikman et al. (2022) Zelikman, E., Wu, Y., Mu, J., and Goodman, N. D. Star: Bootstrapping reasoning with reasoning, 2022. URL https://arxiv.org/abs/2203.14465.
- Zelikman et al. (2024) Zelikman, E., Harik, G., Shao, Y., Jayasiri, V., Haber, N., and Goodman, N. D. Quiet-star: Language models can teach themselves to think before speaking, 2024. URL https://arxiv.org/abs/2403.09629.
- Zhang & Chen (2024) Zhang, H. and Chen, C. Test-time compute scaling laws, 2024. URL https://github.com/hughbzhang/o1_inference_scaling_laws.
- Zhang et al. (2023) Zhang, S., Chen, Z., Shen, Y., Ding, M., Tenenbaum, J. B., and Gan, C. Planning with large language models for code generation, 2023. URL https://arxiv.org/abs/2303.05510.
- Zhang et al. (2024a) Zhang, Y., Wu, S., Yang, Y., Shu, J., Xiao, J., Kong, C., and Sang, J. o1-coder: an o1 replication for coding, 2024a. URL https://arxiv.org/abs/2412.00154.
- Zhang et al. (2024b) Zhang, Y., Yang, J., Yuan, Y., and Yao, A. C.-C. Cumulative reasoning with large language models, 2024b. URL https://arxiv.org/abs/2308.04371.
- Zhong et al. (2019) Zhong, H., Xiao, C., Tu, C., Zhang, T., Liu, Z., and Sun, M. Jec-qa: A legal-domain question answering dataset, 2019. URL https://arxiv.org/abs/1911.12011.
- Zhong et al. (2023) Zhong, W., Cui, R., Guo, Y., Liang, Y., Lu, S., Wang, Y., Saied, A., Chen, W., and Duan, N. Agieval: A human-centric benchmark for evaluating foundation models, 2023. URL https://arxiv.org/abs/2304.06364.
- Zhou et al. (2024) Zhou, A., Yan, K., Shlapentokh-Rothman, M., Wang, H., and Wang, Y.-X. Language agent tree search unifies reasoning acting and planning in language models, 2024. URL https://arxiv.org/abs/2310.04406.
- Zhou et al. (2023) Zhou, C., Liu, P., Xu, P., Iyer, S., Sun, J., Mao, Y., Ma, X., Efrat, A., Yu, P., Yu, L., Zhang, S., Ghosh, G., Lewis, M., Zettlemoyer, L., and Levy, O. Lima: Less is more for alignment, 2023. URL https://arxiv.org/abs/2305.11206.
Appendix A s1.1
Seven days after our release of s1, we released s1.1. We regenerated traces for our 1,000 samples in s1K using DeepSeek r1 (DeepSeek-AI et al., 2025) to create s1K-1.1, and we use the same training procedure to train our model s1.1. Other updates since our launch include the release of o3 (OpenAI, 2025), LIMO (Ye et al., 2025a), and AIME 2025. We consider all these new developments in Table 5. We find that s1.1 performs significantly better than s1. We also tried distilling from Claude 3.7, which led to worse performance than distilling from r1 (not reported).
Table 5: s1-32B is an open and sample-efficient reasoning model. We evaluate s1-32B, Qwen, and Gemini (some entries are unknown (N.A.), see §4). Other results are from the respective reports (Qwen et al., 2024; Team, 2024; OpenAI, 2024, 2025; DeepSeek-AI et al., 2025; Labs, 2025; Team, 2025), except for AIME 2025 (Ye et al., 2025b). # ex. = number of examples used for reasoning finetuning; BF = budget forcing.
| Model | # Examples | MATH500 | GPQA | AIME 2024 | AIME 2025 |
|---|---|---|---|---|---|
| API only | | | | | |
| o3-mini-low | N/A | 95.8 | 70.6 | 56.3 | 42.1 |
| o3-mini-medium | N/A | 97.3 | 76.8 | 75.8 | 70.4 |
| o3-mini-high | N/A | 97.9 | 79.7 | 83.8 | 80.9 |
| Open Weights | | | | | |
| QwQ-32B | N.A. | 90.6 | 54.5 | 46.7 | 32.7 |
| r1 | 800K | 97.3 | 71.5 | 79.8 | 70.0 |
| r1-distill-Llama-70B | 800K | 94.5 | 65.2 | 57.1 | 56.3 |
| r1-distill-Qwen-14B | 800K | 93.9 | 59.1 | 61.7 | 48.0 |
| r1-distill-Qwen-32B | 800K | 94.3 | 62.1 | 58.3 | 49.6 |
| Open Weights and Open Data | | | | | |
| LIMO | 817 | 94.8 | 66.7 | 56.3 | 44.6 |
| s1 w/o BF | 1K | 92.6 | 56.6 | 50.0 | 26.7 |
| s1 with Budget Forcing "Wait" 1x | 1K | 92.8 | 59.6 | 53.3 | 30.0 |
| s1 with Budget Forcing "Wait" 2x | 1K | 93.0 | 59.6 | 53.3 | 33.3 |
| s1 with Budget Forcing "Wait" 4x | 1K | 92.2 | 58.6 | 56.7 | 36.7 |
| s1.1 w/o BF | 1K | 94.4 | 60.6 | 56.7 | 50.0 |
| s1.1 with Budget Forcing "Wait" 1x | 1K | 95.4 | 62.6 | 56.7 | 50.0 |
| s1.1 with Budget Forcing "Wait" 2x | 1K | 95.4 | 63.6 | 56.7 | 50.0 |
Appendix B Evaluation determinism
We run our evaluations using vLLM (Kwon et al., 2023) as it is faster than the alternatives we tried. However, we find that even when using the same random seeds and greedy sampling, evaluation scores can change significantly across runs:
- Different batch sizes causing different results, see https://github.com/vllm-project/vllm/issues/5898
- Continuing generations causing different results, see https://github.com/vllm-project/vllm/issues/11783
- Changes in tensor parallelism causing different results
As our model generates long reasoning traces prior to its answer, small numeric changes can snowball into large differences. We encounter many generations that are exactly the same for thousands of tokens and then suddenly differ in one token eventually ending up with an entirely different answer. To partly counter this issue we generally run our final evaluations using full precision unless otherwise indicated.
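For reference, a minimal sketch of how such a greedy, fixed-seed evaluation run might be configured with vLLM is shown below. The checkpoint name, tensor-parallel size, prompt format, and token limit are illustrative placeholders rather than the exact settings used for the reported numbers.

```python
# Minimal sketch of a greedy vLLM evaluation run in full precision.
from vllm import LLM, SamplingParams

llm = LLM(
    model="simplescaling/s1-32B",   # placeholder checkpoint name
    tensor_parallel_size=8,          # changing this can change results
    dtype="float32",                 # full precision to reduce numeric drift
)

sampling = SamplingParams(
    temperature=0.0,   # greedy decoding
    max_tokens=32768,  # large enough for long reasoning traces
    seed=0,            # fixed seed; determinism is still not guaranteed
)

prompts = ["<|im_start|>user\nWhat is 1 + 1?<|im_start|>assistant\n"]
outputs = llm.generate(prompts, sampling)
for out in outputs:
    print(out.outputs[0].text)
```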
Appendix C s1K details
C.1 s1K summary
Table 6: Summary of our dataset s1K. Token counts are measured with the Qwen-2.5 tokenizer. We prompt Claude to generate keywords based on several questions from each domain.
| Domain | #Questions | Total token count | Keywords |
|---|---|---|---|
| Geometry | 109 | 560.2K | Area, Triangle, Distance |
| Number theory | 98 | 522.5K | Sequences, Divisibility |
| Combinatorics | 75 | 384.7K | Permutations, Counting |
| Real functions | 43 | 234.8K | Trigonometry, Calculus |
| Biology | 41 | 120.9K | Organic reactions |
| Complex functions | 32 | 170.2K | Complex roots |
| Quantum theory | 32 | 127.9K | Particles, Wave functions |
| Field theory | 28 | 150.1K | Polynomials, Roots |
| Calculus of variations | 28 | 155.5K | Optimization, Control |
| Difference equations | 24 | 132.5K | Recurrence, Recursion |
| Electromagnetic theory | 23 | 95.8K | Optics, Waves, Diffraction |
| Group theory | 22 | 100.0K | Groups, Automorphisms |
| Linear algebra | 22 | 128.3K | Matrices, Determinants |
| Probability theory | 20 | 114.6K | Random walk, Expectation |
| Algebraic systems | 19 | 109.9K | Functional equations |
| Mechanics | 19 | 103.6K | Forces, Motion, Energy |
| Thermodynamics | 19 | 74.2K | Heat engines, Entropy |
| Differential equations | 18 | 89.6K | Substitution, Existence |
| Computer science | 18 | 34.2K | Complexity theory, Algorithms |
| Numerical analysis | 18 | 76.5K | Error analysis, Stability |
| Calculus | 17 | 96.3K | Convergence, Summation |
| Algebraic structures | 17 | 90.4K | Inequalities, Sets |
| Astronomy | 16 | 37.7K | Stellar populations, Orbits |
| Remaining 27 domains | 242 | 982.2K | Domains with <16 questions |
| All domains (51) | 1000 | 4.7M | s1K |
C.2 Dataset composition for full 59K questions
Table 7: Composition of the full 59K questions. Thinking and response lengths are measured with the Qwen2.5-32B-Instruct tokenizer (Qwen et al., 2024). Besides excluding our evaluation benchmark AIME24, we also exclude AIME problems from 2022-2023, as we used those 90 questions during the development phase of s1-32B.
| Source | Description | #Samples | Avg. thinking length |
|---|---|---|---|
| NuminaMATH (LI et al., 2024) | Math problems from online websites | 30660 | 4.1K |
| MATH (Hendrycks et al., 2021) | Math problems from competitions | 11999 | 2.9K |
| OlympicArena (Huang et al., 2024a) | Astronomy, Biology, Chemistry, Computer Science, Geography, Math, and Physics olympiad questions | 4250 | 3.2K |
| OmniMath (Gao et al., 2024a) | Math problems from competitions | 4238 | 4.4K |
| AGIEval (Zhong et al., 2023; Ling et al., 2017; Hendrycks et al., 2021; Liu et al., 2020; Zhong et al., 2019; Wang et al., 2021) | English, Law, Logic and Math problems from the SAT, LSAT and other exams | 2385 | 1.2K |
| xword | Crossword puzzles | 999 | 0.7K |
| OlympiadBench (He et al., 2024) | Math and Physics olympiad questions | 896 | 3.9K |
| AIME (1983-2021) | American Invitational Mathematics Examination | 890 | 4.7K |
| TheoremQA (Chen et al., 2023) | Computer Science, Finance, Math, and Physics university-level questions relating to theorems | 747 | 2.1K |
| USACO (Shi et al., 2024) | Code problems from the USA Computing Olympiad | 519 | 3.6K |
| JEEBench (Arora et al., 2023) | Chemistry, Math, and Physics problems used in the university entrance examination of the Indian Institute of Technology | 515 | 2.9K |
| GPQA (Rein et al., 2023) | PhD-level science questions | 348 | 2.9K |
| SciEval (Sun et al., 2024) | Biology, Chemistry, and Physics problems from various sources | 227 | 0.7K |
| s1-prob | Stanford statistics qualifying exams | 182 | 4.0K |
| LiveCodeBench (Jain et al., 2024) | Code problems from coding websites (LeetCode, AtCoder, and CodeForces) | 151 | 3.5K |
| s1-teasers | Math brain-teasers crawled from the Internet | 23 | 4.1K |
| All 59K questions | Composite of the above datasets with reasoning traces and solutions | 59029 | 3.6K |
C.3 s1K grading prompt
To grade whether an example is correct for our dataset selection in §2, we use the prompt in Figure 8. We grade using Claude 3.5 except for the correctness among the final 1,000 samples, which we graded with Claude 3.7.
You are an AI assistant for grading a science problem.
The user will provide you with the question itself, an attempt made by a student and the correct answer to the problem.
Your job is to judge whether the attempt is correct by comparing it with the correct answer.
If the expected solution concludes with a number or choice, there should be no ambiguity.
If the expected solution involves going through the entire reasoning process, you should judge the attempt based on whether the reasoning process is correct with correct answer if helpful.

The user will provide the attempt and the correct answer in the following format:

# Problem
{problem}

## Attempt
{attempt}

## Correct answer
{solution}

Explain your reasoning, and end your response on a new line with only "Yes" or "No" (without quotes).
Figure 8: Grading prompt.
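As an illustration of how this rubric could be applied programmatically, the sketch below formats a question, attempt, and reference solution into the template above and reads the final Yes/No line. The model identifier and the helper function are assumptions for illustration, not the grading code actually used.

```python
# Hypothetical helper for applying the Figure 8 grading prompt with the Anthropic SDK.
import anthropic

GRADER_SYSTEM = (
    "You are an AI assistant for grading a science problem. ..."  # full prompt from Figure 8 goes here
)

def grade_attempt(problem: str, attempt: str, solution: str,
                  model: str = "claude-3-5-sonnet-latest") -> bool:
    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
    user_msg = f"# Problem\n{problem}\n\n## Attempt\n{attempt}\n\n## Correct answer\n{solution}"
    response = client.messages.create(
        model=model,
        max_tokens=1024,
        system=GRADER_SYSTEM,
        messages=[{"role": "user", "content": user_msg}],
    )
    # The prompt asks the grader to end with a line containing only "Yes" or "No".
    last_line = response.content[0].text.strip().splitlines()[-1].strip()
    return last_line.lower() == "yes"
```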
C.4 s1K diversity selection
Algorithm 1: Two-stage sampling for s1K

Input: the feature set of 24,496 questions
Output: the set of 1,000 selected questions

Initialize the output set (tracking unique elements only)
for each question q:
    if IsGeminiCorrect(q) and (IsAIME(q) or IsGPQA(q)):
        add q to the output set            # select all correct AIME/GPQA solutions
    else if IsGeminiCorrect(q) and IsMATH(q) and ThinkingLength(q) > 5600:
        add q to the output set            # select correct MATH500 solutions with long chains
Initialize the pool with all available domains
while fewer than 1,000 questions have been selected:
    randomly select a domain from the pool
    gather the questions from that domain
    rank them by thinking length
    apply power-law weights to the ranks
    sample one question with these weights, preferring longer chains
    add it to the output set
    if the domain is exhausted, remove it from the pool
Algorithm 1 provides our algorithm for selecting data in our diversity selection stage. As mentioned in §2, we also include samples from some specific benchmarks we perceive as high-quality. None of the samples overlap with our final evaluation.
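For concreteness, the following Python sketch mirrors the two-stage procedure of Algorithm 1 under the assumption that each question is a dictionary with domain, thinking-length, and correctness features. The exact power-law weighting is only described qualitatively above, so the decay base used here is a placeholder.

```python
# Sketch of the two-stage s1K selection; feature names and the weight decay are assumptions.
import random
from collections import defaultdict

def select_s1k(questions, target=1000, seed=0):
    rng = random.Random(seed)
    selected = []

    # Stage 1: keep all correct AIME/GPQA solutions and long correct MATH solutions.
    for q in questions:
        if q["gemini_correct"] and (q["is_aime"] or q["is_gpqa"]):
            selected.append(q)
        elif q["gemini_correct"] and q["is_math"] and q["thinking_length"] > 5600:
            selected.append(q)

    # Stage 2: sample the remainder domain by domain, favoring longer reasoning chains.
    chosen = {id(q) for q in selected}
    pool = defaultdict(list)
    for q in questions:
        if id(q) not in chosen:
            pool[q["domain"]].append(q)

    domains = [d for d, qs in pool.items() if qs]
    while len(selected) < target and domains:
        d = rng.choice(domains)                                   # pick a random domain
        ranked = sorted(pool[d], key=lambda q: q["thinking_length"], reverse=True)
        weights = [0.5 ** rank for rank in range(len(ranked))]    # placeholder power-law decay
        q = rng.choices(ranked, weights=weights, k=1)[0]          # prefer longer chains
        selected.append(q)
        pool[d].remove(q)
        if not pool[d]:
            domains.remove(d)                                     # drop exhausted domains
    return selected[:target]
```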
C.5 Decontamination
We filter all samples by checking for an 8-gram overlap between the selected examples and the evaluation benchmarks: MATH500, GPQA Diamond, and AIME24. We exclude questions with more than an 8-gram overlap.
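A simple way to implement this check is sketched below; whitespace tokenization is an assumption, as the text does not specify how n-grams are tokenized.

```python
# Sketch of 8-gram decontamination: drop a candidate if it shares any 8-gram
# with an evaluation question. Whitespace tokenization is an assumption.
def ngrams(text: str, n: int = 8) -> set[tuple[str, ...]]:
    tokens = text.lower().split()
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def decontaminate(candidates: list[str], eval_questions: list[str], n: int = 8) -> list[str]:
    eval_grams: set[tuple[str, ...]] = set()
    for q in eval_questions:
        eval_grams |= ngrams(q, n)
    return [c for c in candidates if not (ngrams(c, n) & eval_grams)]
```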
Appendix D Training details
We take a model that has already been pretrained and instruction tuned and further finetune it for reasoning. Specifically, we use Qwen2.5-32B-Instruct (Qwen et al., 2024), which on math tasks generally matches or outperforms the larger Qwen2.5-72B-Instruct (Qwen et al., 2024) or other open models (Dubey et al., 2024; Groeneveld et al., 2024; Muennighoff et al., 2024). We use token delimiters to separate the thinking stage from the answering stage: the thinking stage is enclosed by <|im_start|>think and <|im_start|>answer, each preceded and followed by a newline. Samples from our dataset are in §D.2. We use basic fine-tuning hyperparameters: we train for 5 epochs with a batch size of 16, for a total of 315 gradient steps. We train in bfloat16 precision with a learning rate that is warmed up linearly for 5% of training (16 steps) and then decayed to 0 over the rest of training (299 steps) following a cosine schedule. We use the AdamW optimizer (Loshchilov & Hutter, 2019) with weight decay. We do not compute loss on questions, only on reasoning traces and solutions. We ensure the sequence length is large enough to avoid cutting off any samples, a setting we ablate in §D.1. Training takes just 26 minutes on 16 NVIDIA H100 GPUs.
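A minimal sketch of this recipe with standard Hugging Face and PyTorch APIs is given below; the learning rate and weight decay values are placeholders (the exact numbers are not reproduced in this text), and the data pipeline and training loop are omitted.

```python
# Sketch of the fine-tuning setup: cosine schedule with ~5% linear warmup, AdamW,
# and loss masked so that only reasoning-trace and solution tokens contribute.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, get_cosine_schedule_with_warmup

model_name = "Qwen/Qwen2.5-32B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

total_steps = 315   # 5 epochs, batch size 16, 1,000 examples
warmup_steps = 16   # ~5% of training
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5, weight_decay=1e-4)  # placeholder values
scheduler = get_cosine_schedule_with_warmup(optimizer, warmup_steps, total_steps)

def mask_question_tokens(input_ids: torch.Tensor, question_lengths: list[int]) -> torch.Tensor:
    """Set labels to -100 on the question tokens so loss is only computed on the
    reasoning trace and solution."""
    labels = input_ids.clone()
    for row, q_len in enumerate(question_lengths):
        labels[row, :q_len] = -100
    return labels
```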

Figure 9: Training dynamics of s1-32B on s1K.
D.1 Training Ablations: Sequence length
Table 8: Ablation of the training sequence length. We report "accuracy / average thinking tokens per sample"; higher accuracy with fewer thinking tokens (lower inference cost) is better.
| | Model A | Model B |
|---|---|---|
| Training sequence length | 4096 | 32768 |
| % training samples cut off | 74% | 0% |
| AIME24 | 30.0% / 20721 | 50.0% / 6984 |
| MATH500 | 90.0% / 5324 | 91.0% / 3268 |
| GPQA | 52.5% / 6841 | 53.0% / 3568 |
Besides our scaling ablations in §5.2, the main training hyperparameter we ablate is the sequence length used during training. We find that a shorter training sequence length leads to longer reasoning traces at test time. This is because, when training with a shorter sequence length, the answer section of a training sample is more often cut off. Conversely, when the training sequence length is longer, more samples appear in their entirety, including the section where the model answers. The model therefore receives more gradient updates in which it learns to generate an answer following its reasoning chain. This in turn raises the log probability of the answer section at any point during generation and thus yields shorter reasoning traces at test time. Performance-wise, we also find that the model trained with the longer sequence length performs better. We therefore opt for the longest training sequence length, as it leads to better performance and makes inference more efficient by producing shorter reasoning traces.
D.2 Training Samples
Tables 9, 10, and 11 contain training samples from s1K.
Table 9: s1K sample from s1-prob. Black text is the question and yellow text the solution. Based only on the question, we use Gemini to generate the reasoning trace (light blue) and response (blue), which we use for training.
You are given integrable random variables and on the same probability space , and two -algebras For , let
您在同一概率空间 上被赋予可积随机变量 和 ,以及两个 代数 。对于 ,令 (a) Show that there exist integrable random variables and such that as both and . (b) Prove that almost surely (a) 证明存在可积随机变量 和 ,使得当 时, 和 均成立。 (b) 证明几乎必然有 |
1.
Introduce the notations, and , for all and , . Clearly, both are filtrations and by definition, and , for all . Thus (or ) is a Doob’s Martingale with respect to filtration (or ) and hence is U.I., being integrable. Now apply the fact that a Martingale is U.I. if and only if it converges in and conclude.
介绍所有 和 的记号 和 , 。显然,这两个都是滤波,并且根据定义,对于所有 , 和 。因此, (或 )是相对于滤波 (或 )的 Doob 鞅,因此是 U.I., 是可积的。现在应用一个鞅是 U.I.当且仅当它在 中收敛的事实,并得出结论。 2. We provide two different proofs. 我们提供了两种不同的证明。 1. Observe the following identity which follows from Tower Property. (4) Note that and by Levy’s Upward Theorem, , where It is evident that (since it is almost sure limit of ) and hence almost surely. Combining the above observations we conclude that Similarly, Now invoke Equation (LABEL:tower), to conclude that almost surely. 请注意, 和根据 Levy 的上向定理 ,其中 显然, (因为它几乎是 的极限)因此 几乎必然。结合上述观察,我们得出结论 同样地, 现在调用方程(LABEL:tower),以得出结论 几乎必然。 观察以下由塔性质得出的恒等式。 2. Since is integrable, we have by Levy’s Upward Theorem, Similarly, On the otherhand, in first proof we have observed that and hence . This yields the following. Y_∞ = E [Y_∞ |G_∞ ∩F_∞ ] = E [ E [X |F_∞ ]|G_∞ ∩F_∞ ] = E [X|G_∞ ∩F_∞ ]. Similarly, This proves that almost surely. 由于 可积,根据 Levy 上推定理,我们有 同样地, 另一方面,在第一个证明中,我们观察到 因此 这导致以下结果。Y_∞ = E [Y_∞ |G_∞ ∩F_∞ ] = E [ E [X |F_∞ ]|G_∞ ∩F_∞ ] = E [X|G_∞ ∩F_∞ ].同样地, 这证明了 几乎必然。 |
Let’s analyze the problem. We are given two sequences of random variables and defined by conditional expectations. We need to show that these sequences converge in and that their limits are equal almost surely. 让我们分析这个问题。我们有两个由条件期望定义的随机变量序列 和 。我们需要证明这两个序列在 中收敛,并且它们的极限几乎处处相等。 Part (a): Convergence in . 部分(a): 中的收敛 The sigma-algebras involved are increasing: 涉及到的 sigma 代数是递增的: For the sequence: 对于 序列: … The sequence of sigma-algebras is increasing. By the Martingale Convergence Theorem (for conditional expectations), since is integrable, the sequence converges almost surely and in to . Let . sigma-代数序列 是递增的。根据鞅收敛定理(对于条件期望),由于 是可积的,序列 几乎处处收敛,并在 中收敛到 。令 。 |
Similarly, for the sequence: 同样,对于 序列: … The sequence of sigma-algebras is increasing. By the Martingale Convergence Theorem, the sequence converges almost surely and in to . Let . sigma-代数序列 是递增的。根据鞅收敛定理,序列 几乎必然且在 中收敛到 。令 。 Part (b): Prove that almost surely . 部分(b):证明几乎处处 。 We have and . 我们有 和 。 Let and . 令 和 。 Then and . 然后 和 。 Consider . . 考虑 . . . Since , . 自 以来, 。 . Since is not defined, let’s assume and are given. 由于 未定义,我们假设 和 已给出。 Let’s consider the sigma-algebra generated by all the and . 让我们考虑由所有 和 生成的 σ-代数。 Let . Both and are measurable with respect to . Thus, and are measurable with respect to . 两者均可用 来衡量。因此, 和 也可用 来衡量。 Consider . 考虑 。 Consider . 考虑 。 Let . Consider . 考虑 。 Consider . 考虑 。 Let’s consider the limit sigma-algebras and . 让我们考虑极限σ代数 和 。 We know that is -measurable and is -measurable. 我们知道 是 可测的, 是 可测的。 Let . . . . . Consider the case when and are independent. Then and . 考虑当 和 相互独立的情况。然后 和 。 . Let’s consider the sigma-algebra generated by all the information. Let . 让我们考虑由所有信息生成的 sigma 代数。令 。 Then both and are -measurable. 然后, 和 都是 -可测量的。 We have and a.s. and in . 我们拥有 和 a.s.以及 。 Consider and . 考虑 和 。 Let . Consider . 考虑 。 Let’s try to show that both and are equal to . This seems too simple. 让我们尝试证明 和 都等于 。这似乎太简单了。 Let’s consider the definition of conditional expectation. For any set , . 让我们考虑条件期望的定义。对于任意集合 , 。 For any set , . 对于任何集合 , 。 Consider where . 考虑 ,其中 。 Consider where . 考虑 ,其中 。 Consider the sigma-algebra . 考虑σ-代数 。 Let . We have . 我们已有 。 Since depends on which depends on etc., it is complex. 由于 依赖于 ,而 又依赖于 等等,因此它很复杂。 Let’s consider the case where and . 让我们考虑以下情况: 和 。 Then . 然后 。 . If this were the case, then and , so and . 如果这种情况成立,那么 和 ,所以 和 。 We need a.s., i.e., a.s. This is not generally true. 我们需要 a.s.,即 a.s. 这通常是不正确的。 Consider the sigma-algebra generated by all and , denoted by . 考虑由所有 和 生成的σ-代数,记为 。 Then . 然后 。 . |
Consider . This means for any , . 考虑 。这意味着对于任何 , 。 Consider . This means for any , . 考虑 。这意味着对于任何 , 。 Let . Consider . 考虑 。 Consider . 考虑 。 Let’s use the fact that is -measurable and is -measurable. 让我们利用事实,即 是 -可测量的,并且 是 -可测量的。 Consider the smallest sigma-algebra containing both and . 考虑包含 和 的最小 σ-代数。 This is . 这是 。 Let’s try to show that . 让我们尝试证明 。 . Consider the definition of and . 考虑 和 的定义。 . . Consider . 考虑 。 Since is measurable with respect to , this does not simplify easily. 由于 与 可测,这并不容易简化。 Let’s assume on a set of positive measure. 设集合上的测度为正的 。 Consider the case where and are trivial. 考虑 和 是平凡的情况。 Then and . 然后 和 。 Let . Consider . 考虑 。 We have . 我们已有 。 By the properties of conditional expectation, . 根据条件期望的性质, 。 Consider the sigma-algebra generated by all the random variables involved. 考虑由所有相关随机变量生成的 sigma-代数 。 . Let . Consider the limits. As , . 考虑限制。如 , 。 As , . 作为 , 。 Let and . 令 和 。 We have and . 我们有 和 。 Consider 考虑 Consider the sigma-algebra . 考虑σ-代数 。 Let . Consider the limit sigma-algebra generated by all the information. 考虑由所有信息生成的极限σ-代数。 Let . Then and . 然后 和 。 We have and . 我们有 和 。 Consider and . 考虑 和 。 Let . Consider . 考虑 。 Consider the sequence of pairs . 考虑序列对 。 If the limits are the same, then for any bounded measurable function , . 如果极限相同,那么对于任何有界可测函数 , 。 Consider the sigma-algebra . 考虑σ-代数 。 Then both and are measurable with respect to . 然后, 和 都可以相对于 进行测量。 Consider . 考虑 。 Let . Consider . 考虑 。 Then and . 然后 和 。 |
Consider the limit sigma-algebra . 考虑极限σ-代数 。 We have and . 我们有 和 。 Let . Consider . 考虑 。 Consider the random variable . Then and . 考虑随机变量 。然后 和 。 Let . Consider . Suppose . 考虑 。假设 。 Consider the conditional expectation with respect to the join of the sigma-algebras. 考虑关于 sigma 代数并集的条件期望。 Let . Consider . 考虑 。 Then . 然后 。 And . So and . 所以 和 。 Let and . 令 和 。 Consider . 考虑 。 Then . 然后 。 And . This means a.s. and a.s. 这意味着 a.s. 和 a.s. Therefore, a.s. 因此, a.s. Solution: 解决方案: (a) For the sequence , the -algebras are increasing as increases. Since is integrable, by the Martingale Convergence Theorem for conditional expectations, the sequence converges almost surely and in to . Similarly, for the sequence , the -algebras are increasing. Since is integrable, by the Martingale Convergence Theorem, the sequence converges almost surely and in to . (a)对于序列 ,随着 的增加, -代数 是递增的。由于 是可积的,根据条件期望的鞅收敛定理,序列 几乎处处且在 中收敛到 。类似地,对于序列 , -代数 是递增的。由于 是可积的,根据鞅收敛定理,序列 几乎处处且在 中收敛到 。 (b) Let and . We have and . Consider the -algebra . Consider . (b) 设 和 。我们有 和 。考虑代数 - 。考虑 。 We have , so . 我们有 ,因此 。 Since is -measurable, implies that is the conditional expectation of given . But since is already -measurable, we have if is -measurable, which is not necessarily true. 由于 是 可测的, 意味着 是给定 时 的条件期望。但是,由于 已经是 可测的,如果 是 可测的,则有 ,这并不一定成立。 However, we know that and . 然而,我们知道 和 。 Consider . 考虑 。 Then . 然后 。 This implies that almost surely. 这几乎肯定意味着 。 Similarly, . 同样, 。 This implies that almost surely. 这几乎肯定意味着 。 Therefore, almost surely. 因此, 几乎肯定。 Final Answer: The final answer is 最终答案:最终答案为 |
Table 10: s1K sample from s1-teasers. Black text is the question and yellow text the solution. Based only on the question, we use Gemini to generate the reasoning trace (light blue) and response (blue), which we use for training.
Alice and Bob each have a coin and flip it until they get a heads. If Bob flipped his coin more times than Alice, what is the expected number of times Alice flipped her coin? Alice 和 Bob 各自有一枚硬币,他们轮流抛掷直到出现正面。如果 Bob 抛掷硬币的次数比 Alice 多,Alice 抛掷硬币的期望次数是多少? |
4/3 |
The problem asks for the expected number of times Alice flipped her coin, given that Bob flipped his coin more times than Alice. 问题要求在 Bob 掷硬币次数多于 Alice 的情况下,求 Alice 掷硬币的期望次数。 Let be the number of flips Alice makes until she gets a head. 设 为爱丽丝得到正面之前翻转的次数。 Let be the number of flips Bob makes until he gets a head. 设 为鲍勃得到正面所需的翻转次数。 and are independent random variables following a geometric distribution with parameter (probability of getting a head). 和 是独立随机变量,服从参数为 (得到正面的概率)的几何分布。 The probability mass function for a geometric distribution is , for . 几何分布的概率质量函数为 ,对于 。 In this case, , so . 在这种情况下, ,因此 。 We are given the condition that Bob flipped his coin more times than Alice, which means . 我们得到条件,Bob 掷硬币的次数比 Alice 多,这意味着 。 We want to find the expected value of given that , i.e., . 我们希望找到在给定 的条件下, 的期望值,即 。 The formula for conditional expectation is . 条件期望的公式为 。 Also, . 此外, 。 In our case, is the event . 在我们的情况下, 是事件 。 We need to find . 我们需要找到 。 Since and are independent, . 由于 和 是独立的, 。 The inner sum is a geometric series: 内和是一个几何级数: This is a geometric series with first term and common ratio . 这是一个首项为 ,公比为 的几何级数。 The sum is . 总和为 。 So, . 因此, 。 This is another geometric series with first term and common ratio . 这是另一个首项为 ,公比为 的几何级数。 The sum is . 总和为 。 Now, we need to find . 现在,我们需要找到 。 , due to independence. ,由于独立性。 . So, . 因此, 。 Now, we can find . 现在,我们可以找到 。 Finally, we can calculate the conditional expectation: 最后,我们可以计算条件期望: . Consider the series , where . 考虑序列 ,其中 。 Consider the geometric series . 考虑几何级数 。 Differentiating with respect to : 对 求导 . So, , which means . 因此, ,即表示 。 In our case, . 在我们的情况下, 。 |
.
Therefore, . 因此, 。
Alternative approach: Symmetry. 替代方法:对称性。 Consider the events , , and . 考虑事件 、 和 。 due to symmetry. 由于对称性。 . We have . 我们已有 。 . . . This confirms our previous calculation. 这证实了我们的先前计算。 Let . We are looking for the expected number of flips Alice made, given Bob flipped more times. 我们正在寻找在鲍勃翻动次数更多的情况下,爱丽丝预期的翻动次数。 Consider the case where the first flip for both is tails. Then the process restarts, and the condition remains. 考虑这种情况,两人的第一次抛掷都是反面。然后过程重新开始,条件保持不变。 Let be Alice’s first flip, be Bob’s first flip. 令 为 Alice 的第一次翻转, 为 Bob 的第一次翻转。 . . Case 1: . , so is false. 案例 1: . ,因此 为假。 Case 2: . Alice stops at 1, Bob continues. . 案例 2: . 爱丽丝停在 1,鲍勃继续。 . Case 3: . Bob stops at 1, Alice continues. 案例 3: . 鲍勃停在 1,爱丽丝继续。 Case 4: . Both continue. 案例 4: 。两者均继续。 Consider the conditional probabilities given . 考虑给定的条件概率 。 Let . If the first flips are HH, the condition is not met. 如果第一次翻转是 HH,则不满足条件 。 If the first flips are HT, Alice’s number of flips is 1. 如果第一次翻转是 HT,Alice 的翻转次数为 1。 If the first flips are TH, Bob’s number of flips is 1, Alice’s is at least 2, so . 如果第一次翻转是 TH,Bob 的翻转次数是 1,Alice 的至少是 2,所以 。 If the first flips are TT, the problem restarts, and the expected number of additional flips for Alice is still . 如果第一次翻转是 TT,问题将重新开始,Alice 期望的额外翻转次数仍然是 。 Let be the number of flips Alice makes, be the number of flips Bob makes. 设 为 Alice 翻转的次数, 为 Bob 翻转的次数。 We are given . 我们得到了 。 Consider the first flips: 考虑第一次翻转: HH (prob 1/4): Condition not met. HH(概率 1/4):条件未满足。 HT (prob 1/4): Alice flips 1 time. HT(概率 1/4):Alice 翻转 1 次。 TH (prob 1/4): Bob flips 1 time, Alice ¿ 1, so , condition not met. TH(概率 1/4):Bob 翻转 1 次,Alice 翻转 1 次,所以 ,条件未满足。 TT (prob 1/4): Both continue. TT(概率 1/4):两者继续。 Let be the expected number of Alice’s flips given Bob flipped more. 设 为当 Bob 翻转次数更多时,Alice 预期翻转的次数。 If the first flips are HT, Alice flipped 1 time, and the condition is met. 如果第一次翻转是 HT,Alice 翻转了 1 次,条件得到满足。 If the first flips are TT, the situation is the same as the beginning, so the expected number of additional flips for Alice is . 如果第一次翻转是 TT,情况与开始时相同,因此 Alice 额外翻转的期望次数为 。 Consider the event . 考虑事件 。 Conditioning on the first flips: 基于第一次翻转: . . . depends on the future flips. 取决于未来的翻转。 Let’s use the definition of conditional expectation directly. 让我们直接使用条件期望的定义。 Consider the probabilities of the first flips given . 考虑给定 的首次翻转的概率。 . . |
.
If the first flips are TT, the future behavior is the same, and the condition holds if Bob takes more flips than Alice in the subsequent trials. 如果第一次翻转是 TT,未来的行为相同,如果鲍勃在随后的试验中比爱丽丝翻转次数多,则条件 成立。 . . Sum of conditional probabilities: . 条件概率之和: 。 Now consider the conditional expectations of : 现在考虑条件期望 : Not relevant since . : 由于 ,不相关。 : Given HT and , Alice flipped 1 time. . 给定 HT 和 ,Alice 翻转了 1 次。 。 : Not relevant. : 不相关。 : Given TT and , Alice has already flipped once, and the expected number of additional flips is . So, the total number of flips is . 给定 TT 和 ,Alice 已经翻转了一次,期望额外翻转的次数是 。因此,总翻转次数是 。 . This confirms the previous result. 这证实了之前的结果。 Let’s think about the intuition. If we have the condition , it means Alice was ”lucky” and got a head early, or Bob was ”unlucky”. 让我们思考一下直觉。如果我们有条件 ,这意味着 Alice“幸运”地早早得到了正面,或者 Bob“不幸”。 Consider the random variable . We are given . 考虑随机变量 。我们已知 。 We want . 我们希望 。 Let . The number of tails before the first head follows a geometric distribution with parameter . 第一次出现正面之前尾巴的数量遵循参数为 的几何分布。 , . . . Consider the joint distribution of and . 考虑 和 的联合分布。 Consider the event , where . 考虑事件 ,其中 。 . If , then , . 如果 ,则 , 。 . Check if . 检查 。 . Something is wrong here. 这里有问题。 Let’s recalculate . 让我们重新计算 。 Inner sum: . 内和: 。 . If , . Correct. 如果 , 。正确。 |
Now, let’s find . 现在,让我们找到 。 . (independence) (独立) . . . (error in power) (功率错误) . . Let . . 让 . . . . . . If , . 如果 , 。 |
Solution: 解决方案:
Let be the number of flips Alice makes until she gets a head, and be the number of flips Bob makes until he gets a head. and are independent geometric random variables with parameter . The probability mass function is . 设 为爱丽丝得到正面之前翻转的次数, 为鲍勃得到正面之前翻转的次数。 和 是参数为 的独立几何随机变量。概率质量函数为 。 We are interested in the expected number of times Alice flipped her coin, given that Bob flipped his coin more times than Alice, i.e., . 我们对 Alice 抛硬币的期望次数感兴趣,在已知 Bob 抛硬币的次数比 Alice 多的情况下,即 。 Using the definition of conditional expectation, . 使用条件期望的定义, 。 . First, calculate . 首先,计算 。 . . . . Next, calculate . 接下来,计算 。 Now, find . 现在,找到 。 Finally, calculate the conditional expectation: 最后,计算条件期望: . Using the formula with : 使用公式 ,其中 : . . Final Answer: The final answer is 最终答案:最终答案为 |
Table 11: s1K sample from the astronomy subset of OlympicArena. Black text is the question and blue text the solution. Based only on the question, we use Gemini to generate the reasoning trace (light blue) and response (blue), which we use for training.
An often-repeated fun fact is that humans produce more power per unit volume than stars. If the sun were the same size, but it produced the same amount of power per unit volume as a human, what would its surface temperature be? Assume the ”average human” produces 100 watts of power and has a volume of 66400 cubic centimeters. 一个经常被提及的有趣事实是,人类每单位体积产生的能量比恒星还要多。如果太阳的体积与人类相同,但每单位体积产生的能量与人类相同,那么它的表面温度会是多少?假设“平均人类”产生 100 瓦特的能量,体积为 66400 立方厘米。 A: B: C: D: E: |
Using the numbers from the problem, the average human produces 使用问题中的数字,平均人类产生 u=10066400 ×10-6=1506 W / m^3 u=10066400×10^-6=1506 W/m^3 The volume of the sun is , so its new power output would be . To find the equilibrium temperature, we use 太阳的体积为 ,因此其新的功率输出将为 。为了找到平衡温度,我们使用 P=σA T^4=4 πσR_⊙^2 T^4 P=σAT^4=4πσR⊙^2T^4 Solving for , we get 求解 ,得到 T=4R⊙u3 σ=4(6.96 ×108)(1506)3(5.67 ×10-8)=49823 ≈50000 K . T=4R⊙u³ σ=4(6.96×10⁸)(1506)³(5.67×10⁻⁸)=49823 ≈50000 K . |
The problem asks for the surface temperature of the sun if it had the same size but produced power per unit volume as a human. 这个问题要求计算如果太阳具有与人类相同的体积功率密度,其表面温度是多少。 First, let’s find the power density of a human. 首先,让我们找出人的功率密度。 Power produced by a human, 人类产生的电力 Volume of a human, 人体体积 Convert the volume to cubic meters: 将体积转换为立方米: Power density of a human, 人类功率密度 Now, consider the sun with the same size as it currently is, but with a power density equal to that of a human. 现在,考虑一个与当前太阳大小相同的太阳,但其功率密度与人类相等。 We need the radius of the sun to calculate its volume. The average radius of the sun is . 我们需要太阳的半径来计算其体积。太阳的平均半径为 。 Volume of the sun, 太阳体积 If the sun had the same power density as a human, the total power produced by this modified sun would be: 如果太阳具有与人类相同的功率密度,这个修改后的太阳产生的总功率将是: |
The surface temperature of a star is related to its luminosity (total power output) through the Stefan-Boltzmann law: 恒星表面温度与其光度(总功率输出)通过斯特藩-玻尔兹曼定律相关: where is the luminosity (total power output), is the radius of the star, is the Stefan-Boltzmann constant, and is the surface temperature. 是光度(总功率输出), 是恒星的半径, 是斯特藩-玻尔兹曼常数, 是表面温度。 In this case, the luminosity of the modified sun is . The radius of the modified sun is the same as the actual sun, . 在这种情况下,修改后的太阳的亮度为 。修改后的太阳半径与实际太阳相同, 。 The Stefan-Boltzmann constant, . 斯特藩-玻尔兹曼常数, 。 Calculate the term : 计算术语 : Now, substitute this back into the equation: 现在,将此代回方程式中: Solve for : 求解 : Solve for : 求解 : Calculate : 计算 : Let , then 令 ,则 So, 因此, We know and , so is between 5 and 6. 我们知道 和 ,因此 介于 5 和 6 之间。 Alternatively, 或者, Let’s double-check the calculations. 让我们再次核对计算。 Power density of human: 人类功率密度: Volume of sun: 太阳体积: Power of modified sun: 功率修正太阳: Stefan-Boltzmann law: 斯特藩-玻尔兹曼定律: Calculate : 计算 : Let , then 令 ,则 The result is around . This is not among the options. Let’s review the assumptions and calculations. 结果是大约 。这个选项不在其中。让我们回顾一下假设和计算。 Assumption: The sun radiates as a black body, which is reasonable for estimating surface temperature. 假设:太阳以黑体的形式辐射,这对于估计表面温度是合理的。 |
Consider potential errors in calculations. 考虑计算中的潜在错误。 Let’s try a simpler approach by setting up ratios. 让我们通过设置比率尝试一种更简单的方法。 The power per unit volume of the modified sun is equal to that of a human. 修改后的太阳单位体积功率等于人类。 We also know . 我们同样知道 。 Substitute the values: 替换值: Calculate : 计算 : Let , then 令 ,则 This is close to option E: . 这接近选项 E: 。 Let’s re-check the units and calculations carefully. 让我们仔细复查单位和计算。 Power density of human: 人类功率密度: The formula derived was . 推导出的公式为 。 Units check: 单元检查: The units match for . 单元与 匹配。 Numerical calculation check: 数值计算校验: Numerator: 分子: Denominator: 分母: Ratio: 比: Let’s use logarithms for precision. 让我们使用对数来提高精度。 This is very close to . 这非常接近 。 Possible sources of minor discrepancies: rounding errors, precision of constants. 可能存在微小差异的来源:舍入误差、常数的精度。 Let’s verify the formula derivation. 让我们验证公式的推导。 Power per unit volume = 功率密度 = |
If the modified sun has the power density of a human: 如果修改后的太阳具有人类的功率密度: The formula derivation seems correct. 公式推导似乎是正确的。 Let’s use approximate values to check the order of magnitude. 让我们使用近似值来检查数量级。 : , . So, between 7 and 8. : , 。因此,在 7 到 8 之间。 The order of magnitude seems consistent with the detailed calculation. 量级似乎与详细计算一致。 The problem statement is straightforward and doesn’t seem to have any hidden complexities. The assumptions made are standard for such problems. 问题描述简单明了,似乎没有隐藏的复杂性。所做出的假设对于此类问题来说是标准的。 Consider the precision of the given values. 考虑给定值的精度。 Power of human: 100 W (exact) 人类功率:100 W(精确值) Volume of human: 66400 cm3 (exact) 人类体积:66400 cm³ 3 (精确) The result obtained is very close to option E. Given the nature of multiple-choice questions, the closest option is likely the intended answer. 得到的结果非常接近选项 E。考虑到多选题的性质,最接近的选项很可能是正确答案。 If there were any misinterpretations of the question, it would likely be in the setup. However, the phrase ”If the sun were the same size, but it produced the same amount of power per unit volume as a human” seems clear. 如果对问题的理解有任何误解,很可能出现在设置上。然而,“如果太阳的大小相同,但单位体积产生的能量与人类相同”这个说法似乎是清晰的。 The surface temperature is calculated using the Stefan-Boltzmann law, assuming black-body radiation. 表面温度是根据斯蒂芬-玻尔兹曼定律计算的,假设为黑体辐射。 Double-check the provided options: 仔细核对提供的选项: A: B: C: D: E: Our calculated value is approximately , which is closest to . 我们的计算值约为 ,最接近于 。 If we used more precise values for the constants, the result might be slightly different, but likely not enough to shift to another option. 如果我们使用更精确的常数值,结果可能会有所不同,但可能不足以切换到另一个选项。 The fun fact mentioned in the question implies that the power density of a human is higher than that of the sun. Let’s verify this. 问题中提到的有趣事实表明,人的功率密度高于太阳。让我们来验证这一点。 Power output of the sun: 太阳功率输出: Volume of the sun: 太阳体积: Power density of the sun: 太阳功率密度: Power density of human: 人类功率密度: Indeed, the power density of a human is significantly higher than that of the sun. 确实,人的功率密度远高于太阳。 Final check of the calculation steps and logic. The approach of equating the power density of the modified sun to that of a human and then using the Stefan-Boltzmann law seems correct. 最终检查计算步骤和逻辑。将修改后的太阳功率密度等同于人类,然后使用斯蒂芬-玻尔兹曼定律的方法似乎是正确的。 |
Solution: 解决方案:
The power produced by an average human is , and the volume of an average human is . 一个人平均产生的功率为 ,一个人的平均体积为 。 The power density of a human is . 人类功率密度为 。 If the sun were the same size but produced the same amount of power per unit volume as a human, its power density would be . 如果太阳的体积与人类相同,但单位体积产生的能量与人类相同,其功率密度将是 。 The radius of the sun is . 太阳的半径为 。 The volume of the sun is . 太阳的体积为 。 The power density of the modified sun is given by . 修改后的太阳功率密度由 给出。 We also know that , where is the surface temperature of the modified sun. 我们同样知道 ,其中 是修改后的太阳的表面温度。 Equating the power density: 等价功率密度: Solving for : 求解 : Substitute the values: 替换值: Solving for : 求解 : Using logarithms: 使用对数: Let’s redo the exponent calculation: . 让我们重新进行指数计算: 。 - error in previous step. - 上一步错误。 The surface temperature would be approximately , which is closest to . 地表温度大约为 ,最接近于 。 Final Answer: The final answer is 最终答案:最终答案为 |
Appendix E Test-time scaling details
E.1 Sequential scaling ablations
Token instructions (left):

<|im_start|>user
What is the answer to Life, the Universe and Everything? Think for up to 2048 tokens.<|im_start|>assistant
<|im_start|>think
Let me break down this question into the three parts it is asking for: 1) Life 2) Universe 3) Everything
Let me start with life…
<|im_start|>answer
The answer is…

Step instructions (right):

<|im_start|>user
What is the answer to Life, the Universe and Everything? Think for up to 64 steps.<|im_start|>assistant
<|im_start|>64 steps left
Let me break down this question into the three parts it is asking for: 1) Life 2) Universe 3) Everything
<|im_start|>63 steps left
Let me start with life…
<|im_start|>answer
The answer is…
Figure 10: Data formats with token and step instructions for controlling test-time compute. We train our model only on the reasoning trace and the answer.
Table 12: Scaling thinking time via token-conditional control. All metrics are averaged over the 30 questions of AIME24.
| Tokens instructed | 1024 | 2048 | 4096 | 8192 | 16384 |
| --- | --- | --- | --- | --- | --- |
| No intervention at test-time | | | | | |
| Thinking tokens | 7939 | 7158 | 8263 | 7108 | 7500 |
| Answer tokens | 689 | 669 | 659 | 722 | 724 |
| AIME24 | 26.7 | 30.0 | 33.3 | 33.3 | 40.0 |
| Forcing end of thinking when token budget is reached | | | | | |
| Thinking tokens | 1024 | 2048 | 4031 | 5664 | 6330 |
| Answer tokens | 15 | 15 | 142 | 722 | 691 |
| AIME24 | 3.3 | 30.0 | 33.3 | 33.3 | 40.0 |
Table 13: Scaling thinking time via step-conditional control. All metrics are averaged over the 30 questions of AIME24. Token counts ignore the thinking and step delimiters.
| Steps instructed | 16 | 32 | 64 | 128 | 256 |
| --- | --- | --- | --- | --- | --- |
| No intervention at test-time | | | | | |
| Steps used | 123 | 90 | 80 | 82 | 136 |
| Tokens per step | 60 | 70 | 69 | 66 | 56 |
| Thinking tokens | 7252 | 6277 | 5396 | 5552 | 7551 |
| Answer tokens | 665 | 653 | 735 | 777 | 754 |
| AIME24 | 33.3 | 23.3 | 33.3 | 36.7 | 33.3 |
| Forcing end of thinking when 0 steps are reached | | | | | |
| Steps used | 16 | 32 | 59 | 78 | 136 |
| Tokens per step | 96 | 94 | 80 | 70 | 56 |
| Thinking tokens | 1517 | 2963 | 4636 | 5409 | 7551 |
| Answer tokens | 1111 | 788 | 799 | 794 | 754 |
| AIME24 | 23.3 | 23.3 | 33.3 | 36.7 | 33.3 |
Table 14: Scaling thinking time via class-conditional control. We report "accuracy / average thinking tokens per sample"; higher accuracy with fewer thinking tokens (lower reasoning cost) is better.
| Prompt appended to the question after two newlines | AIME24 | MATH500 | GPQA |
| --- | --- | --- | --- |
| Answer after a short amount of thinking. Do not spend excessive time double-checking your work. | 30.0% / 8033 | 90.4% / 2537 | 56.6% / 4177 |
| Answer after a long amount of thinking. If you feel like you are finished early, spend the extra time trying to double-check your work until you are absolutely sure that you have the correct answer. | 36.7% / 9651 | 91.4% / 3875 | 51.0% / 4827 |
| Without generic prompt appending | 50.0% / 6109 | 93.0% / 3298 | 57.6% / 3510 |
Token-conditional control
One general approach is to simply tell the model in the prompt precisely how many tokens it should generate. Ideally, the model can keep track of its token count and adjust its generation to finish within the desired limit. We experiment with this approach by training a model with token instructions using the format in Figure 10 (left). We bucket the lengths of the reasoning traces from our 1,000 training examples into powers of two (rounded upwards) and add a corresponding instruction to the user prompt. For example, if the instruction says "Think for up to 2048 tokens", then the reasoning trace has anywhere between 1024 and 2048 tokens. In Table 12, we show that, after training, the model hardly follows the token instruction. It does sometimes generate more tokens when given a higher limit, but it often overshoots the limit. This may not be unique to our model, as prior work suggests that OpenAI o1-mini also cannot follow token instructions (Zhang & Chen, 2024). To prevent exceeding the limit, we test budget forcing, which ends the thinking once the limit is reached. This leads to perfect control (Table 12, lower). With budget forcing, the scaling trend is also clearer, as the model can no longer overshoot the limit when given a small thinking budget. This leads to better test-time scaling values for Token Prompting + budget forcing in Table 3. To compute the Control metric reported in Table 3 for the token-conditional control variants, we divide the number of runs in Table 12 whose thinking tokens stay below the instructed upper limit by the total number of evaluations (2/5 without intervention; 5/5 with intervention).
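To make the budget-forcing variant concrete, the following is a minimal sketch of how the hard cap could be enforced at decoding time. It is written against a generic `generate_fn(prompt, max_new_tokens)` callable rather than any specific inference API; the helper name `budget_forced_generate` is ours and not part of the released code, and the delimiters follow the format in Figure 10 (left).

```python
# Minimal sketch of token-conditional control with budget forcing.
# `generate_fn(prompt, max_new_tokens)` stands in for any decoding backend and
# is assumed to return the generated text; it is a placeholder, not a real API.

THINK = "<|im_start|>think"
ANSWER = "<|im_start|>answer"

def budget_forced_generate(generate_fn, question: str, budget: int) -> str:
    # State the budget in the prompt, mirroring the token-instruction format
    # of Figure 10 (left).
    prompt = (
        f"<|im_start|>user\n{question} Think for up to {budget} tokens."
        f"<|im_start|>assistant\n{THINK}\n"
    )
    # Decode the thinking phase with a hard cap so the model can never
    # overshoot the instructed budget (lower half of Table 12).
    thinking = generate_fn(prompt, max_new_tokens=budget)
    # Force the transition to answering by appending the answer delimiter,
    # regardless of whether the model tried to keep thinking.
    return generate_fn(prompt + thinking + f"\n{ANSWER}\n", max_new_tokens=1024)
```

The key design choice is that the thinking phase can never exceed the instructed budget, which is what produces the clean scaling behavior in the lower half of Table 12.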
Step-conditional control
Token instructions fail because current models cannot count tokens. To accommodate this lack of capability, we experiment with making the counting more coarse-grained. We partition the reasoning traces into steps and ask the model to think for a specific number of steps rather than tokens. We split our reasoning traces on double newlines into steps, which we find act as intuitive separators based on manual inspection of samples. We bucket our training samples into powers of 2 depending on their number of steps and add a corresponding step instruction following the format in Figure 10 (right). This format is based on early experiments, where we found the model to be more likely to adhere to the step limit when counting down ("3 steps left…2 steps left") rather than counting up ("Step 2…Step 3…"). This is likely because when counting down, the final step is always 1, which acts as a strong prior for the model to finish its generation. When counting up, the final step before the answer varies; thus, if the model does not remember the original step instruction, it may fail to stop. We conclude the following from our results in Table 13: (1) The model still struggles to adhere to the step limit. It sometimes simply continues counting into negative steps, e.g. "-1 steps left". To solve this issue, we automatically stop the thinking process once 0 steps are reached and then force the model to transition to answering mode by appending the answer token delimiter (§3). This leads to perfect step adherence (lower half of Table 13), yet problems remain. (2) The model compensates for fewer steps by making each step longer. For example, when forced to use up to 16 steps rather than 256, the model generates an average of 96 tokens per step versus 56. Despite this issue, more steps still clearly correlate with more total thinking tokens in Table 13 and with better performance, leading to a positive scaling slope. (3) Step instructions are more costly than other methods. The step delimiters require around 6 tokens each, which for 64 steps adds up to a total of around 380 tokens. Even when ignoring the step delimiters in the token counts, as in Table 13, the model still requires 7551 thinking tokens on average to achieve only 33.3% on AIME24. To compute the Control metric reported in Table 3 for the step-conditional control variants, we first set 100 tokens as an upper limit per step and multiply this number by the number of steps instructed to arrive at a proxy total token limit, e.g. 1600 for 16 instructed steps. We then check whether the thinking tokens in Table 13 fit within the respective limit for each evaluation run (3/5 without intervention; 5/5 with intervention). For the model in Figure 7, we use a model with step-conditional control trained on an earlier version of our data and evaluated with an earlier version of our evaluation codebase.
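For illustration, here is a small sketch of how the countdown step format could be constructed from a raw reasoning trace. The function name and exact delimiter strings are our assumptions for the example, not the released training code; the splitting on double newlines and the power-of-two bucketing follow the description above.

```python
# Sketch of building the step-conditional training format of Figure 10 (right).
# `to_step_format` is a hypothetical helper; only the formatting logic matters.
import math

def to_step_format(question: str, trace: str, answer: str) -> str:
    # Split the reasoning trace on double newlines, which act as step separators.
    steps = [s for s in trace.split("\n\n") if s.strip()]
    # Bucket the step count into the next power of two, e.g. 50 steps -> 64.
    bucket = 2 ** math.ceil(math.log2(max(len(steps), 1)))
    parts = [
        f"<|im_start|>user\n{question} Think for up to {bucket} steps."
        f"<|im_start|>assistant"
    ]
    for i, step in enumerate(steps):
        # Count down ("64 steps left", "63 steps left", ...) so the delimiter
        # shrinks toward 1, giving the model a strong prior to stop.
        parts.append(f"<|im_start|>{bucket - i} steps left\n{step}")
    parts.append(f"<|im_start|>answer\n{answer}")
    return "\n".join(parts)
```

Counting down rather than up means the delimiter immediately before the answer is always small, which is the prior discussed above.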
Class-conditional control
OpenAI exposes test-time compute control to users via a "reasoning_effort" API parameter with three possible settings: low, medium, and high (https://github.com/openai/openai-python/blob/44d6210f101abedeb2dd68507fcffcb329df70ea/src/openai/types/chat/completion_create_params.py#L172). The OpenAI documentation also states that "Reducing reasoning effort can result in faster responses and fewer tokens used on reasoning in a response," suggesting that they cannot control test-time compute with guarantees. Thus, OpenAI may simply adjust the prompt or system instruction depending on the reasoning effort desired. In Table 14, we show that separate prompts for short and long thinking allow us to control thinking time to some extent: prompting the model to think for longer leads to longer thinking. However, it does not reliably improve performance, and control is not precise. The current degree of adherence may suffice when there are only three classes, but it might not scale to finer-grained classes. To compute the Control metric reported in Table 3 for this method, we expect that prompting the model to think for a short time in Table 14 should produce fewer thinking tokens than the AIME24 default, while the long prompt should produce more. As the short prompt yields 8033 thinking tokens (more than the 6109 of the default) while the long prompt yields 9651 (more, as expected), one out of the two expectations holds, thus Control is 50%.
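For concreteness, the 50% Control value can be reproduced directly from the AIME24 thinking-token averages in Table 14; the short snippet below only restates that arithmetic and assumes no additional API.

```python
# The Control value for class-conditional prompting, restated from Table 14
# (average AIME24 thinking tokens per prompt variant).
default_tokens = 6109  # without generic prompt appending
short_tokens = 8033    # "short amount of thinking" prompt
long_tokens = 9651     # "long amount of thinking" prompt

expectations = [
    short_tokens < default_tokens,  # expected fewer tokens -> fails (8033 > 6109)
    long_tokens > default_tokens,   # expected more tokens  -> holds (9651 > 6109)
]
control = sum(expectations) / len(expectations)
print(f"Control = {control:.0%}")  # 50%
```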
E.2 Examples for rejection sampling ablation
Table 15: Rejection-sampled generations from an early version of s1-32B on AIME24. The black text is the question and the yellow text is the reference answer. Conditioning only on the question, we generate the reasoning trace (light blue) and response (blue) with s1-32B at temperature 1. We keep generating until the reasoning trace of the first example has fewer than 4000 tokens and that of the second has fewer than 8000 tokens. On average, each example requires 199 attempts before the thinking fits within its token limit.
Let and be points in the coordinate plane. Let be the family of segments of unit length lying in the first quadrant with on the -axis and on the -axis. There is a unique point on , distinct from and , that does not belong to any segment from other than . Then , where and are relatively prime positive integers. Find .

23
[Correct generation with 3981 thinking tokens and 1055 answer tokens] The problem asks for a point on the segment that is not contained in any other segment from the family , except for itself. The points form a triangle in the first quadrant. The segment connects and . The equation of the line containing can be found using the two points: Slope Using the point-slope form with point : The segment lies in the first quadrant and connects to . The family consists of segments of unit length, where is on the -axis and is on the -axis. Let and . The length of is . So . Since is on the positive -axis and is on the positive -axis, we have and . The segment connects and . The equation of the line containing is . The region covered by the family is the area bounded by the lines containing these segments. The endpoints and vary such that . Consider the parametrization , , where . The segment connects and . The equation of the line is . We are looking for a unique point on , distinct from and , that does not belong to any segment from other than . Let be a point on . We can parametrize the points on as for . . Since is distinct from and , we have . Suppose the point lies on some segment , where and . The point satisfies the equation .
Plugging in the coordinates of : Multiply by 2: We are looking for a point on such that the only segment from containing is itself. The segment is part of the family . For , and . Here , . . The line segment connects and . The equation of the line is , which simplifies to , or . This matches the equation . Consider the envelope of the family of lines . Differentiating with respect to : From the line equation: Substitute Substitute back into the line equation: The envelope of the family is the astroid . The segments from are tangent to this astroid. The point on that we are looking for is the point where is tangent to the astroid. The equation of is . Substitute into the astroid equation: Let and . The astroid is . The point lies on both the line and the astroid. Differentiating the astroid equation implicitly:
The slope of the tangent to the astroid at a point is . The slope of the line is . So, at the point of tangency, . The point lies on the line and satisfies . So . Substitute into the line equation: Then . The point is . We need to check if this point lies on the segment . The point is of the form . . . So lies on . We need to make sure , which is true for . Now, we need to check that this point does not belong to any other segment from . The point lies on the astroid. The segment is tangent to the astroid at . Any other segment from that contains would have as an interior point, not a tangent point. Consider the parametrization of the family . A point belongs to some segment in iff there exists such that . For the point , we have: We know that lies on , which corresponds to some specific segment in . The problem statement assumes the existence and uniqueness of such a point : the unique point on , distinct from and , that does not belong to any segment from other than . This means that if lies on another segment , then . The point is the point of tangency of with the astroid. Now we need to find . . Here and . and are relatively prime. . Final check of the reasoning: The family consists of unit-length segments with endpoints on the axes. The envelope of this family is the astroid . The segments from are tangent to the astroid. The segment is also a member of the family . We are looking for a point on that lies on the astroid. This point is the point of tangency of with the astroid.
If a point on lies on the astroid, it belongs to the tangent segment at that point. Since is the tangent segment, this point does not belong to any other segment from . Consider the case when the segment intersects the astroid at two points. If the segment is not tangent to the astroid, then there are two points of intersection. Let the points of intersection be and . If lies on , then the tangent to the astroid at is the segment . If lies on , then the tangent to the astroid at is the segment . The slope of is . We found the point on the astroid with tangent slope . The problem statement guarantees the uniqueness of such a point . The point must be interior to , so . The parameter is in this range. The point is distinct from and . If , then , . If , then , . The point of tangency is the unique point on (excluding endpoints) that lies on the astroid.
Solution:

Let the family of segments be defined by segments of unit length with on the positive -axis and on the positive -axis. Let and . The length condition gives . The segment lies on the line . Using the parametrization , with , the equation of the line is . The envelope of this family of lines is the astroid . The segments from are tangent to this astroid. The segment connects and . The equation of the line containing is . This segment is part of the family . We are looking for a unique point on , distinct from and , that does not belong to any segment from other than . This means is the point where is tangent to the astroid. The slope of the tangent to the astroid at a point is . The slope of is . Setting the slopes equal: .