
The Three-Stage Scaling Laws of Large Language Models

Original
Author: 立委
Published: 2025-03-03 15:07:37
Column: deepseek · 腾讯云TVP

Mr. Huang's stage backdrop features three S-curves, illustrating the scaling relay race across the three stages of large language models and evoking the persistence of the Chinese fable of the legendary Old Man Who Moved Mountains.

We know that large language models have three stages: pre-training, post-training, and online inference. The biggest change in recent months is the emerging community consensus, echoing Ilya Sutskever's claim, that the pre-training era has ended: the famous empirical scaling laws for pre-training appear to have plateaued. This has led to the rise of reasoning models (OpenAI's O series and Deepseek's R series, among others), which emphasize investment in chain-of-thought (CoT) reinforcement learning during post-training and in online inference time (so-called "test time compute"). These reasoning models have indeed demonstrated unprecedented achievements in mathematics, coding, and creative writing.

The scaling of post-training for reasoning models has just begun, and it's unclear how far it can go. But we can gradually see this trajectory from O1 evolving to O3, and from R1 to the reportedly soon-to-be-released R2 and their enhanced capabilities. What about the test time scaling in the final inference stage?

Recently, I spoke with my old friend Junlin, one of the earliest advocates for the three S-curves of scaling in China. I mentioned that I hadn't seen any real test time scaling because no one can control the model's test time compute—how much time/computing power it uses and when it completes assigned tasks is determined by the model itself, so test time doesn't seem "scalable." Junlin agreed that this is currently the case.

These past few days, while playing with large models' deep research capabilities, I've gradually glimpsed some possibilities for test time scaling. The answer is emerging. Fundamentally, the question is whether there is a curve showing that giving a query or topic more thinking and response time yields better results. Specifically, O3-mini offers a button called "deep research" that users can choose to press or not. Without it, your question still follows a chain of thought, because you initially selected the reinforced O3 reasoning model; that process typically takes a minute or two. If you also press the deep research button, however, the total reasoning time is extended several-fold, potentially lasting up to 10 minutes. This shows that even with the same model, different inference times produce different results. It should count as a precursor to test time scaling.

How does it work? How can users invest different amounts of test time compute, based on the difficulty of their topic and their tolerance for waiting, to generate different results for the same query? It turns out to use an agent-like approach: the functionality behind the deep research button is essentially a research reasoning agent. Agents are an LLM-native feature layered on top that requires no change to the model itself; what changes is the interaction pattern during the inference stage. Currently this interaction is very simple, just one round, but this direction of test time scaling is expected to explore longer and richer interactions with users to help maximize the effect of test time compute.

If test time compute scaling doesn't quickly hit bottlenecks, we can imagine future deep research interacting with users over extended periods to complete highly complex projects. Perhaps we're moving beyond minute-level reasoning time: we can entirely envision large models investing hours or even days in challenging tasks, such as projects that would take human researchers months or years, or research projects humans cannot accomplish at all.

The current deep research flow is very simple. After receiving the user's prompt/query, it immediately breaks down the problem and asks the user five or six short questions to confirm the required sources, breadth, depth, and other considerations for the research. After receiving user feedback, the model takes in any updated materials and uses search to collect more relevant information. Then, following the decomposed tasks and the plan confirmed with the user, it analyzes each source and finally synthesizes everything into a research report. This naturally extends the required reasoning time: the task is no longer singular, and the materials aren't limited to knowledge already digested within the model but include sources retrieved in real time, and processing all of this takes time.
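The workflow above (decompose, confirm scope, search, analyze per source, synthesize) can be sketched as a simple agent loop. This is a minimal, hypothetical illustration, not OpenAI's actual implementation: `llm` and `web_search` are stand-in stubs for a reasoning-model API and a retrieval API, and the subtask split is invented for the example. The point is only to show where the extra inference time comes from, since each subtask, search, and per-source analysis adds model calls.

```python
# Minimal sketch of a deep-research-style agent loop.
# `llm` and `web_search` are hypothetical stand-ins, not a real API.

def llm(prompt: str) -> str:
    """Stand-in for a reasoning-model call; a real agent would hit an LLM API."""
    return f"[model output for: {prompt[:50]}]"

def web_search(query: str) -> list:
    """Stand-in for real-time retrieval; returns placeholder snippets."""
    return [f"snippet about '{query}' #{i}" for i in range(3)]

def deep_research(user_query: str, user_feedback: str) -> str:
    # 1. Decompose the topic and confirm scope with the user (one round today).
    questions = llm(f"Ask 5-6 clarifying questions about: {user_query}")
    plan = llm(f"Plan subtasks for '{user_query}' given clarifications "
               f"({questions}) and user feedback: {user_feedback}")
    # 2. Collect fresh sources per subtask, beyond in-model knowledge.
    #    The three-way split below is purely illustrative.
    sources = []
    for subtask in ["background", "current state", "open problems"]:
        sources.extend(web_search(f"{user_query}: {subtask}"))
    # 3. Analyze each source, then synthesize a report. Every extra source
    #    means another model call: this is where test time compute scales.
    analyses = [llm(f"Analyze source following plan ({plan}): {s}")
                for s in sources]
    return llm("Synthesize a research report from: " + " | ".join(analyses))
```

With real APIs substituted in, the number of subtasks and sources becomes a user-adjustable knob for test time compute, which is the "scalable" part the article points to.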

For both reinforcement learning in the post-training stage of reasoning models and the investment in test time compute during the inference stage, the scaling journey has just begun. Let's hope these two S-curves can continue to rise steadily for some time, allowing the scaling relay race to help us progress continuously on the path toward artificial general intelligence (AGI) and eventually artificial superintelligence (ASI).

【Related】

The three-stage scaling laws relay race of large models (大模型三阶段的 scaling laws 接力赛)

Zhang Junlin: Scaling Law through the lens of Deepseek R1 (张俊林:从Deepseek R1看Scaling Law)

Originality statement: This article is published on the Tencent Cloud Developer Community with the author's authorization; reproduction without permission is prohibited.

For infringement concerns, contact cloudcommunity@tencent.com for removal.

