Understanding DeepSeek R1

DeepSeek-R1 is an open-weights language model built on DeepSeek-V3-Base that’s been making waves in the AI community. Not only does it match or even surpass OpenAI’s o1 model on many benchmarks, but it also comes with fully MIT-licensed weights. This marks it as the first non-OpenAI/Google model to deliver strong reasoning capabilities in an open and accessible way.

What makes DeepSeek-R1 especially exciting is its transparency. Unlike the less-open approaches from some industry leaders, DeepSeek has published a detailed training methodology in their paper. The model is also remarkably cost-effective, with input tokens costing just $0.14-0.55 per million (vs o1’s $15) and output tokens at $2.19 per million (vs o1’s $60).

Until roughly GPT-4, the conventional wisdom was that better models required more data and compute. While that’s still true, models like o1 and R1 demonstrate an alternative: inference-time scaling through reasoning.

The Essentials

The DeepSeek-R1 paper presented several models, but the main ones among them are R1 and R1-Zero. Following these are a series of distilled models that, while interesting, I won’t discuss here.

DeepSeek-R1 relies on two main ideas:

1. A multi-stage pipeline where a small set of cold-start data kickstarts the model, followed by large-scale RL.

2. Group Relative Policy Optimization (GRPO), a reinforcement learning method that relies on comparing multiple model outputs per prompt to avoid the need for a separate critic.

R1 and R1-Zero are both reasoning models. This essentially means they do Chain-of-Thought before answering. For the R1 series of models, this takes the form of thinking within a <think> tag before answering with a final summary.
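
As a concrete illustration, here is a minimal sketch of how you might separate the reasoning from the final answer in an R1-style response, assuming the reasoning is wrapped in <think>…</think> tags (the helper name is mine):

```python
import re

def split_reasoning(response: str) -> tuple[str, str]:
    """Split an R1-style response into (chain-of-thought, final answer),
    assuming the reasoning is wrapped in <think>...</think> tags."""
    match = re.search(r"<think>(.*?)</think>", response, re.DOTALL)
    thinking = match.group(1).strip() if match else ""
    answer = re.sub(r"<think>.*?</think>", "", response, flags=re.DOTALL).strip()
    return thinking, answer

thinking, answer = split_reasoning("<think>2 + 2 = 4, so the sum is 4.</think>The answer is 4.")
print(answer)  # -> "The answer is 4."
```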

R1-Zero vs R1

R1-Zero applies Reinforcement Learning (RL) directly to DeepSeek-V3-Base with no supervised fine-tuning (SFT). RL is used to optimize the model’s policy to maximize reward. R1-Zero achieves excellent accuracy but sometimes produces confusing outputs, such as mixing multiple languages in a single response. R1 fixes that by incorporating limited supervised fine-tuning and multiple RL passes, which improves both accuracy and readability.

It is interesting how some languages may express certain ideas better, which leads the model to choose the most expressive language for the task.

Training Pipeline

The training pipeline that DeepSeek published in the R1 paper is immensely interesting. It shows how they developed such strong reasoning models, and what you can expect from each stage. This includes the problems that the resulting models from each stage have, and how they solved them in the next stage.

It’s interesting that their training pipeline differs from the usual one:

The usual training approach: Pretraining on a large dataset (train to predict the next word) to get the base model → supervised fine-tuning → preference tuning via RLHF
R1-Zero: Pretrained → RL
R1: Pretrained → Multi-stage training pipeline with multiple SFT and RL stages

1. Cold-Start Fine-Tuning: Fine-tune DeepSeek-V3-Base on a few thousand Chain-of-Thought (CoT) samples to ensure the RL process has a decent starting point. This provides a good model to start RL from.
2. First RL Stage: Apply GRPO with rule-based rewards to improve reasoning correctness and formatting (such as forcing chain-of-thought into thinking tags). When they were close to convergence in the RL process, they moved to the next step. The result of this step is a strong reasoning model but with weak general abilities, e.g., poor formatting and language mixing.
3. Rejection Sampling + general data: Create new SFT data through rejection sampling on the RL checkpoint (from step 2), combined with supervised data from the DeepSeek-V3-Base model. They collected around 600k high-quality reasoning samples (a rough sketch of rejection sampling follows below).
4. Second Fine-Tuning: Fine-tune DeepSeek-V3-Base again on 800k total samples (600k reasoning + 200k general tasks) for broader capabilities. This step resulted in a strong reasoning model with general capabilities.
5. Second RL Stage: Add more reward signals (helpfulness, harmlessness) to refine the final model, in addition to the reasoning rewards. The result is DeepSeek-R1.

They also did model distillation for several Qwen and Llama models on the reasoning traces to obtain distilled-R1 models.
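
The rejection sampling step above can be pictured with a minimal sketch. The generate and is_correct helpers below are hypothetical stand-ins for the RL checkpoint and the rule-based verifier, and the filtering is far simpler than what the paper actually does:

```python
def rejection_sample(prompts_with_refs, generate, is_correct, samples_per_prompt=16):
    """Toy rejection sampling: keep only completions whose answer the verifier accepts.

    generate(prompt) -> str and is_correct(completion, reference) -> bool are
    hypothetical stand-ins for the RL checkpoint and the rule-based checker.
    """
    sft_data = []
    for prompt, reference in prompts_with_refs:
        candidates = [generate(prompt) for _ in range(samples_per_prompt)]
        accepted = [c for c in candidates if is_correct(c, reference)]
        if accepted:
            # keep one verified trace per prompt; the paper also filters for readability etc.
            sft_data.append({"prompt": prompt, "completion": accepted[0]})
    return sft_data
```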

Model distillation is a technique where you use a teacher model to improve a student model by generating training data for the student model. The teacher is typically a larger model than the student.
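
A minimal sketch of that data-generation step, assuming a hypothetical teacher.generate(prompt) that returns a full reasoning trace:

```python
import json

def build_distillation_set(teacher, prompts, out_path="distill_sft.jsonl"):
    """Write (prompt, reasoning trace) pairs produced by the teacher to a JSONL file;
    the student is then fine-tuned on this file with plain SFT."""
    with open(out_path, "w") as f:
        for prompt in prompts:
            trace = teacher.generate(prompt)  # hypothetical teacher API
            f.write(json.dumps({"prompt": prompt, "completion": trace}) + "\n")
```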

Group Relative Policy Optimization (GRPO)

The basic idea behind using reinforcement learning for LLMs is to fine-tune the model’s policy so that it naturally produces more accurate and useful answers. They used a reward system that checks not only for correctness but also for proper formatting and language consistency, so the model gradually learns to favor responses that meet these quality criteria.

In the paper, they encourage the R1 model to generate chain-of-thought reasoning through RL training with GRPO. Rather than adding a separate module at inference time, the training process itself pushes the model to produce detailed, step-by-step outputs, making the chain-of-thought an emergent behavior of the optimized policy.

What makes their approach particularly interesting is its reliance on straightforward, rule-based reward functions. Instead of depending on expensive external models or human-graded examples as in traditional RLHF, the RL used for R1 uses simple criteria: it may give a higher reward if the answer is correct, if it follows the expected formatting, and if the language of the answer matches that of the prompt. Not relying on a reward model also means you don’t have to spend time and effort training it, and it doesn’t take memory and compute away from your main model.
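
Here is a minimal sketch of what such rule-based rewards could look like. The tag names, the exact-match check, and the weighting are illustrative assumptions, not the paper’s exact rules:

```python
import re

def format_reward(completion: str) -> float:
    # reward completions that wrap their reasoning in <think>...</think>
    return 1.0 if re.search(r"<think>.*?</think>", completion, re.DOTALL) else 0.0

def accuracy_reward(completion: str, reference: str) -> float:
    # crude correctness check: exact match on the text after the thinking block
    answer = re.sub(r"<think>.*?</think>", "", completion, flags=re.DOTALL).strip()
    return 1.0 if answer == reference.strip() else 0.0

def total_reward(completion: str, reference: str) -> float:
    # combined scalar reward; the 0.5 weighting is an arbitrary choice for this sketch
    return accuracy_reward(completion, reference) + 0.5 * format_reward(completion)
```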

GRPO was introduced in the DeepSeekMath paper. Here’s how GRPO works:

1. For each input prompt, the model generates a group of different responses.
2. Each response receives a scalar reward based on factors like accuracy, formatting, and language consistency.
3. Rewards are adjusted relative to the group’s average, essentially measuring how much better each response is compared to the others.
4. The model updates its policy slightly to favor responses with higher relative rewards. It only makes small adjustments, using techniques like clipping and a KL penalty, to ensure the policy doesn’t stray too far from its original behavior.
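
Step 3 is the “group relative” part. Here is a minimal sketch of that advantage computation, standardizing each reward against its own group (written in PyTorch, which the paper does not prescribe):

```python
import torch

def grpo_advantages(rewards: torch.Tensor, eps: float = 1e-4) -> torch.Tensor:
    """Group-relative advantages for GRPO.

    rewards: shape (num_prompts, group_size), one scalar reward per sampled response.
    Each response is scored against the mean/std of its own group, so no learned
    critic is needed to estimate a baseline.
    """
    mean = rewards.mean(dim=-1, keepdim=True)
    std = rewards.std(dim=-1, keepdim=True)
    return (rewards - mean) / (std + eps)

# e.g. 2 prompts, 4 sampled responses each
advantages = grpo_advantages(torch.tensor([[1.0, 0.0, 1.5, 0.0],
                                           [0.0, 0.0, 1.0, 0.0]]))
```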

A neat aspect of GRPO is its flexibility. You can use simple rule-based reward functions, for instance awarding a reward when the model correctly uses the expected syntax, to guide the training.

While DeepSeek used GRPO, you could use alternative methods instead (PPO or PRIME).

For those looking to dive deeper, Will Brown has written quite a nice implementation of training an LLM with RL using GRPO. GRPO has also already been added to the Transformer Reinforcement Learning (TRL) library, which is another good resource. Finally, Yannic Kilcher has a great video explaining GRPO by going through the DeepSeekMath paper.
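
For a sense of what the TRL route looks like, here is a rough sketch based on TRL’s GRPOTrainer. The model and dataset names are just examples, the reward function is a toy, and the exact API may differ between TRL versions:

```python
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer

dataset = load_dataset("trl-lib/tldr", split="train")

def reward_unique_chars(completions, **kwargs):
    # toy rule-based reward: favor completions with more unique characters
    return [float(len(set(c))) for c in completions]

trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-0.5B-Instruct",
    reward_funcs=reward_unique_chars,
    args=GRPOConfig(output_dir="grpo-demo"),
    train_dataset=dataset,
)
trainer.train()
```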

Is RL on LLMs the path to AGI?

As a final note on explaining DeepSeek-R1 and the methodologies they’ve presented in their paper, I want to highlight a passage from the DeepSeekMath paper, based on a point Yannic Kilcher made in his video.

These findings suggest that RL enhances the model’s overall performance by rendering the output distribution more robust, in other words, it seems that the improvement is attributed to boosting the correct response from TopK rather than the enhancement of fundamental capabilities.

In other words, RL fine-tuning tends to shape the output distribution so that the highest-probability outputs are more likely to be correct, even though the overall capability (as measured by the diversity of correct answers) is largely already present in the pretrained model.

This suggests that reinforcement learning on LLMs is more about refining and “shaping” the existing distribution of responses than about endowing the model with entirely new capabilities. Consequently, while RL techniques such as PPO and GRPO can produce substantial performance gains, there appears to be an inherent ceiling determined by the underlying model’s pretrained knowledge.
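
The usual way to quantify this is pass@k: sample many completions and check whether at least one out of k is correct. A minimal sketch of the standard unbiased estimator (the claim above roughly amounts to RL improving pass@1 far more than pass@k for large k):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate: given n sampled completions of which c are
    correct, the probability that at least one of k drawn samples is correct."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# e.g. 64 samples, 5 correct: pass@1 is low, pass@64 is 1.0
print(pass_at_k(64, 5, 1), pass_at_k(64, 5, 64))
```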

It is unclear to me how far RL will take us. Perhaps it will be the stepping stone to the next big milestone. I’m excited to see how it unfolds!

Running DeepSeek-R1

I’ve used DeepSeek-R1 via the official chat interface for various problems, which it seems to solve well enough. The additional search functionality makes it even nicer to use.

Interestingly, o3-mini(-high) was released as I was writing this post. From my initial testing, R1 seems stronger at math than o3-mini.

I also rented a single H100 via Lambda Labs for $2/h (26 CPU cores, 214.7 GB RAM, 1.1 TB SSD) to run some experiments. The main goal was to see how the model would perform when deployed on a single H100 GPU, not to extensively test the model’s capabilities.

671B via llama.cpp

DeepSeek-R1 1.58-bit (UD-IQ1_S) quantized model by Unsloth, with a 4-bit quantized KV-cache.
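
A rough sketch of loading such a quant with llama-cpp-python. The shard file name, layer-offload count, and prompt template are illustrative assumptions, and the memory requirements are still substantial even at 1.58 bits:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="DeepSeek-R1-UD-IQ1_S-00001-of-00003.gguf",  # first shard of the split GGUF (name assumed)
    n_gpu_layers=7,   # offload a handful of layers to the GPU, keep the rest in RAM
    n_ctx=8192,
)
out = llm("<｜User｜>What is 7 * 6?<｜Assistant｜>", max_tokens=512)
print(out["choices"][0]["text"])
```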