
Commit e9784cd

deepseek-r1 Notebook: Add Option for DeepSeek-R1-Distill-Qwen-32B Model (#2732)
- #2718: the 32B model is the one comparable in capability to OpenAI o1-mini (100B).
- Tested with OpenVINO 2024.6 on an ARL-S 285K CPU with 32 GB RAM and a 500 GB drive (200 GB for the vRAM swap file and 200 GB to store the model). Also tested successfully on an ARL-H 285H iGPU with 64 GB RAM.
- Tested with OpenVINO 2025.0 on the ARL-S 285K CPU as well.
1 parent: ca48d78

File tree

3 files changed: +15 -0 lines changed


notebooks/deepseek-r1/README.md (+1)

@@ -12,6 +12,7 @@ The tutorial supports different models, you can select one from the provided opt
 * **DeepSeek-R1-Distill-Qwen-1.5B** is the smallest DeepSeek-R1 distilled model based on [Qwen2.5-Math-1.5B](https://huggingface.co/Qwen/Qwen2.5-Math-1.5B). Despite its compact size, the model demonstrates strong capabilities in solving basic mathematical tasks; at the same time, its programming capabilities are limited. Check [model card](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) for more info.
 * **DeepSeek-R1-Distill-Qwen-7B** is a distilled model based on [Qwen-2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B). The model demonstrates a good balance between mathematical and factual reasoning but can be less suited for complex coding tasks. Check [model card](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B) for more info.
 * **DeepSeek-R1-Distill-Qwen-14B** is a distilled model based on [Qwen2.5-14B](https://huggingface.co/Qwen/Qwen2.5-14B) that has great competence in factual reasoning and solving complex mathematical tasks. Check [model card](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B) for more info.
+* **DeepSeek-R1-Distill-Qwen-32B** is a distilled model based on [Qwen2.5-32B](https://huggingface.co/Qwen/Qwen2.5-32B) with capability comparable to OpenAI o1-mini. Check [model card](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B) for more info. As the original model is about 65 GB, quantizing it to INT4 requires 32 GB of RAM plus a 200 GB swap file and another 200 GB of storage to save the models. The INT4-quantized model is about 16 GB in size and requires 32 GB of RAM for inference on CPU, or 64 GB of RAM on iGPU.

 Learn how to accelerate **DeepSeek-R1-Distill-Llama-8B** with **FastDraft** and the OpenVINO GenAI speculative decoding pipeline in this [notebook](../../supplementary_materials/notebooks/fastdraft-deepseek/fastdraft_deepseek.ipynb)

## Notebook Contents
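
The storage and RAM figures in the new README entry correspond to an INT4 weight-compression pass over the full-precision weights. As a rough illustration only, here is a minimal sketch of such an export using the optimum-intel Python API with the 32B settings this commit adds to llm_config.py; the output directory name is hypothetical, and the notebook's actual conversion cell may differ:

```python
from optimum.intel import OVModelForCausalLM, OVWeightQuantizationConfig

# INT4 settings matching the 32B entry added to llm_config.py below:
# symmetric quantization, group size 128, 100% of layers compressed.
quant_config = OVWeightQuantizationConfig(bits=4, sym=True, group_size=128, ratio=1.0)

# Downloading and exporting the ~65 GB FP weights is the memory-hungry step
# (32 GB of RAM plus a large swap file, per the note above).
model = OVModelForCausalLM.from_pretrained(
    "deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
    export=True,
    quantization_config=quant_config,
)
model.save_pretrained("DeepSeek-R1-Distill-Qwen-32B-int4-ov")  # hypothetical path
```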

notebooks/deepseek-r1/deepseek-r1.ipynb (+1)

@@ -110,6 +110,7 @@
 "* **DeepSeek-R1-Distill-Qwen-1.5B** is the smallest DeepSeek-R1 distilled model based on [Qwen2.5-Math-1.5B](https://huggingface.co/Qwen/Qwen2.5-Math-1.5B). Despite its compact size, the model demonstrates strong capabilities in solving basic mathematical tasks; at the same time, its programming capabilities are limited. Check [model card](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) for more info.\n",
 "* **DeepSeek-R1-Distill-Qwen-7B** is a distilled model based on [Qwen-2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B). The model demonstrates a good balance between mathematical and factual reasoning but can be less suited for complex coding tasks. Check [model card](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B) for more info.\n",
 "* **DeepSeek-R1-Distill-Qwen-14B** is a distilled model based on [Qwen2.5-14B](https://huggingface.co/Qwen/Qwen2.5-14B) that has great competence in factual reasoning and solving complex mathematical tasks. Check [model card](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B) for more info.\n",
+"* **DeepSeek-R1-Distill-Qwen-32B** is a distilled model based on [Qwen2.5-32B](https://huggingface.co/Qwen/Qwen2.5-32B) with capability comparable to OpenAI o1-mini. Check [model card](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-32B) for more info. As the original model is about 65 GB, quantizing it to INT4 requires 32 GB of RAM plus a 200 GB swap file and another 200 GB of storage to save the models. The INT4-quantized model is about 16 GB in size and requires 32 GB of RAM for inference on CPU, or 64 GB of RAM on iGPU.\n",
 "\n",
 "[Weight compression](https://docs.openvino.ai/2024/openvino-workflow/model-optimization-guide/weight-compression.html) is a technique for enhancing the efficiency of models, especially those with large memory requirements. This method reduces the model’s memory footprint, a crucial factor for Large Language Models (LLMs). We provide several options for model weight compression:\n",
 "\n",

notebooks/deepseek-r1/llm_config.py (+13)

@@ -40,6 +40,12 @@ def deepseek_partial_text_processor(partial_text, new_text):
         "system_prompt": DEFAULT_SYSTEM_PROMPT,
         "stop_strings": ["<|end▁of▁sentence|>", "<|User|>", "</User|>", "<|end_of_sentence|>", "</|"],
     },
+    "DeepSeek-R1-Distill-Qwen-32B": {
+        "model_id": "deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
+        "genai_chat_template": "{% for message in messages %}{% if loop.first %}{{ '<|begin▁of▁sentence|>' }}{% endif %}{% if message['role'] == 'system' and message['content'] %}{{ message['content'] }}{% elif message['role'] == 'user' %}{{ '<|User|>' + message['content'] }}{% elif message['role'] == 'assistant' %}{{ '<|Assistant|>' + message['content'] + '<|end▁of▁sentence|>' }}{% endif %}{% if loop.last and add_generation_prompt and message['role'] != 'assistant' %}{{ '<|Assistant|>' }}{% endif %}{% endfor %}",
+        "system_prompt": DEFAULT_SYSTEM_PROMPT,
+        "stop_strings": ["<|end▁of▁sentence|>", "<|User|>", "</User|>", "<|end_of_sentence|>", "</|"],
+    },
 },
 "Chinese": {
     "DeepSeek-R1-Distill-Qwen-1.5B": {
@@ -66,6 +72,12 @@ def deepseek_partial_text_processor(partial_text, new_text):
         "system_prompt": DEFAULT_SYSTEM_PROMPT_CHINESE,
         "stop_strings": ["<|end▁of▁sentence|>", "<|User|>", "</User|>", "<|end_of_sentence|>", "</|"],
     },
+    "DeepSeek-R1-Distill-Qwen-32B": {
+        "model_id": "deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
+        "genai_chat_template": "{% for message in messages %}{% if loop.first %}{{ '<|begin▁of▁sentence|>' }}{% endif %}{% if message['role'] == 'system' and message['content'] %}{{ message['content'] }}{% elif message['role'] == 'user' %}{{ '<|User|>' + message['content'] }}{% elif message['role'] == 'assistant' %}{{ '<|Assistant|>' + message['content'] + '<|end▁of▁sentence|>' }}{% endif %}{% if loop.last and add_generation_prompt and message['role'] != 'assistant' %}{{ '<|Assistant|>' }}{% endif %}{% endfor %}",
+        "system_prompt": DEFAULT_SYSTEM_PROMPT_CHINESE,
+        "stop_strings": ["<|end▁of▁sentence|>", "<|User|>", "</User|>", "<|end_of_sentence|>", "</|"],
+    },
 },
 }

@@ -79,6 +91,7 @@ def deepseek_partial_text_processor(partial_text, new_text):
 "DeepSeek-R1-Distill-Qwen-7B": {"sym": True, "group_size": 128, "ratio": 1.0},
 "DeepSeek-R1-Distill-Qwen-14B": {"sym": True, "group_size": 128, "ratio": 1.0},
 "DeepSeek-R1-Distill-Qwen-1.5B": {"sym": True, "group_size": 128, "ratio": 1.0},
+"DeepSeek-R1-Distill-Qwen-32B": {"sym": True, "group_size": 128, "ratio": 1.0},
 "default": {
     "sym": False,
     "group_size": 128,
