
Commit 15a8276

debug triton install outetts (#2716)
1 parent 3bbfd41

File tree

6 files changed (+40 −18 lines)


.ci/ignore_treon_docker.txt

+2 −1

@@ -83,4 +83,5 @@ notebooks/multilora-image-generation/multilora-image-generation.ipynb
 notebooks/llm-agent-react/llm-agent-react-langchain.ipynb
 notebooks/multimodal-rag/multimodal-rag-llamaindex.ipynb
 notebooks/llm-rag-langchain/llm-rag-langchain-genai.ipynb
-notebooks/ltx-video/ltx-video.ipynb
+notebooks/ltx-video/ltx-video.ipynb
+notebooks/outetts-text-to-speech/outetts-text-to-speech.ipynb

notebooks/deepseek-r1/README.md

+1 −1

@@ -11,7 +11,7 @@ The tutorial supports different models, you can select one from the provided options
 * **DeepSeek-R1-Distill-Llama-8B** is a distilled model based on [Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B), that prioritizes high performance and advanced reasoning capabilities, particularly excelling in tasks requiring mathematical and factual precision. Check [model card](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B) for more info.
 * **DeepSeek-R1-Distill-Qwen-1.5B** is the smallest DeepSeek-R1 distilled model based on [Qwen2.5-Math-1.5B](https://huggingface.co/Qwen/Qwen2.5-Math-1.5B). Despite its compact size, the model demonstrates strong capabilities in solving basic mathematical tasks, at the same time its programming capabilities are limited. Check [model card](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) for more info.
 * **DeepSeek-R1-Distill-Qwen-7B** is a distilled model based on [Qwen-2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B). The model demonstrates a good balance between mathematical and factual reasoning and can be less suited for complex coding tasks. Check [model card](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B) for more info.
-* **DeepSeek-R1-Distil-Qwen-14B** is a distilled model based on [Qwen2.5-14B](https://huggingface.co/Qwen/Qwen2.5-14B) that has great competence in factual reasoning and solving complex mathematical tasks. Check [model card](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-15B) for more info.
+* **DeepSeek-R1-Distil-Qwen-14B** is a distilled model based on [Qwen2.5-14B](https://huggingface.co/Qwen/Qwen2.5-14B) that has great competence in factual reasoning and solving complex mathematical tasks. Check [model card](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B) for more info.
 
 ## Notebook Contents

notebooks/deepseek-r1/deepseek-r1.ipynb

+1 −1

@@ -109,7 +109,7 @@
 "* **DeepSeek-R1-Distill-Llama-8B** is a distilled model based on [Llama-3.1-8B](https://huggingface.co/meta-llama/Llama-3.1-8B), that prioritizes high performance and advanced reasoning capabilities, particularly excelling in tasks requiring mathematical and factual precision. Check [model card](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B) for more info.\n",
 "* **DeepSeek-R1-Distill-Qwen-1.5B** is the smallest DeepSeek-R1 distilled model based on [Qwen2.5-Math-1.5B](https://huggingface.co/Qwen/Qwen2.5-Math-1.5B). Despite its compact size, the model demonstrates strong capabilities in solving basic mathematical tasks, at the same time its programming capabilities are limited. Check [model card](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B) for more info.\n",
 "* **DeepSeek-R1-Distill-Qwen-7B** is a distilled model based on [Qwen-2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B). The model demonstrates a good balance between mathematical and factual reasoning and can be less suited for complex coding tasks. Check [model card](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-7B) for more info.\n",
-"* **DeepSeek-R1-Distil-Qwen-14B** is a distilled model based on [Qwen2.5-14B](https://huggingface.co/Qwen/Qwen2.5-14B) that has great competence in factual reasoning and solving complex mathematical tasks. Check [model card](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-15B) for more info.\n",
+"* **DeepSeek-R1-Distil-Qwen-14B** is a distilled model based on [Qwen2.5-14B](https://huggingface.co/Qwen/Qwen2.5-14B) that has great competence in factual reasoning and solving complex mathematical tasks. Check [model card](https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B) for more info.\n",
 "\n",
 "[Weight compression](https://docs.openvino.ai/2024/openvino-workflow/model-optimization-guide/weight-compression.html) is a technique for enhancing the efficiency of models, especially those with large memory requirements. This method reduces the model’s memory footprint, a crucial factor for Large Language Models (LLMs). We provide several options for model weight compression:\n",
 "\n",

notebooks/hugging-face-hub/README.md

+1 −1

@@ -5,7 +5,7 @@
 The Hugging Face (HF) Model Hub is a central repository for pre-trained deep learning models. It allows exploration and provides access to thousands of models for a wide range of tasks, including text classification, question answering, and image classification.
 Hugging Face provides Python packages that serve as APIs and tools to easily download and fine tune state-of-the-art pretrained models, namely [transformers] and [diffusers] packages.
 
-![](https://github.com/huggingface/optimum-intel/raw/main/readme_logo.png)
+![](https://huggingface.co/datasets/optimum/documentation-images/resolve/main/intel/logo/hf_intel_logo.png)
 
 ## Contents:
 Throughout this notebook we will learn:

notebooks/hugging-face-hub/hugging-face-hub.ipynb

+1 −1

@@ -10,7 +10,7 @@
 "The Hugging Face (HF) [Model Hub](https://huggingface.co/models) is a central repository for pre-trained deep learning models. It allows exploration and provides access to thousands of models for a wide range of tasks, including text classification, question answering, and image classification.\n",
 "Hugging Face provides Python packages that serve as APIs and tools to easily download and fine tune state-of-the-art pretrained models, namely [transformers](https://github.com/huggingface/transformers) and [diffusers](https://github.com/huggingface/diffusers) packages.\n",
 "\n",
-"![](https://github.com/huggingface/optimum-intel/raw/main/readme_logo.png)\n",
+"![](https://huggingface.co/datasets/optimum/documentation-images/resolve/main/intel/logo/hf_intel_logo.png)\n",
 "\n",
 "Throughout this notebook we will learn:\n",
 "1. How to load a HF pipeline using the `transformers` package and then convert it to OpenVINO.\n",

notebooks/outetts-text-to-speech/outetts-text-to-speech.ipynb

+34 −13

@@ -51,13 +51,43 @@
 "outputs": [],
 "source": [
 "import platform\n",
+"import requests\n",
+"from pathlib import Path\n",
+"\n",
+"utility_files = [\"cmd_helper.py\", \"notebook_utils.py\", \"pip_helper.py\"]\n",
+"base_utility_url = \"https://raw.githubusercontent.com/openvinotoolkit/openvino_notebooks/latest/utils/\"\n",
+"\n",
+"for utility_file in utility_files:\n",
+"    if not Path(utility_file).exists():\n",
+"        r = requests.get(base_utility_url + utility_file)\n",
+"        with Path(utility_file).open(\"w\") as f:\n",
+"            f.write(r.text)\n",
 "\n",
-"%pip install -q \"torch>=2.1\" \"torchaudio\" \"einops\" \"transformers>=4.46.1\" \"loguru\" \"inflect\" \"pesq\" \"torchcrepe\" \"natsort\" \"polars\" uroman mecab-python3 unidic-lite --extra-index-url https://download.pytorch.org/whl/cpu\n",
-"%pip install -q \"gradio>=4.19\" \"openvino>=2024.4.0\" \"tqdm\" \"pyyaml\" \"librosa\" \"soundfile\"\n",
-"%pip install -q \"git+https://github.com/huggingface/optimum-intel.git\" --extra-index-url https://download.pytorch.org/whl/cpu\n",
+"\n",
+"from pip_helper import pip_install\n",
+"\n",
+"pip_install(\n",
+"    \"torch>=2.1\",\n",
+"    \"torchaudio\",\n",
+"    \"einops\",\n",
+"    \"transformers>=4.46.1\",\n",
+"    \"loguru\",\n",
+"    \"inflect\",\n",
+"    \"pesq\",\n",
+"    \"torchcrepe\",\n",
+"    \"natsort\",\n",
+"    \"polars\",\n",
+"    \"uroman\",\n",
+"    \"mecab-python3\",\n",
+"    \"unidic-lite\",\n",
+"    \"--extra-index-url\",\n",
+"    \"https://download.pytorch.org/whl/cpu\",\n",
+")\n",
+"pip_install(\"gradio>=4.19\", \"openvino>=2024.4.0\", \"tqdm\", \"pyyaml\", \"librosa\", \"soundfile\")\n",
+"pip_install(\"git+https://github.com/huggingface/optimum-intel.git\", \"--extra-index-url\", \"https://download.pytorch.org/whl/cpu\")\n",
 "\n",
 "if platform.system() == \"Darwin\":\n",
-"    %pip install -q \"numpy<2.0.0\""
+"    pip_install(\"numpy<2.0.0\")"
 ]
 },
 {
@@ -69,15 +99,6 @@
 "import requests\n",
 "from pathlib import Path\n",
 "\n",
-"utility_files = [\"cmd_helper.py\", \"notebook_utils.py\"]\n",
-"base_utility_url = \"https://raw.githubusercontent.com/openvinotoolkit/openvino_notebooks/latest/utils/\"\n",
-"\n",
-"for utility_file in utility_files:\n",
-"    if not Path(utility_file).exists():\n",
-"        r = requests.get(base_utility_url + utility_file)\n",
-"        with Path(utility_file).open(\"w\") as f:\n",
-"            f.write(r.text)\n",
-"\n",
 "\n",
 "helper_files = [\"gradio_helper.py\", \"ov_outetts_helper.py\"]\n",
 "base_helper_url = \"https://raw.githubusercontent.com/openvinotoolkit/openvino_notebooks/latest/notebooks/outetts-text-to-speech\"\n",
