Commit dea7bd6

fix outdated links on documentation (#1669)
1 parent 022c0c7 commit dea7bd6

File tree

37 files changed: +44 -44 lines changed

README.md (+1 -1)

@@ -396,7 +396,7 @@ Made with [`contrib.rocks`](https://contrib.rocks).
 
 ## ❓ FAQ
 
-* [Which devices does OpenVINO support?](https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_supported_plugins_Supported_Devices.html#doxid-openvino-docs-o-v-u-g-supported-plugins-supported-devices)
+* [Which devices does OpenVINO support?](https://docs.openvino.ai/2023.3/openvino_docs_OV_UG_supported_plugins_Supported_Devices.html#doxid-openvino-docs-o-v-u-g-supported-plugins-supported-devices)
 * [What is the first CPU generation you support with OpenVINO?](https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/system-requirements.html)
 * [Are there any success stories about deploying real-world solutions with OpenVINO?](https://www.intel.com/content/www/us/en/internet-of-things/ai-in-production/success-stories.html)

README_cn.md (+1 -1)

@@ -367,7 +367,7 @@ jupyter lab notebooks
 
 ## ❓ FAQ
 
-* [Which devices does OpenVINO support?](https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_supported_plugins_Supported_Devices.html#doxid-openvino-docs-o-v-u-g-supported-plugins-supported-devices)
+* [Which devices does OpenVINO support?](https://docs.openvino.ai/2023.3/openvino_docs_OV_UG_supported_plugins_Supported_Devices.html#doxid-openvino-docs-o-v-u-g-supported-plugins-supported-devices)
 * [What is the first CPU generation supported by OpenVINO?](https://www.intel.com/content/www/us/en/developer/tools/openvino-toolkit/system-requirements.html)
 * [Are there any success stories about deploying real-world solutions with OpenVINO?](https://www.intel.com/content/www/us/en/internet-of-things/ai-in-production/success-stories.html)

notebooks/001-hello-world/001-hello-world.ipynb (+1 -1)

@@ -9,7 +9,7 @@
 "\n",
 "This basic introduction to OpenVINO™ shows how to do inference with an image classification model.\n",
 "\n",
-"A pre-trained [MobileNetV3 model](https://docs.openvino.ai/2023.0/omz_models_model_mobilenet_v3_small_1_0_224_tf.html) from [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo/) is used in this tutorial. For more information about how OpenVINO IR models are created, refer to the [TensorFlow to OpenVINO](../101-tensorflow-classification-to-openvino/101-tensorflow-classification-to-openvino.ipynb) tutorial.\n",
+"A pre-trained [MobileNetV3 model](https://docs.openvino.ai/2023.3/omz_models_model_mobilenet_v3_small_1_0_224_tf.html) from [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo/) is used in this tutorial. For more information about how OpenVINO IR models are created, refer to the [TensorFlow to OpenVINO](../101-tensorflow-classification-to-openvino/101-tensorflow-classification-to-openvino.ipynb) tutorial.\n",
 "\n",
 "#### Table of contents:\n",
 "- [Imports](#Imports)\n",

notebooks/003-hello-segmentation/003-hello-segmentation.ipynb (+1 -1)

@@ -9,7 +9,7 @@
 "\n",
 "A very basic introduction to using segmentation models with OpenVINO™.\n",
 "\n",
-"In this tutorial, a pre-trained [road-segmentation-adas-0001](https://docs.openvino.ai/2023.0/omz_models_model_road_segmentation_adas_0001.html) model from the [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo/) is used. ADAS stands for Advanced Driver Assistance Services. The model recognizes four classes: background, road, curb and mark.\n",
+"In this tutorial, a pre-trained [road-segmentation-adas-0001](https://docs.openvino.ai/2023.3/omz_models_model_road_segmentation_adas_0001.html) model from the [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo/) is used. ADAS stands for Advanced Driver Assistance Services. The model recognizes four classes: background, road, curb and mark.\n",
 "\n",
 "#### Table of contents:\n",
 "- [Imports](#Imports)\n",

notebooks/003-hello-segmentation/README.md (+1 -1)

@@ -11,7 +11,7 @@ This notebook demonstrates how to do inference with segmentation model.
 
 ## Notebook Contents
 
-A very basic introduction to segmentation with OpenVINO. This notebook uses the [`road-segmentation-adas-0001`](https://docs.openvino.ai/2023.0/omz_models_model_road_segmentation_adas_0001.html) model from [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo/) and an input image downloaded from [Mapillary Vistas](https://www.mapillary.com/dataset/vistas). ADAS stands for Advanced Driver Assistance Services. The model recognizes four classes: background, road, curb and mark.
+A very basic introduction to segmentation with OpenVINO. This notebook uses the [`road-segmentation-adas-0001`](https://docs.openvino.ai/2023.3/omz_models_model_road_segmentation_adas_0001.html) model from [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo/) and an input image downloaded from [Mapillary Vistas](https://www.mapillary.com/dataset/vistas). ADAS stands for Advanced Driver Assistance Services. The model recognizes four classes: background, road, curb and mark.
 
 ## Installation Instructions

notebooks/004-hello-detection/004-hello-detection.ipynb (+1 -1)

@@ -9,7 +9,7 @@
 "\n",
 "A very basic introduction to using object detection models with OpenVINO™.\n",
 "\n",
-"The [horizontal-text-detection-0001](https://docs.openvino.ai/2023.0/omz_models_model_horizontal_text_detection_0001.html) model from [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo/) is used. It detects horizontal text in images and returns a blob of data in the shape of `[100, 5]`. Each detected text box is stored in the `[x_min, y_min, x_max, y_max, conf]` format, where the\n",
+"The [horizontal-text-detection-0001](https://docs.openvino.ai/2023.3/omz_models_model_horizontal_text_detection_0001.html) model from [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo/) is used. It detects horizontal text in images and returns a blob of data in the shape of `[100, 5]`. Each detected text box is stored in the `[x_min, y_min, x_max, y_max, conf]` format, where the\n",
 "`(x_min, y_min)` are the coordinates of the top left bounding box corner, `(x_max, y_max)` are the coordinates of the bottom right bounding box corner and `conf` is the confidence for the predicted class.\n",
 "\n",
 "#### Table of contents:\n",

notebooks/004-hello-detection/README.md (+1 -1)

@@ -12,7 +12,7 @@ This notebook demonstrates how to do inference with detection model.
 
 ## Notebook Contents
 
-In this basic introduction to detection with OpenVINO, the [horizontal-text-detection-0001](https://docs.openvino.ai/2023.0/omz_models_model_horizontal_text_detection_0001.html) model from [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo/) is used. It detects text in images and returns blob of data in shape of `[100, 5]`. For each detection, a description is in the `[x_min, y_min, x_max, y_max, conf]` format.
+In this basic introduction to detection with OpenVINO, the [horizontal-text-detection-0001](https://docs.openvino.ai/2023.3/omz_models_model_horizontal_text_detection_0001.html) model from [Open Model Zoo](https://github.com/openvinotoolkit/open_model_zoo/) is used. It detects text in images and returns blob of data in shape of `[100, 5]`. For each detection, a description is in the `[x_min, y_min, x_max, y_max, conf]` format.
 
 ## Installation Instructions

notebooks/101-tensorflow-classification-to-openvino/101-tensorflow-classification-to-openvino.ipynb (+1 -1)

@@ -8,7 +8,7 @@
 "source": [
 "# Convert a TensorFlow Model to OpenVINO™\n",
 "\n",
-"This short tutorial shows how to convert a TensorFlow [MobileNetV3](https://docs.openvino.ai/2023.0/omz_models_model_mobilenet_v3_small_1_0_224_tf.html) image classification model to OpenVINO [Intermediate Representation](https://docs.openvino.ai/2023.3/openvino_docs_MO_DG_IR_and_opsets.html) (OpenVINO IR) format, using [Model Conversion API](https://docs.openvino.ai/2023.3/openvino_docs_model_processing_introduction.html). After creating the OpenVINO IR, load the model in [OpenVINO Runtime](https://docs.openvino.ai/nightly/openvino_docs_OV_UG_OV_Runtime_User_Guide.html) and do inference with a sample image. \n",
+"This short tutorial shows how to convert a TensorFlow [MobileNetV3](https://docs.openvino.ai/2023.3/omz_models_model_mobilenet_v3_small_1_0_224_tf.html) image classification model to OpenVINO [Intermediate Representation](https://docs.openvino.ai/2023.3/openvino_docs_MO_DG_IR_and_opsets.html) (OpenVINO IR) format, using [Model Conversion API](https://docs.openvino.ai/2023.3/openvino_docs_model_processing_introduction.html). After creating the OpenVINO IR, load the model in [OpenVINO Runtime](https://docs.openvino.ai/nightly/openvino_docs_OV_UG_OV_Runtime_User_Guide.html) and do inference with a sample image. \n",
 "\n",
 "#### Table of contents:\n",
 "- [Imports](#Imports)\n",

notebooks/103-paddle-to-openvino/README.md (+1 -1)

@@ -9,7 +9,7 @@ This notebook shows how to convert [PaddlePaddle](https://www.paddlepaddle.org.c
 
 ## Notebook Contents
 
-The notebook uses [model conversion API](https://docs.openvino.ai/2023.0/openvino_docs_model_processing_introduction.html) to convert a MobileNet V3 [PaddleClas](https://github.com/PaddlePaddle/PaddleClas) model, pre-trained on the [ImageNet](https://www.image-net.org) dataset, to OpenVINO IR. It also shows how to perform classification inference on an image, using [OpenVINO Runtime](https://docs.openvino.ai/nightly/openvino_docs_OV_UG_OV_Runtime_User_Guide.html) and compares the results of the PaddlePaddle model with the OpenVINO IR model.
+The notebook uses [model conversion API](https://docs.openvino.ai/2023.3/openvino_docs_model_processing_introduction.html) to convert a MobileNet V3 [PaddleClas](https://github.com/PaddlePaddle/PaddleClas) model, pre-trained on the [ImageNet](https://www.image-net.org) dataset, to OpenVINO IR. It also shows how to perform classification inference on an image, using [OpenVINO Runtime](https://docs.openvino.ai/nightly/openvino_docs_OV_UG_OV_Runtime_User_Guide.html) and compares the results of the PaddlePaddle model with the OpenVINO IR model.
 
 ## Installation Instructions

notebooks/108-gpu-device/README.md (+1 -1)

@@ -1,6 +1,6 @@
 # Working with GPUs in OpenVINO™
 
-This notebook shows how to do inference with Graphic Processing Units (GPUs). To learn more about GPUs in OpenVINO, refer to the [GPU Device](https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_supported_plugins_GPU.html) section in the docs.
+This notebook shows how to do inference with Graphic Processing Units (GPUs). To learn more about GPUs in OpenVINO, refer to the [GPU Device](https://docs.openvino.ai/2023.3/openvino_docs_OV_UG_supported_plugins_GPU.html) section in the docs.
 
 ## Notebook Contents
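
Targeting a GPU in this notebook comes down to the device argument at compile time; a minimal sketch, assuming an IR model on disk and a machine where the GPU plugin is available (path illustrative):

```python
import openvino as ov

core = ov.Core()
print(core.available_devices)  # e.g. ['CPU', 'GPU'] when a supported GPU is present

model = core.read_model("model.xml")  # illustrative path
# "GPU" targets the default GPU; "AUTO" would let OpenVINO choose a device itself.
compiled = core.compile_model(model, device_name="GPU")
```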

notebooks/110-ct-segmentation-quantize/README.md (+1 -1)

@@ -25,7 +25,7 @@ so it is not required to run the data preparation and training notebooks before
 
 This quantization tutorial consists of the following steps:
 
-* Use model conversion Python API to convert the model to OpenVINO IR. For more information about model conversion Python API, see this [page](https://docs.openvino.ai/2023.0/openvino_docs_model_processing_introduction.html).
+* Use model conversion Python API to convert the model to OpenVINO IR. For more information about model conversion Python API, see this [page](https://docs.openvino.ai/2023.3/openvino_docs_model_processing_introduction.html).
 * Quantizing the model with NNCF with the [Post-training Quantization with NNCF Tool](https://docs.openvino.ai/nightly/basic_quantization_flow.html) API in OpenVINO.
 * Evaluating the F1 score metric of the original model and the quantized model.
 * Benchmarking performance of the original model and the quantized model.
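
The post-training quantization step in that list follows a fixed NNCF pattern; a minimal sketch with placeholder calibration data and paths standing in for the notebook's KiTS19 pipeline (everything named here is illustrative):

```python
import numpy as np
import nncf
import openvino as ov

core = ov.Core()
model = core.read_model("unet_kits19.xml")  # illustrative path

# Placeholder calibration samples; the notebook draws these from the CT dataset.
data_source = [(np.zeros((1, 1, 512, 512), dtype=np.float32), None)] * 300

def transform_fn(data_item):
    image, _annotation = data_item
    return image  # NNCF feeds the returned value straight to the model input

calibration_dataset = nncf.Dataset(data_source, transform_fn)
quantized_model = nncf.quantize(model, calibration_dataset)
ov.save_model(quantized_model, "unet_kits19_int8.xml")
```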

notebooks/113-image-classification-quantization/README.md (+1 -1)

@@ -4,7 +4,7 @@
 [![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/openvinotoolkit/openvino_notebooks/blob/main/notebooks/113-image-classification-quantization/113-image-classification-quantization.ipynb)
 
 This tutorial demonstrates how to apply `INT8` quantization to the MobileNet V2 Image Classification model, using the
-[NNCF Post-Training Quantization API](https://docs.openvino.ai/2023.0/ptq_introduction.html). The tutorial uses [MobileNetV2](https://pytorch.org/vision/stable/_modules/torchvision/models/mobilenetv2.html) and [Cifar10 dataset](https://www.cs.toronto.edu/~kriz/cifar.html).
+[NNCF Post-Training Quantization API](https://docs.openvino.ai/2023.3/ptq_introduction.html). The tutorial uses [MobileNetV2](https://pytorch.org/vision/stable/_modules/torchvision/models/mobilenetv2.html) and [Cifar10 dataset](https://www.cs.toronto.edu/~kriz/cifar.html).
 The code of the tutorial is designed to be extendable to custom models and datasets.
 
 ## Notebook Contents

notebooks/115-async-api/README.md (+1 -1)

@@ -4,7 +4,7 @@
 [![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/openvinotoolkit/openvino_notebooks/blob/main/notebooks/115-async-api/115-async-api.ipynb)
 
 
-This notebook demonstrates how to use the [Async API](https://docs.openvino.ai/nightly/openvino_docs_deployment_optimization_guide_common.html) and [`AsyncInferQueue`](https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_Python_API_exclusives.html#asyncinferqueue) for asynchronous execution with OpenVINO.
+This notebook demonstrates how to use the [Async API](https://docs.openvino.ai/nightly/openvino_docs_deployment_optimization_guide_common.html) and [`AsyncInferQueue`](https://docs.openvino.ai/2023.3/openvino_docs_OV_UG_Python_API_exclusives.html#asyncinferqueue) for asynchronous execution with OpenVINO.
 
 OpenVINO Runtime supports inference in either synchronous or asynchronous mode. The key advantage of the Async API is that when a device is busy with inference, the application can perform other tasks in parallel (for example, populating inputs or scheduling other requests) rather than wait for the current inference to complete first.
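
As a companion to that description, a minimal `AsyncInferQueue` sketch; the model path, queue size, and input shape are illustrative:

```python
import numpy as np
import openvino as ov

core = ov.Core()
compiled = core.compile_model(core.read_model("model.xml"), "CPU")  # illustrative path

# A pool of 4 infer requests; completed jobs are reported through the callback.
infer_queue = ov.AsyncInferQueue(compiled, 4)

def on_done(request, frame_id):
    print(f"frame {frame_id} done, output shape {request.get_output_tensor().data.shape}")

infer_queue.set_callback(on_done)

for frame_id in range(8):
    frame = np.random.rand(1, 3, 224, 224).astype(np.float32)  # illustrative shape
    # start_async returns immediately, so the loop can keep preparing inputs.
    infer_queue.start_async({0: frame}, userdata=frame_id)

infer_queue.wait_all()  # block until every queued request has finished
```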

notebooks/117-model-server/117-model-server.ipynb (+2 -2)

@@ -321,7 +321,7 @@
 "id": "e8ab7f4c",
 "metadata": {},
 "source": [
-"The required Model Server parameters are listed below. For additional configuration options, see the [Model Server Parameters section](https://docs.openvino.ai/2023.2/ovms_docs_parameters.html).\n",
+"The required Model Server parameters are listed below. For additional configuration options, see the [Model Server Parameters section](https://docs.openvino.ai/2023.3/ovms_docs_parameters.html).\n",
 "\n",
 "<table class=\"table\">\n",
 "<colgroup>\n",
@@ -749,7 +749,7 @@
 "## References\n",
 "[back to top ⬆️](#Table-of-contents:)\n",
 "\n",
-"1. [OpenVINO™ Model Server documentation](https://docs.openvino.ai/2023.0/ovms_what_is_openvino_model_server.html)\n",
+"1. [OpenVINO™ Model Server documentation](https://docs.openvino.ai/2023.3/ovms_what_is_openvino_model_server.html)\n",
 "2. [OpenVINO™ Model Server GitHub repository](https://github.com/openvinotoolkit/model_server/)"
 ]
 }
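
For context on the parameters table referenced in the first hunk: once the server is started with those parameters, the notebook's client side talks to it over gRPC. A minimal sketch using the `ovmsclient` package, with the address, model name, input name, and shape all illustrative assumptions:

```python
import numpy as np
from ovmsclient import make_grpc_client

# Connect to a Model Server instance started with --port 9000.
client = make_grpc_client("localhost:9000")

# Inspect what the served model expects; the model name is illustrative.
print(client.get_model_metadata(model_name="detection"))

# Run one request; the input key must match the served model's input name.
inputs = {"image": np.zeros((1, 3, 704, 704), dtype=np.float32)}
results = client.predict(inputs=inputs, model_name="detection")
```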

notebooks/118-optimize-preprocessing/README.md (+1 -1)

@@ -2,7 +2,7 @@
 
 [![Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/openvinotoolkit/openvino_notebooks/blob/main/notebooks/118-optimize-preprocessing/118-optimize-preprocessing.ipynb)
 
-This tutorial demonstrates how the image could be transform to the data format expected by the model with Preprocessing API. Preprocessing API is an easy-to-use instrument, that enables integration of preprocessing steps into an execution graph and perform it on selected device, which can improve of device utilization. For more information about Preprocessing API, please, see this [overview](https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_Preprocessing_Overview.html#) and [details](https://docs.openvino.ai/2023.0/openvino_docs_OV_UG_Preprocessing_Details.html). The tutorial uses [InceptionResNetV2](https://www.tensorflow.org/api_docs/python/tf/keras/applications/inception_resnet_v2) model.
+This tutorial demonstrates how the image could be transform to the data format expected by the model with Preprocessing API. Preprocessing API is an easy-to-use instrument, that enables integration of preprocessing steps into an execution graph and perform it on selected device, which can improve of device utilization. For more information about Preprocessing API, please, see this [overview](https://docs.openvino.ai/2023.3/openvino_docs_OV_UG_Preprocessing_Overview.html#) and [details](https://docs.openvino.ai/2023.3/openvino_docs_OV_UG_Preprocessing_Details.html). The tutorial uses [InceptionResNetV2](https://www.tensorflow.org/api_docs/python/tf/keras/applications/inception_resnet_v2) model.
 
 ## Notebook Contents
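
The Preprocessing API mentioned in the changed line moves steps like resizing and layout conversion into the compiled graph itself; a minimal sketch, assuming an IR model whose native input is NCHW `float32` (path and layouts illustrative):

```python
import openvino as ov
from openvino.preprocess import PrePostProcessor, ResizeAlgorithm

core = ov.Core()
model = core.read_model("inception_resnet_v2.xml")  # illustrative path

ppp = PrePostProcessor(model)
# Declare what the application will really pass in: u8 NHWC frames of any size.
ppp.input().tensor() \
    .set_element_type(ov.Type.u8) \
    .set_layout(ov.Layout("NHWC")) \
    .set_spatial_dynamic_shape()
# Ask OpenVINO to insert type conversion and resize into the graph.
ppp.input().preprocess() \
    .convert_element_type(ov.Type.f32) \
    .resize(ResizeAlgorithm.RESIZE_LINEAR)
ppp.input().model().set_layout(ov.Layout("NCHW"))
model = ppp.build()

compiled = core.compile_model(model, device_name="CPU")
```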

notebooks/119-tflite-to-openvino/README.md (+1 -1)

@@ -7,7 +7,7 @@ This tutorial explains how to convert [TensorFlow Lite](https://www.tensorflow.o
 
 ## Notebook Contents
 
-The notebook uses [model conversion API](https://docs.openvino.ai/2023.0/openvino_docs_model_processing_introduction.html) to convert model to OpenVINO Intermediate Representation format.
+The notebook uses [model conversion API](https://docs.openvino.ai/2023.3/openvino_docs_model_processing_introduction.html) to convert model to OpenVINO Intermediate Representation format.
 
 ## Installation Instructions

notebooks/122-quantizing-model-with-accuracy-control/README.md (+1 -1)

@@ -6,7 +6,7 @@ These tutorials demonstrate how to apply 8-bit quantization with accuracy contro
 
 The code of the tutorials is designed to be extendable to the same model types trained on custom datasets.
 
-The advanced quantization flow allows to apply 8-bit quantization to the model with control of accuracy metric. This is achieved by keeping the most impactful operations within the model in the original precision. The flow is based on the [Quantizing with Accuracy Control](https://docs.openvino.ai/2023.0/quantization_w_accuracy_control.html) and has the following specifics:
+The advanced quantization flow allows to apply 8-bit quantization to the model with control of accuracy metric. This is achieved by keeping the most impactful operations within the model in the original precision. The flow is based on the [Quantizing with Accuracy Control](https://docs.openvino.ai/2023.3/quantization_w_accuracy_control.html) and has the following specifics:
 
 - Besides the calibration dataset, a validation dataset is required to compute the accuracy metric. Both datasets can refer to the same data in the simplest case.
 - Validation function, used to compute accuracy metric is required. It can be a function that is already available in the source framework or a custom function.
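
Those two requirements map onto `nncf.quantize_with_accuracy_control`; a minimal sketch with placeholder data and a stubbed metric function (everything named here is illustrative, not from the tutorials):

```python
import numpy as np
import nncf
import openvino as ov

core = ov.Core()
model = core.read_model("model.xml")  # illustrative path

# In the simplest case both datasets can wrap the same placeholder source.
data_source = [(np.zeros((1, 3, 224, 224), dtype=np.float32), 0)] * 300
transform_fn = lambda item: item[0]
calibration_dataset = nncf.Dataset(data_source, transform_fn)
validation_dataset = nncf.Dataset(data_source, transform_fn)

def validate(compiled_model: ov.CompiledModel, validation_data) -> float:
    # Stub: compute and return the real accuracy metric here.
    return 1.0

quantized_model = nncf.quantize_with_accuracy_control(
    model,
    calibration_dataset,
    validation_dataset,
    validation_fn=validate,
    max_drop=0.01,  # tolerate at most a 0.01 drop in the metric
)
```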

notebooks/201-vision-monodepth/201-vision-monodepth.ipynb (+1 -1)

@@ -12,7 +12,7 @@
 "source": [
 "# Monodepth Estimation with OpenVINO\n",
 "\n",
-"This tutorial demonstrates Monocular Depth Estimation with MidasNet in OpenVINO. Model information can be found [here](https://docs.openvino.ai/2023.0/omz_models_model_midasnet.html).\n",
+"This tutorial demonstrates Monocular Depth Estimation with MidasNet in OpenVINO. Model information can be found [here](https://docs.openvino.ai/2023.3/omz_models_model_midasnet.html).\n",
 "\n",
 "![monodepth](https://user-images.githubusercontent.com/36741649/127173017-a0bbcf75-db24-4d2c-81b9-616e04ab7cd9.gif)\n",
 "\n",

notebooks/202-vision-superresolution/202-vision-superresolution-image.ipynb (+1 -1)

@@ -7,7 +7,7 @@
 "source": [
 "# Single Image Super Resolution with OpenVINO™\n",
 "\n",
-"Super Resolution is the process of enhancing the quality of an image by increasing the pixel count using deep learning. This notebook shows the Single Image Super Resolution (SISR) which takes just one low resolution image. A model called [single-image-super-resolution-1032](https://docs.openvino.ai/2023.0/omz_models_model_single_image_super_resolution_1032.html), which is available in Open Model Zoo, is used in this tutorial. It is based on the research paper cited below.\n",
+"Super Resolution is the process of enhancing the quality of an image by increasing the pixel count using deep learning. This notebook shows the Single Image Super Resolution (SISR) which takes just one low resolution image. A model called [single-image-super-resolution-1032](https://docs.openvino.ai/2023.3/omz_models_model_single_image_super_resolution_1032.html), which is available in Open Model Zoo, is used in this tutorial. It is based on the research paper cited below.\n",
 "\n",
 "Y. Liu et al., [\"An Attention-Based Approach for Single Image Super Resolution,\"](https://arxiv.org/abs/1807.06779) 2018 24th International Conference on Pattern Recognition (ICPR), 2018, pp. 2777-2784, doi: 10.1109/ICPR.2018.8545760.\n",
 "\n",
