English | [简体中文](model_export_cn.md)

# Model Export

After model training, we can export the inference model and deploy it with an inference library, which achieves faster inference speed.

This tutorial shows how to export a trained model.

## Acquire the Trained Weights

After model training, the weights with the highest accuracy are saved in `path/to/save/best_model/model.pdparams`.

For convenience, in this demo we run the following command to download the [trained weights](https://paddleseg.bj.bcebos.com/dygraph/cityscapes/pp_liteseg_stdc1_cityscapes_1024x512_scale0.5_160k/model.pdparams) of PP-LiteSeg.
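
A minimal sketch of that download step, assuming `wget` is available (any equivalent download tool works):

```shell
# Download the trained PP-LiteSeg weights (URL taken from the link above)
wget https://paddleseg.bj.bcebos.com/dygraph/cityscapes/pp_liteseg_stdc1_cityscapes_1024x512_scale0.5_160k/model.pdparams
```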

The arguments of the export command are described below.

|Argument|Description|Required|Default|
|---|---|---|---|
|config|The path of the config file|yes|-|
|model_path|The path of the trained weights|no|-|
|save_dir|The save directory for the inference model|no|output/inference_model|
|input_shape|Set the input shape (`N*C*H*W`) of the inference model, such as `--input_shape 1 3 1024 1024`. If input_shape is not provided, the input shape of the inference model is `[-1, 3, -1, -1]`. If the image shape in prediction is fixed, you should set the input_shape.|no|None|
|output_op|Set the op that is appended to the inference model; it should be one of [`argmax`, `softmax`, `none`]. PaddleSeg models output logits (`N*C*H*W`) by default. With `argmax`, we get the label for every pixel and the output dimension is `N*H*W`. With `softmax`, we get the probability of each class for every pixel.|no|argmax|
|with_softmax|Deprecated, please use `--output_op`. Add a softmax operator at the end of the network. Since PaddleSeg networks return logits by default, set it to True if you want the deployment model to output probability values.|no|False|
|without_argmax|Deprecated, please use `--output_op`. Whether or not to add an argmax operator at the end of the network. Since PaddleSeg networks return logits by default, we add an argmax operator at the end of the network by default so that the deployment model directly obtains the prediction results.|no|False|
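
For reference, a hypothetical invocation combining these arguments might look like the sketch below; the script path `tools/export.py` and the config file path are illustrative assumptions, so check your PaddleSeg version for the actual paths.

```shell
# Hypothetical export command; the script and config paths are assumptions.
python tools/export.py \
    --config configs/pp_liteseg/pp_liteseg_stdc1_cityscapes_1024x512_scale0.5_160k.yml \
    --model_path model.pdparams \
    --save_dir output/inference_model \
    --input_shape 1 3 512 1024 \
    --output_op argmax
```

With `--output_op argmax` (the default), the exported model directly outputs per-pixel labels instead of logits.
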
Note that:

* If you encounter a shape-related issue, please try to set the `input_shape`.

## Prediction Model Files
```shell
output/inference_model
├── deploy.yaml # Config file for deployment
├── model.pdiparams # Parameters of the static model
├── model.pdiparams.info # Additional information, generally not needed