A simple, efficient, easy-to-use NVIDIA TensorRT wrapper for CNN inference with C++ and Python APIs, supporting ONNX format models (Caffe and UFF support has been removed, see the news below). You will be able to deploy your model with tiny-tensorrt in just a few lines of code!
// create engine
trt.CreateEngine(onnxModelpath, engineFile, customOutput, maxBatchSize, mode);
// transfer your input data to the TensorRT engine
trt.CopyFromHostToDevice(input, inputIndex);
// inference!!!
trt.Forward();
// retrieve network output
trt.CopyFromDeviceToHost(output, outputIndex); // you can get outputIndex in the CreateEngine phase
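Putting these calls together, a minimal end-to-end program could look like the sketch below. The Trt object construction, the binding indices, the tensor sizes, and the mode value are illustrative assumptions; the exact signatures live in Trt.h.

#include <string>
#include <vector>
#include "Trt.h"

int main() {
    Trt trt;
    // build the engine from an ONNX model, or deserialize it if engineFile already exists
    std::vector<std::string> customOutput; // empty: keep the model's own output nodes
    trt.CreateEngine("model.onnx", "model.engine", customOutput, 1, 0); // mode 0 assumed to mean FP32

    // copy a preprocessed input to the device; 1x3x224x224 is just an example shape
    std::vector<float> input(1 * 3 * 224 * 224, 0.f);
    trt.CopyFromHostToDevice(input, 0); // 0: input binding index reported during CreateEngine

    // run inference
    trt.Forward();

    // copy the result back to the host; size and index depend on your model's output binding
    std::vector<float> output(1000);
    trt.CopyFromDeviceToHost(output, 1);
    return 0;
}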
Better INT8 calibrator API, refer to the User Guide - 2021-5-24
Removed Caffe and UFF support; convert your model to ONNX with tf2onnx or keras2onnx (see the conversion example below). - 2021-4-23
Want to implement your own ONNX plugin and don't know where to start? - 2021-1-29
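As one possible conversion route, a TensorFlow SavedModel can be exported with tf2onnx's command-line entry point; the paths and opset below are placeholders:

python -m tf2onnx.convert --saved-model ./my_saved_model --output model.onnx --opset 11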
- Add DLA support
- Support TensorRT 7
- Custom plugin tutorial and well-commented sample
- Custom ONNX model output node (see the sketch after this list)
- Engine serialization and deserialization
- INT8 support
- Python API support
- Set device
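A rough sketch of how several of these features map onto the CreateEngine call shown above; the node name, file names, and the meaning of the mode values are assumptions, so check Trt.h and the User Guide for the exact API.

// cut the network at a custom ONNX output node (hypothetical node name)
std::vector<std::string> customOutput{"conv_output"};
// the engine is serialized to this file on the first run and deserialized on later runs
std::string engineFile = "model_int8.engine";
// assumption: mode selects precision, e.g. 0 = FP32, 1 = FP16, 2 = INT8
trt.CreateEngine("model.onnx", engineFile, customOutput, 1, 2);
// device selection and the INT8 calibrator have their own setters; see Trt.h and the User Guide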
CUDA 10.0+
TensorRT 7
For the Python API, Python 2.x/3.x and numpy are needed
Make sure you have installed the dependencies listed above. If you are familiar with Docker, you can use the official Docker image.
# clone project and submodule
git clone --recurse-submodules -j8 https://github.com/zerollzeng/tiny-tensorrt.git
cd tiny-tensorrt
mkdir build && cd build && cmake .. && make
Then you can integrate it into your own project with libtinytrt.so and Trt.h; for the Python module, you get pytrt.so.
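As a rough example of such an integration (the include and library paths are assumptions, adjust them to your CUDA/TensorRT installation), a standalone C++ file using Trt.h could be built against the shared library like this:

g++ demo.cpp -std=c++11 -I/path/to/tiny-tensorrt -I/usr/local/cuda/include \
    -L/path/to/tiny-tensorrt/build -L/usr/local/cuda/lib64 \
    -ltinytrt -lnvinfer -lnvonnxparser -lcudart -o demo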
Custom Plugin Tutorial (En-Ch)
For the 3rd-party modules and TensorRT, you need to follow their licenses.
For the part I wrote, you can do anything you want