How to run TLT Object Detection Pre-Trained Models in DeepStream on NVIDIA® Jetson™ Modules?
WHAT YOU WILL LEARN?
1-How to install DeepStream and configure your model files for object detection.
2-How to use deepstream-app to run object detection.
ENVIRONMENT
Hardware: DSBOX-N2(NVIDIA® Jetson™ Nano)
OS: Ubuntu 18.04 LTS, JetPack 4.5
How to configure your model files for object detection
In this blog post, you will learn how to run TLT Object Detection Pre-Trained Models in DeepStream on NVIDIA® Jetson™ modules. You can check our blog post here to see how to train an object detection model on your own dataset.
First, we need to install DeepStream:
sudo apt-get update
sudo apt-get install deepstream-5.1
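If the installation completes successfully, you can check the installed DeepStream version (this assumes the deepstream-app binary is on your PATH, which the package install normally takes care of):
deepstream-app --version-all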
Now, let's move our model and label files to the config folder. The default path is “/opt/nvidia/deepstream/deepstream-5.1/samples/configs/tlt_pretrained_models” if you haven’t changed it. If you installed DeepStream somewhere else, just find the deepstream folder on your system; the rest of the path is the same as above.
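As an example, assuming your exported .etlt model and labels.txt currently live in a hypothetical ~/tlt_exports folder (adjust the paths to wherever your files actually are), copying them could look like this; note that writing into /opt/nvidia usually requires sudo:
cd /opt/nvidia/deepstream/deepstream-5.1/samples/configs/tlt_pretrained_models
sudo cp ~/tlt_exports/detectnet_v2_resnet50_model_fp16.etlt .
sudo cp ~/tlt_exports/labels.txt .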
Now, let's install gedit to edit our configuration files:
sudo apt-get update
sudo apt-get install gedit
After that, we create and open a new config file in our DeepStream folder:
cd /opt/nvidia/deepstream/deepstream-5.1/samples/configs/tlt_pretrained_models
touch your_config_filename.txt
gedit your_config_filename.txt
You should see a blank text file after the last command; copy and paste the code below into it:
[property]
gpu-id=0
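# net-scale-factor below equals 1/255, which scales 8-bit pixel values into the 0-1 range.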
net-scale-factor=0.00392156862745098
model-color-format=0
# Write your label file’s path on the right side of the equals sign.
labelfile-path=labels.txt
# Write your model file’s path on the right side of the equals sign.
tlt-encoded-model=./detectnet_v2_resnet50_model_fp16.etlt
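# tlt-model-key must match the key you used when exporting the .etlt model.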
tlt-model-key=forecr
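# Model input dimensions in channels;height;width order.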
infer-dims=3;544;960
uff-input-order=0
uff-input-blob-name=input_1
output-blob-names=output_cov/Sigmoid;output_bbox/BiasAdd
batch-size=1
## 0=FP32, 1=INT8, 2=FP16 mode; write the number next to
## network-mode. In this example, we use FP16 mode.
network-mode=2
# Number of labels in your label file.
num-detected-classes=2
interval=0
gie-unique-id=1
is-classifier=0
[class-attrs-all]
pre-cluster-threshold=0.3
group-threshold=1
eps=0.2
roi-top-offset=0
roi-bottom-offset=0
detected-min-w=0
detected-min-h=0
detected-max-w=0
detected-max-h=0
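Before saving, make sure num-detected-classes matches the number of classes in your label file. As a minimal illustration, for a hypothetical two-class model detecting “car” and “pedestrian”, labels.txt would simply contain one class name per line:
car
pedestrian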
Once everything looks right, press ‘Ctrl + S’ to save, then close the file. Now that our inference config file is ready, we need to prepare the deepstream-app configuration file that will run our model with this config. To do that, either edit an existing deepstream-app config file in that folder or create a new one with these commands:
touch your_deepstreamapp_filename.txt
gedit your_deepstreamapp_filename.txt
Copy the code below into it, or simply change the file paths if you already have an app config file:
[application]
enable-perf-measurement=1
perf-measurement-interval-sec=1
[tiled-display]
enable=1
rows=1
columns=1
width=1280
height=720
gpu-id=0
[source0]
enable=1
#Type - 1=CameraV4L2 2=URI 3=MultiURI
type=3
num-sources=1
# You can choose any sample you want, and give its directory path.
uri=file://../../streams/sample_720p.jpg
gpu-id=0
[streammux]
gpu-id=0
batch-size=1
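# Timeout in microseconds after which a batch is pushed even if it is not full (40000 us = 40 ms).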
batched-push-timeout=40000
## Set muxer output width and height
width=1920
height=1080
[sink0]
# sink0 will display the result in real time if you enable it.
enable=0
#Type - 1=FakeSink 2=EglSink 3=File
type=2
sync=1
source-id=0
gpu-id=0
[osd]
enable=1
gpu-id=0
border-width=3
text-size=15
text-color=1;1;1;1;
text-bg-color=0.3;0.3;0.3;1
font=Arial
[primary-gie]
enable=1
gpu-id=0
# Modify as necessary
batch-size=1
#Required by the app for OSD, not a plugin property
bbox-border-color0=1;0;0;1
bbox-border-color1=0;1;1;1
bbox-border-color2=0;0;1;1
bbox-border-color3=0;1;0;1
gie-unique-id=1
# Replace the infer primary config file when you need to
# use other detection models
config-file=your_config_filename.txt
#config-file=config_infer_primary_ssd.txt
#config-file=config_infer_primary_dssd.txt
#config-file=config_infer_primary_retinanet.txt
#config-file=config_infer_primary_yolov3.txt
#config-file=config_infer_primary_detectnet_v2.txt
[sink1]
# sink1 will record the result and save it to the same folder as the app config file when enabled.
enable=1
type=3
#1=mp4 2=mkv
container=1
#1=h264 2=h265 3=mpeg4
codec=1
#encoder type 0=Hardware 1=Software
enc-type=0
sync=0
bitrate=2000000
#H264 Profile - 0=Baseline 2=Main 4=High
#H265 Profile - 0=Main 1=Main10
profile=0
# name of the output file.
output-file=out.mp4
source-id=0
[sink2]
# additional real time display.
enable=0
#Type - 1=FakeSink 2=EglSink 3=File 4=RTSPStreaming 5=Overlay
type=4
#1=h264 2=h265
codec=1
#encoder type 0=Hardware 1=Software
enc-type=0
sync=0
bitrate=4000000
#H264 Profile - 0=Baseline 2=Main 4=High
#H265 Profile - 0=Main 1=Main10
profile=0
# set below properties in case of RTSPStreaming
rtsp-port=8554
udp-port=5400
[tracker]
enable=1
tracker-width=640
tracker-height=384
#ll-lib-file=/opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_mot_iou.so
#ll-lib-file=/opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_nvdcf.so
ll-lib-file=/opt/nvidia/deepstream/deepstream-5.1/lib/libnvds_mot_klt.so
#ll-config-file required for DCF/IOU only
ll-config-file=../deepstream-app/tracker_config.yml
#ll-config-file=iou_config.txt
gpu-id=0
#enable-batch-process applicable to DCF only
enable-batch-process=1
[tests]
file-loop=0
After we finish, press ‘Ctrl + S’ again to save, and close the file. Now we can run our object detection in DeepStream:
deepstream-app -c your_deepstreamapp_filename.txt
Your result should be ready according to the sinks you enabled in your app config file: either a live display, or a video saved to the same folder as your deepstream-app config file.
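If you left [sink1] enabled, you can play the recorded out.mp4 back with any video player. For example, using GStreamer's command-line launcher, which is installed as a DeepStream dependency (this assumes a display is attached):
gst-launch-1.0 playbin uri=file://$(pwd)/out.mp4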
Thank you for reading our blog post.