How to Run Tensorflow Object Detection with Docker on Jetson Modules?
Jetson AGX Xavier
Jetson Nano
Jetson TX2 NX
Jetson Xavier NX
06 April 2021
In this blog post, we explain how to run TensorFlow Object Detection with Docker on Jetson modules. The process is the same for all Jetson modules. This
github repo
was used as the reference for the whole process.
Firstly, we have to pull our Docker image from this
link.
We will be using the "NVIDIA L4T ML" image. You can pull it with the following command.
docker pull nvcr.io/nvidia/l4t-ml:r32.5.0-py3
To create a Docker container, run the following command. We pass the display parameters so that we can see the output image after detection, and the "--rm" flag so the container is removed when we exit it.
sudo docker run -it --rm --gpus all -e DISPLAY=:0 -v /tmp/.X11-unix:/tmp/.X11-unix --runtime nvidia --network host nvcr.io/nvidia/l4t-ml:r32.5.0-py3
Now we are inside the container. We should clone the TensorFlow models repository as below.
apt update
apt-get install -y git
git clone https://github.com/tensorflow/models.git
We should change the working directory.
cd models/research/object_detection/
Then, we need to clone the TensorFlow object detection tutorial repository as below.
git clone
https://github.com/EdjeElectronics/TensorFlow-Object-Detection-API-Tutorial-Train-Multiple-Objects-Windows-10.git
We need to copy the files of the tutorial repository into the "models/research/object_detection" folder and then remove the cloned directory, as below.
cp -r TensorFlow-Object-Detection-API-Tutorial-Train-Multiple-Objects-Windows-10/* .
rm -r TensorFlow-Object-Detection-API-Tutorial-Train-Multiple-Objects-Windows-10/
We need to make the "python" command use Python 3 for this session.
alias python=python3
We used a mask-detection model that was trained beforehand. If you would like to train your own custom model, you can check
this blog
on training a custom object detection model with
TensorFlow.
You can find the trained model and labelmap at the end of this blog post. You have to copy the model and labelmap into the Docker container. For this purpose, you can use the "scp" command as below.
scp <your_host_machine_name>@<your_host_machine_IP_address>:/home/<your_host_machine_name>/Downloads/tf_mask_model/frozen_inference_graph /models/research/object_detection/inference_graph/
scp <your_host_machine_name>@<your_host_machine_IP_address>:/home/<your_host_machine_name>/Downloads/tf_mask_model/labelmap.pbtxt /models/research/object_detection/training/
We also changed the name of the video used in the "Object_detection_video.py" file.
Please pay attention to the model, labelmap, and video paths to avoid possible problems.
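To see where those paths live, the detection script defines them near the top as path variables. The sketch below is hypothetical and mirrors the typical layout of the tutorial script; the exact variable names and file names may differ in your copy of the repo, so check your own "Object_detection_video.py".

```python
import os

# Hypothetical sketch of the path variables near the top of
# Object_detection_video.py (check your copy of the script for exact names).
MODEL_NAME = 'inference_graph'   # folder that holds the frozen model graph
VIDEO_NAME = 'test.mp4'          # change this to your own video file name

CWD_PATH = os.getcwd()
# Path to the frozen detection graph (the trained model)
PATH_TO_CKPT = os.path.join(CWD_PATH, MODEL_NAME, 'frozen_inference_graph.pb')
# Path to the label map that maps class ids to class names
PATH_TO_LABELS = os.path.join(CWD_PATH, 'training', 'labelmap.pbtxt')
# Path to the input video
PATH_TO_VIDEO = os.path.join(CWD_PATH, VIDEO_NAME)
```

If detection fails with a "file not found" style error, these three paths are the first thing to verify against where you copied the model and labelmap with scp.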
On our NVIDIA machine (outside the Docker container), we need to give display access permission to everyone to avoid display problems.
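The blog does not show the exact command for this step; a common way to grant X server access on the host (an assumption on our part, run it outside the container) is:

```shell
# Run on the Jetson host, NOT inside the container.
# Allow any client to connect to the X server so the container can open windows.
xhost +
```

Note that "xhost +" disables X access control entirely; a narrower alternative such as "xhost +local:" is safer if your setup supports it.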
After modifying the model files, we are ready to run detection on the video.
cd ..
python3 Object_detection_video.py
You can see the screenshots below.
You can customize the detection input by changing the video name and path in the "Object_detection_video.py" file. You can also run the model on still images and on a webcam stream.
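For images and webcam input, the tutorial repository also ships companion scripts; the names below follow the EdjeElectronics repo's convention, but verify them in your checkout before running:

```shell
# Run detection on a single image (set the image name inside the script first)
python3 Object_detection_image.py

# Run detection on a live webcam stream
# (the camera device must be accessible from inside the container)
python3 Object_detection_webcam.py
```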
You can download the model file from
here
!
Thanks for reading!