How to Run YOLOX Object Detection on PyTorch with Docker on NVIDIA® Jetson™ Modules?
WHAT YOU WILL LEARN?
1- How to Download YOLOX?
2- How to Pull the Machine Learning Container?
3- How to Run the Machine Learning Docker Container?
4- How to Test the YOLOX Object Detection Algorithm?
5- What are the Performances of Different YOLOX Models?
ENVIRONMENT
Hardware: DSBOARD-NX2
OS: Jetpack 4.6 (Ubuntu 18.04)
In this blog post, we will show how to install and use YOLOX, the anchor-free version of the YOLO object detection algorithm. We will also run the demo inside a Docker container so that the dependencies do not take up extra space on the device.
How to Download YOLOX?
We will use the PyTorch implementation from Megvii-BaseDetection.
$ git clone https://github.com/Megvii-BaseDetection/YOLOX.git
Download YOLOX models of different sizes from Megvii-BaseDetection/YOLOX. Create a folder named models inside the YOLOX folder and place the downloaded weights there. You can also download sample images and videos for testing.
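For example, the weights for the small model can be fetched directly into the models folder. The release tag and file name below are only an example; verify the current download link on the repository's Releases page before using it.
$ mkdir -p YOLOX/models
$ wget -P YOLOX/models https://github.com/Megvii-BaseDetection/YOLOX/releases/download/0.1.1rc0/yolox_s.pth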
Open the tools/demo.py script:
$ gedit tools/demo.py
Add the following code at lines 165 and 166 if you would like to see the FPS performance while running the video demonstrations.
infer_time = time.time() - t0
logger.info("Infer time: {:.4f}s".format(infer_time))
logger.info("FPS: {:.4f}".format(1 / infer_time))
How to Pull the Machine Learning Container?
We will use NVIDIA L4T ML, a machine learning container created specifically for Jetson products. Depending on your JetPack version, you must select the matching version of the Docker image; for JetPack 4.6, use the L4T R32.6.1 tag. To see all available tags, check the l4t-ml machine learning container page in the NVIDIA NGC catalog.
$ sudo docker pull nvcr.io/nvidia/l4t-ml:r32.6.1-py3
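Once the pull completes, you can confirm that the image is available locally:
$ sudo docker images nvcr.io/nvidia/l4t-ml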
How to Run and Set Up the Machine Learning Docker Container?
To start the Docker container, run the following command.
-it: runs the container in interactive mode with a terminal attached
--rm: removes the container automatically after you exit, so it does not keep taking up space
-v: mounts a host directory into the container so its files are accessible inside
$ sudo docker run -it --rm --gpus all -v /home/nvidia/YOLOX/:/YOLOX -e DISPLAY=:0 -v /tmp/.X11-unix:/tmp/.X11-unix nvcr.io/nvidia/l4t-ml:r32.6.1-py3
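If the demo window fails to open later on because the container cannot reach the display, allowing local connections to the X server on the host (outside the container) usually resolves it:
$ xhost +local: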
Now, we need to set up the environment and install the packages needed for YOLOX.
First, check your Python3 version.
# python3 --version
If it is below 3.7, you need to upgrade it to make it compatible with the required packages.
# apt-get update -y
# apt-get install python3.7
# update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.6 1
# update-alternatives --install /usr/bin/python3 python3 /usr/bin/python3.7 2
# update-alternatives --config python3
Press 2 to select Python3.7 as the default Python3 version.
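You can confirm that the switch took effect:
# python3 --version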
You will also need to install the libraries for Python3.7.
# apt install libpython3.7-dev
Then, go inside the YOLOX directory and run the following commands to install the required packages. If you encounter an error with a particular package, uninstalling and reinstalling it usually solves the problem.
# cd /YOLOX/
# python3 -m pip install -U pip
# python3 -m pip install -r requirements.txt
# python3 -m pip install -v -e .
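A quick import confirms that the editable install succeeded (the __version__ attribute is defined by the YOLOX package; its exact value depends on the commit you cloned):
# python3 -c "import yolox; print(yolox.__version__)"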
Install the COCO API (pycocotools) to be able to use the COCO datasets.
# python3 -m pip install cython
# python3 -m pip install 'git+https://github.com/cocodataset/cocoapi.git#subdirectory=PythonAPI'
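A simple import verifies that pycocotools built correctly:
# python3 -c "from pycocotools.coco import COCO; print('pycocotools OK')"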
How to Test the YOLOX Object Detection Algorithm?
You can run the YOLOX demo script with the following commands. Specify the model name with the -n argument and the path to the model weights with the -c argument.
To test images, add “image”:
# python3.7 tools/demo.py image -n yolox-s -c models/yolox_s.pth --path assets/dog.jpg --conf 0.25 --nms 0.45 --tsize 640 --save_result --device [cpu/gpu]
To test videos, add “video”:
# python3.7 tools/demo.py video -n yolox-s -c models/yolox_s.pth --path test/pedestrians.mp4 --conf 0.25 --nms 0.45 --tsize 640 --save_result --device [cpu/gpu]
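With --save_result, the annotated image or video is written under the YOLOX_outputs directory; the subfolder is timestamped per run (the path below assumes the default output directory and the yolox-s experiment name):
# ls YOLOX_outputs/yolox_s/vis_res/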
What are the Performances of Different YOLOX Models?
When we run the demonstration, the inference time and FPS for each frame appear in the terminal.
The average YOLOX inference times and FPS results on the Jetson Xavier NX module are as follows: