How to Train a Custom Object Detection Model with YOLOX? | Forecr.io


Jetson AGX Xavier | Jetson Nano | Jetson TX2 NX | Jetson Xavier NX

29 November 2021
ENVIRONMENT

OS: Ubuntu 20.04.3 LTS

CPU: Intel® Core™ i7-10870H 8-Core Processor

RAM: 16 GB 

GPU: NVIDIA® GeForce RTX™ 3060 Laptop GPU 6GB

GPU Driver Version: 470.63.01

CUDA Version: 11.4

In this blog post, we will show how to run custom training on YOLOX, a high-performance member of the YOLO family. We will use the PyTorch implementation of YOLOX from the Megvii-BaseDetection repository.

If you haven’t installed YOLOX yet, you can go to this blog post to learn how to install and test YOLOX inside Docker. We will not use a container in this post; however, the same steps can be applied if you wish to use one, such as the PyTorch container.

How to Download Custom Dataset on Roboflow?


You can download an object detection dataset from Roboflow or create your own. First, sign up for Roboflow. Then choose a dataset from “Public Datasets” under the Resources tab on the left side of the page. We will continue with the “Packages” dataset in this blog post.

Click Fork Dataset at the top right of the page. After the copy is prepared, resize the pictures to 640x640. Click Start Generating, then export as Pascal VOC. You can also download the dataset in COCO format and change the configuration files accordingly. Choose Show Download Code and copy the terminal code as seen below.


Create a folder inside the previously installed YOLOX directory and download the dataset into it.


mkdir voc_dataset
cd voc_dataset
curl -L "https://app.roboflow.com/ds/cvxXaVV3Ly?key=nph7tx5nET" > roboflow.zip; unzip roboflow.zip; rm roboflow.zip

How to Prepare the Dataset?


To convert the dataset into the supported format, copy the training split to the datasets/VOCdevkit folder (run this from the YOLOX root).


cp -r voc_dataset/train/ ./datasets/VOCdevkit


Create a VOC2007 (or VOC2012) folder inside it.


mkdir datasets/VOCdevkit/VOC2007
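YOLOX’s VOC dataloader expects the standard VOCdevkit layout inside VOC2007: an Annotations/ folder with the XML files, a JPEGImages/ folder with the pictures, and a split file under ImageSets/Main/. The Roboflow export keeps images and annotations together in one flat folder, so a small helper can rearrange them. The sketch below is only a suggestion; to_voc_layout is a hypothetical name, and you may need to adjust the split file name (e.g. trainval.txt) to match what your experiment file reads.

```python
import os
import shutil


def to_voc_layout(src_dir: str, voc_root: str) -> None:
    """Rearrange a flat Roboflow Pascal VOC export (images and .xml
    files in one folder) into the VOCdevkit/VOC2007 layout."""
    ann_dir = os.path.join(voc_root, "Annotations")
    img_dir = os.path.join(voc_root, "JPEGImages")
    set_dir = os.path.join(voc_root, "ImageSets", "Main")
    for d in (ann_dir, img_dir, set_dir):
        os.makedirs(d, exist_ok=True)

    ids = []
    for name in sorted(os.listdir(src_dir)):
        src = os.path.join(src_dir, name)
        if name.endswith(".xml"):
            shutil.copy(src, ann_dir)
            ids.append(name[:-4])  # image id = file name without .xml
        elif name.endswith((".jpg", ".jpeg", ".png")):
            shutil.copy(src, img_dir)

    # List every annotated image id in the trainval split file.
    with open(os.path.join(set_dir, "trainval.txt"), "w") as f:
        f.write("\n".join(ids) + "\n")


# Example (from the YOLOX root):
# to_voc_layout("voc_dataset/train", "datasets/VOCdevkit/VOC2007")
```

If your export already separates train/valid/test folders, you can run the helper once per split and write a separate split file for each.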


You need to change the VOC classes according to the classes in your dataset. Run the following command to open the class list file.


gedit yolox/data/datasets/voc_classes.py
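For the Packages dataset, which has a single class, the file would reduce to something like the following. The class name must match the names in your annotation XML files; "package" here is an assumption, so check your own export.

```python
# yolox/data/datasets/voc_classes.py
# Replace the default 20 Pascal VOC names with your dataset's classes.
# "package" is an assumed name for the single class in the Packages
# dataset; it must match the <name> tags in the annotation XML files.
VOC_CLASSES = (
    "package",
)
```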


You also need to change the number of classes (self.num_classes) in the configuration file. The default number of epochs is 300, which may take a while depending on your hardware. We decreased it to 10 to get faster results by adding a new line (self.max_epoch = 10).


gedit exps/example/yolox_voc/yolox_voc_s.py
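The two edits together would look roughly like this. This is only an excerpt of the experiment file, not a drop-in replacement; the surrounding lines come from the YOLOX repository and may differ in your version.

```python
# exps/example/yolox_voc/yolox_voc_s.py (excerpt)
class Exp(MyExp):
    def __init__(self):
        super().__init__()
        self.num_classes = 1   # was 20; one class in the Packages dataset
        self.depth = 0.33
        self.width = 0.50
        self.max_epoch = 10    # added line; the default is 300
```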

How to Train the Model?

To train on the custom dataset, pretrained weights must be downloaded first. You can create a directory inside YOLOX and download each weight using the following commands.

mkdir models
cd models/
wget https://github.com/Megvii-BaseDetection/YOLOX/releases/download/0.1.1rc0/yolox_s.pth
wget https://github.com/Megvii-BaseDetection/YOLOX/releases/download/0.1.1rc0/yolox_m.pth
wget https://github.com/Megvii-BaseDetection/YOLOX/releases/download/0.1.1rc0/yolox_l.pth
wget https://github.com/Megvii-BaseDetection/YOLOX/releases/download/0.1.1rc0/yolox_x.pth
wget https://github.com/Megvii-BaseDetection/YOLOX/releases/download/0.1.1rc0/yolox_darknet.pth
wget https://github.com/Megvii-BaseDetection/YOLOX/releases/download/0.1.1rc0/yolox_nano.pth
wget https://github.com/Megvii-BaseDetection/YOLOX/releases/download/0.1.1rc0/yolox_tiny.pth
cd ..


You can start training with the following command.

-f: Experiment (configuration) file

-d: Number of GPU devices

-b: Batch size

-c: Pretrained weight to fine-tune from

--fp16: Use mixed-precision training

-o: Occupy GPU memory in advance



python3 tools/train.py -f exps/example/yolox_voc/yolox_voc_s.py -d 1 -b 8 --fp16 -o -c models/yolox_s.pth

How to Evaluate the Model?

We need to evaluate the latest trained checkpoint to get accuracy figures. You can find the checkpoint weights under YOLOX_outputs/yolox_voc_s/.


python3 tools/eval.py -n yolox-s -c YOLOX_outputs/yolox_voc_s/latest_ckpt.pth -b 8 -d 1 --conf 0.001 -f exps/example/yolox_voc/yolox_voc_s.py

-n: Model name

-c: Last trained checkpoint

--conf: Confidence threshold used during evaluation


How to Test the Model?


You can test the model on either an image or a video. Specify the test image/video with the --path flag.


python3 tools/demo.py image -f exps/example/yolox_voc/yolox_voc_s.py -c YOLOX_outputs/yolox_voc_s/latest_ckpt.pth --path voc_dataset/test/IMG_6820_jpg.rf.59a863ec763624df4077718cdeaddcf0.jpg --conf 0.25 --nms 0.45 --tsize 640 --save_result --device gpu


The tested image/video is saved under YOLOX_outputs/yolox_voc_s/vis_res/, as shown at the end of the terminal output.


Thank you for reading our blog post.