How to Train a Custom Object Detection Dataset using NVIDIA Transfer Learning Toolkit?

19 August 2021

1- How to convert KITTI formatted dataset into TFRecords?

2- How to train and prune the model?

3- How to retrain the pruned model?

4- How to export the model?


Hardware: Gigabyte Aero 15 Laptop

OS: Ubuntu 18.04.5 LTS

GPU: GeForce RTX 2060 GPU (6 GB)

Converting KITTI Dataset to TFRecords

In this blog post, we will train a custom object detection model with DetectNet-v2. First, we will convert the KITTI formatted dataset into TFRecord files. Then, we will train and prune the model. Finally, we will retrain the pruned model and export it.

The requirements are listed on the Transfer Learning Toolkit’s (TLT) Quick Start page (in this guide, we won’t use the NGC Docker registry and API key).

To begin with, download the archived training files from here. Create a “~/tlt-experiments” folder and move the extracted files into it. This folder will be used as a shared folder between the Docker container and the host PC.

Now, start the Docker container, change the current directory to “tlt-experiments”, and make sure the ResNet-50 pretrained model for training is inside this folder. Use these terminal commands (replace <tlt-docker-image> with the TLT container image you are using):

docker run --gpus all -it -v ~/tlt-experiments:/workspace/tlt-experiments <tlt-docker-image>
cd /workspace/tlt-experiments/
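The -v flag bind-mounts the host folder into the container, so every path used below exists in both places at once. A minimal sketch of that mapping (the helper function and the example file name are illustrative, not part of TLT):

```python
# Illustration of the "docker run -v" bind mount: files under
# ~/tlt-experiments on the host appear under /workspace/tlt-experiments
# inside the container.
import os

HOST_DIR = os.path.expanduser("~/tlt-experiments")
CONTAINER_DIR = "/workspace/tlt-experiments"

def host_to_container(host_path: str) -> str:
    """Translate a host path under the mounted folder to its in-container path."""
    rel = os.path.relpath(os.path.expanduser(host_path), HOST_DIR)
    return os.path.join(CONTAINER_DIR, rel)

print(host_to_container("~/tlt-experiments/detectnet_v2_train_resnet50_kitti.txt"))
# -> /workspace/tlt-experiments/detectnet_v2_train_resnet50_kitti.txt
```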

Convert the KITTI formatted dataset to TFRecord files.

detectnet_v2 dataset_convert -d detectnet_v2_tfrecords_kitti_trainval.txt -o tfrecords/kitti_trainval/
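The converter reads KITTI label files: one text file per image, with one 15-field line per object. A minimal sketch of that format (the parser and the sample line are illustrative; for 2D detection, only the class name and the bounding-box fields matter):

```python
# A minimal sketch of a KITTI label line: 15 space-separated fields per object.
# For 2D object detection, the class name (field 1) and the 2D bounding box
# (fields 5-8) are the relevant parts.
def parse_kitti_line(line: str) -> dict:
    f = line.split()
    return {
        "class": f[0],
        "truncated": float(f[1]),
        "occluded": int(f[2]),
        "bbox": tuple(float(v) for v in f[4:8]),  # left, top, right, bottom (pixels)
    }

label = "car 0.00 0 -1.58 587.01 173.33 614.12 200.12 1.65 1.67 3.64 -0.65 1.71 46.70 -1.59"
obj = parse_kitti_line(label)
print(obj["class"], obj["bbox"])
```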

Training and Pruning the Model

Now, we can train our dataset (This process may take a while) and evaluate it.

detectnet_v2 train -e detectnet_v2_train_resnet50_kitti.txt -r detectnet_v2/experiment_dir_unpruned -k forecr -n resnet50_detector

ls -lh detectnet_v2/experiment_dir_unpruned/weights/

detectnet_v2 evaluate -e detectnet_v2_train_resnet50_kitti.txt -m detectnet_v2/experiment_dir_unpruned/weights/resnet50_detector.tlt -k forecr

Prune the model to reduce its size and speed up inference with these terminal commands:

mkdir -p detectnet_v2/experiment_dir_pruned

detectnet_v2 prune -m detectnet_v2/experiment_dir_unpruned/weights/resnet50_detector.tlt -o detectnet_v2/experiment_dir_pruned/resnet50_nopool_bn_detectnet_v2_pruned.tlt -eq union -pth 0.8 -k forecr

ls -lh detectnet_v2/experiment_dir_pruned
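The -pth argument is the pruning threshold: conceptually, channels whose normalized weight magnitude falls below it are removed and the rest are kept. The sketch below illustrates that idea with made-up per-channel norms; it is not TLT's exact pruning algorithm:

```python
# Conceptual sketch of threshold-based channel pruning (not TLT's exact
# algorithm): normalize each channel's weight norm by the largest norm in
# the layer, then keep only channels at or above the threshold (-pth).
def prune_channels(norms, pth=0.8):
    """Return the indices of channels that survive pruning."""
    peak = max(norms)
    return [i for i, n in enumerate(norms) if n / peak >= pth]

norms = [0.9, 0.2, 1.0, 0.85, 0.4]     # per-channel weight norms (made up)
print(prune_channels(norms, pth=0.8))  # surviving channel indices
```

A higher -pth prunes more aggressively (a smaller, faster model with a bigger accuracy drop before retraining); a lower value keeps more channels.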

Retraining the Pruned Model

NVIDIA recommends retraining the pruned model on the same dataset to regain accuracy. Retrain the pruned model and evaluate it again.

detectnet_v2 train -e detectnet_v2_retrain_resnet50_kitti.txt -r detectnet_v2/experiment_dir_retrain -k forecr -n resnet50_detector_pruned

ls -lh detectnet_v2/experiment_dir_retrain/weights/

detectnet_v2 evaluate -e detectnet_v2_retrain_resnet50_kitti.txt -m detectnet_v2/experiment_dir_retrain/weights/resnet50_detector_pruned.tlt -k forecr

The final model is located at (in Docker):

/workspace/tlt-experiments/detectnet_v2/experiment_dir_retrain/weights/resnet50_detector_pruned.tlt

and at (in Host PC):

~/tlt-experiments/detectnet_v2/experiment_dir_retrain/weights/resnet50_detector_pruned.tlt

Exporting the Model

Export the model as a “.etlt” file so it can be deployed directly. We exported the model in each precision (FP16, FP32, and INT8) with these commands:

detectnet_v2 export -m detectnet_v2/experiment_dir_retrain/weights/resnet50_detector_pruned.tlt -o exported_models/fp16/detectnet_v2_resnet50_model_fp16.etlt -k forecr --data_type fp16 --gen_ds_config -e detectnet_v2_retrain_resnet50_kitti.txt

detectnet_v2 export -m detectnet_v2/experiment_dir_retrain/weights/resnet50_detector_pruned.tlt -o exported_models/fp32/detectnet_v2_resnet50_model_fp32.etlt -k forecr --data_type fp32 --gen_ds_config -e detectnet_v2_retrain_resnet50_kitti.txt

detectnet_v2 calibration_tensorfile -e detectnet_v2_retrain_resnet50_kitti.txt -o exported_models/int8/calibration.tensor
detectnet_v2 export -m detectnet_v2/experiment_dir_retrain/weights/resnet50_detector_pruned.tlt -o exported_models/int8/detectnet_v2_resnet50_model_int8.etlt -k forecr --cal_data_file exported_models/int8/calibration.tensor --data_type int8 --gen_ds_config -e detectnet_v2_retrain_resnet50_kitti.txt
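For INT8, the calibration tensor file collects sample activations so the exporter can choose scaling factors that map floating-point values onto the int8 range. A simplified sketch of the idea, using max calibration with made-up values (TLT's actual calibration is more sophisticated):

```python
# Simplified sketch of INT8 calibration (max calibration, not TLT's exact
# method): pick a per-tensor scale from observed activations, then map
# floats onto the int8 range [-127, 127], clamping at the edges.
def int8_scale(activations):
    """Scale so the largest observed magnitude maps to 127."""
    return max(abs(a) for a in activations) / 127.0

def quantize(x, scale):
    """Quantize one float to a clamped int8 value."""
    return max(-127, min(127, round(x / scale)))

acts = [-2.0, 0.5, 1.5, 3.2]   # sample activations (made up)
s = int8_scale(acts)
print(quantize(1.6, s))
```

Values outside the calibrated range are clamped, which is why the calibration data should be representative of real inputs.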

For more details, you can follow the TLT Object Detection page for DetectNet_v2.

Thank you for reading our blog post.