How to Train a Custom Image Classification Dataset using NVIDIA Transfer Learning Toolkit?

31 August 2021
ENVIRONMENT

Hardware: Corsair Gaming Desktop PC

OS: Ubuntu 18.04.5 LTS

GPU: GeForce RTX 2060 (6 GB)


Configure The Dataset


In this blog post, we will train a custom image classification model with a ResNet-101 backbone. First, we will download and configure the dataset. Then, we will train and prune the model. Finally, we will retrain the pruned model and export it.


The requirements are listed on the Transfer Learning Toolkit (TLT) Quick Start page (in this guide, we won't use the NGC Docker registry login or an API key).


To begin with, download the archived training files from the download section of this post. Create a “~/tlt-experiments” folder and move the extracted files into it. This folder will be used as a shared folder between the Docker container and the host PC. Then, download the car dataset from here. Configure the dataset folder as shown below. The dataset is described in detail here.
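
For reference, TLT classification expects each split (train, val, test) to contain one sub-folder per class, with the images of that class inside it. A sketch of the expected layout (the class names below are placeholders, not the actual classes of the car dataset):

dataset/
  train/
    class_a/
      image_0001.jpg
      image_0002.jpg
      ...
    class_b/
      ...
  val/
    class_a/
    class_b/
  test/
    class_a/
    class_b/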





In our dataset, we renamed all of the files in the train folder and copied some of the pictures to the test and val folders.
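
As an illustration, this kind of preparation can be scripted for a single class like the sketch below (the class folder name “class_a”, the dataset path and the number of copied images are hypothetical; adapt them to your own data):

# Rename the images of one class sequentially (class name and path are hypothetical)
cd ~/tlt-experiments/dataset/train/class_a
i=0
for f in *.jpg; do
  i=$((i+1))
  mv "$f" "$(printf 'class_a_%04d.jpg' "$i")"
done

# Copy a handful of samples of the same class into the val and test splits
mkdir -p ../../val/class_a ../../test/class_a
cp class_a_000[1-5].jpg ../../val/class_a/
cp class_a_000[6-9].jpg ../../test/class_a/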


Now, start the Docker container, change the current directory to “tlt-experiments” and download the pretrained ResNet-101 model with these terminal commands: 



docker run --gpus all -it -v ~/tlt-experiments:/workspace/tlt-experiments nvcr.io/nvidia/tlt-streamanalytics:v3.0-py3
cd /workspace/tlt-experiments/
wget https://api.ngc.nvidia.com/v2/models/nvidia/tlt_pretrained_classification/versions/resnet101/files/resnet_101.hdf5
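
Optionally, before starting the training, you can confirm that the GPU is visible inside the container and that the pretrained model has been downloaded:

nvidia-smi
ls -lh resnet_101.hdf5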

Training and Pruning the Model


Now, we can train the model on our dataset (this process may take a while) and then evaluate it.



mkdir experiment_output
mkdir experiment_output/train

classification train -e config.txt -r experiment_output/train/ -k forecr
classification evaluate -e config_evaluate.txt -k forecr
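
The training behaviour is controlled by the config.txt spec included in the download section of this post. As a rough orientation only, a TLT classification spec combines a model section and a training section similar to the excerpt below; the paths and values here are illustrative, and the file from the download section remains the authoritative one:

model_config {
  arch: "resnet"
  n_layers: 101
  use_batch_norm: True
  input_image_size: "3,224,224"
}
train_config {
  train_dataset_path: "/workspace/tlt-experiments/dataset/train"
  val_dataset_path: "/workspace/tlt-experiments/dataset/val"
  pretrained_model_path: "/workspace/tlt-experiments/resnet_101.hdf5"
  batch_size_per_gpu: 16
  n_epochs: 30
}

Checkpoints are written per epoch as resnet_0XX.tlt under experiment_output/train/weights/, which is where the resnet_030.tlt file used in the pruning step below comes from.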



Prune the trained model to reduce its size (and speed up inference) with these terminal commands:



mkdir experiment_output/prune

classification prune -m experiment_output/train/weights/resnet_030.tlt -o /workspace/tlt-experiments/experiment_output/prune/model_pruned.tlt -k forecr
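
The command above relies on the default pruning settings. If you want to trade model size against accuracy, classification prune also accepts a pruning-threshold argument; the value below is only an example and is not the setting used in this post:

classification prune -m experiment_output/train/weights/resnet_030.tlt -o /workspace/tlt-experiments/experiment_output/prune/model_pruned.tlt -k forecr -pth 0.6

A higher threshold prunes more aggressively and produces a smaller model, at the cost of more accuracy to recover during retraining.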

Retraining and Exporting the Pruned Model


NVIDIA® recommends retraining this pruned model on the same dataset to regain the accuracy lost during pruning. Retrain the pruned model and then evaluate it again.


mkdir experiment_output/retrain

classification train -e config_retrain.txt -r experiment_output/retrain/ -k forecr
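
To check how much accuracy the retraining recovered, run the evaluation again. Note that the model path inside config_evaluate.txt must point to the retrained weights (experiment_output/retrain/weights/resnet_002.tlt) for this run; updating that path is assumed here:

classification evaluate -e config_evaluate.txt -k forecr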


The final model is located at (inside the Docker container):

“/workspace/tlt-experiments/experiment_output/retrain/weights/resnet_002.tlt”

and at (on the host PC):

“~/tlt-experiments/experiment_output/retrain/weights/resnet_002.tlt”


Export the model as an “.etlt” file so it can be deployed directly. We exported the retrained model in both precisions (FP16 and FP32) with these commands:



mkdir exported_models

classification export -m experiment_output/retrain/weights/resnet_002.tlt -o exported_models/resnet101_classification_fp16.etlt -k forecr --data_type fp16

classification export -m experiment_output/retrain/weights/resnet_002.tlt -o exported_models/resnet101_classification_fp32.etlt -k forecr --data_type fp32


For more details, you can follow this TLT Image Classification page.



Thank you for reading our blog post.