Training a Custom Object Detection Model with YOLOv5

23 March 2021

1- Setting up the Docker container

2- Configuring the dataset

3- Training the dataset


In this tutorial, we use the YOLOv5 repository together with the NVIDIA Container Toolkit to train a custom masked-face dataset. First, we will set up our Docker environment and check GPU support from Python 3. Next, we will place the dataset in the shared YOLOv5 folder (mounted between the host PC and the temporary Docker container). Finally, we will train the dataset with the YOLOv5m weights.


Host PC Info:

  • Operating System: Ubuntu 18.04.5 LTS
  • CPU: AMD Ryzen 7 3700X 8-Core Processor
  • RAM: 32 GB DDR4 - 2133 MHz
  • Graphic Card: RTX 2060 (6GB)
  • Graphic Card Driver Version: 460.39
  • CUDA Version: 11.2


Docker Container Setup


To begin with, let's check our Docker version, then download the YOLOv5 repository and set up the PyTorch container to train the masked-face dataset.


$ docker --version



$ git clone https://github.com/ultralytics/yolov5
$ docker pull
$ docker run -it --rm --gpus all -v ${PWD}/yolov5:/yolov5
$ cd /yolov5/
$ sed -i "s/opencv/#opencv/g" requirements.txt  # OpenCV comes from apt (python3-opencv) below
$ sed -i "s/torch/#torch/g" requirements.txt    # the PyTorch container already ships torch
$ apt update && apt install -y zip htop screen libgl1-mesa-glx
$ apt install -y python3-opencv
$ pip3 install --upgrade pip
$ pip3 install --no-cache -r requirements.txt
$ python3
>>> import torch
>>> print('torch %s %s' % (torch.__version__, torch.cuda.get_device_properties(0) if torch.cuda.is_available() else 'CPU'))
>>> exit()



Dataset Configuration


Then, create a "dataset" folder and move the training and validation sets into it.
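YOLOv5 expects an images/labels folder pair for each split, with one ".txt" label file per image. A minimal sketch of that layout (the paths match the ones we will write into data.yaml):

```shell
# Create the standard YOLO dataset layout:
# each split gets an images/ folder and a parallel labels/ folder
mkdir -p dataset/train/images dataset/train/labels
mkdir -p dataset/valid/images dataset/valid/labels
```

For example, dataset/train/images/face001.jpg pairs with dataset/train/labels/face001.txt.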


Now, let's create and configure the ".yaml" files. Our dataset has two classes, so we set the "nc" constant to 2 and the "names" array to "Mask" and "Without_mask".

$ touch data.yaml
$ echo "train: /yolov5/dataset/train/images" > data.yaml
$ echo "val: /yolov5/dataset/valid/images" >> data.yaml
$ echo "nc: 2" >> data.yaml
$ echo "names: ['Mask', 'Without_mask']" >> data.yaml
$ cp models/*.yaml .
$ sed -i "s/nc: 80/nc: 2/g" yolov5*.yaml
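After running these commands, data.yaml should contain exactly the following (and each copied yolov5*.yaml now has nc: 2):

```yaml
train: /yolov5/dataset/train/images
val: /yolov5/dataset/valid/images
nc: 2
names: ['Mask', 'Without_mask']
```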


Next, download the pretrained weights into the weights folder:

$ cd weights/
$ bash
$ cd ..


Dataset Training


In our setup, we trained for 100 epochs, but you can increase this to get more stable results. We started with a batch size of 16, but our GPU's memory was not enough for training (in your setup, a larger batch may work, depending on your dataset and image size). So we decreased the batch size step by step (16, 8, 4, 2, 1). If you use a batch size that is too large for your GPU, you may see a CUDA out-of-memory error.
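Halving the batch size halves the memory needed per step but doubles the optimizer steps per epoch. A quick back-of-the-envelope check (the dataset size of 1200 images is a made-up example, not our actual dataset):

```shell
# iterations per epoch = ceil(dataset_size / batch_size)
dataset_size=1200   # hypothetical; substitute your own image count
for batch in 16 8 4 2 1; do
  echo "batch=$batch -> $(( (dataset_size + batch - 1) / batch )) iterations/epoch"
done
```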



$ cd /yolov5/
$ python3 train.py --img 608 --batch 2 --epochs 100 --data '/yolov5/data.yaml' --cfg /yolov5/yolov5m.yaml --weights '/yolov5/weights/yolov5m.pt' --cache


A few moments later...



After the training ended...



Our weight files (best.pt and last.pt) are saved into the runs/train/exp10/weights folder.



Our result graphs are in the runs/train/exp10/results.png file.



Now, let's test our weight file:

$ cd /yolov5/
$ python3 detect.py --weights runs/train/exp10/weights/best.pt --img 416 --conf 0.4 --source /yolov5/dataset/valid/images

Test results are saved to the runs/detect/exp folder. Let's check some of the results:



You can also train with the other weights (YOLOv5s, YOLOv5l, YOLOv5x) to match more powerful or more minimal deployment environments. It's up to you.

For example, if you want to train with the YOLOv5s model, you can use this command:


$ python3 train.py --img 608 --batch 2 --epochs 100 --data '/yolov5/data.yaml' --cfg /yolov5/yolov5s.yaml --weights '/yolov5/weights/yolov5s.pt' --cache


Thank you for reading our blog post. 

If you want to get updated on new blog posts, product launches and discounts, you can fill out the form and sign up for our newsletter. By signing up, you can reach various blog posts about AI, deep learning, machine vision, high-speed cameras, and industrial interfaces. 

You will receive a "free shipping" code for next purchase immediately.