
How to Run Nvidia Jetson Inference Example on Forecr Products (DetectNet)

Jetson AGX Xavier | Jetson Nano | Jetson TX2 NX | Jetson Xavier NX

05 August 2021
WHAT YOU WILL LEARN

1- How to locate objects using detectNet from images, videos, or a camera capture?

2- How to write your own object detection program using Python?

ENVIRONMENT

Hardware: Jetson Nano Developer Kit

OS: Jetpack 4.5


In this blog post, you can learn how to locate objects on the Jetson™ Nano™ using jetson-inference. We use the jetson-inference GitHub repository as a reference for this post. To set up the jetson-inference project, click here.

How to locate objects using detectNet from images, videos, or a camera capture?


In the image recognition examples we covered previously, the network outputs a single classification for the entire input image.

Now, we’re going to focus on object detection and we will find where the various objects are located in the frame.


How to locate objects with detectNet?


First of all, if you want to use the Docker container, you need to go to the jetson-inference directory:


cd jetson-inference 



Then, start the Docker container:


docker/run.sh 


Inside the container, go to the "build" directory:


cd build/aarch64/bin 


Inside the Docker container, you can locate objects with the detectNet program, which is available in both C++ and Python. You can also add your own images to the data/images directory under jetson-inference.


# C++
./detectnet images/peds_1.jpg images/test/output.jpg
# Python
./detectnet.py images/peds_1.jpg images/test/output.jpg


Various test images are found under the images directory, such as cat_*.jpg, dog_*.jpg, horse_*.jpg, peds_*.jpg, etc.
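
detectNet loads the SSD-Mobilenet-v2 model by default; you can also pass a --network flag to select a different pre-trained detection model, assuming it was downloaded with the Model Downloader tool during setup. For example:


# hypothetical example; requires the ssd-inception-v2 model to be downloaded
./detectnet.py --network=ssd-inception-v2 images/peds_1.jpg images/test/output.jpg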


If you have multiple images that you'd like to process at one time, use a wildcard:


# C++
./detectnet "images/peds_*.jpg" images/test/peds_output_%i.jpg
# Python
./detectnet.py "images/peds_*.jpg" images/test/peds_output_%i.jpg


Processing Video Files


If we want to detect objects in a video, we first need to mount the directory containing the videos when launching the Docker container:


docker/run.sh --volume /usr/share/visionworks/sources/data:/videos


We can then list the mounted videos:

ls /videos


Now, it's time to run detectNet on the video:

detectnet --threshold=0.50 /videos/cars.mp4


How to Run the Live Camera Detection Demo?


We can locate objects with detectNet from a real-time camera stream. The types of supported cameras include:


MIPI CSI cameras (csi://0)

V4L2 cameras (/dev/video0)

RTP/RTSP streams (rtsp://username:password@ip:port)


C++

./detectnet csi://0                    # MIPI CSI camera
./detectnet /dev/video0                # V4L2 camera
./detectnet /dev/video0 output.mp4     # save to video file


Python


./detectnet.py csi://0                 # MIPI CSI camera
./detectnet.py /dev/video0             # V4L2 camera
./detectnet.py /dev/video0 output.mp4  # save to video file

How to write your own object detection program using Python?


In this part, we will code our own object detection program in Python. This program will locate objects in the camera stream using detectNet.


- First, go to the home directory

- Create a directory named "my-detection"

- Go inside the "my-detection" directory

- Make an empty file called "my-detection.py"

- Get into jetson-inference and launch the container with the directory mounted, as shown below
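
A minimal sketch of the first four steps in the terminal (assuming the default home directory):


mkdir ~/my-detection
cd ~/my-detection
touch my-detection.py
cd ~/jetson-inference


Then launch the container, mounting my-detection into it: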


docker/run.sh --volume ~/my-detection:/my-detection


Now, we open the empty file that we created and start to write our own program.


First, we import the modules needed for object detection and for camera and display streaming:


import jetson.inference 
import jetson.utils


Next, we load our detection network. We create an object called "net" by loading "ssd-mobilenet-v2", one of the most widely used detection models, and set the detection threshold:

net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5) 


If you want, you can review the detectNet Python API reference: https://rawgit.com/dusty-nv/jetson-inference/dev/docs/html/python/jetson.inference.html#detectNet


Now, we will create an instance of the object to connect to the camera device for streaming:


camera = jetson.utils.videoSource("csi://0")      # '/dev/video0' for V4L2 



We create the video output object and a loop that will run until the user exits:

display = jetson.utils.videoOutput("display://0") # 'my_video.mp4' for file 
while display.IsStreaming(): # main loop will go here


Inside the loop, we capture the next frame from the camera and get a list of detections:


img = camera.Capture() 
detections = net.Detect(img)
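
If you'd like to inspect the results, each entry in the detections list exposes fields such as ClassID, Confidence, and the bounding box coordinates, per the jetson.inference Python API. A minimal sketch:


# optional: print the label, confidence, and bounding box of each detection
for detection in detections:
    print(net.GetClassDesc(detection.ClassID),
          detection.Confidence,
          (detection.Left, detection.Top, detection.Right, detection.Bottom))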


Finally, we'll visualize the results with OpenGL and update the title of the window:


display.Render(img)
display.SetStatus("Object Detection | Network {:.0f} FPS".format(net.GetNetworkFPS()))
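
Putting all of the fragments together, the complete my-detection.py looks like this. Note that the capture, detect, and render calls are indented inside the while loop:


import jetson.inference
import jetson.utils

# load the detection network and set the detection threshold
net = jetson.inference.detectNet("ssd-mobilenet-v2", threshold=0.5)

camera = jetson.utils.videoSource("csi://0")      # '/dev/video0' for V4L2
display = jetson.utils.videoOutput("display://0") # 'my_video.mp4' for file

while display.IsStreaming():
    img = camera.Capture()            # capture the next frame
    detections = net.Detect(img)      # detect objects in the frame
    display.Render(img)               # visualize the results with OpenGL
    display.SetStatus("Object Detection | Network {:.0f} FPS".format(net.GetNetworkFPS()))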



To run the program that we coded, start it with the Python interpreter from the terminal:


python3 my-detection.py 

Thank you for reading our blog post.