How to Run Jetson Inference Example on Forecr Products (PoseNet)

Jetson AGX Xavier | Jetson Nano | Jetson TX2 NX | Jetson Xavier NX

18 August 2021
ENVIRONMENT

Hardware: DSBOX-NX2, Camera (MIPI CSI or V4L2)

OS: Jetpack 4.5

In this blog post, we will explain how to perform pose estimation with the poseNet object from jetson-inference. poseNet takes an image as input and, for each pose it detects, draws the skeleton lines and keypoints onto the output image.


Before we get started, make sure the jetson-inference project is set up. If you have not downloaded the project yet, see our previous blog post, which explains how to do it step by step.

How to Perform Pose Estimation Using poseNet?


If you used the Docker container when building the project, start the container and change into the build/aarch64/bin directory, where the project binaries are located.

cd jetson-inference 
docker/run.sh
cd build/aarch64/bin


If you built the project from source, go to the same folder directly, without running the container.


cd jetson-inference/build/aarch64/bin


Now you can run the pose estimation program with the following commands. We used the sample images that come with the project under the data/images folder and saved the output files to the data/images/test folder (create this folder first if it does not exist).

Single images:

posenet images/humans_0.jpg  images/test/pose_humans_0.jpg  #C++


OR

posenet.py images/humans_0.jpg  images/test/pose_humans_0.jpg  #Python


Multiple images: 

posenet "images/humans_*.jpg"  images/test/pose_humans_%i.jpg  #C++


OR

posenet.py "images/humans_*.jpg"  images/test/pose_humans_%i.jpg   #Python
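For reference, here is a small Python sketch of how a %i-style output pattern maps to per-image filenames. This is illustrative only; posenet performs the substitution internally when it processes the matched inputs.

```python
# Illustrative only: a "%i" output pattern yields one numbered file
# per matched input image, as in the multiple-image commands above.
inputs = ["images/humans_0.jpg", "images/humans_1.jpg", "images/humans_2.jpg"]
pattern = "images/test/pose_humans_%i.jpg"

# Substitute the sequential index of each input into the pattern.
outputs = [pattern.replace("%i", str(i)) for i, _ in enumerate(inputs)]
print(outputs[0])   # images/test/pose_humans_0.jpg
print(outputs[-1])  # images/test/pose_humans_2.jpg
```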

The default pose network that poseNet uses is resnet18-body. You can change it with the --network flag; the other available pose models are resnet18-hand and densenet121-body. The project does not ship with sample hand images, so download some manually.

posenet --network=resnet18-hand "images/hand_*.jpg" images/test/pose_hand_%i.jpg #C++ 


OR

posenet.py --network=resnet18-hand "images/hand_*.jpg" images/test/pose_hand_%i.jpg #Python




How to Use a Camera for Pose Estimation?


You can also use a camera with poseNet to perform pose estimation. First, make sure to connect your camera before running the container.

Then, from the same directory as before, run the following command to launch the program. Use /dev/video0 for a USB (V4L2) camera or csi://0 for a MIPI CSI camera.

posenet --network=resnet18-hand /dev/video0    #C++


OR


posenet.py --network=resnet18-hand /dev/video0    #Python
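To make the camera URI convention explicit, here is a small, hypothetical Python helper (not part of jetson-inference) that builds the input strings described above; posenet itself simply accepts the resulting string on the command line.

```python
def camera_uri(kind, index=0):
    """Build a camera input string for posenet.

    Hypothetical helper for illustration:
      'csi' -> csi://N   (MIPI CSI camera)
      'usb' -> /dev/videoN  (USB / V4L2 camera)
    """
    if kind == "csi":
        return "csi://%d" % index
    elif kind == "usb":
        return "/dev/video%d" % index
    raise ValueError("kind must be 'csi' or 'usb'")

print(camera_uri("usb"))     # /dev/video0
print(camera_uri("csi", 1))  # csi://1
```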




Thank you for reading our blog post. 

