Hardware: DSBOX-NX2, Camera (MIPI CSI or V4L2)
OS: JetPack 4.5
In this blog post, we will explain how to perform pose estimation with the poseNet object from jetson-inference. poseNet takes an image as input and, as output, draws lines on the image indicating the poses it detects.
Before we get started, make sure the jetson-inference project is set up. If you haven’t downloaded the project yet, click here to learn how to do it step by step.
If you used the Docker container when building the project, run the container and go into the build/aarch64/bin directory, where the project binaries are located.
If you built the project from source, go to the same folder without running the container.
Now you can run the pose estimation program with the following commands. We used the sample images that come with the project under the data/images folder and saved the output files to the data/images/test folder.
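One note first: in our runs, the output images are only saved if their destination folder already exists, so it is safest to create it before anything else (the relative path below assumes you are in build/aarch64/bin, as described above):

```shell
# create the output folder for the annotated images
# (relative path assumes the build/aarch64/bin working directory)
mkdir -p images/test
```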
posenet images/humans_0.jpg images/test/pose_humans_0.jpg #C++
posenet.py images/humans_0.jpg images/test/pose_humans_0.jpg #Python
posenet "images/humans_*.jpg" images/test/pose_humans_%i.jpg #C++
posenet.py "images/humans_*.jpg" images/test/pose_humans_%i.jpg #Python
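The %i in the output path is a sequence placeholder: when the input is a wildcard, jetson-inference substitutes the image number for it, producing one numbered output file per matched input. A quick illustration of the resulting file names (plain shell, just to show the pattern):

```shell
# illustration only: how an output pattern like "pose_humans_%i.jpg"
# expands into one numbered file name per matched input image
for i in 0 1 2; do
    echo "images/test/pose_humans_${i}.jpg"
done
# prints images/test/pose_humans_0.jpg through images/test/pose_humans_2.jpg
```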
The default pose network that poseNet uses is resnet18-body. You can change it by adding the --network flag. The other available pose models are resnet18-hand and densenet121-body. No sample hand images come with the project, so you will need to download some manually.
posenet --network=resnet18-hand "images/hand_*.jpg" images/test/pose_hand_%i.jpg #C++
posenet.py --network=resnet18-hand "images/hand_*.jpg" images/test/pose_hand_%i.jpg #Python
You can also use a camera with poseNet to perform pose estimation live. First, make sure to connect your camera before running the container.
Then, from the same directory as before, run the following command to launch the program. Use /dev/video0 for a USB camera or csi://0 for a CSI camera.
posenet --network=resnet18-hand /dev/video0 #C++
posenet.py --network=resnet18-hand /dev/video0 #Python
Thank you for reading our blog post.