Skeleton Tracking




Skeleton tracking is the technique of tracking a human in front of a camera by identifying the parts of the human body. ViiMUnitSkeleton is the ViiM unit responsible for this. While reading this tutorial you should have the Skeleton project open, found in the ViiM_OF_Samples folder inside the apps folder. If you haven't read the other tutorials yet, we strongly advise you to read at least the tutorial about the ViiM engine and the tutorial about registering a unit. Before heading to the .cpp file, it's important to show how to initialize the skeleton unit itself and to explain some other variables needed in the header file:

The new bits of code will now be explained. We initialize the ViiMUnitSkeleton in line 34 just like any other unit. In line 36 we create a data structure that will be populated with the joint data of up to 10 skeletons, that is, 10 users in front of the camera. For each joint, this structure holds the rotation matrix, the quaternion, the rotation angles and the position. There is also a fifth element: the confidence that the information for that joint is accurate. The usersBeingTracked and users variables will be explained when they appear in the .cpp file, which is presented next.

We start, as usual, by firing up the ViiM engine, registering the unit and configuring some familiar ViiM properties in the openFrameworks setup function. But there are also some new methods specific to the skeleton unit:

We leave this line commented out because we want to track up to 10 skeletons, but if you want to limit the number of recognized skeletons, just call this method with the number of users you want.

Here we set the coordinate type to perspective, which means the origin is at the top left corner of the image, with the X-axis increasing to the right and the Y-axis increasing downwards, both measured in pixels. The Z-axis is given in world coordinates, measured in millimeters with its origin at the sensor. The other coordinate type is world coordinates, where the whole coordinate system has its origin at the sensor. We also set the queue type to CLOSER_USER, which decides who gets tracked when there are more skeletons in view than the maximum number allowed. This queue system is the same as the one in the User Tracking sample.

At the end of the setup function we add event listeners for 3 events: CALIBRATING is fired when a user appears in front of the camera and the system starts calibrating to the user's body; CALIBRATED is fired when the system has calibrated to the user's body and starts tracking the user's skeleton; FAILED happens when the calibration doesn't succeed. The update function simply calls

which provides this unit's processed image.

Now here comes the most important part of the code in the draw function.

We first start by updating the number and IDs of the users being tracked by the system, in lines 102 and 103. For each user found by the unit, this instruction copies the user's joint data. We then proceed to draw red squares at the positions of the detected joints. From lines 117 to 126 we go through each joint's position and draw a rectangle centered on the (joint_position_X, joint_position_Y) pixel, with a size depending on joint_position_Z. In practice this means that the farther you are from the camera, the smaller the squares will be.
