This tutorial goes quickly through the concepts and steps required to acquire, through the ROS framework, a skeleton detected by the Microsoft Kinect device and the OpenNI driver and middleware.
Getting a working OpenNI + NITE installation sounds like a nightmare to more than a few people who have tried to do it on their own. Getting a working API binding for your favorite language can also be a difficult quest.
For these reasons a lot of people use versions of OpenNI bundled in other frameworks, which generally means that other people have taken care of fixing the OpenNI + NITE installation process and maintain a working API.
While this might be seen as adding another (useless?) layer on top of so many abstraction layers, it often saves you a lot of tedious work.
Finally, and this is more for roboticists, using ROS as such a framework has many advantages, among them the transparent network distribution of data and the visualization tools that we will use later in this tutorial.
This tutorial assumes you have a Unix-like system with ROS installed along with its openni_kinect package (we use the ROS-bundled OpenNI version).
Packages or stacks can be installed either through distribution-specific binaries or using rosdep (you will first need to set up your ROS environment, as detailed below).
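For instance, on Ubuntu the stack can be installed from the ROS package repositories; the package name below follows the usual ros-RELEASE-STACK naming and assumes the Electric release used in the rest of this tutorial:

sudo apt-get install ros-electric-openni-kinect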
ROS code is organised into stacks and packages (a stack contains many packages).
In this tutorial we will mainly need the following concepts of ROS, described in the ROS documentation as:

- nodes: a node is a process that performs computation;
- topics: named buses over which nodes exchange messages; nodes can publish messages to a topic and/or subscribe to a topic to receive messages.

This means that we will need to have:

- a node publishing the skeleton data on a topic (the openni_tracker node provided with ROS), and
- a node subscribing to that topic to read the data (the one we will write at the end of this tutorial).
Since publishers and subscribers need not be on the same computer, it is perfectly possible, and transparent from that point, to have the Kinect plugged into one computer and access the skeleton positions from another.
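For instance, assuming the Kinect and roscore run on a machine named kinect-host (a hypothetical hostname), the second computer only needs to point its ROS_MASTER_URI to that machine:

export ROS_MASTER_URI=http://kinect-host:11311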
ROS comes with a set of command line tools; a useful resource to remember the associated commands is the ROS cheat sheet.
You will first need to configure ROS. This means setting up the environment variables so that:

- the ROS binaries can be found through your PATH,
- the ROS Python libraries can be found through your PYTHONPATH,
- your ROS packages can be located through ROS_PACKAGE_PATH,
- the nodes know where to reach the ROS master through ROS_MASTER_URI.
This process is described in detail in a dedicated ROS tutorial and summarized below.
Setting up a ROS environment requires:
source /opt/ros/electric/setup.bash
export ROS_ROOT=/opt/ros/electric/ros
export PATH=$ROS_ROOT/bin:$PATH
export PYTHONPATH=$ROS_ROOT/core/roslib/src:$PYTHONPATH
export ROS_PACKAGE_PATH=~/ros_workspace:/opt/ros/electric/stacks:$ROS_PACKAGE_PATH
export ROS_MASTER_URI=http://localhost:11311
(Of course replace ~/ros_workspace with your actual ROS workspace.)
All the following commands need to be run in a terminal where the ROS environment variables are set. If you chose not to add the previous commands to your .bashrc or equivalent, please take care of loading your ROS environment manually in each terminal you open.
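A convenient option (a mere suggestion, not required by ROS) is to save the commands above in a file, say ~/ros_env.sh, and source it:

# load the ROS environment in the current terminal
source ~/ros_env.sh
# or load it automatically in every new terminal
echo "source ~/ros_env.sh" >> ~/.bashrc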
Once your environment is set up correctly, you should be able to run the ROS master by typing:
roscore
This displays some information and does not release the terminal, so you have to open a new terminal for the next command.
You can then launch the openni_tracker node with the command:
rosrun openni_tracker openni_tracker
and if everything is OK, nothing should be output until someone is detected in front of the Kinect, at which point you should see something similar to:
[ INFO] [1334336554.595585110]: New User 1
The user can then get calibrated by adopting the Psi pose; useful output should inform you about the calibration process.
Note that you will need to calibrate a user first in order to actually get skeleton data from the Kinect.
You can check that everything went well by typing (again in a new terminal):
rostopic list
and get
/rosout
/rosout_agg
/tf
The /tf topic should only be there if a user is calibrated and will contain the skeleton data.
Actually /tf comes from the tf (transform) stack, which provides the data type in which the skeleton poses are given.
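You can also inspect the raw transforms as they are published, which helps checking the frame names used later in this tutorial:

rostopic echo /tf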
At this point it is possible to visualize the data published on the /tf topic by using for example the rviz tool, launched by:
rosrun rviz rviz
Then select what is in the /tf topic for display.
The last step is to write our own subscriber to actually access the skeleton data.
The example we give in this tutorial is written in Python.
We first need to create a ROS package that will contain our code. For ROS to be able to locate your code later, you have to create this package in the ROS workspace we set up earlier. This can be done through the roscreate-pkg command:
roscreate-pkg NAME rospy tf
where NAME is the name you want to give to the package; rospy and tf specify that our package depends on both of these libraries.
This will generate a directory named after NAME.
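You can check that ROS locates the new package correctly (and jump into its directory) with:

rospack find NAME
roscd NAME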
Now, accessing the skeleton values from Python requires the following steps:
import roslib
roslib.load_manifest('NAME')

import rospy
import tf

rospy.init_node('kinect_listener', anonymous=True)

listener = tf.TransformListener()

trans, rot = listener.lookupTransform('/openni_depth_frame', '/left_knee_1', rospy.Time(0))
Note that we have to provide the TransformListener with a time, to get the frame at that particular time. Passing rospy.Time(0) asks for the latest available translation and rotation values for the frame.
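In practice the transform may not be available yet (for instance before any user is calibrated), in which case lookupTransform raises an exception. A slightly more robust sketch, using the same frame names as above, waits for the transform and catches tf exceptions:

import rospy
import tf

rospy.init_node('kinect_listener', anonymous=True)
listener = tf.TransformListener()

rate = rospy.Rate(10)  # poll at 10 Hz
while not rospy.is_shutdown():
    try:
        # block until the frame is available, then take the latest transform
        listener.waitForTransform('/openni_depth_frame', '/left_knee_1',
                                  rospy.Time(0), rospy.Duration(4.0))
        trans, rot = listener.lookupTransform('/openni_depth_frame',
                                              '/left_knee_1', rospy.Time(0))
        print trans, rot
    except tf.Exception:
        # user not tracked yet (or tracking lost): just try again
        continue
    rate.sleep()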
This mechanism can be integrated in a simple Kinect class (that can be put in a Python file in the package's src directory), which provides easy access to all the frames of a given user through:
# Import the kinect class from your file
from yourfile import Kinect

# Init the Kinect object
kin = Kinect()

# Get values
for i in xrange(10):
    print i, kin.get_posture()
The code of the corresponding class can be found at https://gist.github.com/2414166.
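For reference, here is a minimal sketch of what such a class could look like. The joint frame names and the _1 user suffix are assumptions based on what openni_tracker publishes for the first calibrated user; the actual class we use is the one in the gist above.

import rospy
import tf

# Fixed frame in which openni_tracker expresses the skeleton
BASE_FRAME = '/openni_depth_frame'
# Joint frames published by openni_tracker (suffixed with the user number)
FRAMES = ['head', 'neck', 'torso',
          'left_shoulder', 'left_elbow', 'left_hand',
          'right_shoulder', 'right_elbow', 'right_hand',
          'left_hip', 'left_knee', 'left_foot',
          'right_hip', 'right_knee', 'right_foot']

class Kinect(object):
    def __init__(self, user=1):
        rospy.init_node('kinect_listener', anonymous=True)
        self.listener = tf.TransformListener()
        self.user = user

    def get_posture(self):
        """Return a list of (translation, rotation) pairs, one per joint,
        or None if the user is not (yet) tracked."""
        try:
            posture = []
            for frame in FRAMES:
                trans, rot = self.listener.lookupTransform(
                    BASE_FRAME, '/%s_%d' % (frame, self.user), rospy.Time(0))
                posture.append((trans, rot))
            return posture
        except tf.Exception:
            return None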
In this tutorial we only covered the steps required to get working access to skeleton information. More in-depth presentations of the ROS system can be found in the ROS wiki tutorials.