We need to set up the coordinate transform tree for the robot. This tree encodes the physical transforms between the components of the robot: for example, the transform between the coordinate axes of the robot's base and the Kinect, the IMU, or any other sensor present on the robot.
Once the tf tree is defined, converting a point represented in one coordinate frame into any other frame present in the tree is handled by the ROS tf libraries; we can obtain the transformed point with a few lines of code. The following C++ snippet illustrates how.
tf::TransformListener goal_listener;

geometry_msgs::PointStamped pt_to_be_transformed;
geometry_msgs::PointStamped transformed_pt;

// The source frame and timestamp must be set before transforming.
pt_to_be_transformed.header.frame_id = "source_frame";
pt_to_be_transformed.header.stamp = ros::Time(0);  // latest available transform
pt_to_be_transformed.point.x = x;
pt_to_be_transformed.point.y = y;
pt_to_be_transformed.point.z = z;

// Wait until the listener has received the required transform, then apply it.
goal_listener.waitForTransform("target_frame", "source_frame",
                               ros::Time(0), ros::Duration(1.0));
goal_listener.transformPoint("target_frame", pt_to_be_transformed, transformed_pt);
Follow the transform configuration guide to set up the coordinate frames and the transform trees.
There is a launch file, tf.launch, which creates the basic coordinate transformations:
roslaunch ros_erle_rover_navigation tf.launch
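The exact contents of tf.launch depend on the package, but a typical minimal version publishes fixed sensor mounts with tf's static_transform_publisher. The sketch below is an assumption for illustration; the frame names (imu_link, camera_link) and offsets are placeholders, not the package's actual values.

```xml
<launch>
  <!-- args: x y z yaw pitch roll parent_frame child_frame period_ms -->
  <node pkg="tf" type="static_transform_publisher" name="base_to_imu"
        args="0.0 0.0 0.05 0 0 0 base_link imu_link 100" />
  <node pkg="tf" type="static_transform_publisher" name="base_to_kinect"
        args="0.1 0.0 0.2 0 0 0 base_link camera_link 100" />
</launch>
```

Each node republishes its fixed transform every 100 ms, which is enough for sensors rigidly mounted to the base.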
Once the tf tree is defined, most problems can be debugged by inspecting the transform configuration tree. The coordinate transform tree can be visualized using the following commands.
rosrun tf view_frames
evince frames.pdf
The first command generates a file, frames.pdf, in the current directory, containing information about the existing coordinate frames, the links between frames, their publishing frequencies, etc.; the second opens it in a PDF viewer.