
Welcome to the next instalment in my series on the ROS transform system (ROS tf). In my previous article I explained how BLUEsat’s rovers all run on the Robot Operating System (ROS), and what the ROS tf system was all about. In this article I will introduce a number of useful tools and packages that allow us to take advantage of ROS transforms, with very little code involved.

First we’ll look at the Robot State Publisher – a simple tool for publishing basic transforms of your robot’s joints using a model of your robot. Then I will demonstrate visualising TF trees in RQT, a useful way to identify problems with your transform system. Finally, I will explain how to visualise the movement of your robot using RViz.

Using Robot State Publisher

The standard ROS packages include a number of ways to publish transforms with little or no code. One of the most useful of these is the robot state publisher, which takes in a URDF file modelling your robot and outputs static transforms for fixed joints. It also listens for joint messages to publish the positions of the other joints. Creating a URDF file is outside the scope of this article, but I recommend the SolidWorks to URDF converter, although it is no longer maintained.

The robot state publisher package is effective because it abstracts away a lot of the work you would otherwise need to do. Developers only need to publish messages describing the angular rotation, velocity or extension of a joint (depending on the joint type – more on this later). There’s even a UI you can use for testing, provided by the joint state publisher ROS package.

Starting the publisher is reasonably simple. First you need to define the location of your URDF file. If you are using a ROS launch file, you can simply add the following line:

<param name="robot_description" command="$(find xacro)/xacro '$(find owr_bluetounge2_tf)/robots/owr_bluetounge2_tf.URDF'" />

Replace owr_bluetounge2_tf with the name of the ROS package your URDF file is in, and /robots/owr_bluetounge2_tf.URDF with the path to your URDF file within that package. If you want, you can also get rid of the $(find ... ) section completely and just use an absolute path.

Then, to run the node itself, add the following line to the file and launch it using the roslaunch command.

<node name="robot_state_publisher" pkg="robot_state_publisher" type="state_publisher" />

A complete file can be found on our GitHub.
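If you just want a starting point, a minimal launch file combining the two lines above looks something like this (using the same example package and file names):

<launch>
  <param name="robot_description" command="$(find xacro)/xacro '$(find owr_bluetounge2_tf)/robots/owr_bluetounge2_tf.URDF'" />
  <node name="robot_state_publisher" pkg="robot_state_publisher" type="state_publisher" />
</launch>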

By itself this will only publish the “static transforms.” These are the ones not associated with moving joints. To get ROS to publish transforms for joints that move, we need to provide it with either an “extension”, “rotation” or “velocity” for the joint. The type of value depends on the type of joint you defined in your URDF. In terms of units, ROS standardises on metres, radians and m/s for distance, rotation and speed respectively.

Now that you have your robot model set up, you probably want to test it. The easiest way to do this is with the “joint_state_publisher”. This provides a simple GUI to publish those topics.

A GUI generated by ROS's joint_state_publisher tool that shows a range of sliders used to control ROS transforms
You can use the joint_state_publisher in parallel with the robot_state_publisher to control transforms.

To run the “joint_state_publisher” simply type:

rosrun joint_state_publisher joint_state_publisher _use_gui:=true

This is useful for testing, but for the actual operation of your robot you probably want to automate this. To do so you need to publish “sensor_msgs/JointState” messages on the /joint_states topic describing the states of your joints.

As well as a standard ROS header describing time information, the message contains four arrays. The first is “name” and holds the names of your joints as they appear in the URDF file. This defines the order of the values in the other three arrays: “position”, “velocity” and “effort”. To preserve that ordering, you need to provide a value for each named joint in all three arrays, but the robot state publisher will only consider the position or velocity, depending on the type of joint.
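As a rough sketch, a minimal Python node driving two joints might look something like this (the joint names “arm_base_joint” and “claw_joint” are placeholders for the joint names in your own URDF):

#!/usr/bin/env python
# A minimal sketch of a node feeding joint states to the robot state publisher.
import rospy
from sensor_msgs.msg import JointState

rospy.init_node("joint_state_driver")
pub = rospy.Publisher("/joint_states", JointState, queue_size=10)
rate = rospy.Rate(10)  # publish at 10 Hz

while not rospy.is_shutdown():
    msg = JointState()
    msg.header.stamp = rospy.Time.now()
    msg.name = ["arm_base_joint", "claw_joint"]  # placeholder joint names
    msg.position = [0.5, 0.0]   # radians for revolute joints, metres for prismatic
    msg.velocity = [0.0, 0.0]
    msg.effort = [0.0, 0.0]
    pub.publish(msg)
    rate.sleep()

In practice these values would come from your motor encoders or control code rather than being hard coded.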

Visualising TF Trees in RQT

Once you have your transform tree up and running, you probably want to check that it is working as expected – this is where RQT comes in. If you haven’t encountered RQT before, I recommend you take a look at our previous tutorial on debugging ROS systems. As I mentioned in that article, RQT is basically a Swiss army knife full of handy ROS tools. You start it by typing rqt in the terminal, with roscore running.

In this case the plugin we are looking for is simply called the “TF Tree” tool. You can find it in the “Plugins” menu under “Visualisation”. It allows you to see the connections in your TF tree, when they were last updated, and, most importantly, any gaps in the tree.

The image below is a great example of this. We can see that there are several disjoint trees, which is a good indication that something is wrong with our ROS system. In order to be able to transform between all of the robot’s co-ordinate systems, the whole tree needs to be connected.

A visualisation of the BLUEtongue Rover's TF Tree in RQT, showing several disjointed trees.
In this RQT screenshot the BLUEtongue Rover’s TF Tree is shown as several disconnected graphs. This means that some of the transforms we were expecting are missing.

In the case above this is because the robot state publisher has not received states for all of the joints it is expecting. But other causes may include a node not running or a localisation system that hasn’t collected sufficient sensor data yet.

3D Representations of your Robot in RViz

Finally, let’s visualise the transforms we have been working with. This is where RViz comes in.

RViz provides methods for visualising a range of different transformed data. First, let’s see our robot in its transformed state.

The RViz window is split into three columns. From left to right: the first displays information about the data you are visualising, the second displays a 3D scene containing that data, and the third contains information about the camera.

To add your robot, make sure you have the joint state publisher running as described above. Then click “Add” in the left-hand column. A list of data types should appear in a new window. Select “Robot Model.” You should get something like the image below.

The BLUEtongue 2.0 Rover visualised on a 3D Plane in ROS's RViz tool.
The BLUEtongue 2.0 Rover fully visualised in RViz using the joint state publisher.

Now try moving some of the joints using the joint state publisher. The corresponding part of your robot should move!

Have a play around in RViz and see what else you can visualise.

Thanks for reading this instalment of my series on the ROS transform (TF) system. Please stay tuned for the next instalment, where I will cover using these transforms to process data in code!



At BLUEsat UNSW, the Off-World Robotics Software Team uses the Robot Operating System (ROS) as the basis for much of our code. The modularity, communications framework, and existing packages it provides are all factors in this, but another key benefit is its “transform” system. The transform library, often referred to as “tf”, provides a standardised way of managing the positions of the robots in a ROS system, as well as their moving components and features in the environment. This article is the first of a series that will introduce you to ROS tf.

The tf system operates by providing a tree of co-ordinate systems, called “frames”. Each frame is an independent co-ordinate system, with an origin that can move relative to its parent in the tree. A common example of this is localising a robot to its environment. To do this you would need at least two frames: one frame for your robot’s local area – let’s call it “map”, and another for your robot itself – let’s call it “base_link”. In order for ROS tf to localise your robot on the map we need to publish a transform between the “map” and “base_link” frames. Such a transform will often be published by a SLAM package such as hector_slam or an EKF package such as that provided in the ROS navigation packages.

BLUEsat's BLUEtongue Rover represented as a 3D model in RViz with each transform being marked by a set of axes.
ROS’s RViz tool can be used to display 3D representations of your transform tree. Here we see the BLUEtongue Rover with each set of axes representing a transform in our tree.

For every link in the tree we need to provide ROS with a “transform” that defines the relative position of the two frames. This is where ROS’s modularity kicks in, as you are not restricted to a single node publishing all of your transforms. Instead, as with normal ROS topics, you can have many nodes providing different transforms! This means that you could have one node publishing your robot’s position in your map co-ordinate system, and another node handling the rotation of your rover’s arm. ROS tf then puts these together for you, so that you could, for example, calculate the location of the end of your arm in the map co-ordinate system.
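As a rough illustration, a single node publishing just the “map” to “base_link” transform might look something like the Python sketch below; the pose values are placeholders for whatever your localisation code actually computes:

#!/usr/bin/env python
# A minimal sketch of one node publishing a single link in the tf tree.
import rospy
import tf

rospy.init_node("map_to_base_link_broadcaster")
broadcaster = tf.TransformBroadcaster()
rate = rospy.Rate(20)

while not rospy.is_shutdown():
    broadcaster.sendTransform(
        (1.0, 2.0, 0.0),                                      # translation (x, y, z) in metres
        tf.transformations.quaternion_from_euler(0, 0, 0.3),  # rotation as a quaternion
        rospy.Time.now(),                                     # timestamp for this transform
        "base_link",                                          # child frame
        "map")                                                # parent frame
    rate.sleep()

Another node could publish the transform for the rover’s arm joint in exactly the same way, and tf chains the two together for you.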

Transforms are time-stamped, which means that nodes can deal with temporary data loss or lag and still do accurate mapping calculations. It also means that they can be recorded with rosbag for future playback. However, the time-stamping can also create some issues, which I shall talk about later in the article.

The Justification for ROS TF

So why is this useful? Say I have a LIDAR on a gimbal, and I need to know the positions of its point cloud relative to my rover’s centre of mass. But the LIDAR only publishes a plane of points relative to its own position. Furthermore, the angle of the gimbal is controlled by a separate system to the one publishing the LIDAR data. Sound familiar?

In a traditional solution, the code that reads the LIDAR data must be aware of the position of the gimbal and manually transform its output into the desired co-ordinate system. This means that the gimbal must know to provide that data, and your system must have a way of syncing the timing of the inputs.

In ROS all of this work is done for you: the LIDAR driver publishes a header indicating that the data is in a co-ordinate system whose parent is the top of the gimbal, and the instant at which the data was recorded. The system responsible for controlling the gimbal publishes a transform indicating its position at that instant. And any system that needs the two pieces of data in a common co-ordinate system, such as that of the base of the rover, can simply run the data through a standard filter provided by ROS to get the information it needs. The video below shows our original BLUEtongue 1.0 Mars Rover doing exactly that.

 

Admittedly if only one system is using those two pieces of data there may not be a massive advantage, but imagine if you had many sensors on top of the gimbal, or many separate systems controlling moving parts…
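To give a feel for the consumer side of this example, here is a rough Python sketch that re-expresses incoming LIDAR points in the rover’s base frame. The topic name “/lidar/points”, the target frame “base_link” and the use of simple PointStamped messages are illustrative assumptions rather than a real driver interface:

#!/usr/bin/env python
# A minimal sketch of a consumer that lets tf do the geometry.
import rospy
import tf
from geometry_msgs.msg import PointStamped

rospy.init_node("lidar_point_consumer")
listener = tf.TransformListener()

def on_point(point):
    # The LIDAR driver filled in point.header.frame_id (the gimbal frame) and
    # point.header.stamp; tf uses both to pick the correct transform.
    point_in_base = listener.transformPoint("base_link", point)
    rospy.loginfo("Point in base_link: (%.2f, %.2f, %.2f)",
                  point_in_base.point.x, point_in_base.point.y, point_in_base.point.z)

rospy.Subscriber("/lidar/points", PointStamped, on_point)
rospy.spin()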

The Transform Tree

As mentioned previously, ROS transforms exist in a tree structure. This has some important implications for how we can structure our graph. The first of these is that each frame can have only one parent, so ROS tf won’t solve more complex graphs. This is especially relevant if you are using something like ROS’s Joint State Publisher to publish the position of joints on your robot, as ROS won’t do the calculations for multi-link joints. You’ll need to do that yourself.

It also means that if one link in your tree fails, you won’t be able to do transforms across that gap, as there is no redundancy. However, the system is reasonably resilient: you can do a transform between any two connected points, even if the rest of the graph is not fully connected.

Part of the BLUEtongue 2.0 Rover's Transform (TF) Tree in ROS's RQT tool.
Part of the BLUEtongue 2.0 Mars Rover’s Transform (TF) Tree displayed in ROS’s RQT tool.

As well as resilience, the tf tree offers several other advantages. As it’s a tree, each node only needs to know about the two co-ordinate frames it is publishing between, dramatically reducing the complexity of any publisher. This is especially useful in a distributed system with many moving parts, or even a ROS environment with multiple robots! Furthermore, if you follow ROS conventions for naming your frames, you can easily combine third-party tools, such as the many ROS packages used for localisation or for calculating the positions of your robot’s joints, without having to modify them in any way.

The Timing Problem

I’d be remiss if I didn’t mention that the ROS tf system is not without its issues and difficulties. Foremost of these is ensuring that you have the transforms you need when you need them. Normally any data you want to transform will have a standard ROS header as part of its message. This header not only identifies the frame, but also the time the data was recorded. Let’s look at our example with the LIDAR gimbal to see why this is important. In that scenario, by the time the LIDAR data is ready to publish on our ROS network, the gimbal would have moved on. However, we want to do our transforms using the position at the time the data was collected. The timestamp allows us to do this.

But, unsurprisingly, ROS only keeps a limited buffer of transforms in memory. This can cause problems if one of your transform publishers suffers from lag, or your data takes a long time to process before a transformation is applied. Usually your system will need to be able to cope with dropping frames of data, although there are other ways to handle this that I will discuss later in the series.
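Building on the consumer sketch above, one common pattern is to wait briefly for the transform matching the data’s timestamp and simply drop the data if it never arrives. The frame and topic names here are again placeholders:

#!/usr/bin/env python
# A sketch of a consumer that tolerates late or missing transforms.
import rospy
import tf
from geometry_msgs.msg import PointStamped

rospy.init_node("timing_aware_consumer")
listener = tf.TransformListener()

def on_point(point):
    try:
        # Block for up to 0.2 seconds waiting for a transform at the data's timestamp.
        listener.waitForTransform("base_link", point.header.frame_id,
                                  point.header.stamp, rospy.Duration(0.2))
    except tf.Exception:
        rospy.logwarn("No transform available for this measurement yet - dropping it")
        return
    point_in_base = listener.transformPoint("base_link", point)
    # ... carry on processing point_in_base ...

rospy.Subscriber("/lidar/points", PointStamped, on_point)
rospy.spin()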

Next Steps

Well that’s it for now. I hope the first part of this guide to the ROS tf system has proven useful to you! Keep an eye on our blog for my next article, where I’ll be looking at tools and libraries provided by ROS that take advantage of the tf system. In the meantime you may want to take a look at our guide to ROS debugging, which introduces some of the tools we are going to be looking at, or if you are feeling impatient you can skip ahead and look at the official tutorials. If you are interested in learning more about the robotics we do at BLUEsat UNSW you could also take a look at the Off-World Robotics page.