As part of a project we are working on, we are thinking of using the UR ROS driver to continuously read the pose of the UR robot, to aid in calibrating a vision sensor we are developing. Before we get too far into the detail of how to do this, we have a few high-level questions that you might be able to help with, just to make sure we fully understand the approach.
How can we get the UR robot to continuously publish the pose of the robot to ROS, regardless of what program is running?
Will the UR robot publish in teach mode over ROS?
Any setup guides / example programs that show this feature in action
Thanks in advance
- How can we get the UR robot to continuously publish the pose of the robot to ROS, regardless of what program is running?
The driver publishes joint data regardless of the program running, as soon as the robot is in running mode.
- Will the UR robot publish in teach mode over ROS?
Same as above. So yes, joint positions will always be published.
- Any setup guides / example programs that show this feature in action
The driver itself publishes joint data on the joint_states topic. Usually, you run a robot_state_publisher to convert joint data into tf positions. In the default launch files from the driver this is already happening. See here.
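As a rough sketch of subscribing to that topic (a rospy node, assuming the default topic name from the driver's launch files; only call main() on a machine with ROS installed):

```python
def summarize(names, positions):
    """Pure helper: map joint names to their current positions."""
    return dict(zip(names, positions))

def main():
    # ROS imports kept inside main() so the helper above works without ROS.
    import rospy
    from sensor_msgs.msg import JointState

    def callback(msg):
        # The driver publishes sensor_msgs/JointState messages continuously.
        rospy.loginfo("joints: %s", summarize(msg.name, msg.position))

    rospy.init_node("ur_joint_listener")
    rospy.Subscriber("joint_states", JointState, callback)
    rospy.spin()
```

This keeps running while any (or no) program is active on the robot, matching the behaviour described above.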
Many thanks for the reply.
Can I ask a further question? On the UR ROS driver page it states the following:
For using the ur_robot_driver with a real robot you need to install the externalcontrol-1.0.4.urcap which can be found inside the resources folder of this driver.
Is this a requirement if we only want to subscribe to the joint data topics or does this statement apply only if we want to control the robot?
Only if you want to control the robot.
Super - many thanks.
I may be out of date here because I haven’t looked at the new ROS driver. But a big heads up:
If you use the robot_state_publisher to calculate the kinematic pose of the robot from the joint angles published by the UR driver, you will be getting the uncalibrated pose. The only way to get the true pose of the robot is to use the TCP pose topic from the driver (our ROS driver publishes this directly). This pose is calculated internally on the robot using its calibrated DH model. It also means your collision checking will be wrong, as the link positions in ROS will be incorrect even though your TCP pose is correct.
Another way is to model the DH model as a ROS URDF and dynamically update the links using the calibration values read off the robot. This is the approach we took.
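To illustrate that idea, here is a hypothetical xacro fragment (names and the calibration-loading mechanism are illustrative, not from any particular driver) where a joint origin is driven by a per-robot calibration value instead of a hard-coded nominal offset:

```xml
<!-- Hypothetical sketch: load per-robot calibration values from a YAML file
     and use them in the joint origins, instead of nominal link offsets. -->
<xacro:property name="calib" value="${xacro.load_yaml('robot_calibration.yaml')}"/>

<joint name="shoulder_pan_joint" type="revolute">
  <parent link="base_link"/>
  <child link="shoulder_link"/>
  <!-- d1 comes from the robot's calibrated DH model, read off the controller -->
  <origin xyz="0 0 ${calib['d1']}" rpy="0 0 0"/>
  <axis xyz="0 0 1"/>
</joint>
```

Regenerating the URDF from these values keeps the ROS link positions consistent with the robot's internal kinematics.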
If you do not do this, there can be a discrepancy of as much as a few mm (depending on how far out of spec your robot's links are) at some joint positions. This will make certain camera-calibration approaches fail.
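A small self-contained sketch of why this matters: forward kinematics with the standard DH convention, comparing nominal UR5 parameters against a hypothetical calibration that perturbs each link length by 0.5 mm (the offsets are made up for illustration, not real calibration data):

```python
import math

def dh_matrix(theta, d, a, alpha):
    """Standard DH transform: Rz(theta) * Tz(d) * Tx(a) * Rx(alpha)."""
    ct, st = math.cos(theta), math.sin(theta)
    ca, sa = math.cos(alpha), math.sin(alpha)
    return [
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def tcp_position(joints, d, a, alpha):
    """Chain the six DH transforms and return the TCP translation."""
    T = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
    for q, di, ai, al in zip(joints, d, a, alpha):
        T = matmul(T, dh_matrix(q, di, ai, al))
    return (T[0][3], T[1][3], T[2][3])

# Nominal UR5 DH parameters (metres / radians).
D     = [0.089159, 0.0, 0.0, 0.10915, 0.09465, 0.0823]
A     = [0.0, -0.425, -0.39225, 0.0, 0.0, 0.0]
ALPHA = [math.pi / 2, 0.0, 0.0, math.pi / 2, -math.pi / 2, 0.0]

# Hypothetical calibration: each link length off by 0.5 mm.
A_CAL = [ai + 0.0005 for ai in A]

joints = [0.3, -1.2, 1.0, -0.5, 0.7, 0.1]
nominal = tcp_position(joints, D, A, ALPHA)
calibrated = tcp_position(joints, D, A_CAL, ALPHA)
err = math.dist(nominal, calibrated)
print(f"TCP discrepancy: {err * 1000:.2f} mm")
```

Even sub-millimetre per-link errors accumulate to a millimetre-scale TCP discrepancy, which is exactly the effect described above.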
See my post history for more information regarding the DH kinematic model the robot uses.
It appears this has been addressed in the official ROS driver:
I’ll leave my post here as a warning for anyone not using the official driver.