As part of a project we are working on, we are thinking of using the UR ROS driver to continuously subscribe to the pose of the UR robot, to aid in calibrating a vision sensor we are developing. Before we get too far into the detail of how to do this, we have a few high-level questions you might be able to help with, just to make sure we fully understand the approach.
How can we get the UR robot to continuously publish the pose of the robot to ROS, regardless of what program is running?
Will the UR robot publish in teach mode over ROS?
Are there any setup guides / example programs that show this feature in action?
How can we get the UR robot to continuously publish the pose of the robot to ROS, regardless of what program is running?
The driver publishes joint data regardless of the program running, as soon as the robot is in running mode.
Will the UR robot publish in teach mode over ROS?
Same as above. So yes, joint positions will always be published.
Are there any setup guides / example programs that show this feature in action?
The driver itself publishes joint data on the joint_states topic. Usually, you run a robot_state_publisher to convert the joint data into tf frames. In the default launch files from the driver this is already happening. See here.
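For example, a minimal rospy subscriber to that topic could look like the sketch below (the topic name /joint_states matches the driver's default configuration; adjust it if your setup uses a namespace):

```python
#!/usr/bin/env python
# Minimal sketch: print the robot's joint positions as the UR driver
# publishes them on /joint_states (sensor_msgs/JointState).
import rospy
from sensor_msgs.msg import JointState

def callback(msg):
    # msg.name and msg.position are parallel lists, one entry per joint.
    rospy.loginfo("joints: %s", dict(zip(msg.name, msg.position)))

if __name__ == "__main__":
    rospy.init_node("joint_state_listener")
    rospy.Subscriber("/joint_states", JointState, callback)
    rospy.spin()
```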
Can I ask a further question? On the UR ROS driver page it states the following:
For using the ur_robot_driver with a real robot you need to install the externalcontrol-1.0.4.urcap which can be found inside the resources folder of this driver.
Is this a requirement if we only want to subscribe to the joint data topics, or does this statement only apply if we want to control the robot?
I may be out of date here because I haven’t looked at the new ROS driver. But a big heads up:
If you use the robot_state_publisher to calculate the kinematic pose of the robot from the joint angles published by the UR driver, you will be getting the uncalibrated pose. The only way to get the true pose of the robot is to read the TCP pose from the driver (our ROS driver publishes this directly); that pose is calculated internally on the robot using its calibrated DH model. It also means your collision checking will be wrong, as the link positions in ROS will be incorrect even when your TCP pose is correct.
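For readers on the official ur_robot_driver, the controller-reported TCP pose is broadcast as a TF frame rather than a plain topic; a sketch of reading it with tf2 might look like this (the frame names base and tool0_controller follow the official driver's convention and may differ for other drivers):

```python
#!/usr/bin/env python
# Sketch: look up the controller-reported TCP pose via TF.
# Frame names assume the official ur_robot_driver, which broadcasts
# the calibrated TCP pose as "tool0_controller" relative to "base".
import rospy
import tf2_ros

if __name__ == "__main__":
    rospy.init_node("tcp_pose_listener")
    buf = tf2_ros.Buffer()
    listener = tf2_ros.TransformListener(buf)
    rate = rospy.Rate(10)
    while not rospy.is_shutdown():
        try:
            t = buf.lookup_transform("base", "tool0_controller", rospy.Time(0))
            rospy.loginfo("TCP at x=%.4f y=%.4f z=%.4f",
                          t.transform.translation.x,
                          t.transform.translation.y,
                          t.transform.translation.z)
        except (tf2_ros.LookupException, tf2_ros.ConnectivityException,
                tf2_ros.ExtrapolationException):
            pass
        rate.sleep()
```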
Another way is to express the DH model as a ROS URDF and dynamically update the links using the calibration values read off the robot. This is the approach we took.
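A rough sketch of that idea, assuming the calibration has already been read off the robot into a YAML file and your URDF xacro accepts it as an argument (the file names and the kinematics_params argument below are illustrative, modeled on the official description packages, not our actual implementation):

```python
#!/usr/bin/env python
# Sketch: render a URDF whose link transforms are corrected by the
# calibration values read off the robot, then upload it so
# robot_state_publisher uses the calibrated kinematics.
# "ur10_calibrated.urdf.xacro" and "my_robot_calibration.yaml" are
# hypothetical placeholders for your own files.
import subprocess
import rospy

if __name__ == "__main__":
    rospy.init_node("load_calibrated_description")
    urdf = subprocess.check_output([
        "xacro", "ur10_calibrated.urdf.xacro",
        "kinematics_params:=my_robot_calibration.yaml",
    ]).decode()
    rospy.set_param("/robot_description", urdf)
```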
If you do not do this, there can be a discrepancy of as much as a few mm (depending on how far out of spec your robot links are) at some joint positions. This will make certain camera calibration approaches not work.
See my post history for more information regarding the DH kinematic model the robot uses.
Edit:
It appears this has been addressed in the official ROS driver, which can extract the factory calibration from the robot and apply it to the URDF (see its ur_calibration package).
I’ll leave my post here as a warning for anyone not using the official driver.
Hoping someone sees this since it's been years since the last reply. How do you get the driver to run continuously? I access the controllers through the test_move Python script and then select the one I want to use (I'm not sure how else to access them), and I made it a subscriber whose values are constantly updated by a publisher on another PC. From my understanding, since it's trajectory-based it won't update in real time, and for me it only starts to move once I close the program.

Also, since test_move uses a set duration, every time a set of values is sent (at about 10 Hz) it delays the program by one second, or however long I assign the duration, which adds up to a huge delay over time. If I set the duration too low, I get an error because I didn't give the robot enough time to complete the motion. Is there a way to get rid of the duration so it can move without a delay? At first I was using the joint-based controller and updating those values, but now I'm looking to use a Cartesian one and update the position values. Any tips or recommendations are appreciated, thank you!
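For context, the duration being described is the time_from_start field on each trajectory point. A stripped-down sketch of the test_move-style mechanism (the controller name assumes the official driver's scaled_pos_joint_traj_controller; yours may differ):

```python
#!/usr/bin/env python
# Stripped-down sketch of the test_move-style mechanism described above:
# each goal carries a trajectory point with a fixed time_from_start, so
# every update blocks for that duration before the next can be sent.
import rospy
import actionlib
from control_msgs.msg import (FollowJointTrajectoryAction,
                              FollowJointTrajectoryGoal)
from trajectory_msgs.msg import JointTrajectoryPoint

JOINT_NAMES = [
    "shoulder_pan_joint", "shoulder_lift_joint", "elbow_joint",
    "wrist_1_joint", "wrist_2_joint", "wrist_3_joint",
]

if __name__ == "__main__":
    rospy.init_node("trajectory_sketch")
    client = actionlib.SimpleActionClient(
        "scaled_pos_joint_traj_controller/follow_joint_trajectory",
        FollowJointTrajectoryAction)
    client.wait_for_server()

    goal = FollowJointTrajectoryGoal()
    goal.trajectory.joint_names = JOINT_NAMES
    point = JointTrajectoryPoint()
    point.positions = [0.0, -1.57, 0.0, -1.57, 0.0, 0.0]
    point.time_from_start = rospy.Duration(1.0)  # the "set duration"
    goal.trajectory.points.append(point)

    client.send_goal(goal)
    client.wait_for_result()  # blocks until the motion finishes
```

Not an authoritative answer, but if your driver version ships forwarding controllers such as joint_group_vel_controller or twist_controller, streaming setpoints to those (after switching controllers via the controller_manager) avoids the duration mechanism entirely; `rosservice call /controller_manager/list_controllers` will show what your setup actually provides.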