Universal Robots Forum

Teach waypoints in a coordinate system defined by the camera

I have a camera that detects the pose of an object. I can get this pose through XML-RPC in the UR program. Now I would like the robot to grasp this object. To do that, I define an approach waypoint and a grasp waypoint. I need to define these waypoints in the coordinate frame of the object; that way, if the object is moved and the camera returns a new pose, the robot will still correctly move to the approach pose and then to the grasp pose.
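
For context, the math behind this is plain pose composition: the camera gives the object pose in the robot base frame, and the approach/grasp waypoints are fixed offsets expressed in the object frame, composed with that pose (this is what URScript's pose_trans does). Below is a minimal Python sketch of that composition, only to make the frame arithmetic explicit, assuming UR's [x, y, z, rx, ry, rz] axis-angle pose convention:

```python
import math

def axis_angle_to_matrix(rx, ry, rz):
    """Rodrigues formula: UR axis-angle rotation vector -> 3x3 rotation matrix."""
    theta = math.sqrt(rx * rx + ry * ry + rz * rz)
    if theta < 1e-12:
        return [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
    kx, ky, kz = rx / theta, ry / theta, rz / theta
    c, s, v = math.cos(theta), math.sin(theta), 1.0 - math.cos(theta)
    return [
        [kx * kx * v + c,      kx * ky * v - kz * s, kx * kz * v + ky * s],
        [ky * kx * v + kz * s, ky * ky * v + c,      ky * kz * v - kx * s],
        [kz * kx * v - ky * s, kz * ky * v + kx * s, kz * kz * v + c],
    ]

def matrix_to_axis_angle(R):
    """Inverse Rodrigues (simplified: ignores the theta ~ pi edge case)."""
    cos_theta = max(-1.0, min(1.0, (R[0][0] + R[1][1] + R[2][2] - 1.0) / 2.0))
    theta = math.acos(cos_theta)
    if theta < 1e-12:
        return [0.0, 0.0, 0.0]
    f = theta / (2.0 * math.sin(theta))
    return [f * (R[2][1] - R[1][2]), f * (R[0][2] - R[2][0]), f * (R[1][0] - R[0][1])]

def pose_trans(p1, p2):
    """Compose pose p2 (given in p1's frame) with p1, like URScript's pose_trans."""
    R1 = axis_angle_to_matrix(*p1[3:6])
    R2 = axis_angle_to_matrix(*p2[3:6])
    # Rotate p2's translation into p1's frame and add p1's translation.
    t = [p1[i] + sum(R1[i][j] * p2[j] for j in range(3)) for i in range(3)]
    # Compose the rotations.
    R = [[sum(R1[i][k] * R2[k][j] for k in range(3)) for j in range(3)] for i in range(3)]
    return t + matrix_to_axis_angle(R)

# Hypothetical example: object 0.5 m in front of the base, grasp 0.1 m along its z-axis.
object_pose = [0.5, 0.0, 0.2, 0.0, 0.0, 0.0]
grasp_offset = [0.0, 0.0, 0.1, 0.0, 0.0, 0.0]
print(pose_trans(object_pose, grasp_offset))
```

On the robot itself none of this is needed; a movel to pose_trans(object_pose_var, grasp_offset) does the same composition.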

I would like to teach these two waypoints. As far as I know, the only way to teach waypoints in a variable coordinate system is to use a feature variable.

So, I defined a feature variable named object_pose and set it to an arbitrary initial value.

In the program, I assign the camera pose to it in an Assignment node:

object_pose_var := camera.get_pose()
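
For completeness, the camera side of this call is just an XML-RPC method returning a six-element pose list. Here is a minimal Python sketch of such a server; the method name matches my setup, but the host, port, and returned pose values are hypothetical, and the client call at the bottom stands in for what the robot does after creating the client with rpc_factory("xmlrpc", ...):

```python
import threading
from xmlrpc.server import SimpleXMLRPCServer
from xmlrpc.client import ServerProxy

def get_pose():
    # A real vision system would compute this; a fixed hypothetical pose here.
    # UR pose convention: [x, y, z, rx, ry, rz] in metres / axis-angle radians.
    return [0.5, 0.0, 0.2, 0.0, 3.14, 0.0]

# Port 0 lets the OS pick a free port; a real deployment would use a fixed one.
server = SimpleXMLRPCServer(("127.0.0.1", 0), logRequests=False, allow_none=True)
server.register_function(get_pose, "get_pose")
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# Equivalent of the robot-side call:
#   camera = rpc_factory("xmlrpc", "http://<camera-host>:<port>")
#   object_pose_var := camera.get_pose()
camera = ServerProxy(f"http://127.0.0.1:{port}")
pose = camera.get_pose()
print(pose)
```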

Now I need to teach the approach and grasp waypoints. I tried the following: I halt the program just after this assignment. At this moment, object_pose is set to the correct pose returned by the camera; I can verify this in the Variables tab.

But when I start teaching the waypoints in the coordinate frame of object_pose, I see that object_pose still has its initial value! So the waypoints I teach are not in the object's coordinate frame.

In theory, I could move the robot to the pose returned by the camera, stop the program, redefine the feature variable to the current pose, and then teach my waypoints. In practice, this is difficult, because the pose reported by the camera lies inside the object, so the robot would collide with it. I would also like to avoid the trouble of redefining the feature every time I teach a new object.

So, what is the right way to work with feature variables in my case? Maybe I can write a URCap for it, but I still need to know how to allow the user to teach waypoints in the coordinate frame that I get from the camera.