Get Tcp Location While Conveyor Tracking

Hi,
I have seen a few questions here about getting the TCP relative to a feature, but none that actually applies to conveyor tracking.
My program needs to make a relative move of the flange. I know that movej moves to the given pose as if we were at the beginning of the tracking, so I assumed get_actual_tcp_pose() would do the same, i.e. give me the position as if the conveyor had never moved. Apparently that's not the case.
This code snippet results in an actual move:
movej(get_actual_tcp_pose())
which surprised me.
Using the conveyor direction feature pose, as suggested in How to get tcp pose in feature coordinates?, does not work, because the conveyor feature carries no information about how far the conveyor has already travelled.
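For reference, the kind of conversion suggested there looks roughly like this (a sketch; Belt_Feature is a placeholder name for the conveyor feature):

tcp_in_feature = pose_trans(pose_inv(Belt_Feature), get_actual_tcp_pose())
# Belt_Feature is a static installation feature, so nothing here reflects
# how far the belt has travelled since tracking began.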

The only thing I can think of to work around this would be to save the current TCP pose at the beginning of the tracking and use that as a reference, but I cannot do that in my URCap, because I have no access to when the user calls track_conveyor_linear.

What would be the course of action here?

In my own program, I did all of the calculations within the conveyor tracking portion of the program. So it goes like this:

- Acquire picture and send position data from camera to UR
- Conveyor tracking starts. Do the following inside the Conveyor Tracking command:
----- Get actual tcp pose, perform coordinate transforms etc., calculate pick position
----- Pick part move
----- Script command: stop_conveyor_tracking() after part is picked, if needed

I haven’t had any issues doing it this way. It seems to automatically apply the conveyor tracking to the calculated poses.

If it appears to be missing the part along the direction of conveyor travel, it may be that your encoder parameters need adjustment. Add a “hover” position just above the part, and have a popup open while it is at this position. This will “pause” the program without stopping the conveyor tracking. Watch the TCP tracking above the part and see if it is tracking well. If it is moving faster or slower, adjust your ticks per revolution accordingly until it tracks perfectly. I found that entering the exact ticks per revolution of my encoder did not work, and I had to fudge the number a bit to get it to track smoothly.
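A minimal sketch of that check, inside the Conveyor Tracking node (hover_pose is a placeholder for your waypoint just above the part):

movel(hover_pose)  # tracking keeps following the belt while we sit here
popup("Watch the TCP above the part - does it track at belt speed?", blocking=True)
# If the TCP drifts ahead of or behind the part, tweak ticks per revolution and retry.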

P.S. I wrote the post you referred to, and I was doing the calculations inside conveyor tracking the whole time. I didn’t mention it in that post, though.

Thanks @anna.
The process you are describing is the regular use case of a camera over a conveyor belt, and I have been able to use that successfully.

What I am trying to do is a bit different.
I need to do a relative movement of the robot arm while tracking a conveyor. The vision system does not even play a role here.

- ConveyorTracking # Tracking begins on the positive Y axis
---- Wait(2.0)  # Arm is following the conveyor successfully
---- movej(get_actual_tcp_pose()) # (1) Expect to stay put (still tracking the conveyor)
---- movej(pose_trans(get_actual_tcp_pose(), p[0.1,0,0,0,0,0])) # (2) Expect to move in the x direction
                                                                # while still following the conveyor

What I actually see in (1) is that the robot moves in the direction of the conveyor by the same distance the conveyor has already travelled. If I want it to stay put, I would have to use movej(pose_where_tracking_started), but I do not have that pose saved anywhere…

So to make sure I understand correctly, you want to save (1) as a stationary pose that doesn’t track?
Would it help to use the script code stop_conveyor_tracking() right before get_actual_tcp_pose(), then start conveyor tracking again once you’ve got the pose?

No, when I say stationary I mean “keep tracking the conveyor as if there were no movej command”.

Instead of a movej(get_actual_tcp_pose()), I would try using assignments to save poses as variables and only apply the move once the next position is defined. So

  • ConveyorTracking # Tracking begins on the positive Y axis
    ---- Wait(2.0)
    ---- Var1 := get_actual_tcp_pose()
    ---- Var2 := pose_trans(Var1, p[0.1,0,0,0,0,0])
    ---- movej(Var2)

I have tried that; it gives the same result.

How about:

Var1 := get_actual_tcp_pose()

  • ConveyorTracking # Tracking begins on the positive Y axis
    ---- Wait(2.0)
    ---- Var2 := pose_trans(Var1, p[0.1,0,0,0,0,0])
    ---- movej(Var2)

?

Or wait a minute… you just want a move in the x-direction while the y-direction tracks, right? I broke down my position vectors and literally added the moves I wanted to each direction, and that worked perfectly. I used it to get my part-location offset, and also to force Wrist3 to rotate 90 degrees to orient my tool.

old_position := get_actual_tcp_pose()
x_new := old_position[0] + DESIRED_OFFSET
y_new := old_position[1]
z_new := old_position[2]
Rx_new := old_position[3]
Ry_new := old_position[4]
Rz_new := old_position[5] + d2r(angle_in_degrees) # used this to turn Wrist3; subtract to rotate the other way
new_position := p[x_new, y_new, z_new, Rx_new, Ry_new, Rz_new]
movej(new_position)

Kinda brute force but it worked for me

How about:

Var1 := get_actual_tcp_pose()

ConveyorTracking # Tracking begins on the positive Y axis
---- Wait(2.0)
---- Var2 := pose_trans(Var1, p[0.1,0,0,0,0,0])
---- movej(Var2)
?

This would work, but I cannot do that: I am programming a URCap, which the user may or may not place inside a ConveyorTracking block.

old_position := get_actual_tcp_pose()
x_new := old_position[0] + DESIRED_OFFSET
y_new := old_position[1]
z_new := old_position[2]
Rx_new := old_position[3]
Ry_new := old_position[4]
Rz_new := old_position[5] + d2r(angle_in_degrees) # used this to turn Wrist3; subtract to rotate the other way
new_position := p[x_new, y_new, z_new, Rx_new, Ry_new, Rz_new]
movej(new_position)

I can’t test right now, but I cannot imagine this would work… Consider the case where the offset is 0: then this is exactly the same as doing
movej(get_actual_tcp_pose())
which we have established does not work.
Have you tested this while tracking a conveyor?

Yes, it’s what I’m using in my program. I use it with conveyor tracking. I’m not designing any URCaps, however.

I have an EOAT-mounted camera send coordinates of a part relative to the center of the picture, which the robot interprets as relative to the center of the camera’s lens. I add or subtract those from my x and y after a coordinate transform to get the robot to move in the relevant feature coordinates, and apply the rotation to the Rz to control the orientation of the tool during the pick. It’s been working for several years.

Although, as I mentioned earlier, entering the encoder values accurately didn’t work for me and I had to apply a correction, both to the encoder ticks/revolution and to my part positions. I assumed these corrections were needed because of encoder wheel slipping, imperfect alignment of the camera relative to the belt, or other unquantified factors. But perhaps the problem you are having is something I inadvertently addressed by fudging my encoder and position values? You may need a speed correction factor in the y-direction.

One thing I did while initially getting my conveyor picking routine going was to make a folder in my conveyor tracking named “Unsuppress to troubleshoot picking issues”, with a MoveJ inside that applied my offsets one coordinate direction at a time. So in the folder, I first move to Position 0, just above the part, with no correction. A popup opens so I can see where it landed and watch it track. When I close the popup, it moves to Position 1 with the x-dir offset added and opens another popup so I can see what that did. When I close that popup, it applies the y-dir offset. I use this to check each move step-by-step, as in the sketch below.
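A rough sketch of that folder (pose and offset names here are placeholders, not my actual program):

movej(pos0)  # Position 0: just above the part, no correction
popup("No correction - watch it track", blocking=True)
movej(p[pos0[0]+x_off, pos0[1], pos0[2], pos0[3], pos0[4], pos0[5]])  # Position 1: x offset
popup("x offset applied", blocking=True)
movej(p[pos0[0]+x_off, pos0[1]+y_off, pos0[2], pos0[3], pos0[4], pos0[5]])  # y offset too
popup("y offset applied", blocking=True)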

I’ve left that folder suppressed in each of my programs. If I ever run into picking issues I unsuppress it again so I can break down and examine each movement.

I still don’t understand how this could work, unless the line old_position := get_actual_tcp_pose() is outside the conveyor tracking.

It’s not outside. It is the first thing that happens inside the conveyor tracking, though… There’s no wait first, and the conveyor is slow.

Here’s the text of my actual code, somewhat cleaned up and commented. I included the loop where I get the part location from the camera.

   Loop conv_find[0]≟0 or conv_find[3]≠1
     conv_find≔socket_read_ascii_float(3,"Sherlock2UR")
     sync()
   socket_close("Sherlock2UR")
   Tracking Conveyor 1
     Calculate positions            ###This is just a named folder to make copying code to other programs easier
       conv_watch≔get_actual_tcp_pose()
       conv_pos_inv≔pose_inv(pose_trans(pose_inv(conv_watch),Belt_Image))     ###Belt_Image is a feature plane; this gives the current TCP pose expressed in the Belt_Image frame
       x_conv≔conv_find[1]/1000*0.7   ###0.7 is the fudge factor I mentioned on part location.  x is the direction of conveyor travel
       y_conv≔conv_find[2]/1000          
       Pc_x≔conv_pos_inv[0]+x_conv
       Pc_y≔conv_pos_inv[1]+y_conv
       Rc_x≔conv_pos_inv[3]
       Rc_y≔conv_pos_inv[4]
       Rc_z≔conv_pos_inv[5]-d2r(90)
       conv_hover≔p[Pc_x,Pc_y,hover_z_con,Rc_x,Rc_y,Rc_z]  ###My z values are defined and initialized at program start
       conv_grab≔p[Pc_x,Pc_y,grab_z_con,Rc_x,Rc_y,Rc_z]
       conv_clear≔p[Pc_x,Pc_y,pickup_z_con,Rc_x,Rc_y,Rc_z]
     'unsuppress to check search position vectors before moving'
     'unsuppress to troubleshoot position errors'
     MoveL   ###Still in conveyor tracking, using my saved positions as waypoints for this move
       conv_hover   
       'unsuppress to pause and check position'
       conv_grab
       Set AIR_1A=Off
       Wait: 0.01
       Set AIR_1B=On
       Wait: 0.1
       conv_clear
       Wait: 0.01
       stop_conveyor_tracking()

So the problem you are having sheds new light on my need for a conveyor-direction part-location fudge factor. I had assumed that, since my robot and camera setup is mobile, the camera was not perfectly level with respect to the conveyor surface, and that this was introducing position error. I thought I was correcting for out-of-level, but maybe I was actually correcting for conveyor advancement. In either case, it’s working fine.

OK, that makes sense, although it’s not applicable to my use case.
Anyone from UR have any thoughts about this?

@jbm or anyone from UR support, any thoughts about this?

As this is intended to be integrated in a URCap, would it be possible, as a workaround, to ask the user to define the position manually in the program or installation node?

@sko Thanks for the reply!
The way our URCap operates is: the user positions the arm at the initial scan position (with a movej before the URCap node), and we perform the scan by moving linearly relative to that position in a specific orientation.
Asking the user to enter the end pose would make the tool much harder to use and a lot more error-prone.
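Roughly, the script our node generates looks like this (a sketch; scan_length and the motion parameters are placeholders):

scan_start = get_actual_tcp_pose()  # captured when our node starts executing
# linear scan relative to the start pose, along the tool x axis
movel(pose_trans(scan_start, p[scan_length, 0, 0, 0, 0, 0]), a=0.4, v=0.05)

Under conveyor tracking, scan_start already includes the tracking offset, which is exactly what breaks this.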

So, to be clear, do you agree that there is an inconsistency in the definition of poses? The fact that the following command:
movej(get_actual_tcp_pose())
actually creates a movement?

Hi @ocohen,

Since you have conveyor tracking enabled, that can create a movement, because the conveyor tracking contributes an offset to the movement. You can use the script function get_target_tcp_pose_along_path() to get the TCP pose that is not affected by the conveyor tracking offset.
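For example, something like this inside the tracking should then behave as originally expected (a sketch, assuming get_target_tcp_pose_along_path() returns the TCP pose with the tracking offset removed, as described above):

base = get_target_tcp_pose_along_path()  # TCP pose without the conveyor tracking offset
movej(base)  # stays put while tracking continues
movej(pose_trans(base, p[0.1,0,0,0,0,0]))  # relative move in tool x, still tracking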
