Sign ambiguity when receiving poses

I’m creating a palletizing URCap, but am getting odd results when training poses.

Say I want to define a grid by teaching the four corners and an approach. I’m using the standard API callbacks, but the rotation values of my poses seem to be throwing my program off.

For example, I could teach a simple grid on a table with the TCP pointing directly down in the Z-axis and receive the points:

"grid_wp_4": [
0.8101524819116173,
0.5945688358266212,
0.20108636608529956,
-0.3820898459356046,
-3.1182606439167704,
-5.193289700675591E-6
],

"grid_wp_0": [
0.8101630657792341,
0.5945658072247689,
0.09828903558445529,
0.3820745098037886,
3.118260688166712,
5.4561644630430715E-6
]

grid_wp_4 is the approach point for grid_wp_0.

I understand that [Rx,Ry,Rz] and [-Rx,-Ry,-Rz] describe the same orientation when the rotation angle is π (which it is here: |[Rx,Ry,Rz]| ≈ π), but sometimes Rz comes back with the opposite sign from the other two. I'm putting that down to jitter, since Rz is so close to zero.
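To sanity-check the equivalence, here's a quick sketch in plain NumPy (outside the URCap, using the two taught rotation vectors from above) that converts each vector to a rotation matrix with Rodrigues' formula. Since both vectors have magnitude ≈ π, the sign flip should make no difference to the resulting matrices:

```python
import numpy as np

def rotvec_to_matrix(r):
    """Axis-angle rotation vector -> 3x3 rotation matrix (Rodrigues' formula)."""
    theta = np.linalg.norm(r)
    if theta < 1e-12:
        return np.eye(3)
    kx, ky, kz = r / theta
    K = np.array([[0.0, -kz,  ky],
                  [kz,  0.0, -kx],
                  [-ky, kx,  0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

# Rotation parts of the two taught poses from the question
r4 = np.array([-0.3820898459356046, -3.1182606439167704, -5.193289700675591e-6])
r0 = np.array([ 0.3820745098037886,  3.118260688166712,   5.4561644630430715e-6])

print(np.linalg.norm(r4))  # magnitude is ~pi, so r and -r are the same rotation
print(np.allclose(rotvec_to_matrix(r4), rotvec_to_matrix(r0), atol=1e-4))
```

Comparing rotation matrices (or quaternions) rather than the raw [Rx,Ry,Rz] components sidesteps the sign ambiguity entirely when checking whether two taught poses really share an orientation.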

What I’m wondering is whether this sign ambiguity could be related to the robot behaviour I’m seeing. When I try to execute my sequence, the robot will sometimes (not every time) try to approach the point from the complete opposite direction, with the TCP pointing up along the Z-axis. Can anyone shed some light on this?

Thanks.