Hi,

I’m wondering how the interpolate_pose(p_from, p_to, alpha) function actually interpolates orientation. Interpolation of the translational component is obvious, but interpolating or averaging orientations in 3D is a less well-defined problem and can depend heavily on the underlying parameterization of orientation you have chosen.

The manual states:

“Linear interpolation of tool position and orientation. When alpha is 0, returns p_from. When alpha is 1, returns p_to. As alpha goes from 0 to 1, returns a pose going in a straight line (and geodetic orientation change) from p_from to p_to. If alpha is less than 0, returns a point before p_from on the …”

I’m not sure what geodetic orientation change means and am having a hard time finding an explanation anywhere.

Any help is very much appreciated.

Thanks,

Gabe

My understanding is that geodetic in this case means the shortest distance between two points (the TCP points of the two poses) which the robot moves between while changing its orientation. So a value of 0.5 is a pose right in the middle of these two points (the same distance from each).

Thanks for your response @m.birkholz.

Unfortunately, unless I’m just missing it, your answer doesn’t address my question regarding the details of the orientation change.

Linear interpolation between two points is very straightforward. What I’m interested in is interpolation between two orientations, specifically in the 3D case. What does it mean to be halfway between two different 3D orientations? What does it mean to be a certain distance from a 3D orientation? There is no single, straightforward answer to these questions. It depends on your chosen mathematical representation of orientation (rotation vector, Euler angles, rotation matrix, quaternion) as well as on the assumptions or design choices you make. It sounds like UR has chosen a framework based on geodesics for this, but I can’t find any documentation or literature on such a framework, so that’s why I’m asking it here.

Thanks again in advance for any assistance in understanding this.
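For concreteness, one common geodesic-style scheme I could imagine is: interpolate the position linearly and apply a constant fraction of the relative rotation on SO(3) (equivalent to quaternion slerp). Here is a numpy sketch of that guess, assuming UR’s [x, y, z, rx, ry, rz] rotation-vector pose convention (my own reconstruction, not UR’s code):

```python
import numpy as np

def rotvec_to_matrix(rv):
    # Rodrigues' formula: rotation vector (axis * angle) -> rotation matrix
    theta = np.linalg.norm(rv)
    if theta < 1e-12:
        return np.eye(3)
    k = rv / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def matrix_to_rotvec(R):
    # log map: rotation matrix -> rotation vector (valid away from theta = pi)
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if theta < 1e-12:
        return np.zeros(3)
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return theta / (2.0 * np.sin(theta)) * w

def interpolate_pose_guess(p_from, p_to, alpha):
    # pose = [x, y, z, rx, ry, rz]; hypothetical reconstruction, not UR's code
    p_from, p_to = np.asarray(p_from, float), np.asarray(p_to, float)
    t = (1.0 - alpha) * p_from[:3] + alpha * p_to[:3]  # linear in position
    R0 = rotvec_to_matrix(p_from[3:])
    R1 = rotvec_to_matrix(p_to[3:])
    # geodesic on SO(3): apply a constant fraction of the relative rotation
    rv_rel = matrix_to_rotvec(R0.T @ R1)
    R = R0 @ rotvec_to_matrix(alpha * rv_rel)
    return np.concatenate([t, matrix_to_rotvec(R)])
```

With this scheme, “halfway” between the identity orientation and a 90° rotation about z is a 45° rotation about z, and alpha outside [0, 1] extrapolates along the same geodesic. Is something like this what interpolate_pose actually does?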

Implementation is equivalent to *movel()* between two points.

Hi @mmi, thanks for your response. Unfortunately, that also doesn’t answer my question; it just shifts my question to the meaning of geodetic orientation change in movel.

I understand the practical usage of movel and interpolate_pose, but I’m trying to understand the underlying implementation a little better so I can use them more intelligently and avoid some odd behavior I see in certain edge cases.
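To give an example of the kind of edge case I mean: if the geodesic interpolation is implemented with quaternion slerp (my assumption), the implementation has to flip one quaternion’s sign when the two quaternions’ dot product is negative, since q and −q encode the same rotation, in order to take the short arc. Near 180° of relative rotation the short direction becomes ambiguous, and interpolated orientations can jump. A self-contained sketch of that behavior (again my own code, not UR’s):

```python
import numpy as np

def rotvec_to_quat(rv):
    # unit quaternion [w, x, y, z] from a rotation vector (axis * angle)
    theta = np.linalg.norm(rv)
    if theta < 1e-12:
        return np.array([1.0, 0.0, 0.0, 0.0])
    axis = rv / theta
    return np.concatenate([[np.cos(theta / 2.0)], np.sin(theta / 2.0) * axis])

def slerp(q0, q1, alpha):
    # spherical linear interpolation along the shorter great-circle arc
    q0, q1 = np.asarray(q0, float), np.asarray(q1, float)
    dot = float(np.dot(q0, q1))
    if dot < 0.0:
        # q1 and -q1 encode the same rotation; flip so we take the short way
        q1, dot = -q1, -dot
    omega = np.arccos(min(dot, 1.0))  # angle between the quaternions
    if omega < 1e-9:
        return q0
    return (np.sin((1.0 - alpha) * omega) * q0
            + np.sin(alpha * omega) * q1) / np.sin(omega)
```

With this, interpolating halfway from the identity to a 0.99π rotation about z gives a 0.495π rotation, and slerp(q0, q1, 0.5) equals slerp(q0, −q1, 0.5) thanks to the sign flip; without it, the second call would swing the long way around.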