I have an issue in my application where I must avoid the “short” rotation when performing linear movements, to prevent “winding up” the last joint (similar to how a Cartesian linear move can “wind up” the wrist).
To avoid this issue I am calling get_inverse_kin() with my target pose, then checking the joint angles against my allowed range of motion. If I’m outside that range, I generate a Cartesian waypoint which unwinds the last joint of the robot.
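As an illustration, the joint-limit check described above could look something like the following Python sketch (the limit values and the `within_allowed_range` name are made up for the example; on the robot this logic would live in URScript):

```python
import math

# Hypothetical per-joint limits in radians (low, high); the real values
# depend on the application. The last entry keeps the wrist inside one
# turn so it cannot wind up.
JOINT_LIMITS = [(-2 * math.pi, 2 * math.pi)] * 5 + [(-math.pi, math.pi)]

def within_allowed_range(q, limits=JOINT_LIMITS):
    # True if every joint angle q[i] lies inside its allowed range.
    return all(lo <= qi <= hi for qi, (lo, hi) in zip(q, limits))

# An IK solution with the wrist at +350 deg fails the check, so an
# unwinding waypoint would be generated before the linear move.
q_wound = [0.0, -1.57, 1.57, 0.0, 1.57, math.radians(350)]
q_ok = [0.0, -1.57, 1.57, 0.0, 1.57, math.radians(-10)]
```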
I’m experiencing two issues:
the call to get_inverse_kin() doesn’t always match the joints the robot actually assumes after the movel. This means the robot can pass the joint-limit check and still move into those limits anyway. Is this the expected behaviour? Is there a different way I can get the resulting joints of a movel before moving?
because the target and current pose may be at the same position but with a different orientation, I cannot blend through the motion of the waypoint. Is it somehow possible to force movel blending to respect the orientation instead of the position?
I wish to avoid unwinding with a moveJ if possible, because a moveJ doesn’t guarantee that my TCP stays at the same position during unwinding!
We use pose_dist versus point_dist to understand the rotation between two poses that are close together. If the values are very different, we know there is a lot of rotation needed to get to the next pose, and we can take action to create a point in the middle.
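Sketching that heuristic in Python (the helper names and threshold are illustrative; URScript’s pose_dist folds position and orientation into one metric, which is approximated here by measuring the rotation-vector part separately):

```python
import math

def point_dist(p1, p2):
    # Distance between the position parts of two poses [x, y, z, rx, ry, rz].
    return math.dist(p1[:3], p2[:3])

def rot_dist(p1, p2):
    # Crude rotation measure: difference of the rotation-vector parts.
    return math.dist(p1[3:], p2[3:])

def needs_midpoint(p1, p2, rot_threshold=0.5):
    # If two poses are close in position but far apart in orientation,
    # insert an intermediate pose to spread the rotation out.
    return rot_dist(p1, p2) - point_dist(p1, p2) > rot_threshold
```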
Thanks for the reply, but this doesn’t solve the issue. In the case of changing orientation without changing position, I still need a point in between where the robot will stop. I’d prefer if the rotation could blend through this.
Sorry for the bump, but could an official UR representative confirm whether or not it is expected behaviour that the result of a movel() command and the result of get_inverse_kin() on the same target pose don’t result in the same joint angles?
If this is expected behaviour, what is the reasoning behind it?
I’ve found a solution; it remains to be seen whether or not it is stable.
The issue is that movel moves towards a pose by taking the shortest linear path. On the other hand, get_inverse_kin(pose, q_near) returns a joint configuration that is closest to q_near (the default is the current joint configuration). For very small movements these tend to be the same, whereas for large movements this is not always the case.
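The effect of q_near can be illustrated with a single revolute joint in plain Python (a simplified stand-in for the solver, not the real kinematics):

```python
import math

def nearest_equivalent(pose_angle, q_near):
    # A revolute joint reaches the same pose at pose_angle + k * 2*pi for
    # any integer k; return the representation closest to q_near, which is
    # what the q_near argument of get_inverse_kin effectively selects.
    k = round((q_near - pose_angle) / (2 * math.pi))
    return pose_angle + k * 2 * math.pi
```

With the wrist currently near 6.1 rad, a target pose angle of 0.1 rad comes back as 0.1 + 2π ≈ 6.38 rad rather than forcing a near-full unwind to 0.1 rad.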
To test which joint configuration results after a movel, before actually moving there, this function is now being used:
inc_angles = get_actual_joint_positions()
act_pose = get_actual_tcp_pose()
inc_pose = act_pose
i = 1
while i <= 1000:
  inc_pose = interpolate_pose(act_pose, target, i / 1000)
  inc_angles = get_inverse_kin(inc_pose, inc_angles)
  i = i + 1
  if i % 10 == 0:
    sync()
  end
end
Basically I run a loop, incrementing my current Cartesian pose by a small amount each iteration and calculating the inverse kinematics close to the previous joint configuration. This way I can “simulate” a movement of the robot; as long as the increments remain small enough, this should be stable. The sync() is required so the loop does not consume too much time at once. Currently 1000 iterations take around 80 ms.
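For anyone wanting to experiment off-robot, the same incremental idea can be reproduced in Python with a toy one-joint “robot” (the IK here is a mock, purely illustrative, and the function names are mine):

```python
import math

TWO_PI = 2 * math.pi

def ik_near(pose_angle, q_near):
    # Toy one-joint IK: of all joint angles equivalent to pose_angle
    # (mod 2*pi), return the one closest to the seed q_near.
    k = round((q_near - pose_angle) / TWO_PI)
    return pose_angle + k * TWO_PI

def shortest_delta(a, b):
    # Signed shortest rotation taking pose angle a to pose angle b.
    return (b - a + math.pi) % TWO_PI - math.pi

def simulate_move(q_start, pose_target, steps=1000):
    # Increment the pose in small steps along the shortest rotation,
    # seeding each IK call with the previous solution, mirroring the
    # URScript loop above (minus the sync() calls).
    delta = shortest_delta(q_start, pose_target)
    q = q_start
    i = 1
    while i <= steps:
        inc_pose = q_start + delta * i / steps
        q = ik_near(inc_pose, q)
        i = i + 1
    return q

# Starting at +170 deg and targeting the pose at -170 deg, the simulated
# linear move rotates +20 deg through 180 deg and ends at +190 deg.
q_end = simulate_move(math.radians(170), math.radians(-170))
```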
Hopefully this helps someone
For the trajectory of the moveL there is only one solution. Your solution seems reasonable to me; the time consumed is due to the sync() (8 ms on CB-Series / 2 ms on e-Series) executed on every 10th iteration. So it is easily predictable as a function of the number of iterations and the number of iterations you can do for each sync().
The long script with get_inverse_kin2 is currently the way to find the joint angles a MoveL will end in.
You need a MoveJ command to unwind the wrist, or define several waypoints for the MoveL to choose the desired way around.
With the upcoming software release 5.9, it will be possible to blend movements with small positional displacement and large orientation change (only when writing script, though).