I know that you can already use movel and movej with a 6-element joint positions array, but they do not behave as expected, or at least their behaviour is misleading:
Working with the robot program nodes, it is not possible to specify a joint positions waypoint directly: only the TCP pose can be set as the target, and either MoveL or MoveJ moves the TCP to that pose. Fine, but it is not guaranteed that the final joint configuration is the one set during the teaching phase, which may lead, for example, to a wrist collision.
Using the script function movej(joint_positions) moves linearly in joint space to the specified joint_positions, as expected. Nonetheless, movel(joint_positions) only uses the specified joint_positions to calculate the target TCP pose, so again the final joint positions might be different.
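To make the difference concrete, here is a minimal URScript sketch; the joint values and motion parameters are placeholders, not a tested program:

```
# Hypothetical joint target (placeholder values, in radians).
q_target = [0.0, -1.57, 1.57, -1.57, -1.57, 0.0]

# movej interpolates in joint space, so the robot ends exactly
# at q_target.
movej(q_target, a=1.4, v=1.05)

# movel only uses q_target to compute a TCP pose (via forward
# kinematics) and then moves the TCP linearly to that pose; the
# final joint configuration depends on the starting configuration
# and may differ from q_target.
movel(q_target, a=1.2, v=0.25)
```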
I would expect movej and movel to move linearly in joint space and in Cartesian (operational) space respectively, regardless of whether the target is given as a TCP pose or as joint positions.
When using a pose as an argument to movej, you can “force” a specific configuration (e.g., elbow up or elbow down) by using movej(get_inverse_kin(x, qnear)).
Here, the qnear argument is key: get_inverse_kin returns a list of joint angles corresponding to the IK solution closest to qnear, so qnear has to be relatively close to the configuration you want to achieve. Refer to the script manual for more details.
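A minimal sketch of the pattern; the pose and qnear values below are placeholders I made up for illustration:

```
# Desired TCP pose (placeholder values).
target_pose = p[0.4, -0.2, 0.3, 0.0, 3.14, 0.0]

# A joint configuration near the solution we want, e.g. elbow up
# (placeholder values, in radians).
q_near = [0.0, -2.0, 1.5, -1.0, -1.57, 0.0]

# get_inverse_kin returns the IK solution closest to q_near, so
# movej ends in the intended configuration instead of an
# arbitrary one.
movej(get_inverse_kin(target_pose, q_near))
```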
However, I believe this doesn’t work with movel: even if you use joint angles as an argument, they get translated into a pose, and then the command does its own thing…
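For comparison, here is the same trick with movel, which, as I understand it, does not enforce the configuration (again a sketch reusing the placeholder values above):

```
# The IK solution is immediately reduced back to a pose inside
# movel, so the configuration chosen via q_near is not enforced:
# the TCP still moves linearly to target_pose, and the final
# joint positions follow from the path, not from the IK solution.
movel(get_inverse_kin(target_pose, q_near))
```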
I think the reason it probably hasn’t been implemented yet is that it is pretty tricky. By trying to enforce a specific configuration, you can easily create scenarios where the rotational component of the pose would have to move further away from its final value before coming back towards it, which is not what movel is supposed to do.
Anyway, I can imagine there are lots of edge cases that make creating a clean API challenging. I still think it should be done though.