How to process an object's center coordinates and rotation angle into UR coordinates in a Python script and send them to make the UR5e move

Hello everyone, I'm a student currently working on a pick-and-place project based on computer vision as the input data. For the hardware, in this project I'm using:

  • a Logitech webcam mounted on the UR5e wrist
  • a PC and an Ethernet cable
  • a UR5e with an RG2 gripper

And also:

  • Ubuntu 18.04
  • ROS Melodic
  • Universal_Robots_ROS_Driver.

I have changed the action namespace in ros_controllers.yaml to scaled_pos_joint_traj_controller/follow_joint_trajectory.

And I tested in the terminal:

roslaunch ur_calibration calibration_correction.launch robot_ip:=192.168.0.100 target_filename:="${HOME}/my_robot_calibration.yaml"

roslaunch ur_robot_driver ur5e_bringup.launch robot_ip:=192.168.0.100 kinematics_config:=/home/tazkia/my_robot_calibration.yaml

roslaunch ur5e_moveit_config ur5e_moveit_planning_execution.launch limited:=true

roslaunch ur5e_moveit_config moveit_rviz.launch rviz_config:=$(rospack find ur5e_moveit_config)/launch/moveit.rviz

And then I drag the ball (the interactive marker) in front of the UR in RViz, click Plan, then click Execute, and the robot moves. So the driver works for me and I have no problem. :white_check_mark:

The OpenCV Python program is also ready. Its output is the X and Y coordinates (the Z, or depth, will be treated as a fixed value) and the rotation angle about the yaw axis, which will later control wrist 3 / the end effector of the UR.

For the Python script, many people try to use test_move.py from the previous driver, but Universal_Robots_ROS_Driver doesn't come with it. So which Python script can be used to control the robot?

But I still don't understand how to take this data (x, y, and theta) of the object in the image frame and turn it into global coordinates for the UR5e. Should I use inverse kinematics inside my program, or simply send an end-effector pose goal to the robot? And once I have combined the image-processing script with the robot-control script, how do I run / call this single Python file to make the robot move?

Thank you everyone, I really need your help to graduate from this undergrad :") Have a good day ^^

That's a lot of questions; I'll try to answer them to the best of my knowledge.

Usually, when a ROS component publishes geometry data, this is done using a geometry_msgs/Pose. Since that alone leaves open the question of which frame the geometry is referenced in, the stamped version geometry_msgs/PoseStamped is usually used instead; it adds timing information and the name of a reference frame.
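For example, a hypothetical detection node could publish its result like this (the topic name, frame id, and all numbers here are made-up placeholders for illustration):

```python
#!/usr/bin/env python
# Minimal sketch: publish a detected object as a geometry_msgs/PoseStamped.
# Topic name, frame id, and all numeric values are placeholder assumptions.
import rospy
import tf.transformations as tft
from geometry_msgs.msg import PoseStamped, Quaternion

rospy.init_node("object_pose_publisher")
pub = rospy.Publisher("object_pose", PoseStamped, queue_size=1)

pose = PoseStamped()
pose.header.stamp = rospy.Time.now()
pose.header.frame_id = "camera_link"  # the frame the pose is expressed in
pose.pose.position.x = 0.10           # object position in the camera frame [m]
pose.pose.position.y = 0.05
pose.pose.position.z = 0.40           # your fixed depth value
# Encode the detected yaw angle as a quaternion (roll = pitch = 0).
pose.pose.orientation = Quaternion(*tft.quaternion_from_euler(0.0, 0.0, 0.5))

rospy.sleep(0.5)  # give subscribers time to connect before the one-shot publish
pub.publish(pose)
```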

The tf system provides very powerful methods for relating different coordinate systems to each other and easily transforming between them.

So, if done correctly, you will get a 6D pose of your recognized object, presumably given in the camera coordinate system. If the camera’s pose is calibrated and known, this object pose can be easily transformed into any other reference frame such as the robot’s base frame.
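If it helps, here is a minimal sketch of that transform using tf2. The frame names camera_link and base_link are assumptions; check your actual TF tree. Note that tf2_geometry_msgs has to be imported so that PoseStamped is registered with the buffer's transform() method:

```python
#!/usr/bin/env python
# Minimal sketch: transform a PoseStamped from the camera frame into the
# robot's base frame with tf2. Frame names are assumptions for this example.
import rospy
import tf2_ros
import tf2_geometry_msgs  # registers PoseStamped with buffer.transform()

rospy.init_node("pose_transformer")
tf_buffer = tf2_ros.Buffer()
tf_listener = tf2_ros.TransformListener(tf_buffer)

def to_base_frame(pose_in_camera):
    """pose_in_camera: PoseStamped with header.frame_id set to the camera frame."""
    return tf_buffer.transform(pose_in_camera, "base_link",
                               timeout=rospy.Duration(1.0))
```

Since your camera is mounted on the wrist, the camera frame also has to be connected to the robot's TF tree for this lookup to succeed, e.g. by publishing a static transform from the flange frame to the camera frame (static_transform_publisher) once you have done a hand-eye calibration.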

You could then probably ask MoveIt to generate a plan towards that pose and execute it using the driver, just as you did with the "moving ball" method in RViz.

So, if you want to use MoveIt for grasping, your application script should go through MoveIt for planning and executing your motions.
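A minimal sketch of that using the Python moveit_commander, assuming the planning group is called "manipulator" as in the default ur5e_moveit_config. This also answers your inverse kinematics question: you send the end-effector pose goal directly, and MoveIt computes the joint configuration internally:

```python
#!/usr/bin/env python
# Minimal sketch of commanding the robot through MoveIt from Python, assuming
# the MoveIt config from ur5e_moveit_config is running.
import sys
import rospy
import moveit_commander

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("pick_and_place_demo")
group = moveit_commander.MoveGroupCommander("manipulator")

def move_to(pose_stamped):
    """Plan and execute a motion to the given end-effector pose goal."""
    group.set_pose_target(pose_stamped)  # MoveIt handles the inverse kinematics
    success = group.go(wait=True)
    group.stop()                         # make sure there is no residual movement
    group.clear_pose_targets()
    return success
```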

If you just want to make the robot move without MoveIt, you'll have to call the driver's action server directly, as in the old driver's test_move.py script that you already mentioned. That script is also usable with the current driver; you'll just have to adapt the action server name at the top to point to the correct location. I will add other testing scripts to the driver soonish, anyway…
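For illustration, a minimal sketch of such an action client, using the controller name from your ros_controllers.yaml; the joint values are placeholder targets, not a safe pose for your cell:

```python
#!/usr/bin/env python
# Minimal sketch of calling the driver's trajectory action server directly,
# without MoveIt. The joint values are placeholders, not a safe pose!
import rospy
import actionlib
from control_msgs.msg import FollowJointTrajectoryAction, FollowJointTrajectoryGoal
from trajectory_msgs.msg import JointTrajectoryPoint

JOINT_NAMES = ["shoulder_pan_joint", "shoulder_lift_joint", "elbow_joint",
               "wrist_1_joint", "wrist_2_joint", "wrist_3_joint"]

rospy.init_node("simple_trajectory_client")
client = actionlib.SimpleActionClient(
    "scaled_pos_joint_traj_controller/follow_joint_trajectory",
    FollowJointTrajectoryAction)
client.wait_for_server()

goal = FollowJointTrajectoryGoal()
goal.trajectory.joint_names = JOINT_NAMES
point = JointTrajectoryPoint()
point.positions = [0.0, -1.57, 1.57, -1.57, -1.57, 0.0]  # placeholder target [rad]
point.time_from_start = rospy.Duration(5.0)  # reach the target after 5 seconds
goal.trajectory.points.append(point)

client.send_goal(goal)
client.wait_for_result()
```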

Also, we are currently integrating Cartesian motion commands into the driver; see https://github.com/UniversalRobots/Universal_Robots_ROS_Driver/pull/413 and https://github.com/UniversalRobots/Universal_Robots_ROS_Driver/pull/408 respectively. Using this currently requires building a couple of packages from source, and we are still missing tutorials on how to actually use those interfaces with ur_robot_driver, but those should be coming quite soon as well.