UR5 vision-based pick and place

We want to implement a vision-based pick-and-place application using ROS and Python. The camera we are using is an Intel RealSense D435i depth camera.

We have done hand-eye calibration for the camera and obtained a transformation matrix between the robot and the camera (using GitHub - portgasray/ur5_realsense_calibration: We released a toolbox for automatic hand-eye calibration between Intel Realsense camera and Universal Robot 5 based on easy hand-eye calibration). However, we are having technical difficulty transforming object coordinates from the camera frame into the robot's base_link frame.
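For what it's worth, a minimal sketch of that transform step (not ROS-specific, just NumPy): if the calibration gave you a 4x4 homogeneous matrix expressing the camera pose in base_link (here called `T_base_cam`, with placeholder values — substitute your own calibration result), you can map a point from the camera frame into base_link by appending a 1 and multiplying. Note the direction matters: if your toolbox actually reports the base pose in the camera frame (`T_cam_base`), invert it first with `np.linalg.inv`.

```python
import numpy as np

# Hypothetical hand-eye calibration result: 4x4 homogeneous transform
# giving the camera frame expressed in base_link (T_base_cam).
# The numbers below are placeholders; use your own calibration output.
T_base_cam = np.array([
    [ 0.0, -1.0,  0.0, 0.5],
    [-1.0,  0.0,  0.0, 0.0],
    [ 0.0,  0.0, -1.0, 0.8],
    [ 0.0,  0.0,  0.0, 1.0],
])

def camera_to_base(point_cam, T_base_cam):
    """Transform a 3D point (x, y, z) in metres from the camera
    frame into the robot base_link frame."""
    p = np.append(np.asarray(point_cam, dtype=float), 1.0)  # homogeneous
    return (T_base_cam @ p)[:3]

# Example point, e.g. a detection deprojected from the D435i depth image
obj_cam = (0.1, 0.2, 0.6)
obj_base = camera_to_base(obj_cam, T_base_cam)
print(obj_base)  # point in base_link, ready to send as a pick target
```

Inside ROS you would typically publish the calibration as a static transform (e.g. via `tf2`) and let `tf2` do this multiplication for you, but checking the raw matrix math as above is a good way to verify the calibration direction.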

Thanks in advance

Seems to be a duplicate of UR5 vision base pick and place.