UR5 vision-based pick and place

We want to implement a vision-based pick-and-place application. The camera we are using is an Intel RealSense D435i depth camera.

We have completed hand-eye calibration for the camera, but we are having difficulty transforming camera coordinates into the robot's base_link frame.
We take the hand-eye calibration result, invert it, and send the transformed values to the robot, but the values we are getting are not correct.

import numpy as np
from scipy.spatial.transform import Rotation as R

def Quat2Mat(point1):
    # Hand-eye calibration result: rotation as a quaternion (x, y, z, w)
    qx = 0.497791828975
    qy = -0.502527906573
    qz = 0.49788582261
    qw = 0.501775553177
    rot = R.from_quat([qx, qy, qz, qw]).as_matrix()  # as_dcm() in SciPy < 1.4
    # Build the 4x4 homogeneous transform from the rotation and the calibrated translation
    T = np.eye(4)
    T[:3, :3] = rot
    T[:3, 3] = [-0.0142438828014, -0.0916812589659, -0.0320534281395]
    # Invert the transform and apply it to the homogeneous camera point
    result_1 = np.linalg.inv(T) @ point1
    print("trans", result_1)
    return result_1

point1 is the object's coordinate vector (homogeneous, from the camera), and qx, qy, qz, qw are the values obtained from hand-eye calibration.

The coordinates to be sent to the robot (result_1) are not correct. These result_1 values are given directly to the UR5.
Please help us figure out where we are going wrong.
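One thing worth checking: if the calibration is eye-in-hand (camera mounted on the tool flange), inverting the hand-eye transform alone does not reach base_link. The camera point must also pass through the robot's current tool pose, i.e. base_T_eef @ eef_T_cam @ point. A minimal sketch under that assumption (the base_T_eef values below are placeholders; in practice read the real pose from the UR5, e.g. via TF):

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def make_tf(quat_xyzw, trans):
    """Build a 4x4 homogeneous transform from a quaternion and a translation."""
    T = np.eye(4)
    T[:3, :3] = R.from_quat(quat_xyzw).as_matrix()
    T[:3, 3] = trans
    return T

# eef_T_cam: camera pose in the tool frame, from the hand-eye calibration above
eef_T_cam = make_tf(
    [0.497791828975, -0.502527906573, 0.49788582261, 0.501775553177],
    [-0.0142438828014, -0.0916812589659, -0.0320534281395])

# base_T_eef: current tool pose in the base frame -- PLACEHOLDER values,
# replace with the live pose reported by the robot
base_T_eef = make_tf([0.0, 1.0, 0.0, 0.0], [0.4, -0.1, 0.5])

# Object position in the camera frame, as a homogeneous point
cam_point = np.array([0.05, 0.02, 0.60, 1.0])

# Chain the transforms: base <- eef <- cam
base_point = base_T_eef @ eef_T_cam @ cam_point
print(base_point[:3])
```

The direction of the hand-eye result matters too: if your calibration tool already reports cam_T_eef rather than eef_T_cam, the inverse belongs in the chain instead.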

This seems to be a duplicate of UR5 vision base pick and place.