Eye-in-hand calibration with the RealSense D415 camera

Hello everyone. I am new to Universal Robots and to this community.
I want to perform eye-in-hand calibration with a UR robot and the Intel RealSense D415 camera.
May I ask the following questions:

  1. How can I achieve the calibration? Do I need to use a calibration board?
  2. Is there any reference Python code? If so, may I get the link?
  3. What is a landmark and how does it work? (Although I have researched landmarks, I still do not clearly understand them.)
  4. What is the difference between a landmark and a calibration board?

Thank you so much, everyone, for taking the time. I know these may be very rough questions, and I apologize for the inconvenience. Thanks for your consideration.


See this discussion:

The checkerboard is typically used to calibrate the camera intrinsics. In the case of the D415, this is already done at the factory. All that's required is transforming the D415's 3D data points into 3D data points in the robot frame, since the two have different frames of reference.

You typically solve this by locating 3 points in the camera reference frame and then locating the same 3 points in the robot frame. You can then compute a transform matrix that converts camera points into robot points and vice versa. If your camera is always in the same location when taking images, then you're done. If not, you need an additional transformation matrix that describes the difference between the camera position during the 3-point calibration and its position during image capture.

I've used this method for stitching multiple 3D images into one large 3D image with great success. Have fun!
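To make the 3-point idea concrete, here is a minimal Python sketch (my own illustration, not from the reply above) that estimates the rigid camera-to-robot transform from N ≥ 3 corresponding points using an SVD-based (Kabsch) least-squares fit with NumPy. The point values below are made up; in practice you would measure them from the D415 depth data and by jogging the robot TCP to the same physical features.

```python
import numpy as np

def estimate_rigid_transform(cam_pts, robot_pts):
    """Return a 4x4 homogeneous transform T such that robot ~= T @ camera (homogeneous)."""
    cam_pts = np.asarray(cam_pts, dtype=float)
    robot_pts = np.asarray(robot_pts, dtype=float)

    # Center both point sets on their centroids.
    cam_centroid = cam_pts.mean(axis=0)
    robot_centroid = robot_pts.mean(axis=0)
    cam_centered = cam_pts - cam_centroid
    robot_centered = robot_pts - robot_centroid

    # Cross-covariance matrix and its SVD.
    H = cam_centered.T @ robot_centered
    U, _, Vt = np.linalg.svd(H)

    # Rotation, with a reflection correction so det(R) = +1.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = robot_centroid - R @ cam_centroid

    # Assemble the homogeneous transform.
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Made-up example: the same three physical points measured in the camera
# frame (e.g. from D415 depth data) and in the robot base frame (TCP touch).
cam = [[0.10, 0.02, 0.50], [0.25, 0.05, 0.52], [0.12, 0.20, 0.48]]
robot = [[0.45, -0.10, 0.05], [0.60, -0.08, 0.06], [0.47, 0.08, 0.04]]

T_cam_to_robot = estimate_rigid_transform(cam, robot)
print(T_cam_to_robot)
```

Three well-spread, non-collinear points are the minimum; using more points (and averaging several depth readings per point) makes the fit noticeably more robust against D415 depth noise.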