Calculating a feature pose by hand

Hello all,

First of all thanks for all your contribution to this forum, it’s been really helpful.

I have a UR10 with a mounted camera. I detect a marker and get the rotation and translation with respect to the camera, which is almost at the TCP position. I know this data is correct, because when I create a feature in PolyScope, the plane I created ends up in the right place.

My question is this: how do I get the rotation and translation matrices with respect to the base? I know this data is available under the features menu as shown below, but for other purposes I need to be able to calculate it myself.

I have the rvec and tvec from the tool to the QR marker. I assumed that the rvec and tvec from base to tool are given by the tool position, which means I have both translations and rotations from base to tool and from tool to marker.

Then I created two homogeneous matrices, each combining translation and rotation as described in "Translation and rotation in one matrix".

Let’s call them Hb and Hc.
Hb = the matrix that transforms the base coordinate system to the TCP.
Hc = the matrix that transforms the tool coordinate system to the QR marker feature.
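For reference, this is a minimal sketch of how such homogeneous matrices can be built from an rvec/tvec pair; the `rodrigues` helper is my own pure-numpy version of the axis-angle conversion (OpenCV's `cv2.Rodrigues` does the same job), and the function names are just for illustration:

```python
import numpy as np

def rodrigues(rvec):
    """Convert an axis-angle rotation vector to a 3x3 rotation matrix."""
    rvec = np.asarray(rvec, dtype=float)
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    # Skew-symmetric cross-product matrix of the unit axis k
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    # Rodrigues' rotation formula
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def homogeneous(rvec, tvec):
    """Pack a rotation vector and translation into a 4x4 transform."""
    H = np.eye(4)
    H[:3, :3] = rodrigues(rvec)
    H[:3, 3] = tvec
    return H
```

With this, `Hb = homogeneous(base_to_tool_rvec, base_to_tool_tvec)` and `Hc = homogeneous(tool_to_marker_rvec, tool_to_marker_tvec)`.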

`numpy.matmul(Hb, Hc)` gives me exactly the same numbers as `pose_trans(from_base_to_tool, from_tool_to_feature)`. However, I expected the multiplication order to be the other way around, i.e. `numpy.matmul(Hc, Hb)`.
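To make the order concrete, here is a small worked example with hypothetical poses (a tool 1 m along base x, rotated 90° about base z, and a marker 0.5 m along the tool x axis). Homogeneous transforms chain right to left: a point in marker coordinates is first mapped into tool coordinates by Hc, then into base coordinates by Hb, so `Hb @ Hc` is the base-to-marker transform:

```python
import numpy as np

# Tool pose in base frame: at (1, 0, 0), rotated 90 deg about base z
Hb = np.eye(4)
Hb[:3, :3] = [[0.0, -1.0, 0.0],
              [1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]]
Hb[:3, 3] = [1.0, 0.0, 0.0]

# Marker pose in tool frame: 0.5 m along tool x, no rotation
Hc = np.eye(4)
Hc[:3, 3] = [0.5, 0.0, 0.0]

# p_base = Hb @ Hc @ p_marker, so the composed base-to-marker transform is:
H_base_marker = Hb @ Hc

# The tool x axis points along base +y after the 90 deg rotation, so the
# marker origin sits at (1, 0.5, 0) in base coordinates.
print(H_base_marker[:3, 3])  # → [1.  0.5 0. ]
```

So the result matching `pose_trans(from_base_to_tool, from_tool_to_feature)` is expected: `Hb @ Hc` is the correct order, not the reverse.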

What am I doing wrong?