Point tracking for 3D modeling

Hello, I want to program the robot to take images of an object over a half hemisphere for a 3D modeling application.
To do this automatically, I want to program the robot so that no matter where I move the arm, it keeps the TCP (camera focal point) aimed at one particular point in space.
I want to do this automatically, not by manually moving the robot and pointing its arm toward the focal point.
For example: the arm will rotate around p[0,0,1.1,0,0,0] over a half hemisphere, ranging from a zenith angle of 70° to −70° in intervals of 10°. The rotation around point p will be in intervals of 60°. The distance from p will be 0.8 meters.
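A minimal sketch of how the camera positions for such a scan could be generated (plain Python; the focal point, radius, and step sizes are taken from the example above, and the variable names are my own):

```python
import math

# Parameters from the example above
focus = (0.0, 0.0, 1.1)   # focal point p (x, y, z) in metres
radius = 0.8              # camera distance from the focal point

positions = []
for zenith_deg in range(-70, 71, 10):      # zenith angle, -70° .. 70° in 10° steps
    for azimuth_deg in range(0, 360, 60):  # rotation around the vertical axis, 60° steps
        zen = math.radians(zenith_deg)
        azi = math.radians(azimuth_deg)
        # Spherical -> Cartesian, offset from the focal point
        x = focus[0] + radius * math.sin(zen) * math.cos(azi)
        y = focus[1] + radius * math.sin(zen) * math.sin(azi)
        z = focus[2] + radius * math.cos(zen)
        positions.append((x, y, z))

# Note: negative zenith angles combined with a full azimuth sweep revisit
# positions already produced by the positive zenith angles.
print(len(positions))  # 15 zenith steps x 6 azimuth steps = 90 points
```

Each generated position lies exactly 0.8 m from the focal point; the remaining step is to compute, for every position, an orientation that points the camera at p.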

I would set up a TCP for the camera, where the TCP is at the focal distance of your camera, i.e. 0.8 meters in front of it. Then I would teach the actual point of focus and let the robot move to that point using the camera TCP, giving it the desired orientation angle.

Thank you for the reply,

We tried this method, but it is time-consuming since every waypoint has to be taught by hand. We want to create a scene that includes hundreds of such points so that we can build a quality 3D model.

Another problem we encountered with this method is that when we moved toward or away from the object, the focus point changed along with the zoom.

Have you worked on a project with these requirements, and can you point me to automated solutions? For example, writing script code in a Python environment, or software such as RoboDK that is suitable for working with robots?
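As a sketch of what such a script could look like, here is a look-at pose computation in plain Python with NumPy. The function name and the up-vector convention are my assumptions; the output follows the UR-style p[x, y, z, rx, ry, rz] pose format, where (rx, ry, rz) is an axis-angle rotation vector:

```python
import math
import numpy as np

def look_at_pose(cam_pos, target, up=(0.0, 0.0, 1.0)):
    """Return a UR-style pose [x, y, z, rx, ry, rz] whose tool z-axis
    points from cam_pos toward target. (rx, ry, rz) is an axis-angle
    rotation vector. The 'up' vector fixes the roll about the view axis."""
    cam_pos = np.asarray(cam_pos, dtype=float)
    z = np.asarray(target, dtype=float) - cam_pos
    z /= np.linalg.norm(z)                # tool z-axis: camera viewing direction
    x = np.cross(np.asarray(up, dtype=float), z)
    if np.linalg.norm(x) < 1e-6:          # degenerate: looking straight up/down
        x = np.array([1.0, 0.0, 0.0])
    x /= np.linalg.norm(x)
    y = np.cross(z, x)
    R = np.column_stack((x, y, z))        # rotation matrix, columns = tool axes
    # Rotation matrix -> axis-angle rotation vector
    angle = math.acos(max(-1.0, min(1.0, (np.trace(R) - 1.0) / 2.0)))
    if abs(angle) < 1e-9:
        rvec = np.zeros(3)
    else:
        axis = np.array([R[2, 1] - R[1, 2],
                         R[0, 2] - R[2, 0],
                         R[1, 0] - R[0, 1]]) / (2.0 * math.sin(angle))
        rvec = axis * angle
    return [*cam_pos, *rvec]

# Example: camera 0.8 m in front of the focal point p[0, 0, 1.1]
pose = look_at_pose((0.8, 0.0, 1.1), (0.0, 0.0, 1.1))
```

Combined with a loop over the hemisphere positions, each resulting pose could be sent to the robot as a movej/movel target (e.g. via a URScript or RoboDK driver) without teaching any waypoint by hand.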

Thanks in advance.