I am using four cameras in parallel, at fixed, distinct positions, so that I can capture a 360° 3D picture of an arbitrary object.
To register the images from the four cameras, I apply only rotations and translations, based on the a priori theoretical positions of the cameras. The problem is that there are measurement errors, so a small gap appears between the images when I do the reconstruction.
Do you know how I could do a dynamic calibration? I could, for instance, place a visible point in the field of view of all four cameras, detect it, and then compute the rotations and translations from the measured positions of this point, rather than from the theoretical camera positions I have.
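One caveat with the idea as stated: a single reference point is not enough to recover a rotation, so you would need at least three non-collinear reference points (or a calibration target such as a checkerboard) visible in overlapping views. Assuming you can detect several such points in 3D from each camera, a least-squares rigid transform between the two point sets can be estimated with the Kabsch/Umeyama method. This is only a sketch of that idea, not your existing pipeline; the function name and the use of numpy are my own assumptions:

```python
import numpy as np

def estimate_rigid_transform(src, dst):
    """Estimate rotation R and translation t such that dst ~= R @ src + t,
    from N >= 3 corresponding, non-collinear 3D points (Kabsch/Umeyama).

    src, dst: (N, 3) arrays of matched points, one row per point.
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    # Center both point clouds on their centroids.
    src_c = src.mean(axis=0)
    dst_c = dst.mean(axis=0)
    # 3x3 cross-covariance of the centered point sets.
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in the recovered rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t
```

With the estimated (R, t) for each camera pair (relative to a chosen reference camera), you would replace the theoretical poses in the reconstruction with these measured ones.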
Has anyone already done this kind of thing, or does anyone have a good method for multi-camera calibration?
Thank you and regards,