Hope all is good. I am working with an RGB-D dataset captured with an ORBBEC camera. Unfortunately, the RGB and depth sensors were not registered before the dataset was captured, so the RGB and depth images of the same frame are shifted relative to each other.
The dataset was annotated on the RGB images, so the annotations can't be transferred to the corresponding depth images because of this shift.
The problem is that the shift is not constant. It's not as if every pixel in the RGB image is shifted, say, 20 pixels to the right in the corresponding depth image. Instead, closer objects show a larger pixel shift, while farther objects are barely affected.
I have attached an image that shows an RGB image overlayed on top of the corresponding depth image.
Is there a way to properly fix this? Is there a relation between a pixel's depth and its shift that I could use to register the two images?
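For context, my current understanding is that this depth-dependent shift is just parallax between the two sensors: a depth pixel back-projected to 3D and re-projected into the color camera moves by roughly disparity ≈ f · baseline / Z, so near points shift more than far ones. Below is a minimal sketch of that mapping. All the intrinsics, the rotation, and the baseline here are made-up placeholder values; the real ones would have to come from the camera's factory calibration (e.g. via the ORBBEC SDK).

```python
import numpy as np

# Placeholder calibration -- NOT real ORBBEC values, just illustrative numbers.
fx_d, fy_d, cx_d, cy_d = 580.0, 580.0, 320.0, 240.0   # depth camera intrinsics
fx_c, fy_c, cx_c, cy_c = 520.0, 520.0, 320.0, 240.0   # color camera intrinsics
R = np.eye(3)                       # rotation depth -> color (assumed identity)
t = np.array([0.025, 0.0, 0.0])     # translation in metres (assumed ~25 mm baseline)

def depth_pixel_to_color_pixel(u, v, z):
    """Map a depth-image pixel (u, v) with depth z (metres) into the RGB image."""
    # Back-project the depth pixel to a 3D point in the depth camera frame
    X = np.array([(u - cx_d) * z / fx_d,
                  (v - cy_d) * z / fy_d,
                  z])
    # Move the point into the color camera frame and re-project it
    Xc = R @ X + t
    return (fx_c * Xc[0] / Xc[2] + cx_c,
            fy_c * Xc[1] / Xc[2] + cy_c)

# The shift shrinks as depth grows: disparity is roughly fx_c * baseline / z
near = depth_pixel_to_color_pixel(320, 240, 0.5)   # object 0.5 m away
far  = depth_pixel_to_color_pixel(320, 240, 4.0)   # object 4 m away
print(near[0] - 320, far[0] - 320)  # the nearer point shifts more
```

If this is right, then applying this per-pixel (using each pixel's own depth) should warp the depth map into the RGB frame so the annotations line up, but it hinges on knowing the intrinsics and extrinsics, which is really what I'm asking about.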