We have to implement our own calibration routine.

But I can’t find how the depth data is distributed: is it linearly distributed, or distributed as if coming from a pinhole camera?

If it is linearly distributed, I can (simplified) calculate every world X coordinate by multiplying the corresponding depth value by tan(hFOV/2) and some other constant parameters.

If it is not linearly distributed, I will need to multiply each depth value by its own angle (and the same constant parameters) for every world X coordinate, going from 0 degrees at the middle pixel up to hFOV/2 at the border pixels.

The depth data is linearly distributed. You can follow the first method for the calculation.
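A minimal sketch of the first (linear) method, assuming a depth map whose values spread linearly across the horizontal FOV. The function name, parameter names, and the pixel-offset convention here are illustrative, not from any specific SDK:

```python
import math

def depth_to_world_x(depth, u, width, hfov_deg):
    """Illustrative sketch: world X from a depth value at pixel column u,
    assuming the depth data is linearly distributed across the horizontal FOV.

    depth    -- depth value at pixel column u (same units as the result)
    u        -- pixel column index, 0 .. width-1
    width    -- image width in pixels
    hfov_deg -- horizontal field of view in degrees
    """
    half_fov = math.radians(hfov_deg) / 2.0
    # Normalized horizontal offset from the image center:
    # -1 at the left edge, 0 at the center, +1 at the right edge.
    half_width = (width - 1) / 2.0
    offset = (u - half_width) / half_width
    # Linear case: X scales with depth * tan(hFOV/2), weighted by the offset.
    return depth * math.tan(half_fov) * offset
```

At the image center the offset is 0, so X is 0; at the border pixels X reaches the full depth * tan(hFOV/2).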