I am able to acquire the depth and RGB streams (using the Astra SDK and the OpenNI SDK), but how can I get the depth data itself (raw x, y, z coordinates)? As explained in the publication above, I need to average 100 pictures and do a least-squares fit to find the error for each pixel. Thank you for the help!
The depth image stream is already pretty raw. The only thing lower-level than that is the infrared image, which looks similar to this: [infrared example image]
Calibrating the depth stream seems pretty straightforward:
Put a flat target perpendicular to the camera at several distances. Then compare the measured distances in the depth image with the real ones.
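For the per-pixel averaging part, something along these lines should work with OpenNI2. This is a minimal sketch under my own assumptions (single depth sensor, `NUM_FRAMES`, `sum`, `cnt`, and `meanDepth` are illustrative names, not SDK ones), so treat it as a starting point rather than a finished tool:

```cpp
// Minimal OpenNI2 sketch: average the raw 16-bit depth values (mm)
// over NUM_FRAMES frames. Error handling is kept to a minimum.
#include <OpenNI.h>
#include <cstdio>
#include <vector>

int main()
{
    const int NUM_FRAMES = 100;              // frames to average

    openni::OpenNI::initialize();
    openni::Device device;
    if (device.open(openni::ANY_DEVICE) != openni::STATUS_OK) {
        std::printf("open failed: %s\n", openni::OpenNI::getExtendedError());
        return 1;
    }

    openni::VideoStream depth;
    depth.create(device, openni::SENSOR_DEPTH);
    depth.start();

    std::vector<double> sum;                 // per-pixel running sum (mm)
    std::vector<int>    cnt;                 // per-pixel count of valid samples
    int width = 0, height = 0;

    for (int i = 0; i < NUM_FRAMES; ++i) {
        openni::VideoFrameRef frame;
        if (depth.readFrame(&frame) != openni::STATUS_OK)
            continue;

        if (sum.empty()) {
            width  = frame.getWidth();
            height = frame.getHeight();
            sum.assign(width * height, 0.0);
            cnt.assign(width * height, 0);
        }

        // One openni::DepthPixel (uint16_t, millimetres) per pixel;
        // respect the row stride rather than assuming packed rows.
        const char* data   = static_cast<const char*>(frame.getData());
        const int   stride = frame.getStrideInBytes();
        for (int y = 0; y < height; ++y) {
            const openni::DepthPixel* row =
                reinterpret_cast<const openni::DepthPixel*>(data + y * stride);
            for (int x = 0; x < width; ++x) {
                if (row[x] != 0) {           // 0 = no reading at this pixel
                    sum[y * width + x] += row[x];
                    cnt[y * width + x] += 1;
                }
            }
        }
    }

    // Per-pixel mean depth; pixels that never returned a value stay at 0.
    std::vector<double> meanDepth(width * height, 0.0);
    for (int p = 0; p < width * height; ++p)
        if (cnt[p] > 0)
            meanDepth[p] = sum[p] / cnt[p];

    depth.stop();
    depth.destroy();
    device.close();
    openni::OpenNI::shutdown();
    return 0;
}
```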
Thanks for the reply. I am able to get the depth *image* from both the Astra SDK and the OpenNI SDK. What I need, however, are the depth coordinates for each pixel. The idea is to compare the data from the sensor facing a white wall against a fitted plane. The error would then be a function of distance and of pixel position in the image, so I need the per-pixel coordinates.
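OpenNI2 can do the pixel-to-world conversion for you: `openni::CoordinateConverter::convertDepthToWorld()` maps a pixel (u, v) plus its depth value to world x, y, z in millimetres using the stream's intrinsics. Below is a hedged sketch that builds on the averaged buffer from my earlier post; `collectWorldPoints` and `fitPlane` are my own illustrative helpers, not part of either SDK:

```cpp
// Sketch: per-pixel world coordinates via OpenNI2, then a least-squares
// plane fit z = a*x + b*y + c against the wall.
#include <OpenNI.h>
#include <cmath>
#include <vector>

struct Point3 { float x, y, z; };

// Convert each averaged depth value (mm) to world x,y,z using the
// depth stream's intrinsics. Invalid pixels (mean 0) are skipped.
std::vector<Point3> collectWorldPoints(const openni::VideoStream& depth,
                                       const std::vector<double>& meanDepth,
                                       int width, int height)
{
    std::vector<Point3> pts;
    pts.reserve(meanDepth.size());
    for (int v = 0; v < height; ++v)
        for (int u = 0; u < width; ++u) {
            const double d = meanDepth[v * width + u];
            if (d <= 0.0) continue;
            float wx, wy, wz;
            openni::CoordinateConverter::convertDepthToWorld(
                depth, u, v, static_cast<openni::DepthPixel>(d + 0.5),
                &wx, &wy, &wz);
            pts.push_back({wx, wy, wz});
        }
    return pts;
}

// Least-squares plane fit z = a*x + b*y + c via the 3x3 normal
// equations, solved with Cramer's rule (fine for this small system).
bool fitPlane(const std::vector<Point3>& pts, double& a, double& b, double& c)
{
    double sxx = 0, sxy = 0, syy = 0, sx = 0, sy = 0;
    double sxz = 0, syz = 0, sz = 0;
    const double n = static_cast<double>(pts.size());
    for (const Point3& p : pts) {
        sxx += double(p.x) * p.x;  sxy += double(p.x) * p.y;
        syy += double(p.y) * p.y;  sx  += p.x;  sy += p.y;
        sxz += double(p.x) * p.z;  syz += double(p.y) * p.z;  sz += p.z;
    }
    const double det = sxx * (syy * n - sy * sy)
                     - sxy * (sxy * n - sy * sx)
                     + sx  * (sxy * sy - syy * sx);
    if (std::fabs(det) < 1e-12) return false;   // degenerate point set
    a = (sxz * (syy * n  - sy * sy) - sxy * (syz * n  - sz * sy)
       + sx  * (syz * sy - syy * sz)) / det;
    b = (sxx * (syz * n  - sz * sy) - sxz * (sxy * n  - sy * sx)
       + sx  * (sxy * sz - syz * sx)) / det;
    c = (sxx * (syy * sz - sy * syz) - sxy * (sxy * sz - sx * syz)
       + sxz * (sxy * sy - syy * sx)) / det;
    return true;
}
```

With the plane fitted, the per-pixel error against the wall is just `wz - (a*wx + b*wy + c)`, which you can evaluate at each of your test distances.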