Acquire depth coordinates for depth calibration of the camera

Hello, I’m a beginner here. I need to do depth calibration of my Orbbec Astra Mini, exactly as described here: http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.850.6962&rep=rep1&type=pdf.

I am able to acquire the depth and RGB streams (using the Astra SDK and the OpenNI SDK), but how can I get the depth data (raw x, y, z coordinates)? As explained in the publication above, I need to average 100 frames and do least-squares fitting to find the error for each pixel. Thank you for the help!


I don’t know what you want.

The depth image stream is pretty raw. The only thing below that is the infrared image, which looks similar to this:

[sample infrared image]

Calibrating the depth stream seems pretty straightforward:
Put a flat target perpendicular to the camera at several distances. Then compare the measured distances in the depth image with the real ones.
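
Once you have an averaged depth image of the wall at one distance, the least-squares step from that paper is just an ordinary plane fit followed by per-pixel residuals. Here is a minimal sketch, assuming the Eigen library is available; `planeFitResiduals` and `depthAvg` are hypothetical names, `depthAvg` is assumed to be a row-major array of averaged depth values in millimeters with 0 marking invalid pixels, and error handling is omitted:

```cpp
#include <Eigen/Dense>
#include <cstdio>
#include <vector>

// Fit the plane z = a*x + b*y + c to the averaged wall depth and
// print the per-pixel residual (measured minus fitted depth).
void planeFitResiduals(const std::vector<float>& depthAvg, int width, int height)
{
    // First pass: count valid pixels (depth 0 means "no reading").
    int n = 0;
    for (float d : depthAvg)
        if (d > 0.0f) ++n;

    // Build the least-squares system A * [a, b, c]^T = z.
    Eigen::MatrixXf A(n, 3);
    Eigen::VectorXf z(n);
    int i = 0;
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x) {
            float d = depthAvg[y * width + x];
            if (d > 0.0f) {
                A(i, 0) = (float)x;
                A(i, 1) = (float)y;
                A(i, 2) = 1.0f;
                z(i) = d;
                ++i;
            }
        }

    // Ordinary least squares via a QR decomposition.
    Eigen::Vector3f plane = A.colPivHouseholderQr().solve(z);

    // Residual per pixel: measured depth minus the fitted plane.
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x) {
            float d = depthAvg[y * width + x];
            if (d > 0.0f)
                std::printf("%d,%d,%f\n", x, y,
                            d - (plane(0) * x + plane(1) * y + plane(2)));
        }
}
```

Repeating this at several distances gives you the residual as a function of both pixel position and distance, which is what the paper's error model needs.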

Thanks for the reply. I am able to get the depth image from both the Astra SDK and the OpenNI SDK. However, I need the depth coordinates for each pixel. The idea is to compare data from the sensor facing a white wall with a fitted plane. The error would be a function of distance and of pixel position in the image. So I’d need the coordinates.

And that’s what I don’t get. What exactly do you mean when you talk about coordinates?

The only coordinates I can imagine are the pixel position (as x and y) and the value of the pixel (as z).

That’s exactly what I want: the pixel position and the pixel value. I’m sorry for the confusion. How can I acquire and save them?

Well, that depends on the programming language and libraries you’re using.

Once you’ve got the stream from the Astra SDK, the SDK has done its job.
Thus, your question is less about Astra and more about programming in general.

The simplest way is to make two nested for-loops, one for x and one for y, read the pixel value, and do your calculations.
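
With OpenNI2, for example, it could look roughly like this. This is only a sketch, not a drop-in solution: all error checks are omitted, the frame count of 100 comes from the paper you linked, and the output file name `depth_avg.csv` is just an example. It averages the depth frames and writes one "x,y,z" line per pixel:

```cpp
#include <OpenNI.h>
#include <cstdio>
#include <vector>

int main()
{
    const int NUM_FRAMES = 100;   // number of frames to average, per the paper

    openni::OpenNI::initialize();
    openni::Device device;
    device.open(openni::ANY_DEVICE);

    openni::VideoStream depth;
    depth.create(device, openni::SENSOR_DEPTH);
    depth.start();

    std::vector<double> sum;      // per-pixel accumulator
    int width = 0, height = 0;

    for (int i = 0; i < NUM_FRAMES; ++i) {
        openni::VideoFrameRef frame;
        depth.readFrame(&frame);

        width = frame.getWidth();
        height = frame.getHeight();
        if (sum.empty())
            sum.assign((size_t)width * height, 0.0);

        // 16-bit depth values in millimeters; 0 means "no reading".
        const openni::DepthPixel* pixels =
            (const openni::DepthPixel*)frame.getData();

        // The two nested loops: x and y are the pixel position,
        // pixels[y * width + x] is the depth value z.
        for (int y = 0; y < height; ++y)
            for (int x = 0; x < width; ++x)
                sum[(size_t)y * width + x] += pixels[y * width + x];
    }

    // Write the averaged value of every pixel as "x,y,z".
    std::FILE* out = std::fopen("depth_avg.csv", "w");
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
            std::fprintf(out, "%d,%d,%f\n", x, y,
                         sum[(size_t)y * width + x] / NUM_FRAMES);
    std::fclose(out);

    depth.stop();
    depth.destroy();
    device.close();
    openni::OpenNI::shutdown();
    return 0;
}
```

Each line of the CSV then gives you the pixel position (x, y) and the averaged depth value z, which is exactly the input you need for the fitting step.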

Thanks for the help!
