How to account for lens distortion/pixel angular size to measure distances?

Hey everyone

I have a setup in which the orbbec sensor is pointing to a flat surface.

I select 3 random points of this surface and calculate the plane equation in the following form:

Ax + By + Cz + D = 0

where (A, B, C) is the plane's normal vector.
From this, I can take any pixel (x, y, z) in the depth map and calculate its perpendicular distance to the plane as follows:

L = |A*x + B*y + C*z + D| / sqrt(A^2 + B^2 + C^2)
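In code, what I am doing currently is roughly this (a minimal NumPy sketch; the three points are placeholder values, and x, y are taken straight from the pixel indices):

```python
import numpy as np

# Three points picked from the flat surface (x, y from pixel indices, z from the depth map).
p1 = np.array([100.0, 120.0, 850.0])   # placeholder values
p2 = np.array([300.0, 110.0, 860.0])
p3 = np.array([200.0, 300.0, 855.0])

# Plane normal (A, B, C) from the cross product of two in-plane vectors.
normal = np.cross(p2 - p1, p3 - p1)
A, B, C = normal
D = -np.dot(normal, p1)                # chosen so that A*x + B*y + C*z + D = 0 holds for p1

def distance_to_plane(point):
    """Perpendicular distance from a point (x, y, z) to the plane."""
    x, y, z = point
    return abs(A * x + B * y + C * z + D) / np.linalg.norm(normal)
```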

My problem is that, although the result L makes sense (higher objects get larger values), it has a significant error (20-30%). This error wouldn't matter for many applications, but I do need reasonably precise readings. I suspect this error comes from the fact that I never take lens distortion and pixel angular size into account.

Can someone help me understand how to incorporate these factors into my calculations?

All the best.

Just so you are aware - lens distortion is not the only type of error that can occur with a depth sensor.
It is possible you are also seeing depth reporting error, which is a fluctuation from pixel to pixel in the calculated depth.
I don't recall Orbbec publishing any detailed error ranges, but I've seen anecdotal reports that the errors increase with distance as well. This may mean that any given pixel can be off in its z-depth by anywhere from a couple of millimetres up to a centimetre or more at longer distances.

Westa

Yes, I am aware of this and I am expecting some error.

But I have 10 cm objects being measured as 13 cm at a range of about 1 m, so I suspect there is some other source of error in the calculation.

Are you measuring single points or the mean of a group of points? Also, most structured-light sensors are notoriously bad at edges: they don't see rounded edges or corners very well, which can result in large inaccuracies where the sensor reports garbage.
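By a group of points I mean something like this (just a sketch; depth_map is assumed to be the raw 2D depth array):

```python
import numpy as np

def robust_depth(depth_map, u, v, half_window=2):
    """Median depth of a small window around pixel (u, v), ignoring zero readings."""
    patch = depth_map[v - half_window : v + half_window + 1,
                      u - half_window : u + half_window + 1].astype(np.float64)
    patch = patch[patch > 0]          # drop invalid / dropout pixels
    return float(np.median(patch)) if patch.size else 0.0
```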
Westa

Hey, thank you for your answer!

No, I'm using each point's value directly. My plan is to calculate a rough plane from those 3 points, use it to find all the points that belong to the surface, and finally run a linear regression on that whole list of points to fit the best plane (see the sketch below).
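The final fit would look something like this (a sketch; `points` is assumed to be an N x 3 array of the inlier points, with the same x, y, z I use above):

```python
import numpy as np

def fit_plane_regression(points):
    """Fit z = a*x + b*y + c by least squares and return (A, B, C, D)
    for the plane A*x + B*y + C*z + D = 0."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    M = np.column_stack([x, y, np.ones_like(x)])
    (a, b, c), *_ = np.linalg.lstsq(M, z, rcond=None)
    # z = a*x + b*y + c  ->  a*x + b*y - z + c = 0
    return a, b, -1.0, c
```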

But, again, I suspect that I need to account for lens distortion effect.

I found this thread in which this seems to be accounted for, but I'm not sure how to apply it to my case.
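If I understood it correctly, the idea would be to deproject each pixel into metric camera coordinates using the camera intrinsics before doing any plane math, roughly like this (a sketch assuming a simple pinhole model; the fx, fy, cx, cy values are placeholders that would come from the sensor's calibration), but I may be misreading it:

```python
import numpy as np

fx, fy = 570.0, 570.0    # focal lengths in pixels (placeholder values)
cx, cy = 320.0, 240.0    # principal point (placeholder values)

def deproject(u, v, depth):
    """Convert pixel (u, v) with its depth reading into metric camera
    coordinates (X, Y, Z), in the same unit as depth."""
    X = (u - cx) * depth / fx
    Y = (v - cy) * depth / fy
    return np.array([X, Y, depth])
```

The plane fit and the distance L would then be computed on these (X, Y, Z) points instead of on raw pixel coordinates; my understanding is that lens distortion would additionally require undistorting (u, v) with the lens distortion coefficients before this step.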

Each of those three points will likely have errors from frame to frame.
I would suggest averaging each of them over a group of frames.
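Something along these lines, just as a rough idea (get_depth_frame() is a placeholder for however you grab a frame from the sensor):

```python
import numpy as np

def averaged_depth(num_frames=30):
    """Average the depth map over a group of frames, ignoring zero (dropout) readings."""
    frames = [get_depth_frame().astype(np.float64) for _ in range(num_frames)]
    stack = np.stack(frames)
    stack[stack == 0] = np.nan        # treat zeros as missing data
    return np.nanmean(stack, axis=0)
```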
Westa