Color to depth stream registration, Astra Embedded S

Hello,

I recently bought an Astra Embedded S camera and have been using it successfully in my own scripts.
I am using the OpenNI library. My end goal is to measure distances using the depth stream while selecting the points to measure on the RGB feed. For that, the two feeds need to be perfectly aligned (registered).

I was able to calibrate the two modules (IR and RGB) myself, as Orbbec does not provide any compatible tool to do it for this specific camera.

However, the built-in registration mode does not work at all. When setting registration to true, the two images are shifted relative to each other along both axes, as shown in the attached picture. As you can see, the depth feed is zero-padded and the two fields of view are not the same.
Has anyone managed to register the two streams to get pixel-to-pixel correspondence? Without this feature, the camera modules are essentially unusable.

Cheers

The fix I found for this (with Python under 64-bit Windows) was to go to the folder that contains the OpenNI DLL, where there is a configuration file called Orbbec.ini. Assuming you’ve downloaded the OpenNI SDK from the Orbbec website, it will be under OpenNI_2.3.0.55\Windows\Astra OpenNI2 Development Instruction(x64)_V1.3\OpenNI2\OpenNI-Windows-x64-2.3.0.55\Redist\OpenNI2\Drivers.
In the configuration file, there is a line that is commented, something like
;Registration=0
Uncommenting it by removing the semicolon and changing Registration to 1 fixed it for me. There will still be quite a bit of padding, mind you, but pixel (x, y) in the RGB image will correspond to the same pixel (x, y) in the depth image.
If this doesn’t fully fix it, you may also have to change the Resolution setting (make sure you edit the one corresponding to the depth stream), since the depth stream resolution is not 640x480 but 640x400 (or 1280x800). The number corresponding to these non-standard resolutions is unfortunately not listed in the file’s comments, but I found somewhere online (can’t remember where) that 17 corresponds to 640x400.
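
For what it’s worth, the two lines I ended up changing look roughly like this (I’m only showing the keys themselves; the surrounding layout of Orbbec.ini can differ between SDK versions, so check your own copy):

Registration=1
; in the depth-stream settings: 17 = 640x400 (found online, not documented in the file's comments)
Resolution=17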

Hope this helps!

Hello Bogdan and thanks for your help.

Unfortunately this is something I already tried and there is still some distortion…

Well, that’s strange… but just in case it helps: what I was doing was running the “Depth Stream using Python and OpenCV” example from Examples — Orbbec Astra Wiki 2.0 documentation (passing the path to the Redist folder as an argument to openni2.initialize) together with the setting I mentioned, and I do get two very well aligned images.
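
In case it’s useful for comparison, here is roughly the skeleton I run (a minimal sketch based on the openni Python bindings that the wiki example uses; the Redist path is a placeholder, and I’m assuming the registration constant lives under c_api as shown, so double-check against your version of the bindings):

from openni import openni2
from openni import _openni2 as c_api
import numpy as np
import cv2

# Point this at the Redist folder that contains OpenNI2.dll and the Drivers/Orbbec.ini discussed above
openni2.initialize("path/to/Redist")
dev = openni2.Device.open_any()

depth_stream = dev.create_depth_stream()
color_stream = dev.create_color_stream()

# Same switch as Registration=1 in Orbbec.ini: map depth onto the color camera
dev.set_image_registration_mode(c_api.OniImageRegistrationMode.ONI_IMAGE_REGISTRATION_DEPTH_TO_COLOR)

depth_stream.start()
color_stream.start()

dframe = depth_stream.read_frame()
depth = np.frombuffer(dframe.get_buffer_as_uint16(), dtype=np.uint16).reshape(dframe.height, dframe.width)

cframe = color_stream.read_frame()
color = np.frombuffer(cframe.get_buffer_as_uint8(), dtype=np.uint8).reshape(cframe.height, cframe.width, 3)
color = cv2.cvtColor(color, cv2.COLOR_RGB2BGR)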

Oh … you are right, I looked a bit closer and I am, in fact, still getting some offset between depth and RGB. What is even stranger is that when using AstraSDK (recompiling the SimpleStreamViewer example for 640x400 depth and turning on registration) it gets even worse.
I wonder if the issues mentioned 4 years ago in Mapping depth to color (registration) - offset - #7 by Rafael are still there. Has anyone managed to register the two streams successfully and reliably…? Would doing it manually help I wonder?

Yes, even if the images are better aligned when setting this boolean to true, the registration between the two streams is still not perfect. I tried to check whether the remaining shift was constant along the two axes, but it appears that it is not.
I admit that I have more or less given up on this camera. It is crazy that such a crucial feature as the registration of the two streams is still not properly supported.

Do you get the RGB stream from the Astra using OpenCV? How do you register an RGB frame captured by OpenCV to a depth (or IR) frame captured by OpenNI?

The depth resolution is 640x400 (using openni2) and the RGB resolution is 640x480 (OpenCV). If you delete the lower 80 rows of the RGB image, making the frame 640x400, you solve half of the problem.
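
For example (a rough sketch; the VideoCapture index is whatever the Astra’s UVC color camera shows up as on your machine, and you should double-check whether it is really the bottom 80 rows that need to go on your unit):

import cv2

cap = cv2.VideoCapture(0)        # index of the Astra's UVC color camera, system-dependent
ok, rgb = cap.read()             # rgb.shape == (480, 640, 3)
rgb_crop = rgb[:400, :, :]       # drop the bottom 80 rows -> (400, 640, 3)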

Don’t know if it’s still relevant, but I’ve been working with these cameras lately as part of my job and found out that, in order to match the RGB and depth streams of the device, you have to find three constants (which are different for each device):

  1. scaling factor
  2. x-axis offset
  3. y-axis offset

To find them using Python, place an object with clear edges in front of the device (you might need to adjust the distance to get better accuracy) and take a picture with both cameras.

After doing so, find the contours of your object in the RGB image using OpenCV.
The method that worked best for me is Canny edge detection, converting to binary, and then using the method from the article Filling holes in an image using OpenCV to get the mask of your object.
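
Roughly like this, assuming rgb is the BGR frame from OpenCV (the Canny thresholds are placeholders to tune for your scene):

import cv2
import numpy as np

gray = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)                 # thresholds to tune for your scene

# Close small gaps so the outline is a single connected contour
edges = cv2.dilate(edges, np.ones((3, 3), np.uint8))

# Flood-fill trick from the article: fill from a corner, invert, OR with the edges
filled = edges.copy()
h, w = edges.shape
ff_mask = np.zeros((h + 2, w + 2), np.uint8)
cv2.floodFill(filled, ff_mask, (0, 0), 255)
mat = edges | cv2.bitwise_not(filled)            # binary mask of the object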
After getting the mask, use the following code to get the area and center of the shape:

# OpenCV 4 returns (contours, hierarchy); on OpenCV 3 this call returns three values instead
contours, hierarchy = cv2.findContours(image=mat, mode=cv2.RETR_TREE, method=cv2.CHAIN_APPROX_SIMPLE)
cont = contours[0]               # assumes the object is the only contour; otherwise pick the largest one
area = cv2.contourArea(cont)
M = cv2.moments(cont)
cX = int(M["m10"] / M["m00"])    # centroid x of the shape
cY = int(M["m01"] / M["m00"])    # centroid y of the shape

To extract your object from the depth image, I suggest either using the fact that the value of pixels around edges is 0 (you can dilate and erode to get better results), or filtering out everything that isn’t within a certain depth range. You can also combine the two.
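
For instance, the range-filter option could look something like this (the depth band in millimetres is a placeholder for wherever your object sits):

# Keep only pixels whose depth (in mm) falls inside the band where the object sits
depth_mask = ((depth > 400) & (depth < 800)).astype(np.uint8) * 255   # range is a placeholder

# Clean it up: opening removes speckles, closing fills the 0-valued pixels along edges
kernel = np.ones((5, 5), np.uint8)
depth_mask = cv2.morphologyEx(depth_mask, cv2.MORPH_OPEN, kernel)
depth_mask = cv2.morphologyEx(depth_mask, cv2.MORPH_CLOSE, kernel)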
After getting the mask of your object, use the same code from before to get the area and center of the shape from the depth image.

After calculating the area and center of your object in both images, use the following code to perform the transformation:

import math


def shift(mat, dx, dy):
    # Translate the image by (dx, dy): pad the side the content moves away from,
    # then crop the opposite side, filling the exposed border with zeros.
    new = cv2.copyMakeBorder(mat, 0 if dy < 0 else dy, 0 if dy > 0 else -dy, 0 if dx < 0 else dx, 0 if dx > 0 else -dx,
                             cv2.BORDER_CONSTANT, value=[0, 0, 0])

    new = new[:, :-dx] if dx > 0 else new[:, -dx:]
    new = new[:-dy, :] if dy > 0 else new[-dy:, :]
    return new


def zoom_at(img, x, y, ratio):
    # Rescale img by ratio while keeping the point (x, y) fixed; assumes ratio <= 1,
    # since the rescaled image is pasted into a canvas of the original size.
    scaled = cv2.resize(img, (0, 0), fx=ratio, fy=ratio)
    canvas = np.zeros_like(img)
    dx = int(x * (1 - ratio))
    dy = int(y * (1 - ratio))
    canvas[dy:dy + scaled.shape[0], dx:dx + scaled.shape[1]] = scaled
    return canvas

# scale_rgb / scale_depth are the contour areas measured in each image; the square root
# turns the area ratio into a linear scaling factor
scale_ratio = math.sqrt(scale_rgb / scale_depth)
x_diff, y_diff = center_x_rgb - center_x_depth, center_y_rgb - center_y_depth
matching_depth = shift(depth_image, x_diff, y_diff)
matching_depth = zoom_at(matching_depth, center_x_rgb, center_y_rgb, scale_ratio)
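
Once you have the constants, a quick way to sanity-check the result (just a sketch; it assumes the RGB frame has already been cropped to the same 640x400 size as the depth map, as suggested earlier in the thread) is to blend the two images and look at how well the object edges line up:

# rgb_crop: the RGB frame cropped to 640x400; matching_depth: the shifted and scaled depth map
depth_vis = cv2.convertScaleAbs(matching_depth, alpha=255.0 / max(int(matching_depth.max()), 1))
depth_vis = cv2.cvtColor(depth_vis, cv2.COLOR_GRAY2BGR)
overlay = cv2.addWeighted(rgb_crop, 0.5, depth_vis, 0.5, 0)
cv2.imshow("registration check", overlay)
cv2.waitKey(0)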