Depth registration and colorsync troubles for Astra Mini S

Hey everyone,

I am trying to do depth to color image registration on some recorded video (recorded with the Android API). My troubles and complaints are as follows:

  1. Calling this.mDevice.setImageRegistrationMode(ImageRegistrationMode.DEPTH_TO_COLOR); crashes the application, even though this.mDevice.isImageRegistrationModeSupported(ImageRegistrationMode.DEPTH_TO_COLOR) returns true (the error complains that setting property 5 failed). I inspected the C++ API and found that depth registration sets property 101 on the depth stream, not on the device. I tried to implement this on Android, but it doesn't seem to change anything:

public void setDepthRegistration() {
    NativeMethods.oniStreamSetProperty(getHandle(), 101, true);
}

  2. Since the built-in depth registration failed, I tried to read the camera parameters and use OpenCV's stereoRectify instead. Camera parameters from the Orbbec API for my Astra Mini S:
vid 2bc5, pid 0407
Mirrored : no
[IR Camera Intrinsic]
578.938 578.938 318.496 251.533
[RGB Camera Intrinsic]
517.138 517.138 319.184 229.077
[Rotate Matrix]
1.000 -0.003 0.002
0.003 1.000 0.005
-0.002 -0.005 1.000
[Translate Matrix]
-25.097 0.288 -1.118
[IR Camera Distorted Params]
-0.094342 0.290512 -0.299526 -0.000318 -0.000279
[RGB Camera Distorted Params]
0.044356 -0.174023 0.077324 0.001794 -0.003853

A minimal script to transform and display the attached example images, which produces the attached example output:

import cv2
import numpy as np


# Camera Params
fx_d = 578.938
fy_d = 578.938
cx_d = 318.496
cy_d = 251.533

k1_d = -0.094342
k2_d = 0.290512
p1_d = -0.299526
p2_d = -0.000318
k3_d = -0.000279

cam_matrix_d = np.array([
    [fx_d, 0, cx_d],
    [0, fy_d, cy_d],
    [0, 0, 1]
])

dist_d = np.array([k1_d, k2_d, p1_d, p2_d, k3_d])

fx_rgb = 517.138
fy_rgb = 517.138
cx_rgb = 319.184
cy_rgb = 229.077

k1_rgb = 0.044356
k2_rgb = -0.174023
p1_rgb = 0.077324
p2_rgb = 0.001794
k3_rgb = -0.003853

cam_matrix_rgb = np.array([
    [fx_rgb, 0, cx_rgb],
    [0, fy_rgb, cy_rgb],
    [0, 0, 1]
])

dist_rgb = np.array([k1_rgb, k2_rgb, p1_rgb, p2_rgb, k3_rgb])

RR = np.array([
    [1.000, -0.003, 0.002],
    [0.003, 1.000, 0.005],
    [-0.002, -0.005, 1.000]
])

TT = np.array([-25.097, 0.288, -1.118])

w, h = 640, 480

R_d, R_rgb, P_d, P_rgb, _, _, _ = cv2.stereoRectify(
    cam_matrix_d, dist_d, cam_matrix_rgb, dist_rgb, (w, h), RR, TT,
    flags=cv2.CALIB_ZERO_DISPARITY)

map_d1, map_d2 = cv2.initUndistortRectifyMap(
    cam_matrix_d, dist_d, R_d, P_d, (w, h), cv2.CV_16SC2)
map_rgb1, map_rgb2 = cv2.initUndistortRectifyMap(
    cam_matrix_rgb, dist_rgb, R_rgb, P_rgb, (w, h), cv2.CV_16SC2)

# Read images; the depth PNG is 16-bit, so load it unchanged
depth_img = cv2.imread('depth1.png', cv2.IMREAD_UNCHANGED)
rgb_img = cv2.imread('rgb1.png')

# Remap to RGB (nearest-neighbour for depth so values are not interpolated)
depth_img_corr = cv2.remap(depth_img, map_d1, map_d2, cv2.INTER_NEAREST)
rgb_img_corr = cv2.remap(rgb_img, map_rgb1, map_rgb2, cv2.INTER_LINEAR)

# Create some kind of overlay for visualization
# (scale depth to 8-bit BGR so it can be blended with the RGB image)
depth_vis = cv2.convertScaleAbs(depth_img_corr, alpha=255.0 / 4000.0)
depth_vis = cv2.cvtColor(depth_vis, cv2.COLOR_GRAY2BGR)
overlay_img = rgb_img_corr.copy()
alpha = 0.8
cv2.addWeighted(depth_vis, alpha, overlay_img, 1 - alpha, 0, overlay_img)

cv2.imshow("rgb_orig", rgb_img)
cv2.imshow("depth_orig", depth_img)
cv2.imshow("rgb_corr", rgb_img_corr)
cv2.imshow("depth_corr", depth_vis)
cv2.imshow("overlay", overlay_img)
while cv2.waitKey(50) != 27:  # Esc to quit
    pass

Something is clearly wrong here. The original images aren’t that distorted, and the two cameras aren’t that far apart (the rotation matrix is almost the identity). What am I doing wrong?
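For reference, the same parameters can also be used to register depth to color directly, without stereoRectify: back-project each depth pixel to 3D with the IR intrinsics, transform with R and T, and reproject with the RGB intrinsics. This is only a minimal sketch of that approach (distortion ignored, no z-buffering for occlusions, depth assumed to be uint16 millimetres; register_depth_to_color is my own name, not an Orbbec API):

```python
import numpy as np

# Intrinsics and extrinsics as reported by the Orbbec tool (Astra Mini S)
K_d = np.array([[578.938, 0, 318.496],
                [0, 578.938, 251.533],
                [0, 0, 1]])
K_rgb = np.array([[517.138, 0, 319.184],
                  [0, 517.138, 229.077],
                  [0, 0, 1]])
R = np.array([[1.000, -0.003, 0.002],
              [0.003, 1.000, 0.005],
              [-0.002, -0.005, 1.000]])
T = np.array([-25.097, 0.288, -1.118])  # millimetres


def register_depth_to_color(depth):
    """Reproject a depth map (uint16, mm) into the RGB camera frame.
    No z-buffering: if two depth pixels land on the same RGB pixel,
    the last write wins."""
    h, w = depth.shape
    v, u = np.nonzero(depth)                # only valid (non-zero) pixels
    z = depth[v, u].astype(np.float64)
    # Back-project to 3D points in the depth/IR camera frame
    x = (u - K_d[0, 2]) * z / K_d[0, 0]
    y = (v - K_d[1, 2]) * z / K_d[1, 1]
    pts = np.stack([x, y, z], axis=-1)
    # Transform into the RGB camera frame and project with K_rgb
    pts_rgb = pts @ R.T + T
    zr = pts_rgb[:, 2]
    ur = np.round(K_rgb[0, 0] * pts_rgb[:, 0] / zr + K_rgb[0, 2]).astype(int)
    vr = np.round(K_rgb[1, 1] * pts_rgb[:, 1] / zr + K_rgb[1, 2]).astype(int)
    ok = (ur >= 0) & (ur < w) & (vr >= 0) & (vr < h) & (zr > 0)
    out = np.zeros_like(depth)
    out[vr[ok], ur[ok]] = zr[ok].astype(depth.dtype)
    return out
```

With the small rotation and the ~25 mm baseline above, this mainly produces a horizontal shift of the depth map, which matches what the rectification should be doing.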

  3. There is an option for depth–color synchronization in OpenNI, this.mDevice.setDepthColorSyncEnabled(true);, which doesn’t crash my device but also doesn’t seem to do anything. Is there a way to enable this on Android? Or are both cameras just free-running, so I should hope that corresponding depth and RGB frames are temporally close enough?
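In case the streams really are free-running, my fallback would be to pair frames by nearest timestamp after recording. A minimal sketch of that idea (pair_frames is my own hypothetical helper; timestamps assumed sorted and in milliseconds, max gap of one frame at 30 fps):

```python
import bisect


def pair_frames(depth_ts, rgb_ts, max_gap_ms=33):
    """Greedily pair each depth frame with the nearest RGB frame by
    timestamp; drop pairs whose gap exceeds max_gap_ms.
    Returns a list of (depth_index, rgb_index) tuples."""
    pairs = []
    for i, t in enumerate(depth_ts):
        j = bisect.bisect_left(rgb_ts, t)
        # the nearest rgb timestamp is at index j-1 or j
        candidates = [c for c in (j - 1, j) if 0 <= c < len(rgb_ts)]
        best = min(candidates, key=lambda c: abs(rgb_ts[c] - t))
        if abs(rgb_ts[best] - t) <= max_gap_ms:
            pairs.append((i, best))
    return pairs
```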

I’m hoping someone (maybe @Jackson, since you’ve been very helpful with this before) can help with this. I don’t think I can switch to the new Astra SDK beta because there doesn’t seem to be a convenient way to record data (and I also don’t want to open another can of worms when depth registration is the last piece I need). Example images are attached.

Thanks for reading/helping!

The forum wasn’t letting me put the example images in the original post:

Hi luketaverne,

Thanks for your question. Please try the AstraViewer from the link below to test whether the RGB-D streams can be properly aligned. The source code showing how we achieve this is also included in the package. This tool is part of our calibration package. Pressing ‘4’ after launching the viewer will load the parameters from flash for the Astra Mini S.

Hey @Jackson,

Thanks for the pointers! They got me on the right track. The alignment using the tool you linked looks good on Windows, and it reports the same camera parameters that I got from the other Orbbec tool. Windows alignment:

Alignment from Android oni file:

There seems to be a horizontal offset of ~11 pixels. When I apply this correction to the alignment I get:

Which looks much better (look at the lamp thing in the bottom left), but I don’t like the idea of applying an arbitrary offset to the depth mapping. Both videos were taken at approximately the same distance and angle.
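For reference, the ad-hoc correction I’m applying is nothing more than a horizontal shift of the registered depth map. A minimal sketch (shift_depth is my own illustrative helper, not part of any SDK; the 11 px default is just the offset I measured):

```python
import numpy as np


def shift_depth(depth, dx=11):
    """Shift a depth map horizontally by dx pixels (positive = right),
    filling the vacated columns with zeros (invalid depth)."""
    out = np.zeros_like(depth)
    if dx >= 0:
        out[:, dx:] = depth[:, :depth.shape[1] - dx]
    else:
        out[:, :dx] = depth[:, -dx:]
    return out
```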

Do you have any insight into why there is this difference between the two platforms? I can provide a minimal python example if it would help.

Thanks again! Looking forward to getting this problem finally fixed properly :slight_smile:

Hi luketaverne,

Thanks for this information. The IR camera is involved when calibrating the alignment with the RGB camera. When the calibration results are applied to the depth image, the cropping done on the depth image differs slightly from the cropping done on the IR image (to ensure optimal performance), which results in the offset. The source code of the tool above should already include this formula.
I hope this answers your question. If you need further support regarding calibration, please email us at

Hi @Jackson,

Would you be able to point to where exactly in the code this IR-vs-depth cropping is done? As far as I can see, the IR and depth images are treated exactly the same in the provided sample code.

From RGBDCamera.cpp:

int CRGBDCamera::getNextFrame(cv::Mat& color, cv::Mat& ir, cv::Mat& depth)
{
    // ...
    if (m_ir == 0) // color
    {
        uint16_t *pPixel;
        for (unsigned int y = 0; y < m_H; y++)
        {
            pPixel = (uint16_t*)((char*)m_rgb_ir_frame.getData() + (int)y * m_rgb_ir_frame.getStrideInBytes());
            ushort* data = (ushort*)ir.ptr<uchar>(y);
            for (unsigned int x = 0; x < m_W; x++)
                *data++ = (*pPixel);
        }
    }

    openni::VideoStream* pDepthStream = &m_depth;
    rc = openni::OpenNI::waitForAnyStream(&pDepthStream, 1, &changedStreamDummy, SAMPLE_READ_WAIT_TIMEOUT);

    rc = m_depth.readFrame(&m_depth_frame);
    if (rc != openni::STATUS_OK || !m_depth_frame.isValid())
    {
        printf("Read failed!\n%s\n", openni::OpenNI::getExtendedError());
        return -1;
    }

    {
        uint16_t *pPixel;
        for (unsigned int y = 0; y < m_H; y++)
        {
            pPixel = (uint16_t*)((char*)m_depth_frame.getData() + (int)y * m_depth_frame.getStrideInBytes());
            uint16_t* data = (uint16_t*)depth.ptr<uchar>(y);
            for (unsigned int x = 0; x < m_W; x++)
                *data++ = (*pPixel);
        }
    }

    return 0;
}

I then apply the same DepthToColorMapping as the C++ code, using the same AllMat, but get different alignment results.