How to align the RGB frame to the depth map

In the point cloud example https://github.com/orbbec/pyorbbecsdk/blob/main/examples/pointcloud_filter_o3d.py, the output point cloud has size 1920 * 1080 * 3 instead of 640 * 576 * 3.
It looks like the example assigns depth values from the lower-resolution depth map to the higher-resolution RGB frame, which gives the RGB-D point cloud some incorrect RGB/depth pairs, especially at object boundaries, for example:
[image: misaligned colors at an object boundary]
Is there a way to align the RGB frame to the lower-resolution depth map instead?

The point cloud filter example is just a sample to demonstrate the point cloud function; for different use cases you can set a different camera profile (resolution, fps) to test. You can also test this quickly with the OrbbecViewer tool (Release v1.10.8 · orbbec/OrbbecSDK · GitHub).
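
To follow that suggestion in code, here is a minimal, untested sketch of requesting an explicit (lower) color profile in pyorbbecsdk before starting the pipeline, so the color stream sits closer to the depth resolution. The calls mirror the repo's examples, but the exact resolutions and formats your camera accepts vary by model, so treat the values below as assumptions:

```python
# Untested sketch: request a lower color resolution so the color stream is
# closer to the depth resolution. Supported modes vary by camera model.
from pyorbbecsdk import Pipeline, Config, OBSensorType, OBFormat, OBError

pipeline = Pipeline()
config = Config()

# Try a 1280-wide color profile (0 = "any height"); fall back to the default
# profile if that exact mode is not supported.
color_profiles = pipeline.get_stream_profile_list(OBSensorType.COLOR_SENSOR)
try:
    color_profile = color_profiles.get_video_stream_profile(1280, 0, OBFormat.RGB, 30)
except OBError:
    color_profile = color_profiles.get_default_video_stream_profile()
config.enable_stream(color_profile)

depth_profiles = pipeline.get_stream_profile_list(OBSensorType.DEPTH_SENSOR)
config.enable_stream(depth_profiles.get_default_video_stream_profile())

pipeline.start(config)
frames = pipeline.wait_for_frames(100)  # returns None on timeout
```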

Hi Nathan, I see that the Azure Kinect DK has a C2D (color-to-depth) function. Will C2D be available in the Orbbec SDK? Right now I only see D2C in the Orbbec SDK, roughly as in the sketch below.
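
For reference, this is a rough sketch of the D2C path that exists today in pyorbbecsdk 1.x, based on the repo's depth/color align examples; `OBAlignMode` and `set_align_mode` are assumptions taken from those examples, so please check them against your SDK version:

```python
# Rough sketch of D2C (depth aligned to color) with pyorbbecsdk 1.x.
from pyorbbecsdk import Pipeline, Config, OBSensorType, OBAlignMode

pipeline = Pipeline()
config = Config()
config.enable_stream(
    pipeline.get_stream_profile_list(OBSensorType.COLOR_SENSOR).get_default_video_stream_profile())
config.enable_stream(
    pipeline.get_stream_profile_list(OBSensorType.DEPTH_SENSOR).get_default_video_stream_profile())

# SW_MODE aligns on the host, HW_MODE on the device (where supported); either
# way the depth frame is resampled onto the color frame's pixel grid (D2C).
config.set_align_mode(OBAlignMode.SW_MODE)
pipeline.start(config)

frames = pipeline.wait_for_frames(100)
if frames is not None:
    aligned_depth = frames.get_depth_frame()  # now at the color resolution
```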

Which Orbbec camera do you have?

I am using the Femto Bolt/Mega. Thanks.

C2D will be available, but there is no timeline yet.
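
Until C2D lands in the SDK, one workaround is to do the mapping yourself: deproject each depth pixel to 3D with the depth intrinsics, transform it into the color camera with the depth-to-color extrinsics, project it with the color intrinsics, and sample the color there. The numpy sketch below is illustrative only; `K_d`, `K_c`, `R`, and `t` are placeholders for the intrinsics/extrinsics you would read from the SDK's camera parameters, not pyorbbecsdk API.

```python
# Illustrative sketch: resample the color image onto the depth frame's pixel
# grid (a manual C2D). K_d, K_c, R, t are placeholders for the depth/color
# intrinsics and the depth->color extrinsics from your camera parameters.
import numpy as np

def color_to_depth(depth_mm, color_img, K_d, K_c, R, t):
    """Return an (H, W, 3) color image aligned to the (H, W) depth frame."""
    h, w = depth_mm.shape
    hc, wc = color_img.shape[:2]

    # Deproject every depth pixel to a 3D point in the depth camera frame.
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_mm.astype(np.float32)
    x = (u - K_d[0, 2]) * z / K_d[0, 0]
    y = (v - K_d[1, 2]) * z / K_d[1, 1]
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)

    # Transform into the color camera frame and project with its intrinsics.
    pts_c = pts @ R.T + t  # t must be in the same units as depth_mm
    z_c = pts_c[:, 2]
    z_safe = np.where(z_c > 0, z_c, 1.0)  # avoid dividing by zero; masked below
    uc = np.round(pts_c[:, 0] * K_c[0, 0] / z_safe + K_c[0, 2]).astype(int)
    vc = np.round(pts_c[:, 1] * K_c[1, 1] / z_safe + K_c[1, 2]).astype(int)

    # Keep only valid depth pixels whose projection lands inside the color image.
    valid = (pts[:, 2] > 0) & (z_c > 0) & (uc >= 0) & (uc < wc) & (vc >= 0) & (vc < hc)
    aligned = np.zeros((h * w, 3), dtype=color_img.dtype)
    aligned[valid] = color_img[vc[valid], uc[valid]]
    return aligned.reshape(h, w, 3)
```

This simple version ignores occlusion and lens distortion, so colors can still bleed across depth edges, but it produces one color sample per depth pixel instead of interpolating the depth up to 1920 * 1080.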