Orbbec Astra Pro - Depth data

Hi, sorry for asking a dumb question here. My colleague and I are trying to develop an interactive pond application.

We want to detect the user’s feet, since we will project the application onto the floor.

To capture the user input, we are planning to use depth sensing. My question is: how can I get the data from the depth readings? I am using Unity to develop this application. Can anyone guide me on getting the depth data? I’m stuck here. Please help. :frowning:

-Sattiyan

You don’t want to use the depth data directly. In the demo code, the depth stream is processed into a “Skeleton” array. It sounds like you are only interested in certain nodes, namely the feet. You will be able to track 2 people with a single camera.
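I don’t have the SDK in front of me, so treat this as a rough sketch rather than the actual demo code: the `TrackedSkeleton` struct, the `IsTracked` flag and the foot fields below are placeholders for whatever the entries of the skeleton array really look like. It just shows the idea of iterating the array and reading only the foot joints.

```csharp
using UnityEngine;

// Hypothetical sketch -- the skeleton type below stands in for whatever the
// SDK's demo code actually provides; only the foot joints are read out.
public class FeetTracker : MonoBehaviour
{
    // Placeholder for one entry of the SDK's skeleton array (assumed fields).
    public struct TrackedSkeleton
    {
        public bool IsTracked;
        public Vector3 LeftFoot;   // position of the left foot joint
        public Vector3 RightFoot;  // position of the right foot joint
    }

    // Call once per frame with the skeleton array (2 tracked skeletons max).
    public void OnSkeletons(TrackedSkeleton[] skeletons)
    {
        foreach (var s in skeletons)
        {
            if (!s.IsTracked)
                continue;

            // Only the two foot joints matter for a floor projection.
            Debug.Log($"Left foot: {s.LeftFoot}, Right foot: {s.RightFoot}");
        }
    }
}
```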

But is it possible to detect multiple people? I won’t be needing full body detection. I just need to detect when there is something or someone on the floor (on the projection).

The body/skeleton array also contains data for “body” tracking, which is distinct from “skeleton” tracking. You can detect up to 6 bodies, but only 2 skeletons. If memory serves, the bodies are in the same array as the skeletons, but only contain “Center of Mass” positional data. Indices ‘0’ and ‘1’ are skeletons. Other indices are the bodies (without skeletal node data).
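Same caveat as above: the SDK-side names are from memory, so here I only assume you can pull each body’s “Center of Mass” position out of the array as a Vector3. A crude “is anyone on the projection?” check could then be as simple as testing that position against the rectangle the projector covers on the floor.

```csharp
using UnityEngine;

// Sketch only: checks whether any tracked body's centre of mass falls inside
// the floor rectangle covered by the projector. The Vector3 positions are
// assumed to come from the per-body "Center of Mass" data described above.
public class FloorPresence : MonoBehaviour
{
    // Floor area covered by the projection, in the same space as the body data
    // (example values, x/z in metres).
    public Rect projectionArea = new Rect(-1.5f, 0.5f, 3.0f, 2.5f);

    public bool AnyoneOnFloor(Vector3[] bodyCentersOfMass)
    {
        foreach (var com in bodyCentersOfMass)
        {
            // Drop the height component and test the x/z position against
            // the projected rectangle on the floor.
            if (projectionArea.Contains(new Vector2(com.x, com.z)))
                return true;
        }
        return false;
    }
}
```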

I hope that helps!

There should be an example scene that comes with the Unity package where you can see the sensor’s data. The depth data is shown as a grayscale image in that case. You could look at that code to see how the image is obtained.
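If you end up pulling the raw depth buffer yourself instead of relying on the example scene, converting it into a grayscale preview is straightforward. The sketch below assumes you can copy each frame into a `ushort[]` of depth values in millimetres (how you fill that buffer depends on the Astra Unity package you are using); everything else is plain Unity API.

```csharp
using UnityEngine;

// Minimal sketch: turning a raw depth buffer into a grayscale preview texture.
// "depthBuffer" is assumed to hold one depth value per pixel in millimetres;
// the call that fills it depends on the SDK/package you use.
public class DepthPreview : MonoBehaviour
{
    public int width = 640;       // depth stream resolution of the Astra Pro
    public int height = 480;
    public int maxDepthMm = 4000; // anything farther than this maps to black

    private Texture2D depthTexture;
    private Color32[] pixels;

    void Start()
    {
        depthTexture = new Texture2D(width, height, TextureFormat.RGBA32, false);
        pixels = new Color32[width * height];
        // Requires a Renderer (e.g. a Quad) on the same GameObject.
        GetComponent<Renderer>().material.mainTexture = depthTexture;
    }

    // Call this once per frame with the latest depth buffer from the sensor.
    public void UpdateDepth(ushort[] depthBuffer)
    {
        for (int i = 0; i < depthBuffer.Length && i < pixels.Length; i++)
        {
            // 0 means "no reading"; otherwise scale millimetres to a grey value,
            // near = bright, far = dark.
            byte grey = depthBuffer[i] == 0
                ? (byte)0
                : (byte)(255 - Mathf.Clamp(depthBuffer[i] * 255 / maxDepthMm, 0, 255));
            pixels[i] = new Color32(grey, grey, grey, 255);
        }
        depthTexture.SetPixels32(pixels);
        depthTexture.Apply();
    }
}
```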