HoloLens integration

Hi,

Looking for some tips from wiser people.
I own the Astra and the dev board; the aim is to run the two together and send the depth data over the wire somehow (after some real-time processing) to be displayed on the HoloLens as a 3D hologram.

There are ways of doing this with a Kinect plugged into a Windows machine, sending its higher-level point cloud (or similar) over the network, but that relies on the Kinect-specific API DLLs.
How would you approach this problem with the Astra and the board?

Any hints,
Thanks!
-t

I do this all the time with PrimeSense cameras. To help with bandwidth, I convert the real-world XYZ data to integer millimeters and send it via UDP or ZeroMQ. I add some dead-simple packetizing and frame numbering to help with UDP losses. I have some C# code written for the programming package “vvvv” that, while specialized for that environment, should be easily convertible to C or C++ if you are code-fluent. The two source files are attached, but I had to add “.txt” to their names so they would be allowed as attachments. Focus on the “Evaluate()” function/method in each. It’s not pretty, and probably not strictly correct code, but it works.
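To make the idea concrete, the core of the sender boils down to something like the following C# sketch (this is not the attached vvvv code; the packet layout and names are just illustrative): quantize float XYZ meters to Int16 millimeters, prepend a frame number and chunk index so the receiver can notice losses, and send each chunk as a UDP datagram.

```csharp
using System;
using System.Net.Sockets;

class PointCloudSender
{
    const int MaxPointsPerPacket = 200;          // keeps packets well under a typical 1500-byte MTU
    readonly UdpClient udp = new UdpClient();
    ushort frameNumber;

    public PointCloudSender(string host, int port)
    {
        udp.Connect(host, port);
    }

    // xyz: interleaved XYZ in meters, length = 3 * pointCount
    public void SendFrame(float[] xyz)
    {
        int pointCount = xyz.Length / 3;
        ushort chunkIndex = 0;

        for (int start = 0; start < pointCount; start += MaxPointsPerPacket)
        {
            int n = Math.Min(MaxPointsPerPacket, pointCount - start);
            byte[] packet = new byte[6 + n * 6];  // 6-byte header + 3 x Int16 per point

            // dead-simple header: frame number, chunk index, point count
            BitConverter.GetBytes(frameNumber).CopyTo(packet, 0);
            BitConverter.GetBytes(chunkIndex).CopyTo(packet, 2);
            BitConverter.GetBytes((ushort)n).CopyTo(packet, 4);

            for (int i = 0; i < n * 3; i++)
            {
                // quantize meters -> integer millimeters (Int16 covers +/-32 m)
                short mm = (short)Math.Round(xyz[start * 3 + i] * 1000f);
                BitConverter.GetBytes(mm).CopyTo(packet, 6 + i * 2);
            }

            udp.Send(packet, packet.Length);
            chunkIndex++;
        }
        frameNumber++;   // wraps around, which is fine for loss detection
    }
}
```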

Thanks so much, you wiser person… 🙂

I’ll have to dig in. Rewriting the code seems not so problematic, but the
“architecture” is not clear to me yet.
What should be running on the board? The Astra camera is connected to the board,
so I would assume I could just grab a point cloud or mesh, send it over the wire somehow (or use the suggested code), and receive it in some player on the other end?

I’m sure the code is valuable, but I’m still researching how to use it in my case.
Super excited to see such a valuable response so quickly,
Thanks!

Oops! Ignore those files; they were early routines for sending floats. Attached are the correct ones…
PCSocketOutPCNode.cs.txt (5.5 KB)
PCSocketInPCNode.cs.txt (6.1 KB)

The sender would run on the board; it needs the XYZ point-cloud data as input. You could make it lower-impact on that side by sending just the depth data: same idea, just change the code to take the raw depth integers directly instead of float XYZ, and convert to world XYZ on the host. But I find it better to distribute that load and do the OpenNI calls for the conversion on each camera PC.
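If you do go the raw-depth route, the host-side conversion is just the standard pinhole back-projection (OpenNI’s depth-to-world call does the equivalent internally, from the camera’s field of view). A minimal sketch, assuming you have the depth intrinsics fx, fy, cx, cy from calibration; the defaults below are placeholders, not the Astra’s actual values:

```csharp
static class Projection
{
    // Back-project a depth map (integer millimeters) into world XYZ millimeters.
    // fx, fy, cx, cy are depth-camera intrinsics; these defaults are placeholders,
    // not the Astra's actual calibration values.
    public static float[] DepthToWorld(ushort[] depth, int width, int height,
                                       float fx = 570f, float fy = 570f,
                                       float cx = 320f, float cy = 240f)
    {
        var xyz = new float[width * height * 3];
        for (int v = 0; v < height; v++)
        {
            for (int u = 0; u < width; u++)
            {
                int i = v * width + u;
                float z = depth[i];                  // millimeters straight off the sensor
                xyz[i * 3 + 0] = (u - cx) * z / fx;
                xyz[i * 3 + 1] = (v - cy) * z / fy;  // note: OpenNI's world Y points up, i.e. (cy - v)
                xyz[i * 3 + 2] = z;
            }
        }
        return xyz;
    }
}
```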

The receiver code would run on the target system. I run multiple instances of that routine on different ports, so one system can get data from multiple cameras driven by things like Intel NUCs (or old laptops!) over longer distances than you can run USB.
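The receiving side mirrors the sender sketch above: one instance per camera, each bound to its own port, unpacking chunks by frame number and treating a jump in the frame counter as a dropped frame. Again, just a rough illustration rather than the attached code:

```csharp
using System;
using System.Net;
using System.Net.Sockets;

class PointCloudReceiver
{
    readonly UdpClient udp;

    public PointCloudReceiver(int port)
    {
        udp = new UdpClient(port);   // one instance per camera, each on its own port
    }

    public void ReceiveLoop(Action<ushort, ushort, short[]> onChunk)
    {
        var remote = new IPEndPoint(IPAddress.Any, 0);
        while (true)
        {
            byte[] packet = udp.Receive(ref remote);   // blocks until a datagram arrives

            // header layout matching the sender sketch
            ushort frameNumber = BitConverter.ToUInt16(packet, 0);
            ushort chunkIndex  = BitConverter.ToUInt16(packet, 2);
            ushort pointCount  = BitConverter.ToUInt16(packet, 4);

            var mm = new short[pointCount * 3];   // XYZ in integer millimeters
            for (int i = 0; i < mm.Length; i++)
                mm[i] = BitConverter.ToInt16(packet, 6 + i * 2);

            // a jump in frameNumber here means UDP dropped something; just move on
            onChunk(frameNumber, chunkIndex, mm);
        }
    }
}
```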

Edit: Hopefully when the Persee comes out it will eliminate the need for all those PCs!

Thanks, I’ll have to look at your suggestion again.
In my dream world, I run a simple script on the board that grabs the point cloud in some known format, pushes it onto an MQTT topic, and consumes it over on the HoloLens in some simple 3D viewer, hopefully even in Edge.
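Something like this, maybe (a rough sketch using the M2Mqtt client library as one option; the broker address, topic name, and frame-grabbing helper are all made up):

```csharp
using System;
using uPLibrary.Networking.M2Mqtt;
using uPLibrary.Networking.M2Mqtt.Messages;

class DepthMqttPublisher
{
    static void Main()
    {
        // broker address and topic are placeholders
        var client = new MqttClient("192.168.1.10");
        client.Connect(Guid.NewGuid().ToString());

        // pretend we grabbed one depth frame (integer millimeters) from the camera
        ushort[] depth = GrabDepthFrameSomehow();        // hypothetical helper
        byte[] payload = new byte[depth.Length * 2];
        Buffer.BlockCopy(depth, 0, payload, 0, payload.Length);

        // QoS 0: fire and forget, fine for streaming where late frames are useless
        client.Publish("astra/depth", payload,
                       MqttMsgBase.QOS_LEVEL_AT_MOST_ONCE, false);
    }

    static ushort[] GrabDepthFrameSomehow() => new ushort[640 * 480];
}
```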
Dreaming?
-t