Hi
I've been reading the forum to find a way to use the Kinect plugin.
I want to show the RGB image combined with the depth so it looks 3D, similar to a colored point cloud, using Blueprints.
I checked and it looks like the depth stream and color stream are well aligned (I guess the plugin uses the coordinate mapper to put both in the same coordinate system).
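For reference, if a depth pixel and a color pixel really do share the same coordinates, each pixel can be back-projected into a colored 3D point with the standard pinhole camera model. A minimal sketch; the intrinsics (fx, fy, cx, cy) below are made-up illustrative values, not a real Kinect calibration:

```python
# Back-project a depth pixel into a 3D point using the pinhole camera model.
# fx, fy, cx, cy are placeholder intrinsics (assumption), not Kinect values.

def depth_pixel_to_point(u, v, depth_mm, fx=365.0, fy=365.0, cx=256.0, cy=212.0):
    """Return (x, y, z) in millimeters for pixel (u, v) at depth depth_mm."""
    z = depth_mm
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return (x, y, z)

# The RGB value at the same (aligned) pixel colors the point.
# At the principal point the ray goes straight ahead, so x == y == 0:
point = depth_pixel_to_point(256, 212, 1000.0)
```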
I'm fairly new to Unreal, so I don't see how this can be done.
I found this:
It's somewhat similar to what I want: take the RGB plane/material and make it deform based on the depth values.
Some people commented it could be possible using a heightmap.
I had to invert the depth texture using a “one minus” node.
I'm guessing my problem requires removing the black (undetected) pixels around the body and somehow averaging the pixels (i.e. blurring them) to reduce the peaks.
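The idea above can be sketched as a 3x3 average that simply skips invalid (zero) neighbors, which both fills holes and softens peaks. This is a plain-Python sketch of the filtering concept, not anything from the plugin:

```python
def smooth_depth(depth, width, height):
    """3x3 box blur over a flat row-major depth list that ignores
    zero-valued (undetected) pixels, so holes get filled by their
    valid neighbors instead of dragging the average toward zero."""
    out = depth[:]
    for y in range(height):
        for x in range(width):
            acc, n = 0, 0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    nx, ny = x + dx, y + dy
                    if 0 <= nx < width and 0 <= ny < height:
                        d = depth[ny * width + nx]
                        if d > 0:  # skip undetected pixels
                            acc += d
                            n += 1
            out[y * width + x] = acc / n if n else 0
    return out
```

In UE4 the same effect could be approximated in a material by averaging neighboring texture samples, but the masking of invalid pixels is the important part.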
Actually, the depth and color are not aligned. You have to use the Kinect SDK coordinate mapper to align them.
Additionally, as far as I can see, all the existing plugins alter the depth data: they either give you an average, or only the low byte or the high byte of the depth value.
To get the highest accuracy you need both bytes.
You then need to make sure compression is disabled on the texture in UE4, and convert the sampled channel values from 0-1 back to 0-255 to recover the real depth.
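The split-and-recombine step described above can be sketched like this (the function names are mine, not from any plugin); it only round-trips correctly if texture compression is off, so each channel value survives exactly:

```python
def pack_depth(d16):
    """Split a 16-bit depth value into (high, low) 8-bit values,
    as a plugin could store them in two color channels of a texture."""
    return (d16 >> 8) & 0xFF, d16 & 0xFF

def unpack_depth(hi_01, lo_01):
    """Recombine the two channels after sampling in UE4, where each
    channel comes back as a 0-1 float: scale to 0-255, then merge."""
    hi = round(hi_01 * 255)
    lo = round(lo_01 * 255)
    return (hi << 8) | lo

hi, lo = pack_depth(4500)                       # e.g. 4500 mm
restored = unpack_depth(hi / 255.0, lo / 255.0)  # exact round trip
```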
I had to write my own plugin to be able to do that.
The lion32 plugin has the same issue: it uses an averaging formula to fit the 13-bit depth into an 8-bit color channel. If that is good enough for you, you can use it.
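To make the precision loss concrete: scaling a 13-bit range (0-8191) down to one 8-bit channel leaves a step of about 32 mm, so nearby depths collapse to the same value. A small sketch of that quantization (the scaling formula here is illustrative, not the plugin's exact one):

```python
# Quantize 13-bit depth (0-8191) to a single 8-bit channel and back.
# The step size is 8191/255, roughly 32 mm, so the round-trip error
# can be up to about half a step.

def to_8bit(d13):
    return round(d13 * 255 / 8191)

def from_8bit(c):
    return round(c * 8191 / 255)

# Depths 10 mm apart land on the same 8-bit code:
same_code = to_8bit(1000) == to_8bit(1010)

max_err = max(abs(d - from_8bit(to_8bit(d))) for d in range(8192))
```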