Neo Kinect - easy access to the Kinect v2 capabilities in your games


    #91
    Hello RVillani,

    Amazing plugin! And good to see all the support you provide.

    I need to capture a larger zone that can't be covered by a single Kinect. Is there a way to use 2 sensors on the same computer/project?



      #92
      Originally posted by Zyperworld View Post
      Hello RVillani,

      Amazing plugin! And good to see all the support you provide.

      I need to capture a larger zone that can't be covered by a single Kinect. Is there a way to use 2 sensors on the same computer/project?
      The Windows Kinect API seems to support it, but that's not implemented in the plugin. It'll only recognize one sensor.
      Freelancer Game Dev Generalist and Unreal Consultant | Portfolio
      Unreal products: Dynamic Picture Frames, Neo Kinect



        #93

        Hello Rodrigo,

        I am pretty new to both the Kinect and the Unreal platform. I have an Azure Kinect camera. Does the plugin support the Azure Kinect?

        I downloaded your example project and played it, but all the screens in the sample scene are black (the Color Mapping and Frame Access sections of the example scene). When I check the logs, I see these messages several times: 1 - KinectThreadLog: Kinect thread inited. 2 - KinectThreadLog: Kinect thread running. 3 - KinectThreadLog: Kinect thread ended. 4 - NeoKinectLog: Warning: Kinect sensor uninitialized. I am looking forward to your comments or opinions.

        Best Regards.



          #94
          Originally posted by cocuthemyth View Post
          Hello Rodrigo,

          I am pretty new to both the Kinect and the Unreal platform. I have an Azure Kinect camera. Does the plugin support the Azure Kinect?

          I downloaded your example project and played it, but all the screens in the sample scene are black (the Color Mapping and Frame Access sections of the example scene). When I check the logs, I see these messages several times: 1 - KinectThreadLog: Kinect thread inited. 2 - KinectThreadLog: Kinect thread running. 3 - KinectThreadLog: Kinect thread ended. 4 - NeoKinectLog: Warning: Kinect sensor uninitialized. I am looking forward to your comments or opinions.

          Best Regards.
          I think I answered this same question on the plugin page.
          No, the Azure Kinect is not supported. The plugin is specific to the Kinect v2, as the description says.
          Freelancer Game Dev Generalist and Unreal Consultant | Portfolio
          Unreal products: Dynamic Picture Frames, Neo Kinect



            #95
            Hello RVillani! Having fun working with your plugin. I wonder how I can separate people by body index in a Material? I need to assign a different texture to each individual body, so I need a matte for one particular body, not for all of them at once. I suppose I need to use the Colored Body Index frame and chroma key them? Also, I want to cut off by distance based on depth - can you advise something?
            Last edited by psychedelicfugue; 02-09-2021, 04:57 AM.



              #96
              Originally posted by psychedelicfugue View Post
              Hello RVillani! Having fun working with your plugin. I wonder how I can separate people by body index in a Material? I need to assign a different texture to each individual body, so I need a matte for one particular body, not for all of them at once. I suppose I need to use the Colored Body Index frame and chroma key them? Also, I want to cut off by distance based on depth - can you advise something?
              I'm happy you're having fun with it!

              Colored Body Index can help if you compare colors, but I'd prefer the standard BodyIndex frame for this. It sets the body index as a byte in the Red channel. Since bytes go from 0 to 255 and color channels from 0.0 to 1.0, you have to multiply Red by 255 to get its true value, which will be 0, 1, 2, 3, 4 or 5 depending on the index and 255 where there's no user.

              Having Red * 255, you can isolate an index by number using the if node or with a bit of math. To avoid branching in the shader, I prefer the math.

              Let's call Red * 255 SensorIndex and the desired index to isolate Index. The nodes would be:
              Code:
              Subtract(Index - SensorIndex) -> Abs -> Saturate -> OneMinus
               Abs converts negative values to positive, Saturate clamps values between 0 and 1, and OneMinus outputs [1 - input value].
              That would give you 1 if SensorIndex is the Index you're looking for and 0 otherwise. You could then use it in a Lerp node to mask your effects.
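               If it helps to see it all in one place, the same math written as plain C++ would be something like this (just a sketch; BodyIndexRed here stands for the BodyIndex texture's Red channel sample and Index is the body you want to isolate):
               Code:
               #include <algorithm>
               #include <cmath>

               // Mask for a single body, mirroring the node chain above:
               // Subtract -> Abs -> Saturate -> OneMinus
               float BodyIndexMask(float BodyIndexRed, float Index)
               {
                   const float SensorIndex = BodyIndexRed * 255.0f;       // undo the 0..1 normalization
                   const float Diff = std::abs(Index - SensorIndex);      // Subtract + Abs
                   const float Saturated = std::clamp(Diff, 0.0f, 1.0f);  // Saturate
                   return 1.0f - Saturated;                               // OneMinus: 1 = this body, 0 = others / no user
               }
               Feed the result into a Lerp's Alpha and you have your per-body mask.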

               What did you mean by cutting the distance? Replacing something with a post process?
               Well, whatever it is, you'll probably need to know the distance in a material. Once you pass the Depth texture into a material, get its Red channel and multiply it by 65535 (the max value in 16 bits), because the depth is stored as a 16-bit value. It's a similar problem to the BodyIndex, but with 2 bytes instead of 1.
               Once you've done that, you'll have the distance in millimeters from the sensor. Unreal works in cm, so I'd multiply that by 0.1 to make things easier. Then, you can do:
              Code:
              Subtract(Threshold - SensorDepth) -> Saturate
              The result will be 1 for anything before the threshold distance and 0 for anything beyond. If you want the opposite, flip the subtraction order.
              Then, same as before, use that on a Lerp to switch between what you display where the distance is or not beyond the threshold value.
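               Sketched the same way in C++ (again just illustrative; DepthRed is the Depth texture's Red channel and ThresholdCm is the cutoff distance in centimeters):
               Code:
               #include <algorithm>

               // Depth cutoff, mirroring Subtract(Threshold - SensorDepth) -> Saturate.
               float DepthCutoffMask(float DepthRed, float ThresholdCm)
               {
                   const float DepthMm = DepthRed * 65535.0f;  // 16-bit depth stored as a normalized float
                   const float DepthCm = DepthMm * 0.1f;       // Kinect reports millimeters; Unreal works in cm
                   // ~1 for pixels closer than the threshold, 0 beyond it (with a 1 cm ramp right at the edge)
                   return std::clamp(ThresholdCm - DepthCm, 0.0f, 1.0f);
               }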
              Freelancer Game Dev Generalist and Unreal Consultant | Portfolio
              Unreal products: Dynamic Picture Frames, Neo Kinect



                #97
                 Thank you for your thorough answer! What I meant by "cutoff distance" was the ability to make certain parts of the image transparent based on their distance from the sensor. Usually something like Photoshop's Levels filter applied to the image from the depth sensor can do this. I hope your answer gives me a clue. I will give it a try.
                Last edited by psychedelicfugue; 02-17-2021, 08:42 AM.



                  #98
                  Originally posted by psychedelicfugue View Post
                   Thank you for your thorough answer! What I meant by "cutoff distance" was the ability to make certain parts of the image transparent based on their distance from the sensor. Usually something like Photoshop's Levels filter applied to the image from the depth sensor can do this. I hope your answer gives me a clue. I will give it a try.
                   I see. If you use the math I described to get a 0/1 value from the distance threshold, you can use that with a Lerp to get your transparent/not transparent values.
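                   For completeness, combining the two earlier masks into an opacity value could look like this (sketch only; it reuses the hypothetical BodyIndexMask and DepthCutoffMask helpers from the previous posts):
                   Code:
                   // Visible only where the chosen body is present AND it's closer than the cutoff.
                   // Use the result as the material's Opacity, or as a Lerp Alpha.
                   float PixelOpacity(float BodyIndexRed, float Index, float DepthRed, float ThresholdCm)
                   {
                       return BodyIndexMask(BodyIndexRed, Index) * DepthCutoffMask(DepthRed, ThresholdCm);
                   }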
                  Freelancer Game Dev Generalist and Unreal Consultant | Portfolio
                  Unreal products: Dynamic Picture Frames, Neo Kinect



                    #99
                     Thank you very much! Great plugin! Super easy to set up and works very nicely!

                     [Image: augmentedlol.jpg]

                     Some GIFs as proof of concept:
                    https://drive.google.com/file/d/14NT...hvAx6hDBFebFEy
                    https://drive.google.com/file/d/1bp6...Xl2uSmpSzcKZi2
                    https://drive.google.com/file/d/1-Ml...TlDDy368A5TJRs
                    https://drive.google.com/file/d/1sdZ...VrFbR8BNwqC3D7

                     I was able to prototype this very quickly and straightforwardly in BP using your plugin.

                     Accuracy can surely be improved by measuring out the real room and placing the virtual Kinect exactly where it is in real space. I also need to figure out the scaling and placement of the Avateering actor in the level to get them in "perfect sync" (making the floor recognizable to the Kinect should help).
                     In the material I currently divide UE4's "SceneDepth" by 15000 to get it in sync with the Kinect's "Depth In Color Space", but surely there is a proper way to "normalize" this.
                     I tricked my way around the invalid areas in the depth map, but I'm still asking myself what the best way to solve them is. Maybe taking a static depth image of the room without people and using it to fill the zero areas in the live depth image?
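                     One way that fill could look, sketched in C++ (LiveRed and BackgroundRed are hypothetical names for the live depth sample and a depth sample captured once with the room empty):
                     Code:
                     // Keep the live depth where it's valid; fall back to the pre-captured
                     // background depth where the sensor reports 0 (invalid pixel).
                     float FilledDepthRed(float LiveRed, float BackgroundRed)
                     {
                         return (LiveRed > 0.0f) ? LiveRed : BackgroundRed;
                     }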



                      Originally posted by Schlabbermampf View Post
                       Thank you very much! Great plugin! Super easy to set up and works very nicely!

                       [Image: augmentedlol.jpg]

                       Some GIFs as proof of concept:
                      https://drive.google.com/file/d/14NT...hvAx6hDBFebFEy
                      https://drive.google.com/file/d/1bp6...Xl2uSmpSzcKZi2
                      https://drive.google.com/file/d/1-Ml...TlDDy368A5TJRs
                      https://drive.google.com/file/d/1sdZ...VrFbR8BNwqC3D7

                       I was able to prototype this very quickly and straightforwardly in BP using your plugin.

                       Accuracy can surely be improved by measuring out the real room and placing the virtual Kinect exactly where it is in real space. I also need to figure out the scaling and placement of the Avateering actor in the level to get them in "perfect sync" (making the floor recognizable to the Kinect should help).
                       In the material I currently divide UE4's "SceneDepth" by 15000 to get it in sync with the Kinect's "Depth In Color Space", but surely there is a proper way to "normalize" this.
                       I tricked my way around the invalid areas in the depth map, but I'm still asking myself what the best way to solve them is. Maybe taking a static depth image of the room without people and using it to fill the zero areas in the live depth image?
                       That's really cool! I appreciate you posting it here!

                       You can get the precise depth values in millimeters in the material by multiplying the Red channel by 65535 (the max value of 2 bytes). Then, to convert to Unreal's centimeters, just multiply the result by 0.1.
                       It's a quirk: no matter the texture format, the material nodes always convert values into 0..1 floats, and the depth texture is a single 16-bit channel.
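                       For the SceneDepth comparison from the quoted post, a small sketch of that conversion (assuming UE4's SceneDepth is already in centimeters; this replaces the arbitrary divide-by-15000):
                       Code:
                       // Bring a Kinect depth sample into the same units as SceneDepth (cm).
                       float KinectDepthToCm(float DepthRed)
                       {
                           return DepthRed * 65535.0f * 0.1f; // normalized 16-bit -> millimeters -> centimeters
                       }
                       // Occlusion test against the scene: KinectDepthToCm(DepthRed) < SceneDepthCm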

                       For the AR avateering, call SetUseJointsColorSpaceTransforms with true to be able to use the color-space coordinates for joint locations and rotations. That'll position the pelvis exactly where yours is. The other joints follow suit, but for the avatar to really be over you, you'll also need to scale the bones to the user's joint lengths. There are functions to get joint lengths from Kinect; you just need to also compute the skeleton's joint distances to be able to compare and scale correctly.

                       I did that with an alpha in the Anim BP controlling the strength of all its transform nodes. I would set that alpha to 0 from the Avateering BP on BeginPlay, read the joint distances from the skeleton, then set the alpha back to 1. Saving those distances in a float array, using JointToIndex for the indexes, I would compare them to Kinect's detected joint lengths and scale the bones on X accordingly to match the user's joint lengths. I also lerped the final computed scale to smooth it a bit, since Kinect can be very jittery.
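                       The bone-scaling step, reduced to its math (a rough sketch only; the names are made up, and the actual lengths come from the plugin's joint-length functions and from measuring the skeletal mesh as described above):
                       Code:
                       // Per-bone scale so the avatar's bone length matches the tracked user's,
                       // lerped toward the target because Kinect joint data can be jittery.
                       float SmoothedBoneScaleX(float UserJointLength, float MeshJointLength,
                                                float PreviousScale, float SmoothAlpha)
                       {
                           const float TargetScale = UserJointLength / MeshJointLength;
                           return PreviousScale + (TargetScale - PreviousScale) * SmoothAlpha; // simple lerp
                       }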

                       To help with precision in knowing how the Kinect is positioned in the room, there's a Get Kinect Ground Plane function. It gives you the floor normal and Kinect's detected height above it. I used that to rotate the virtual sensor's pitch angle, getting it from the sine (in degrees) of the ground plane's Z axis. The virtual sensor was an empty actor I used as the root for anything else that followed coordinates from the real sensor, so gravity would act correctly for physics. I'd have a button to press that called the function and cached its results in a variable, though: the sensor can be tricked when there are things moving in the environment, so don't trust those values to always be correct.
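                       A sketch of that pitch calculation (assuming, as described, that the ground plane's Z component is the sine of the sensor's tilt; the sign may need flipping depending on your setup):
                       Code:
                       #include <cmath>

                       // Estimate the sensor's pitch in degrees from the detected ground plane normal.
                       float SensorPitchDegrees(float GroundPlaneZ)
                       {
                           return std::asin(GroundPlaneZ) * 180.0f / 3.14159265f;
                       }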

                      I hope you have more fun

                      And let me know here if you have further questions. I think I've done most of the hardest things one can do with this sensor. Just haven't used it to create 3D meshes.
                      Last edited by RVillani; 02-22-2021, 09:21 PM. Reason: Improved readability for function names
                      Freelancer Game Dev Generalist and Unreal Consultant | Portfolio
                      Unreal products: Dynamic Picture Frames, Neo Kinect



                        Originally posted by RVillani View Post

                         You can get the precise depth values in millimeters in the material by multiplying the Red channel by 65535 (the max value of 2 bytes). Then, (...)
                        This is so great, thank you for those intense tips! I will work through them and will surely have a lot of fun ;D

                         BTW: I stumbled across Kinect Fusion on GitHub. One of the base features of this extension is to pin down the Kinect's exact location and orientation. With this integrated, one could make 3D scans within UE4, for example. Maybe; let's see. If this is the way to go, I will let you know (there was a neat, free app from Autodesk for making great 3D scans with your smartphone, but it doesn't exist anymore...)

                         I wish more of the new HoloLens stuff from Epic and Microsoft included the Kinect, since it is basically all you need for prototyping the same concepts.

                        One more extra gif:
                        https://drive.google.com/file/d/15mb...JMq_p905fKSGPr



                          Originally posted by Schlabbermampf View Post
                          Nice air bending!

                          Originally posted by Schlabbermampf View Post
                           One of the base features of this extension is to pin down the Kinect's exact location and orientation
                           I wonder how. Kinect only has an accelerometer, and the only data you can read from it is its inclination. Unless the extension is using some kind of computer vision to track movement from what it sees changing in the camera.
                          Freelancer Game Dev Generalist and Unreal Consultant | Portfolio
                          Unreal products: Dynamic Picture Frames, Neo Kinect

