How to place single GPU particles at specified locations?


    #46
    Thanks for the update on NVIDIA, their work looks awesome, hope to get my hands on a prototype soon. Could even make me buy an M6000 if needed :-P.

    My point about the 4k textures is not about visual quality, it's about the number of points encoded with the algorithm. One 4k texture can encode 16M+ points at a time, and that number corresponds roughly to the number of points generated by a high-res FARO Focus 3D scan. That means a single viewpoint of 16M+ points could be rendered with little to no strain on the 1080. If that viewpoint covers enough space, we could then load the next viewpoint only when the level streaming decides so. I haven't had time to test all of this yet, I've got paid jobs rolling in, but I definitely think it's worth a shot.
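    To make the texel math concrete, here is a minimal numpy sketch of the packing idea; the row-major layout and all names are assumptions for illustration, not from any particular implementation:

```python
import numpy as np

# A 4096x4096 texture holds 4096 * 4096 = 16,777,216 texels,
# i.e. one point per texel gives the "16M+ points" figure above.
SIZE = 4096
CAPACITY = SIZE * SIZE

def pack_points(points):
    """Pack an (N, 3) float array of XYZ positions into a SIZE x SIZE
    RGB float texture; unused texels stay zero."""
    n = len(points)
    assert n <= CAPACITY, "cloud does not fit in a single 4k texture"
    tex = np.zeros((SIZE, SIZE, 3), dtype=np.float32)
    # Row-major layout: point i lands in texel (row i // SIZE, col i % SIZE).
    tex.reshape(-1, 3)[:n] = points
    return tex

# Roughly one high-res FARO Focus scan's worth of points fits in one texture.
cloud = np.random.rand(16_000_000, 3).astype(np.float32)
texture = pack_points(cloud)
```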

    For Unreal.js, I don't know how complicated it would be; basic JS examples work without a problem. I would just need to understand the potree code a little better to extract the useful functions and adapt them to UE rendering.

    What kind of experience do you have with potree?

    Any chance you could send me a copy of your PotreeVR project so I can try and get it working? We could start a repo on this.

    Great about the vid, keep me posted.



      #47
      Originally posted by Tourblion View Post
      Thanks for the update on NVIDIA, their work looks awesome, hope to get my hands on a prototype soon. [...]
      The NVIDIA prototype works great as a tech demo; I have to do some more in-depth testing though.

      I am not really familiar with level streaming. If you have a cloud big enough in terms of extent (not point density), would it trigger level streaming even for that kind of data? From what I understand it would be like an enormous mesh spanning the entire level.

      I use potree a lot and I am in touch with the developer; I will send you my email through PM to discuss this further.

      I made the videos for the point cloud processing part this weekend, I still have to stitch them together and upload them to YouTube!



        #48


        Ok, here is the video explaining how to process point clouds in order to use them in the project.
        Feel free to ask any questions.



          #49
          Thanks for that guide!

          I was able to repurpose an older procedural mesh BP to build a static mesh ready to point-sample the texture data. It looks to be working OK, but I am wondering where the odd diagonal lines in my test scene come from. This is from part of Portland.

          [Attached image: PointCloud_01.JPG]

          [Attached image: PointCloud_02.JPG]

          I'm wondering if it could just be the source having subtle elevation changes, viewed through an additive shader. I haven't used two textures for the extra precision yet, so this is just one 16-bit texture. Or could it be improper sampling? It should be sampling every texel, but obviously I could have bugs there.
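          For reference, the per-point UVs only need to land on texel centers for every texel to be sampled exactly once; a quick numpy sketch of that index-to-UV mapping (a 4096 texture and row-major ordering are assumed):

```python
import numpy as np

SIZE = 4096  # one point per texel of a 4k position texture

def texel_uvs(num_points, size=SIZE):
    """UV coordinate for point i, centered on texel (i % size, i // size).
    Centering on the texel (the +0.5) means a point-sampled lookup hits
    each texel exactly once, with no filtering across neighbours."""
    i = np.arange(num_points)
    u = (i % size + 0.5) / size
    v = (i // size + 0.5) / size
    return np.column_stack([u, v]).astype(np.float32)

uvs = texel_uvs(16_000_000)  # feed these to the procedural mesh's UV channel
```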

          FWIW, I found that to convert from 16-bit PNG to a format UE4 could take, I had to use Photoshop to convert to EXR. But if I just converted the 16-bit image to 32-bit and saved as EXR, it seemed to apply gamma. If I instead made a new 32-bit document and pasted the 16-bit file into it, that seemed to avoid the gamma issue. A gamma shift on a texture like this bunches most of the points up toward one side.
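          For anyone who wants to skip the Photoshop route, a small Python sketch of the same linear conversion; it assumes an OpenCV build with OpenEXR support, and the filenames are placeholders:

```python
import os
os.environ["OPENCV_IO_ENABLE_OPENEXR"] = "1"  # some OpenCV builds gate EXR I/O

import cv2
import numpy as np

# IMREAD_UNCHANGED keeps the raw uint16 values -- no 8-bit squash,
# no colour management.
img16 = cv2.imread("positions_16bit.png", cv2.IMREAD_UNCHANGED)
assert img16.dtype == np.uint16

# A pure linear rescale to [0, 1]; no gamma curve is ever applied
# (the step Photoshop's mode conversion was sneaking in).
img32 = img16.astype(np.float32) / 65535.0

# EXR stores linear float data, which is what the decode material expects.
cv2.imwrite("positions_linear.exr", img32)
```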

          I'd like to get this plugin installed in MATLAB so it can write EXR directly, but I am not sure exactly how to install it, or whether I need a separate source version besides the compiled version I have. http://www.mit.edu/~kimo/software/matlabexr/


          *EDIT*
          After debugging the point cloud in CloudCompare (by setting colors to None and reducing point size to 1) I can see that the stair-stepping is indeed in the data. I guess aggressive subsampling causes this? Maybe it's just the effect of part of a layer barely overlapping at the low density?

          [Attached image: points.JPG]
          Last edited by RyanB; 11-01-2016, 04:00 PM.
          Ryan Brucks
          Principal Technical Artist, Epic Games



            #50
            Where did you get your point cloud?
            It is not unusual to get those straight lines all over the place; most of the time it's because you kept the geographic coordinates as references and unleashed them into your average Euclidean coordinate system.

            I doubt it's a sampling bug. There is a good built-in upsampling solution in CloudCompare if you want to give it a try; it has always worked flawlessly for me. It's hidden in "Plugins" -> "PCL wrapper" -> "Smooth using MLS". Did you already use that for the voxel dilation?
            I could also take a look at your cloud if you want.
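            To see why full geographic coordinates produce those artifacts, a quick numpy check of float32 spacing at UTM-scale magnitudes (a ~500 km easting is assumed for illustration):

```python
import numpy as np

# Near UTM-scale magnitudes, float32's spacing between representable
# values is already ~3 cm, so centimeter detail stair-steps away.
print(np.spacing(np.float32(500_000.0)))   # 0.03125

# Two points 1 cm apart collapse onto the same float32 value...
a = np.float32(500_000.00)
b = np.float32(500_000.01)
print(a == b)                              # True: the offset is lost

# ...but survive once the cloud is recentred near the origin,
# which is exactly what CloudCompare's global-shift suggestion does.
print(np.float32(0.00) == np.float32(0.01))  # False
```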

            As for MATLAB, you have to copy the files into a specific folder and then compile them with the command lines from the readme.
            I also see that it says "Does not support EXR images with uint16 data or float data".

            Do you think we need 32-bit depth? From what I have seen, we will have humongous performance concerns well before the point count exceeds 16-bit capacity.
            I really need to look into how we can make NVIDIA's GPU optimization from the tech demo work inside UE4, if possible.
            As you may have gathered, I work in acquisition and processing of such data (all things 3D for cultural heritage, from photogrammetry to laser scanning), but my dev skills are very limited.



              #51
              I got the lidar from Open Topography.

              I am not necessarily saying we need 32-bit depth, but 32-bit EXR currently seems to be the only way to get an HDR image into UE4 without it being read as a cubemap. For larger scenes there is still quite a bit of quantization loss, but as you point out it would take tons more points for it to really matter. A smaller scene gets better apparent precision.
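              Rough numbers behind that quantization point, purely for illustration:

```python
# Smallest representable offset when one axis of the scene is normalized
# to an integer channel of the given bit depth (illustrative numbers).
def quantization_step(scene_extent_m, bits):
    return scene_extent_m / (2 ** bits - 1)

print(quantization_step(1000.0, 16))  # ~1.5 cm steps across a 1 km scene
print(quantization_step(50.0, 16))    # ~0.8 mm across a 50 m scan site
```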
              Ryan Brucks
              Principal Technical Artist, Epic Games



                #52
                Originally posted by RyanB View Post
                I got the lidar from Open Topography.
                Ok, you should try to shorten the geographic coordinates from the original dataset (CloudCompare's suggestion usually does the trick). I am 95% confident you will get rid of the lines, and upsampling afterwards should not be a problem.

                I will also take a closer look at the MATLAB toolbox soon.



                  #53
                  Are you talking about pressing "Yes" on the dialogue about translating out the offset? If so, I did that and then entered the same coordinates in the MATLAB export process.
                  Ryan Brucks
                  Principal Technical Artist, Epic Games



                    #54
                    Yes, CloudCompare suggests values to shrink the coordinates, but it's only a temporary fix: if you save or export the data afterwards, it keeps the original coordinate system.
                    If you really did fix this in MATLAB, it's indeed very strange.

                    Could you point me to the right dataset from Open Topography? I would like to check it.
                    It would not be the first time that a massive public LIDAR release is broken. They released nationwide LIDAR data for Slovenia where each point was duplicated with a slight offset, for example...



                      #55
                      I got it from here (hopefully this link persists)

                      http://opentopo.sdsc.edu/lidarOutput...c1478015820331
                      Ryan Brucks
                      Principal Technical Artist, Epic Games



                        #56
                        Ok, those are just the overlapping bands from the LIDAR acquisition (each flight line requires some overlap with the previous one).
                        I was so convinced they would provide clean data that I did not think of the most obvious answer...

                        You can "clean it" yourself by running "Tools -> Other -> Remove duplicate points" in CloudCompare. A value of 0.8 ~ 0.9 should do it.



                          #57
                          Hey everyone,

                          I've been watching this thread for a while, waiting until I have a spare moment to try this out myself. There's some really great work here...

                          One thing I've been meaning to ask: with this method, it seems like a lot of effort goes into preparing point cloud data encoded as images. It also seems like there are some limitations to this approach, and it's not exactly scalable. Could someone explain why we can't just load the data from an ASCII or binary file on disk?



                            #58
                            Originally posted by as3ef2th1 View Post
                            Ok, those are just the overlapping bands from the LIDAR acquisition (each flight line requires some overlap with the previous one). [...]
                            Excellent, worked like a charm, thanks! Now I am just curious where I can find some really high-quality source data. I like how OpenTopography lets you sort by max resolution. There seem to be a few fault lines and crater areas with really high cloud densities, but I am also looking for more urban areas with better color information to test on. I have a way to render shadows on the point cloud pretty easily that I want to try with buildings. If nothing else I can test with what I have now.
                            Ryan Brucks
                            Principal Technical Artist, Epic Games



                              #59
                              I can provide two massive point clouds for testing purposes if you want:

                              - the original modern Besancon project (see previous replies), which is fully colored and has around 170 000 000 points for a small urban area.

                              [Attached image: capture.jpg]

                              - a historic reconstruction of the Besancon area I made from historic aerial photography: black-and-white "textures" with baked ambient occlusion and something like 350 000 000 points.

                              [Attached image: besancon1956_1 (1).jpg]

                              Tell me which one you want and I will upload it somewhere.



                                #60
                                That Besancon project looks really cool. I haven't seen many datasets with full color like that, and those specialty "project sites" always prove difficult for me to navigate.

                                Did another test of "El Mayor-Cucapah Earthquake Rupture Terrestrial Laser Scan-Site 2" which is here:
                                http://opentopo.sdsc.edu/lidarDatase...042012.32611.2

                                [Attached image: test.JPG]

                                I used the classification map combined with AO to give it some color. It's kind of neat that, because this is a ground-level capture, the lidar occlusion reads like shadowing. That makes the data not super useful for testing lighting of a lidar scene, but oh well. It is neat that you can make out the shape of some of the bushes quite well. I am also curious about ways to use the foliage data to help seed actual foliage meshes in UE4. I think there could be some great methods there with the right sampling approach and a way to read the data and spawn things.
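                                As a starting point for that idea, a rough sketch of the sampling, assuming laspy, the standard ASPRS vegetation class codes (3/4/5), and placeholder filenames:

```python
import numpy as np
import laspy  # pip install laspy

las = laspy.read("urban_tile.las")  # placeholder filename

# ASPRS classes 3/4/5 are low/medium/high vegetation in most LIDAR products.
veg = np.isin(np.asarray(las.classification), [3, 4, 5])
points = np.column_stack([las.x, las.y, las.z])[veg]

# Thin to a plantable density: keep one candidate per 2 m x 2 m cell.
cells = np.floor(points[:, :2] / 2.0).astype(np.int64)
_, first = np.unique(cells, axis=0, return_index=True)
spawn_points = points[first]

# These could be written out (CSV, or packed into a texture like the
# positions) and used to place foliage instances in UE4.
np.savetxt("foliage_seeds.csv", spawn_points, delimiter=",")
```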
                                Ryan Brucks
                                Principal Technical Artist, Epic Games

