How to place single GPU particles at specified locations?


    #31
    Sorry guys, been a bit busy with work. Here's a project with the Besancon cloud, so you can see how it's done.

    The point cloud actor creates four static meshes, each with 1048576 polygons. It applies X offsets in increments of 1048576 to the dynamic material instance on each mesh after the first. It uses two 2K images for the position lookup (one for the high bits and one for the low bits), though it doesn't really need the second one at this point.
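
    To spell out the arithmetic (a quick MATLAB sanity check, the way I read the setup; the variable names are just for illustration):

    [CODE]
    % Why four meshes and X offsets in steps of 1048576:
    polysPerMesh = 1048576;              % polygons in each static mesh
    texels = 2048 * 2048;                % pixels in one 2K lookup image (4194304)
    meshesNeeded = texels / polysPerMesh % = 4, so with offsets of k*1048576
                                         % (k = 0..3) every polygon presumably
                                         % indexes a unique pixel of the lookup
    [/CODE]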

    I tried images as large as 4K for clouds with 16.7 million points, but I don't have cards beefy enough for that to perform well. Besancon only had some 3.8 million points in total, though. I just downloaded AHN3, a 20GB point cloud of the Netherlands; I'm going to work on streaming it in and writing different LODs to pieces of 2K render targets. That will require the extra precision.

    I noticed some rather ugly artifacts (mainly in the town, rather than the mountain) on my GTX 580 (see image) that I didn't see on my R9 280X (see video). Not sure if that's due to a difference between NVIDIA and AMD, or because my 580 is an old card that's dying. I get 75-90 FPS on the 280X but only 25-30 on the 580. Let me know what you see and what cards you're using.

    The color looks a bit cartoony when the RGB is compressed as HDR, and a bit blown out with other compression settings, but you can fiddle with the texture adjustments to get a look you like.

    [Attached image: artifacts.jpg]



    Edit: Oh yeah, the very large invisible cube in the point cloud actor is there to replace the bounds of the long polygon chains (they are set to Use Attach Parent Bounds, under rendering details). If you use this technique in another project, make sure to do that, or the point cloud won't stay visible when the chains' original bounds are out of view.
    Last edited by xnihil0zer0; 07-14-2016, 04:32 PM.



      #32
      [MENTION=27525]xnihil0zer0[/MENTION] that's great work, I like it.
      [MENTION=34027]as3ef2th1[/MENTION] I have 3,813,697 points in my LAS file.

      You need to decimate your point cloud so that the point count fits a square texture:
      - 16,384 points for a 128x128 texture (this is from my code snippet)
      - 262,144 points for a 512x512 texture, and so on (see the sketch below)...
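
      A minimal MATLAB sketch of that sizing rule (assuming your points sit in an N-by-3 matrix called pts; the names are only for illustration):

      [CODE]
      N = size(pts, 1);                 % number of points
      side = 2^ceil(log2(sqrt(N)));     % smallest power-of-two square side
      padRows = side*side - N;          % zero rows needed to fill the texture
      % e.g. N = 16384  -> side = 128
      %      N = 262144 -> side = 512
      [/CODE]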
      Now I understand better.

      I will work on this.
      Thanks for the help; in French next time!



        #33
        Thank you.

        I will have to toy around a bit before I really understand what you did and how you did it.

        I have only one stupid question right now: what is the difference between the high and low quality pictures when it comes to the way you encode the data? Does the low one have shortened decimals?

        Anyway, I grabbed a point cloud of one of our excavations to try your new project, and it works well.

        You can look at it as a 360 VR panorama, or in the picture below:

        [Image attachment]

        PS: As for performance, I have a GTX 680 and my FPS is pretty low. I will benchmark it more thoroughly later.
        Last edited by as3ef2th1; 07-15-2016, 06:22 AM. Reason: benchmark



          #34
          I took the range of the largest axis, multiplied all values by 65536/range, and got the floor and frac of all values. The frac became the low image; the floor/65536 became the high image.
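
          In MATLAB terms that looks roughly like this (a sketch; it assumes pts is an N-by-3 matrix of coordinates already shifted to start at 0):

          [CODE]
          range  = max(max(pts) - min(pts));  % extent of the largest axis
          scaled = pts * (65536 / range);     % map onto a 16-bit range
          lowImg  = scaled - floor(scaled);   % frac -> low-bits image
          highImg = floor(scaled) / 65536;    % floor, rescaled -> high-bits image
          [/CODE]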



            #35
            Looks like you got artifacts on your polys too. Must be an NVIDIA thing.



              #36
              [MENTION=34027]as3ef2th1[/MENTION] I decimated to 149,444 points, saved as .txt, renamed the .txt to .dat, and imported it into MATLAB.

              But I don't know how to convert it to a bitmap in MATLAB.

              Can you help me?



                #37
                [MENTION=34027]as3ef2th1[/MENTION]

                When you're benchmarking frame rates, also try changing the last index of the for loop in the point cloud actor's construction script to 2, 1 and 0, just so we can come up with a target maximum number of polys for an LOD system.



                  #38
                  [MENTION=27525]xnihil0zer0[/MENTION]

                  What software do you use to convert the point cloud to CSV?
                  I use CloudCompare, and maybe it makes a bad CSV; that's why I get an error in MATLAB. Online converters don't want my CSV either.



                    #39
                    [MENTION=91416]ilxs[/MENTION] I use LAStools' las2txt.exe with a comma separator. I output xyz and rgb as separate files, then rename the files to CSV.

                    Looks like MATLAB is failing because you have the wrong number of rows and columns when you try to reshape. If you have 3813697 points, your CSV should be 3813697 rows by 3 columns. Then you need to pad it with (2048*2048)-3813697 = 380607 rows of 0,0,0, and you can reshape it to 2048,2048,3.

                    You also need to scale all your data so that it is between 0 and 1.
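
                    Put together, it's only a few lines of MATLAB (a sketch; the file names are placeholders, and csvread expects a purely numeric file):

                    [CODE]
                    xyz = csvread('points.csv');               % 3813697 rows, 3 columns
                    xyz = bsxfun(@minus, xyz, min(xyz));       % shift so everything starts at 0
                    xyz = xyz / max(max(xyz));                 % scale the data into [0,1]
                    pad = zeros(2048*2048 - size(xyz,1), 3);   % 380607 rows of 0,0,0
                    img = reshape([xyz; pad], 2048, 2048, 3);  % one point per pixel, xyz -> rgb
                    imwrite(img, 'positions.png');             % written as 8 bits per channel
                    [/CODE]

                    For the full 16-bit precision, split the scaled values into the high/low pair from #34 before the imwrite and write two images.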
                    Last edited by xnihil0zer0; 07-18-2016, 01:56 PM.



                      #40
                      [MENTION=27525]xnihil0zer0[/MENTION]

                      Now it works and I understand the process.
                      [MENTION=34027]as3ef2th1[/MENTION] helped me today. Sorry, I forgot to mention it.



                        #41
                        Beware of LAStools. It's a great piece of software, but if you use the "unlicensed" version you should know that it adds random errors to your data.

                        From the license file you can read :

                        Note that the output of the unlicensed version can be slightly distorted after certain point limits are exceeded. Control output in the console (aka "the black window") informs the user whenever this happens.

                        The point limit is fairly low (I don't remember the exact value), which means that if you use it for scientific purposes you'll want to buy a license. Obviously for toying around this is a marginal problem, but when I train people in LIDAR data processing, I feel this is something they have to know, since most people are not aware of it.



                          #42
                          Do you think this procedure could be executed at runtime? Meaning, I don't have external point cloud data; instead, I generate a list of points in 3D space during gameplay.

                          With this list of ~100,000 points, I would like to create a LIDAR-like point cloud like the examples shown above in this thread.

                          Would this be possible?



                            #43
                            [MENTION=69501]jumi1174[/MENTION] Sure. For example:

                            First, in any actor blueprint, you could collect lots of line trace hit results into a vector array variable each tick. You can't get away with tons, since CPUs are kinda slow, but you could probably add a couple hundred per tick without significantly degrading performance.

                            Create a CanvasRenderTarget2D blueprint with a vector array variable to accept points from your main blueprint, and a vector2D variable to store the coordinate of the next empty pixel on the render target.

                            In the canvas blueprint, on EventReceiveUpdate, perform a ForEachLoop over its array. Use DrawLine (the version whose target is Canvas), make both position A and B the next empty pixel coordinate, and use a thickness of 1. Convert the current vector to a linear color for the line color, then update the next empty pixel coordinate.

                            Create your point cloud material, with a TextureSampleParameter2D for the lookup table.

                            In your actor blueprint, on EventBeginPlay, CreateCanvasRenderTarget2D of the class you just created, CreateDynamicMaterialInstance of the point cloud material, and SetTextureParameterValue on the DynamicMaterialInstance using the CanvasRenderTarget2D.

                            On tick, collect your trace results into an array. Use CastToCanvasRenderTarget2D to set the vector array in the created canvas with the vector array from your actor blueprint. Then UpdateResource targeting the canvas, and it will draw all the pixels to the render target, which will offset the points of the cloud. Finally, clear the actor's vector array so it's fresh for the next tick.

                            There are ways to add points faster with SceneCapture2Ds, but you won't be able to discriminate the points as easily (like if you only want the hit results from a specific object).



                              #44
                              Hey everyone,

                              Kinda refreshing the topic here, as I think this could have fantastic applications.
                              I'm looking into your different projects right now, and I'll see if I can get anything new.
                              I've got some GTX 1080s at work that I can do some heavy-load work with. The idea would be to get at least 4096x4096 textures working, effectively rendering 16M points a frame.
                              This could allow for a level streaming setup where each level would contain one of these particle emitters.
                              That would resemble the octree approach of loading only accurate data, but using UE4 concepts like levels.
                              Based on my FARO experience, if I can get 16M chunks of point cloud data streaming one at a time, there would be enough points to represent a smooth-ish enclosed area.
                              And based on Unreal.js, why not directly integrate the Potree algorithm?
                              Anyway, lots to be tested; I'll try and check in every now and then.
                              C ya



                                #45
                                Hey Tourblion,

                                First, I would like to point out that Markus (from Potree) is working at NVIDIA now, and they are doing great stuff with point clouds and VR (see https://www.youtube.com/watch?v=LBONrJSvOmU and http://on-demand.gputechconf.com/gtc...e-lapse-vr.pdf).

                                Now, getting back to Unreal Engine.
                                - Yes, it is possible to make 4096 textures, but I'm still wondering if it will be relevant for VR, even with a 1080 behind it. The performance is already not that great, even with 2048 textures. My 1080 is coming next week, so if you try it and I try it myself, we can show some benchmarks very soon.

                                - I am not sure why you say 16,000,000 points would allow level streaming; can you expand on this a bit?

                                - "And based on Unreal.js, why not directly integrate the Potree algorithm?"
                                OK, this is very interesting; is it difficult to do?

                                From my experience with Potree:
                                1. The code is not THAT clean, so beware; still great software though.
                                2. I had a working example of Potree VR, but it was not really happy with the stereoscopy. Meaning that, for some reason, you were not able to load big clouds properly in VR; they would become two very sparse clouds. I was never able to fix it.

                                - I will do the YouTube tutorial next week. I am late on this, but I had to finish my thesis...

