How to place single GPU particles at specified locations?

Beware of LAStools. That’s a great piece of software, but if you use the “unlicensed” version you should know that the software adds random errors to your data.

From the license file you can read:

The point limit is fairly low (I don’t remember the exact value), which means that if you use it for scientific purposes you’ll want to buy a license.
Obviously for “toying around” this is a marginal problem, but when I train people in LIDAR data processing, I feel this is something they have to know, since most people are not aware of it.

Do you think this procedure could be executed at runtime? Meaning, I don’t have external point cloud data, but I generate a list of points in 3D space during gameplay.

With this list of points (~100,000 points), I would like to create a LIDAR-like point cloud as shown in the above examples in this thread.

Would this be possible?

@anonymous_user_3148a182. Sure. For example:

First, in any actor blueprint, you could collect lots of line trace hit results into a vector array variable each tick. You can’t get away with tons, because CPUs are kinda slow, but you could probably add a couple hundred per tick without significantly degrading performance.

Create a CanvasRenderTarget2D blueprint with a vector array variable to accept points from your main blueprint, and a vector2D variable to store the coordinate of the next empty pixel on the render target.

In the canvas blueprint, on EventReceiveUpdate perform a ForEachLoop over its array, use DrawLine (the version where the target is Canvas), set both positions A and B to the next empty pixel coordinate, and use a thickness of 1. Convert the current vector to a linear color for the line color. Then update the next empty pixel coordinate.
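
For reference, here is a rough, untested C++ sketch of that canvas blueprint, in case code is easier to follow; the class and variable names (UPointCloudLookupRT, PendingPoints, NextFreeTexel) are placeholders I made up:

#include "Engine/CanvasRenderTarget2D.h"
#include "PointCloudLookupRT.generated.h"

UCLASS()
class UPointCloudLookupRT : public UCanvasRenderTarget2D
{
    GENERATED_BODY()
public:
    // Points handed over by the gameplay actor since the last update.
    UPROPERTY(BlueprintReadWrite, Category = "PointCloud")
    TArray<FVector> PendingPoints;

    // Coordinate of the next empty pixel on the render target.
    UPROPERTY(BlueprintReadWrite, Category = "PointCloud")
    FVector2D NextFreeTexel = FVector2D::ZeroVector;

    UPointCloudLookupRT()
    {
        // C++ equivalent of the blueprint's EventReceiveUpdate.
        OnCanvasRenderTargetUpdate.AddDynamic(this, &UPointCloudLookupRT::DrawPendingPoints);
    }

    UFUNCTION()
    void DrawPendingPoints(UCanvas* Canvas, int32 Width, int32 Height)
    {
        for (const FVector& P : PendingPoints)
        {
            // Encode the world position as a color; A == B with thickness 1
            // writes a single pixel, exactly as described above.
            Canvas->K2_DrawLine(NextFreeTexel, NextFreeTexel, 1.f, FLinearColor(P.X, P.Y, P.Z));

            // Advance to the next empty pixel, wrapping at the end of each row.
            NextFreeTexel.X += 1.f;
            if (NextFreeTexel.X >= Width)
            {
                NextFreeTexel.X = 0.f;
                NextFreeTexel.Y += 1.f;
            }
        }
        PendingPoints.Reset();
    }
};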

Create your point cloud material, and give it a TextureSampleParameter2D for the lookup table.

In your actor blueprint, on EventBeginPlay, CreateCanvasRenderTarget2D of the class you just created. CreateDynamicMaterialInstance of the point cloud material. SetTextureParameterValue on the DynamicMaterialInstance using the CanvasRenderTarget2D.

On tick, collect your trace results into an array. Cast to your CanvasRenderTarget2D class to set its vector array with the vector array from your actor blueprint. UpdateResource targeting the canvas, and it will draw all the pixels to the render target, which will offset the points of the cloud. Finally, clear the actor’s vector array so it’s fresh for the next tick.
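
And a matching actor-side sketch of the BeginPlay/Tick steps above, again rough and untested; APointCloudScanner, PointCloudMaterial, TracesPerTick, TraceRange, LookupRT, CloudMID and the "PointLookup" parameter name are all made up, and the parameter name has to match whatever you used in your material:

// Assumes these members are declared in the actor's header:
//   UPROPERTY(EditAnywhere) UMaterialInterface* PointCloudMaterial;
//   UPROPERTY(EditAnywhere) int32 TracesPerTick = 200;
//   UPROPERTY(EditAnywhere) float TraceRange = 10000.f;
//   UPROPERTY() UPointCloudLookupRT* LookupRT;
//   UPROPERTY() UMaterialInstanceDynamic* CloudMID;
void APointCloudScanner::BeginPlay()
{
    Super::BeginPlay();

    // Equivalent of CreateCanvasRenderTarget2D + CreateDynamicMaterialInstance.
    LookupRT = Cast<UPointCloudLookupRT>(
        UCanvasRenderTarget2D::CreateCanvasRenderTarget2D(
            this, UPointCloudLookupRT::StaticClass(), 2048, 2048));

    CloudMID = UMaterialInstanceDynamic::Create(PointCloudMaterial, this);
    CloudMID->SetTextureParameterValue(TEXT("PointLookup"), LookupRT);
}

void APointCloudScanner::Tick(float DeltaSeconds)
{
    Super::Tick(DeltaSeconds);

    // Collect a modest batch of trace hits each tick (a few hundred at most).
    for (int32 i = 0; i < TracesPerTick; ++i)
    {
        FHitResult Hit;
        const FVector Start = GetActorLocation();
        const FVector End = Start + FMath::VRand() * TraceRange;
        if (GetWorld()->LineTraceSingleByChannel(Hit, Start, End, ECC_Visibility))
        {
            LookupRT->PendingPoints.Add(Hit.ImpactPoint);
        }
    }

    // UpdateResource fires the canvas update, drawing the new texels and
    // thereby offsetting more particles in the cloud.
    LookupRT->UpdateResource();
}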

There are ways to add points faster with SceneCapture2Ds, but you won’t be able to discriminate the points as easily (e.g. if you only want the hit results from a specific object).

Hey everyone,

Kinda refreshing the topic here as I think this could have fantastic applications.
I’m looking into your different projects right now, and I’ll see if I can get anything new.
Got some GTX 1080s at work that I can do some heavy-load work on. The idea would be to get at least 4096x4096 textures working and effectively rendering 16M points a frame.
This could possibly allow for a level streaming setup where each level would contain one of these particle emitters.
This would resemble the octree approach of loading only the relevant data, but using UE4 concepts like levels.
Based on my FARO experience, if I can get 16M chunks of point cloud data streaming one at a time, there would be enough points to represent a smooth-ish enclosed area.
And based on Unreal.js, why not directly integrate the potree algorithm?
Anyway, lots to be tested. I’ll try and check in every now and then.
C ya

Hey Tourblion,

First, I would like to point out that Markus (from Potree) is working at nvidia now. And they are doing great stuff with PointCloud and VR (see https://www.youtube.com/watch?v=LBONrJSvOmU and http://on-demand.gputechconf.com/gtc/2016/presentation/s6512-innfarn-yoo-massive-time-lapse-vr.pdf).

Now, getting back at Unreal Engine.

  • Yes, it is possible to make 4096 textures, but I’m still wondering if it will be relevant for VR, even with a 1080 behind it. Performance is already not that great, even with 2048 textures. My 1080 is coming next week, so if you try it and I try it myself, we can show some benchmarks very soon.

  • I am not sure why you say 16 000 000 points would allow level streaming, can you expand a bit on this?

  • And based on Unreal.js, why not directly integrate the potree algorithm?
    Ok, this is very interesting, is it difficult to do?

From my experience with potree:

  1. The code is not THAT clean, so beware; still a great piece of software though.
  2. I had a working example of Potree VR, but it was not really happy with the stereoscopy. Meaning that, for some reason, you could not properly load big clouds in VR; they would become two very sparse clouds. I was never able to fix it.
  • I will do the YouTube tutorial next week; I am late on this, but I had to finish my thesis…

Thanks for the update on NVIDIA, their work looks awesome, hope to get my hands on a prototype soon. Could even make me buy an M6000 if needed :-P.

My point about the 4k textures is not about visual quality, it’s about the number of points encoded with the algorithm. One 4k texture at a time can render 16M+ points, and that number corresponds approximately to the number of points generated by a high-res FARO Focus 3D scan. This means that a single viewpoint composed of 16M+ points could be rendered with little to no strain on the 1080. If that viewpoint covers enough space, we could then load the next viewpoint only when the level streaming decides so. Haven’t had time yet to test all of this, got paid jobs on the roll, but I definitely think it’s worth a shot.
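
Just to spell out the arithmetic behind that figure (one texel encodes one point, so the texture resolution is the point budget):

// One point per texel in the lookup texture:
constexpr long long PointsPerTexture = 4096LL * 4096LL; // = 16,777,216, i.e. ~16.8M points per 4k texture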

For Unreal.js, don’t know how complicated it would be, basic JS examples work without a problem, I would just need to understand the potree code a little better to extract the useful functions and adapt them to UE rendering.

What kind of experience do you have with potree?

Any chance you could send me a copy of your PotreeVR project so I can try and get it working? We could start a repo on this.

Great for the vid, keep me posted.

The nvidia prototype works great for a tech demo, I have to do some more in depth testing though.

I am not really familiar with level streaming, so if you have a cloud big enough in terms of size (not point density), would it trigger level streaming even for this kind of data? From what I understand, it would be like an enormous mesh spanning the entire level.

I use potree a lot and I am in touch with the developer, I will send you my email through pm to discuss this further.

I made the videos for the point cloud processing part this weekend, I still have to stitch them together and upload them to YouTube!

Ok, here is the video explaining how to process point clouds in order to use them in the project.
Feel free to ask any questions.

Thanks for that guide!

I was able to re-purpose an older procedural mesh BP to make the static mesh ready to point sample the texture data. It looks to be working ok, but I am wondering where these odd diagonal lines in my test scene come from. This is from part of Portland.

Wondering if it could just be from the source having subtle elevation changes and using an additive shader to view it. I haven’t used two textures for the precision yet, so this is just one 16-bit texture. Or could this be improper sampling? It should be sampling every texel, but obviously I could have bugs there.

FWIW, I found that to convert from a 16-bit PNG to a format UE4 could take, I had to use Photoshop to convert to EXR. But if I just converted the 16-bit image to 32-bit and saved as EXR, it seemed to be applying gamma. If I instead made a new 32-bit document and pasted the 16-bit file into it, that seemed to avoid the gamma issue. Having a gamma issue on a texture like this will bunch most of the points up towards one side.
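
For what it’s worth, the gamma problem could also be sidestepped by doing the conversion outside Photoshop. This is a minimal, untested sketch assuming the stb_image and tinyexr single-header libraries and single-channel 16-bit input; it is not part of the workflow above, just an alternative:

// conv16to32.cpp -- linear (no-gamma) 16-bit PNG -> 32-bit float EXR
#define STB_IMAGE_IMPLEMENTATION
#include "stb_image.h"
#define TINYEXR_IMPLEMENTATION
#include "tinyexr.h"
#include <vector>
#include <cstdio>

int main(int argc, char** argv)
{
    if (argc < 3) { std::printf("usage: conv16to32 in.png out.exr\n"); return 1; }

    int w = 0, h = 0, channels = 0;
    // Load the 16-bit PNG as raw integers; no gamma or color management is applied.
    stbi_us* src = stbi_load_16(argv[1], &w, &h, &channels, 1);
    if (!src) { std::printf("failed to load %s\n", argv[1]); return 1; }

    // Straight linear rescale of 0..65535 to 0..1 float.
    std::vector<float> dst(size_t(w) * h);
    for (size_t i = 0; i < dst.size(); ++i)
        dst[i] = src[i] / 65535.0f;

    const char* err = nullptr;
    // Write a full 32-bit float EXR (save_as_fp16 = 0), single channel.
    if (SaveEXR(dst.data(), w, h, 1, /*save_as_fp16=*/0, argv[2], &err) != TINYEXR_SUCCESS)
    {
        std::printf("EXR write failed: %s\n", err ? err : "unknown");
        return 1;
    }
    stbi_image_free(src);
    return 0;
}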

I’d like to get this plugin installed in MATLAB so it can write EXR directly, but I am not sure exactly how to install it, or if I need a separate source file version besides the compiled version I have. http://www.mit.edu/~kimo/software/matlabexr/

EDIT
After debugging the point cloud in CloudCompare (by setting colors to None and reducing point size to 1) I can see that the stair stepping is indeed in the data. I guess using aggressive subsampling causes this? Maybe it’s just the effect of part of a layer barely overlapping, combined with the low density?

Where did you get your point cloud?
It is not unusual to get those straight lines all over the place; most of the time it’s because you kept the geographic coordinates as references and unleashed them into your average Euclidean coordinate system.

I doubt it’s a sampling bug. There is a good built-in upsampling solution in CloudCompare if you want to give it a try; it has always worked flawlessly for me. It’s hidden in “Plugins” -> “PCL wrapper” -> “Smooth using MLS”. Did you already use that for the voxel dilation?
I could also take a look at your cloud if you want.

As for MATLAB, you have to copy the files into a specific folder and then compile them with the command lines from the README.
I also see that it says “Does not support EXR images with uint16 data or float data”.

Do you think we need 32-bit depth? From what I have seen, we will run into humongous performance concerns way before 16-bit precision becomes the limiting factor.
I really need to look with nvidia at how we can make the GPU optimization from their tech demo work inside UE4, if possible.
As you may have understood, I work in acquisition and processing of such data (all things 3D for cultural heritage, from photogrammetry to laser scanning), but my dev skills are very limited :frowning:

I got the lidar from Open Topography.

I am not necessarily saying we need 32-bit depth, but using 32-bit EXR seems to be currently the only way to get an HDR image into UE4 without it being read as a cubemap. For larger scenes it means there is still quite some quantization loss but as you point out it would require tons more points for it to really matter. A smaller scene would have better apparent precision.

Ok, you should try to shorten the geographic coordinates from the original dataset (the CloudCompare suggestion usually does the trick). I am 95% confident you will get rid of the lines, and upsampling afterwards should not be a problem.
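
In code terms, that “shortening” is just subtracting a large constant offset from every coordinate while you are still in double precision, before anything is cast to 32-bit floats. A tiny sketch (the offset values are made up; use whatever CloudCompare suggests on import):

// Recenter large geographic/UTM coordinates so they survive the trip through
// 32-bit floats (the texture encoding) without stair-step artifacts.
// Offset values are placeholders; use the values CloudCompare suggests.
static const double kOffsetX = 520000.0;
static const double kOffsetY = 5050000.0;
static const double kOffsetZ = 0.0;

struct LocalPoint { float x, y, z; };

LocalPoint ShortenCoordinates(double x, double y, double z)
{
    // Subtract the big constant part while still in double precision;
    // the small remainder then fits in a float with sub-millimeter accuracy.
    return { float(x - kOffsetX), float(y - kOffsetY), float(z - kOffsetZ) };
}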

I will also take a closer look at the MATLAB toolbox soon.

Are you talking about pressing “Yes” on the dialog about translating out the offset? If so, I did that and then entered the same coordinates in the MATLAB export process.

Yes, CloudCompare suggests values to shorten the size of the coordinate system, but it’s only a “temporary fix”. If you save or export the data afterwards, it keeps the original coordinate system.
If you say you fixed this in MATLAB, it’s indeed very strange.

Could you point me to the right dataset from Open Topography? I would like to check it.
It would not be the first time that a massive public LIDAR release is broken. They released nationwide LIDAR data for Slovenia where each point was duplicated with a slight offset, for example…

I got it from here (hopefully this link persists)

http://opentopo.sdsc.edu/lidarOutput?jobId=pc1478015820331

Ok, that’s just the overlapping bands from the LIDAR acquisition (each flight line requires some overlap with the previous one).
I was so convinced that they would provide clean data, that I did not think of the most obvious answer…

You can “clean it” yourself by running “Tools -> Other -> Remove duplicate points” in CloudCompare. A value of 0.8 ~ 0.9 should do it.

Hey everyone,

I’ve been watching this thread for a while, waiting until I have a spare moment to try this out myself. There’s some really great work here…

One thing I’ve been meaning to ask: with this method, it seems like a lot of effort is put into preparing point cloud data encoded as images. It also seems like there are some limitations with this approach, and it’s not exactly scalable. Could someone explain why we can’t just load data from an ASCII or binary file on disk?

Excellent. Worked like a charm, thanks! Now I am just curious where I can find some really high-quality sources. I like how OpenTopography lets you sort by max resolution. There seem to be a few fault lines and crater areas that have really high cloud densities, but I am also looking for some more urban areas with better color information to test on as well. I have a way to render shadows on the point cloud pretty easily that I want to try with buildings. If nothing else I can test with what I have now.

I can provide two massive point clouds for testing purposes if you want:

  • the original modern Besancon project (see previous replies) which is fully colored and has around 170 000 000 points for a small urban area.


  • a historic reconstruction of the Besancon area I made from historic aerial photography, with black and white “textures”, baked ambient occlusion, and something like 350 000 000 points.

Tell me and I will upload the one you want somewhere.

That Besancon project looks really cool. I haven’t seen too many datasets with full color like that, and those specialty “project sites” always prove difficult for me to navigate :slight_smile:

Did another test of “El Mayor-Cucapah Earthquake Rupture Terrestrial Laser Scan-Site 2” which is here:
http://opentopo.sdsc.edu/lidarDataset?opentopoID=OTLAS.042012.32611.2


I used the classification map combined with AO to give it some color. It’s kind of neat that, because this is a ground-level capture, the lidar occlusion reads like shadowing. That makes this data not super useful for testing the lighting of a lidar scene, but oh well. It is neat that you can make out the shape of some of the bushes quite well. I am also curious about ways to use the foliage data to help seed actual foliage meshes in UE4. I think there could be some great methods there with the right sampling method and a way to read the data and spawn things.