
How to place single GPU particles at specified locations?

Didn’t get as much done today as I’d hoped, but here’s a big chunk of San Francisco.

Impressive result xnihil0zer0 !

I see you also have color (a heatmap by height?). Do you think it's possible to hook up something to get RGB data? For example, one picture as a table with coordinates and another as a table with RGB values per point?

Yep.


Aha, this is looking very good. And you get 80 fps with this kind of point cloud?

I don't know if it's the lighting (or the absence of it?), but it looks cartoonish.

Hi

After 8 days of research I found your thread, and it could save me.

I am working on the same project as you. That's very good work and I would like to do the same.

First of all, I am not English, so sorry for my bad English; I am only a graphic artist who codes in Blueprint :slight_smile:

I use a .las file and open it in CloudCompare, then save it as .csv.
In my folder the .csv is a .txt. That's a problem, because I want to use the A-VEKT Image CSV Converter to convert it to .bmp.

Do you have a solution to convert it to bitmap?

I like the city. Very wonderful :slight_smile:

Thanks for the help, and sorry for my bad English again.

It seems awfully painful to produce the picture you need with this software, since you don't have any control over the output.

Anyway, you can export directly as .csv with CloudCompare; just switch the extension from .txt to .csv when you save it.
You can also change the extension of any preexisting text file directly.
A .csv file is nothing more than a text file with values separated by commas.
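Since only the extension differs, the rename can even be scripted. A trivial Python sketch (the filename and sample rows are made up for illustration):

```python
from pathlib import Path

# A .csv is just a text file of comma-separated values, so turning a
# CloudCompare .txt export into a .csv is only a rename.
src = Path("cloud.txt")                       # hypothetical export name
src.write_text("1.0,2.0,3.0\n4.0,5.0,6.0\n")  # two sample xyz rows
dst = src.rename(src.with_suffix(".csv"))     # cloud.txt -> cloud.csv
```

The bytes inside the file never change; only the name does.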

I didn't find a good online/free conversion tool for this. It may be possible with R and some image processing packages (I checked quickly and didn't find one, but it has to exist somewhere in the CRAN repositories), or maybe with ImageJ/Fiji, which is also a very good image processing tool.
I am doing it with Matlab and xnihil0zer0 is doing it with Wolfram Mathematica.

As soon as xnihil0zer0 provides some explanations about his latest results, I will do a step-by-step tutorial on YouTube, from .las file to Unreal Engine.

Okay, thanks. I will test this and report back with my feedback :slight_smile:

I am trying to use Matlab to convert the csv to a bitmap, but I don't understand the process.

as3ef2th1, I used your Matlab code example, but I have two questions.

My data is big and Matlab creates a smaller selection. Is that okay?

I don't know if my code is ready to run, or if it's incomplete?

http://img15.hostingpics.net/pics/723040Tablebig.png

http://img15.hostingpics.net/pics/586495Afterthat.png

thank you

How many points do you have in your .las file?

You should read this post more closely.

You need to decimate your point cloud so that the point count fits a square texture:

  • 16,384 points for a 128x128 texture (this is from my code snippet)
  • 262,144 points for a 512x512 texture, and so on…

Anyway, it's useless to go beyond 200,000 points with the first example from the topic, since even with 24-bit textures you can't store doubles precise enough to hold accurate coordinate information.
This is why your data is likely to show line artefacts like the ones in Michael Geary's picture a few posts above.
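The square-texture constraint above can be sketched in a few lines. A minimal Python illustration (the thread itself works in Matlab and Mathematica; both function names here are mine):

```python
import math

def texture_side_for(n_points):
    """Smallest power-of-two square texture whose texel count
    can hold n_points (e.g. 16384 -> 128, 262144 -> 512)."""
    side = 1
    while side * side < n_points:
        side *= 2
    return side

def decimation_step(n_points, side):
    """Keep every k-th point so the cloud fits a side x side texture."""
    return math.ceil(n_points / (side * side))
```

For example, a 3,813,697-point cloud like the one discussed later in the thread needs a 2048x2048 texture (4,194,304 texels).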

At this point I would suggest waiting for xnihil0zer0's explanation and update; it seems he has greatly improved this little "trick" for point cloud visualization.
Until then, I think it would be a waste of time to try to 'brute-force' your data into UE4.

By the way, I am also French (just guessing from "point of cloud"!), so I can explain all this in French at some point if you want.

Sorry guys, been a bit busy with work. Here’s a project with the Besancon cloud, so you can see how it’s done.

The point cloud actor creates four static meshes, each with 1048576 polygons. It applies X offsets in increments of 1048576 to the Dynamic Material Instances on each one after the first. It uses two 2K images for the position lookup (one for the high bits and one for the low bits), though it doesn't really need the second one at this point.

I tried images as large as 4K for clouds with 16.7 million points, but I don't have cards beefy enough for that to perform well. Besancon only had some 3.8 million points in total, though. I just downloaded AHN3, a 20GB point cloud of the Netherlands; I'm going to work on streaming it in and writing different LODs to pieces of 2K render targets. It will require the extra precision.

I noticed some rather ugly artifacts (mainly in the town, rather than the mountain) on my GTX 580 (see image) that I didn't see on my R9 280x (see video). Not sure if that's due to a difference between NVidia and AMD, or because my 580 is an old card that's dying. I get 75-90 FPS on the 280x but only 25-30 on the 580. Let me know what you see and what cards you're using.

The color looks a bit cartoony when the RGB is compressed with HDR, but a bit blown out with other compressions; you can fiddle with the texture adjustments to get a look you like.

7a2c215834a36dbabc6ee989e85ab1923172e287.jpeg

Edit: Oh yeah, the very large invisible cube in the point cloud actor is there to replace the bounds of the long polygon chains (they are set to Use Attach Parent Bounds, under the rendering details). If you use this technique in another project, make sure to do that, or the point cloud won't stay visible when the chains' original bounds are out of view.

@xnihil0zer0 That's really great work, I like it :slight_smile:
@as3ef2th1 I have 3,813,697 points in my .las.

Now I understand better.

I will work on this.
Thanks, and yes to help in French next time :slight_smile:

Thank you.

I will have to toy around a bit before I really understand what you did and how you did it.

I have only one stupid question right now: what is the difference between the high and the low picture in the way you encode the data? Does the low one have shortened decimals?

Anyway, I grabbed a point cloud of one of our excavations to try your new project, and it works well.

You can look at it as a [360 VR panorama](http://aspectus.nazg.org/project/pcloud.html), or in the pictures below:

http://tof.canardpc.com/preview2/ab0f1ea6-a75b-43de-ba92-94c3d89356e0.jpg

http://tof.canardpc.com/preview2/c24c8887-1927-4914-8594-1dbbef4c1f82.jpg

PS: As for performance, I have a GTX 680 and my fps are pretty low. I will benchmark it more thoroughly later.

I took the range of the largest axis, multiplied all values by 65536/range, and took the floor and frac of all values. The frac became the low image; the floor/65536 became the high image.
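In Python/NumPy terms, the split described above looks roughly like this (a sketch; the original is done in Mathematica, and the function names are mine):

```python
import numpy as np

def split_hi_lo(coords, axis_range):
    """Scale coordinates by 65536/range, then store the fractional
    part as the 'low' image and floor/65536 as the 'high' image."""
    scaled = coords * (65536.0 / axis_range)
    low = scaled - np.floor(scaled)     # fractional part -> low image
    high = np.floor(scaled) / 65536.0   # integer part, renormalized to [0, 1]
    return high, low

def reconstruct(high, low, axis_range):
    """Inverse of the split, e.g. what a shader would compute."""
    return (high + low / 65536.0) * axis_range
```

The point of the two images is precision: an 8-bit-per-channel texture alone can't hold coordinates accurately, but two textures give you 16 bits of integer range plus a fractional refinement.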

Looks like you got artifacts on your polys too. Must be an NVidia thing.

@as3ef2th1 I decimated to 149,444 points, saved as .txt, renamed the .txt to .dat, and imported it into Matlab.

But I don't know how to run the Matlab code to convert it to a bitmap.

Can you help me?

@as3ef2th1

When you're benchmarking frame rates, also try changing the last index of the for loop in the point cloud actor's construction script to 2, 1 and 0, just so we can come up with a target maximum number of polys for an LOD system.

@xnihil0zer0

What software do you use to convert the point cloud to csv?
I use CloudCompare, and maybe it makes a bad csv; that's why I get an error in Matlab. Online converters don't want my csv either.

@ilxs I use LAStools' las2txt.exe with a comma separator. I output xyz and rgb as separate files, then rename the files to .csv.

Looks like Matlab is failing because you have the wrong number of rows and columns when you try to reshape. If you have 3813697 points, your csv should be 3813697 rows by 3 columns. Then you need to pad it with (2048*2048)-3813697 = 380607 rows of 0,0,0. Then you can reshape it to 2048x2048x3.

You also need to scale all your data so that it is between 0 and 1.
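For reference, the pad-and-reshape described above would look like this in Python/NumPy (the thread does it in Matlab; the function name and default size are mine):

```python
import numpy as np

def points_to_texture(pts, side=2048):
    """Turn an (N, 3) xyz array into a side x side x 3 texture:
    scale all values into [0, 1], pad with zero rows so the row
    count equals side*side, then reshape. For N = 3813697 and
    side = 2048, the pad is (2048*2048) - 3813697 = 380607 rows."""
    pts = (pts - pts.min()) / (pts.max() - pts.min())  # scale data to [0, 1]
    pad = side * side - pts.shape[0]                   # zero rows to append
    pts = np.vstack([pts, np.zeros((pad, 3))])
    return pts.reshape(side, side, 3)
```

Scaling is done before padding so the zero rows don't distort the normalization; the resulting array can then be written out as an image with any bitmap library.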

@xnihil0zer0

Now it works and I understand the process.
@as3ef2th1 helped me today; sorry, I forgot to mention it.