This is very interesting. I’m looking into using point clouds to render crime scenes in a TV show I’m working on. I’ve had some success with PhotoScan making point cloud data from photos taken in these rooms, and they look great in the Agisoft viewer. Something I’ll want to try - regarding the quantising problem - is making several point cloud volumes. It should be easy to do with a bit of Python - then you can just dial in your required point resolution from the script. I also want to be able to cull sections to do reveals and builds. I don’t really know my way around the Unreal particle system, so any ideas here would be great. I’ll post up some cloud data for anyone who’s interested when I get a moment. Good work everyone!
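Something along these lines is what I have in mind - just a sketch, assuming an XYZRGB text export; the file names and the voxel/volume sizes are placeholders:

```python
import numpy as np

def decimate_cloud(points, colors, voxel_size=0.05):
    """Quantise a point cloud by keeping one point per voxel.

    points     : (N, 3) float array of XYZ positions
    colors     : (N, 3) array of per-point colors
    voxel_size : edge length of the quantisation cell; smaller = denser result
    """
    # Bucket every point into an integer voxel coordinate.
    voxels = np.floor(points / voxel_size).astype(np.int64)
    # Keep the first point found in each occupied voxel.
    _, keep = np.unique(voxels, axis=0, return_index=True)
    return points[keep], colors[keep]

# Load an XYZRGB text export (placeholder name), thin it out to the wanted resolution.
data = np.loadtxt("room_scan.xyz")           # columns: x y z r g b
pts, cols = decimate_cloud(data[:, :3], data[:, 3:6], voxel_size=0.02)

# Split the thinned cloud into coarse volumes (here 2 m cubes) so individual
# sections can be hidden or revealed independently for builds.
cell = np.floor(pts / 2.0).astype(np.int64)
for key in np.unique(cell, axis=0):
    mask = np.all(cell == key, axis=1)
    np.savetxt("volume_%d_%d_%d.xyz" % tuple(key),
               np.hstack([pts[mask], cols[mask]]))
```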
One thing we could do to get much better performance out of these would be to group the texels by location so that we can use LODs on the sampling meshes. The thing currently preventing this is that the point cloud data is randomly distributed rather than grouped, so we would have to LOD entire point clouds at once, which is a shame. The example above is 32 million triangles, most of which end up in the distance.
Also FWIW I am not using particles, just regular static meshes. I think most other people in this thread are now as well.
I make my mesh using a blueprint. Here is the entire script to make the mesh. Note that the procedural mesh component exists in the component list. You just click the procmesh component in the details panel after placing it in the world, and a “Create Static Mesh” button will appear that you can then use. I have made up to 1024x1024 meshes this way and the perf isn’t that bad.
Note that I am encoding the positions into the UVs since there was an engine bug that applied sRGB gamma to vertex colors when I tried.
I did my sampling differently than others. I didn’t use separate MIDs; instead I painted a solid vertex color on each instance of the mesh (note that the ‘Paint Vertices’ node in BP doesn’t have the sRGB bug) and used that to control how the multiple meshes look up into one texture. The math is simple; the only thing that needs to be updated with an MID is the “Tiles” parameter. I made my mesh a 512x512 grid, so if the texture is 4k, it needs to be 8 tiles to use the whole thing.
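To spell the math out, here it is as a tiny Python sketch (the function and parameter names are mine, not nodes from the material graph): the painted vertex color effectively picks a tile index, and the UVs get shifted and scaled into that tile’s 512x512 block of the 4k texture.

```python
def tile_uv(u, v, tile_index, grid=512, texture=4096):
    """Map a mesh instance's 0-1 UVs into its own tile of the big lookup texture.

    tile_index : which 512x512 block this instance reads (0..tiles*tiles-1),
                 driven in the material by the painted vertex color.
    """
    tiles = texture // grid          # 4096 / 512 = 8 tiles per axis
    tx = tile_index % tiles          # tile column
    ty = tile_index // tiles         # tile row
    return (u + tx) / tiles, (v + ty) / tiles

# A 4k texture split into 512x512 tiles needs 8 x 8 = 64 mesh instances
# to cover every texel.
print(tile_uv(0.5, 0.5, tile_index=9))   # lands in the second row, second column tile
```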
And here is how I convert each quad into a sprite. The top part is only to remake the UVs since I clobbered them to store the grid positions. There is probably an easier way to do this, especially once that vertex color bug is fixed (I could have just written the UVs to the vertex colors but at this point I didn’t want to redo the mesh).
“Encoded Poly Size” just needs to match the size you made the polys with. Technically they don’t match in these images, since the BP shows them at a size of 100, but I didn’t retake the image.
One final note: I also recommend setting the Z position of each poly in the grid BP to use the point index. That way the polys will be stacked like a deck of cards. If you create them all at the same Z height, editor perf can tank if you happen to view it fullscreen. It shouldn’t affect the material setup at all, since it subtracts WorldPosition at the end to neutralize to 0,0,0 before projecting.
Yep, most LIDAR data is provided with some sort of classification in order to be able to filter the laser returns and only keep the ground (for DEM production and archaeological/micro-topographical detection).
A “good” classification is precise enough to filter low foliage, high foliage, rocks, constructions and such. So, indeed, I get how you could use this to improve foliage seeding algorithms.
I will upload the colored Besancon data tonight, I am very curious to see how shadow rendering would do with this dataset.
I will also have to read your last post more carefully, because I am not quite sure I understand everything!
By the way, do you think it’s possible to use what NVIDIA developed recently (see http://on-demand.gputechconf.com/gtc/2016/presentation/s6512-innfarn-yoo-massive-time-lapse-vr.pdf) and “hook it up” in some way to the engine?
I am in touch with them about this project, could be possible to make this happen with some help from their side.
That NVIDIA paper is interesting. I will have to read it more carefully later on. I am assuming there is a ton of research out there on more intelligent ways of rendering massive amounts of point data, but I was thinking more of a quick method to increase the scope of what we can do in UE4 without having to delve too deeply into advanced data structure code.
The concept of ‘blue noise’ as a sampling reduction filter shouldn’t be too crazy to find a way to apply. That and scaling the points by distance to make up for the lack of density should be pretty powerful.
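As a rough illustration of the distance-scaling half of that idea, it could look something like the sketch below; the numbers are placeholders, and in the engine this would live in the WorldPositionOffset math rather than in Python.

```python
def sprite_scale(distance_to_camera, base_size=2.0, full_density_distance=500.0):
    """Grow each point sprite with camera distance so a thinned-out
    (blue-noise reduced) cloud still reads as a solid surface far away.
    base_size and full_density_distance are made-up reference values."""
    return base_size * max(distance_to_camera / full_density_distance, 1.0)
```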
Thanks a bunch for sharing the Besancon data! I was able to get it into the engine as a 4k texture with 16m points (or 32m polys). It runs at 50fps on my 980 Ti, so it’s fairly taxing. I have some ideas for grouping the points by quadrants to support LODs, but I may not have time to try them for a bit.
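In case anyone wants to reproduce the import step, this is roughly how one might pack a cloud of that size into a 4k lookup texture outside the engine - the file names and the raw 16-bit float dump are assumptions for the sketch, not a description of my exact pipeline:

```python
import numpy as np

TEX = 4096                                   # 4k x 4k = ~16.7M texels

data = np.loadtxt("besancon.xyz")            # placeholder name; columns: x y z r g b
points = data[:16_777_216, :3]               # clamp to what one 4k texture can hold

# Re-center so positions fit comfortably in half-float range (the material
# adds the object's world position back in at the end).
points -= points.mean(axis=0)

# Pad up to a full 4096*4096 grid and reshape into an RGB float image.
pad = TEX * TEX - len(points)
texture = np.vstack([points, np.zeros((pad, 3))]).reshape(TEX, TEX, 3)

# Dump as raw 16-bit floats; any EXR/TIFF writer that preserves float data
# would work just as well for getting this into UE4.
texture.astype(np.float16).tofile("positions_4k.f16")
```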
Looks pretty sweet, especially from a medium distance:
Also got a quick test of casting shadows using a cached shadowmap, kind of neat to see it animated:
You can find a detailed tutorial on that method on my blog. Only a minor change is required to make it work with the lidar meshes: you need to pass the WorldPositionOffsets in where the WorldPosition node is, and specify things like the radius and object center manually, but that should be it.
http://shaderbits.com/blog/custom-per-object-shadowmaps-using-blueprints
I also tried a test of using the shadowmap to de-light the source color data. It was somewhat promising but there were a few issues preventing it from working well.
- Shadow bias has to be large enough for the points to avoid self-shadowing, which means the pointed rooftops can’t reconstruct the sharp shadow on the ridge, which leaves a dark outline there. It was mostly the ground that had problems with self-shadowing at lower bias, so there may be some big gains to be had from intelligent biasing (i.e. bias the Z values near the roofs less than the floor).
- Even though at first glance the lighting direction seemed fairly consistent, when I actually compared it to a shadowmap there was quite a bit of movement of the shadow direction across the data. Of course that’s expected, since some time has to pass while capturing so much data. It could be tricky to work around this issue consistently.
- The gamma curve means undoing the lighting can’t be a simple color multiply with a mask (see the sketch after this list). I need to revisit some of the more advanced de-lighting techniques Francois and others used for the Kite demo pipeline to see how much it can be improved. This could be related to the lack of proper color encoding alluded to in previous posts. The shadows do seem a bit over-dark by default.
- Small details often get somewhat rounded positional data, which means their shadows won’t be recreated in the shadowmap and thus they can’t easily be de-lit. The windows on the roofs show this.
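For the gamma point above, a minimal sketch of what de-lighting in linear space (rather than a straight sRGB multiply) might look like - the shadow_strength value is a guess for illustration, not something recovered from the data:

```python
import numpy as np

def srgb_to_linear(c):
    return np.where(c <= 0.04045, c / 12.92, ((c + 0.055) / 1.055) ** 2.4)

def linear_to_srgb(c):
    return np.where(c <= 0.0031308, c * 12.92, 1.055 * c ** (1.0 / 2.4) - 0.055)

def delight(srgb_color, shadow_mask, shadow_strength=0.6):
    """Brighten shadowed texels in linear space instead of multiplying the
    raw sRGB values, which is what produces the halos described below.

    shadow_mask     : 0 where the shadowmap says lit, 1 where fully shadowed
    shadow_strength : assumed fraction of light removed by the shadow (a guess)
    """
    lin = srgb_to_linear(srgb_color)
    lin = lin / (1.0 - shadow_strength * shadow_mask)   # undo the darkening
    return linear_to_srgb(np.clip(lin, 0.0, 1.0))
```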
Comparison of color data as provided by as3ef2th1 (left) and very rough de-lighting pass (right):
This image highlights the problems with the de-lighting. The roofs have sharp black ridges where the shadow bias couldn’t reconstruct the original shadow accurately enough, and the tree shadows in the courtyard have bright halos since the single-color boost isn’t taking the proper gamma into account; the soft shadows are the worst, since they are being de-lit with a crisp shadow.
The logical next step would be to see what it looks like to apply the dynamic shadow over the de-lit image, but I haven’t tested it yet since I need to make another shader that can re-render the color map with the shadowmap data so that multiple shadowmaps aren’t needed to relight.
I really like this back view of the castle on the hilltop:
I noticed my version is mirrored. Maybe it’s just the usual inverted Y coordinate difference between packages?
This is really cool. So, to be clear here - you’re creating thousands of quads which the shader then repositions?
Close, but the quads are pre-created with the procedural mesh component using a separate blueprint (that’s the one you see with the ‘2d execution grid’ macro). Then I select the procedural mesh component and convert it to a regular old static mesh. The actual ‘point cloud’ BP then just spawns regular old static mesh components using the premade grid, and each mesh looks up into a different part of the lookup texture.
Looks like I forgot to show the graph for the BP that spawns the premade grids, but I can add it later.
I used a 512x512 grid and then just repeat it as many times as necessary to fill the data, much like others described on page 1.
The next step I want to try is to sort all the points along the X and Y axes. Then, in theory, partial patches will stick to regions rather than being evenly distributed across the whole cloud, which is what currently makes it impossible to LOD (with the available tools).
Hey guys,
So I wanted to test sorting the data. I was able to figure out how to sort rows using MATLAB, but it turns out that actually doing a 2D sort requires several passes over both axes if you use a traditional sort. That’s because once you sort along X, sorting along Y disturbs the X sort by a certain amount, and that keeps happening. Maybe you guys know how to do that in MATLAB or some other way, but I found it easier to solve using the GPU and shaders.
Basically I did something similar to a ‘bubble sort’, but it’s not as bad on the GPU as it’s considered to be on the CPU. That is because the GPU can compare every pair of texels simultaneously, so you only need to count the individual passes. And if you alternate between the X and Y directions after each compare-and-swap step over pairs of texels, each pass processes the whole texture for one axis. So a 4k texture requires roughly 8k passes to sort both X and Y. It looks pretty cool to visualize this by looking at the color data, since it basically stitches something that looks like a city out of noise.
This kind of method would definitely make more sense if the data was rectangular to begin with, but it still gives decently squarish regions.
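To make the comparison pattern concrete, here is a CPU-side Python sketch of the same odd-even style compare-and-swap, alternating axes. In the engine this runs as a shader writing to a render target, so treat this only as a description of the pass structure, not the actual implementation:

```python
import numpy as np

def oddeven_pass(tex, axis, parity):
    """One compare-and-swap pass over the whole position texture for one axis.

    tex    : (H, W, 3) array with one point position per texel
    axis   : 1 compares horizontal neighbours (sorts rows by the point's X),
             0 compares vertical neighbours (sorts columns by the point's Y)
    parity : 0 or 1; alternates which neighbour pairs get compared, like the
             even/odd steps of a bubble / odd-even transposition sort
    """
    tex = tex.copy()
    key = 1 - axis                       # position component used as the sort key
    n = tex.shape[axis]
    for i in range(parity, n - 1, 2):
        a = tex[:, i] if axis == 1 else tex[i, :]
        b = tex[:, i + 1] if axis == 1 else tex[i + 1, :]
        swap = a[:, key] > b[:, key]     # pairs that are out of order
        a[swap], b[swap] = b[swap].copy(), a[swap].copy()
    return tex

def sort_texture(tex, iterations=None):
    """Alternate X and Y passes; fully walking an N x N texture takes about
    2 * N passes, which is where the ~8k passes for a 4k texture come from."""
    n = tex.shape[0] if iterations is None else iterations
    for p in range(n):
        tex = oddeven_pass(tex, axis=1, parity=p % 2)   # nudge rows toward X order
        tex = oddeven_pass(tex, axis=0, parity=p % 2)   # nudge columns toward Y order
    return tex
```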
What is interesting is that just sorting the data like this makes it render faster! If you do r.screenpercentage 200, the difference becomes huge. I think the reason is that when tons of nearby screen pixels have to look up all over the 4k texture, it causes poor memory coherence, whereas having the points located nearby in the texture reduces that. On my Titan, at screen percentage 200, the unsorted data runs at 17fps and the sorted data runs at 53fps!
The next step is to try actually using LODs based on distance. Currently I don’t have an easy way to set the bounds of each section to correlate with the actual data it contains. I may just switch to testing a rectangular region so that I can set the bounds of each section programmatically (assuming the calculated values will be close enough for LOD purposes). Currently I would have to export the texture and extract the bounds from it using MATLAB or something like that.
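That said, extracting per-section bounds from an exported position texture shouldn’t need MATLAB - a few lines of Python over each 512x512 tile would do it. This assumes the raw 16-bit float layout from the packing sketch earlier, which is only one possible export format:

```python
import numpy as np

TEX, GRID = 4096, 512                        # 4k texture, 512x512 points per section

# Load the exported position texture (assumed raw 16-bit floats, 3 per texel).
tex = np.fromfile("positions_4k.f16", dtype=np.float16).reshape(TEX, TEX, 3)

bounds = {}
for ty in range(TEX // GRID):
    for tx in range(TEX // GRID):
        tile = tex[ty * GRID:(ty + 1) * GRID, tx * GRID:(tx + 1) * GRID]
        pts = tile.reshape(-1, 3).astype(np.float32)
        # Min/max corners of this section; feed these into each mesh
        # component's bounds so distance-based LOD can kick in per section.
        bounds[(tx, ty)] = (pts.min(axis=0), pts.max(axis=0))
```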
[video]Sorting point cloud data via GPU
You can see how it breaks up the area:
Note that the larger regions are actually less dense in the original data. Smaller regions are more dense.
Hi,
I’m new to Unreal. Do you think this bitmap offset technique can also be used to set the rotation and scale of particles?
My goal is to put fur card particles on an animated character. The particles were created and combed in another piece of software (Blender). I’ve read everywhere that I’d be better off redoing the work directly in Unreal, but if there is a way to overcome this incompatibility I would really like to use it.
I would also need the particles to inherit their color from my character’s UV texture, though, which doesn’t seem possible to me with this offset technique, but I prefer to ask anyway.
Is there any chance you could provide an Unreal project? I’m trying to lay out points from a 360 by 180 degree Arnold render using a world position pass. My plan is to take a Maya/Arnold rendered VR project and set up a way to display it with a limited amount of positional tracking on the HTC Vive or Oculus Rift. I know how to set this sort of thing up in Houdini and Nuke, but I’m a little lost in Unreal, and a project file would help a ton. Thanks for your help. This thread is really amazing btw. You guys are wizards.
Out of curiosity, how would you go about animating these sets of data points?
Hi,
Thanks for sharing your method - the view from your point cloud looks awesome! I am doing a project on real-time point cloud generation and rendering with a Kinect and UE4. I have followed the ideas above and got some results. However, when I came to your method I was quite confused (I am quite new to UE and still learning the engine). Could you please show the BPs so that I can better understand your method and play around with it? Thanks in advance.
Hey,
Impressive work! I’m working on a similar point cloud project at the moment, and I’m wondering if you can easily and quickly rotate and scale each individual splat in your approach? I implemented my approach with GroupedPaperSprites, but I’m struggling to give them individual sizes and orientations (see https://forums.unrealengine.com/deve…rite-instances as well). I would also like a nice, efficient GPU-powered implementation, but I don’t know how to achieve that at the moment.
Would be nice if you could give me/us some hints! Thanks!