The on-demand streaming aspect relates to loading the data from storage into system RAM, then from RAM to the GPU. Both are designed to keep the requirements as low as possible, and both are always enabled.
However, this is not cloud streaming. The data (still) has to be loaded locally, although runtime imports are supported.
Thanks - the runtime import is what I was interested in, as I assume we can now have a point cloud file, say in a folder on your desktop, that can be loaded into the engine at runtime?
Thanks! Just gave it a go - awesome job!
If there is a super large file does it incrementally display the point cloud data or does it wait until the file is fully loaded before displaying the point cloud?
[Edit]
Have just tried throwing a 2 GB point cloud at it at runtime, and it loaded quickly off my machine (the SSD probably helped). However, whilst it was set to Async, I had a performance hit while the data was loading (running very smoothly once loaded).
Also incremental point cloud display would be really helpful when Async loading, particularly if the file is being accessed across a network etc.
Sorry for the delay, I don't seem to be getting any notifications again.
Glad you got it working!
The performance hit while importing is to be expected, as the import is fully multithreaded and probably using all available resources.
I have experimented with the incremental display approach before, but it was very unstable due to the risk of multiple threads accessing the same data, and any robust workarounds resulted in significant performance hits for the whole process. We might revisit this in the future as an option.
Hi @anonymous_user_97eacd55,
Sadly, only marginal improvements to runtime data modification made the cut for this release.
We do, however, plan to develop a concept of dynamic cloud, for efficient, frequent data updates (think cloud streaming, live streaming, etc.) as soon as possible.
Thanks for the update on this - it's awesome to be able to do this at runtime, so great work. I think the incremental display approach could be really cool - e.g. loading every nth line of scan data initially and filling in the gaps. Otherwise, we were considering a "lazy loading" approach, where subsampled scans are loaded in initially; however, that would mean a lot of duplicate data sitting on our servers somewhere, not to mention the additional work of producing the subsampled data sets. Maybe this is more readily implementable with ASCII formats?
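The "every nth point first" idea from the post above could be prototyped as a simple two-pass selection: pass 0 keeps every Stride-th point for a quick coarse preview, and pass 1 returns everything pass 0 skipped, to fill in the gaps. A minimal sketch in plain C++ - the `Point` struct, function name, and pass scheme are illustrative, not part of the plugin:

```cpp
#include <cstddef>
#include <vector>

struct Point { float X, Y, Z; };

// Pass 0: coarse subsample (every Stride-th point).
// Pass 1: the complement, i.e. all points skipped by pass 0.
std::vector<Point> SelectPass(const std::vector<Point>& All,
                              std::size_t Stride, int Pass)
{
    std::vector<Point> Out;
    for (std::size_t i = 0; i < All.size(); ++i) {
        const bool Coarse = (i % Stride == 0);
        if ((Pass == 0) == Coarse)
            Out.push_back(All[i]);
    }
    return Out;
}
```

This avoids storing duplicate subsampled data sets on the server, since both passes read the same source data.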
The problem is in accessing the memory holding the already-imported parts of the data, not the source file. The only way I could do this without risking a crash would be something like this:
Lock asset for insertion
Insert a small batch of points
Unlock
Lock for rendering
Render
Unlock
Lock for insertion
(…)
However, the amount of locking would cripple the performance considerably.
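As a rough illustration of that pattern (plain C++, not the plugin's actual code - the class and method names are made up): every small insertion batch and every render pass takes the same mutex, so each step in the cycle above pays a lock/unlock round trip, which is where the contention comes from.

```cpp
#include <cstddef>
#include <mutex>
#include <vector>

struct Point { float X, Y, Z; };

class IncrementalCloud {
public:
    // Lock asset for insertion -> insert a small batch -> unlock.
    void InsertBatch(const std::vector<Point>& Batch) {
        std::lock_guard<std::mutex> Lock(Mutex);
        Points.insert(Points.end(), Batch.begin(), Batch.end());
    }

    // Lock for rendering -> read -> unlock. Size stands in for the
    // actual render pass over the shared buffer.
    std::size_t RenderSnapshotSize() const {
        std::lock_guard<std::mutex> Lock(Mutex);
        return Points.size();
    }

private:
    mutable std::mutex Mutex;   // shared between import and render paths
    std::vector<Point> Points;  // the already-imported data
};
```

With small batches, the mutex traffic dominates the actual work, which matches the performance concern described above.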
I will definitely look into this in the future, as I liked the overall idea (when it worked, it looked very cool!)
Can you give some info on making materials?
I tried a basic one, with blend mode set to Translucent and just a Vertex Color node plugged into Base Color and Emissive. It comes out white and square, even though I have Circle selected in the properties. I could easily mask it, but I want to understand what's going on, especially with the colors (which are present when not using a custom material).
Trying to get a large LIDAR site to perform nicely in VR, but it seems very inconsistent, and I'm having trouble getting shadows to appear. It looks like there are a few things mentioned in this thread which I could try, but I figured I'd post a screen recording here. I had very high hopes for LIDAR in Unreal Engine; trying to hold onto that feeling, but it's getting hard!
Custom point cloud materials should work like any other material does.
The Vertex Color node contains RGB information corresponding to the Color Source property selected on the actor / component, and an opacity mask in A (in case you use circles as the shape).
Could you share screenshots of your material setup and actor settings?
In the meantime, have a look at the default materials (you'll need to enable Engine and Plugin content in the Content Browser).
Shadows
By default, the cloud actors have their Gain property (under Color Adjustment) set to 1, which results in the color being emissive. Drop the value to 0 and see if that helps.
Flickers
This could be caused by MinScreenSpace being set too high; I would suggest resetting it to its default value.
Looking at the code, the runtime import performance could potentially be improved by letting the user set the concurrency. Right now (LidarPointCloud.cpp:653 in master) it's total threads - 1, so, as you say, it saturates the system. You could add a setting like "MaxImportConcurrency", where a positive value is the thread count, and a negative value means the actual system thread count minus the value.
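Under those suggested semantics, the effective worker count might be computed like this (plain C++; `GetImportThreadCount` and its parameters are hypothetical, not existing plugin API - zero is assumed to mean the current default of total threads minus one):

```cpp
#include <algorithm>
#include <cstdint>

// MaxImportConcurrency > 0: exact thread count.
// MaxImportConcurrency < 0: hardware threads minus |value|.
// MaxImportConcurrency == 0: current behavior (hardware threads - 1).
int32_t GetImportThreadCount(int32_t MaxImportConcurrency,
                             int32_t HardwareThreads)
{
    int32_t Count;
    if (MaxImportConcurrency > 0)
        Count = MaxImportConcurrency;
    else if (MaxImportConcurrency < 0)
        Count = HardwareThreads + MaxImportConcurrency; // threads - |value|
    else
        Count = HardwareThreads - 1;
    return std::max(Count, 1); // always leave at least one worker
}
```

Clamping to a minimum of one keeps a misconfigured setting from disabling the import entirely.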
Also, without testing it, I'm guessing it probably hitches a bit when the data is synced from the loading threads to the game thread. This might be offset by syncing the data back in batches over a number of ticks.
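That per-tick batching idea could be sketched as a pending queue drained under a fixed point budget on each game-thread tick, so no single frame absorbs the whole sync. Plain C++; the class, queue, and budget names are illustrative, not plugin API:

```cpp
#include <algorithm>
#include <cstddef>
#include <deque>
#include <vector>

struct Point { float X, Y, Z; };

class TickedSync {
public:
    // Called from the loading side: queue a finished batch for handover.
    void Enqueue(std::vector<Point> Batch) {
        Pending.push_back(std::move(Batch));
    }

    // Called once per game-thread tick: move at most MaxPointsPerTick
    // points into the render-visible buffer; returns how many moved.
    std::size_t Tick(std::vector<Point>& Visible,
                     std::size_t MaxPointsPerTick) {
        std::size_t Moved = 0;
        while (!Pending.empty() && Moved < MaxPointsPerTick) {
            std::vector<Point>& Batch = Pending.front();
            const std::size_t Take =
                std::min(MaxPointsPerTick - Moved, Batch.size());
            Visible.insert(Visible.end(),
                           Batch.begin(), Batch.begin() + Take);
            Batch.erase(Batch.begin(), Batch.begin() + Take);
            Moved += Take;
            if (Batch.empty())
                Pending.pop_front();
        }
        return Moved;
    }

private:
    std::deque<std::vector<Point>> Pending; // batches awaiting handover
};
```

Spreading the handover over several ticks trades a short delay in visibility for a steadier frame time.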
Weāre hoping to have raytracing ready for the next release but due to relatively low priority, I cannot guarantee it will make the cut.
Thanks for the improvement suggestion - do you have any benchmark numbers, by any chance?
This may no longer be relevant for binary formats, as we now process the import and the tree build-up concurrently. As a result, CPU utilization is not getting saturated; instead, the limiting factor seems to be storage throughput.
However, it may still improve performance for ASCII-based imports.
Is there a way to detect the moment collision building is finished after spawning a point cloud at runtime? I'm having trouble with spawning point clouds and other actors with physics at the same time, which makes the other actors fall through the point cloud before the collision is created.