[feature request] - rendering lighting and shaders demanding way too much HDD space - possible solutions and strategies to solve the problem

[feature request] - rendering lighting and shaders straight into the graphics card's physical buffer memory rather than the hard disk

greetings

I recently tried Unreal Engine 5, and:

  • I realized that lighting and texturing are much better than in Unreal Engine 4, thanks to real-time ray tracing and the related ray-marching and path-tracing features that improve rendering pipeline quality;
  • part of this is because lighting and shading get pre-rendered and cached on the hard drive to save rendering time, but that demands first a fast SSD, and second a lot of hard drive space, to store the rendering cache and reload it every time a project is opened;
  • because of that, simply installing and running Unreal Engine 5 can demand, under certain circumstances, 40, 80, or even 200 GB of internal storage just to develop and prototype a quick demo with a couple of projects (a sketch for measuring this follows the list);
  • even though my computer doesn't quite fulfill the engine's full specs (it almost does), I can run simple scenes decently at a constant 60 fps, which for me is more than enough, but I can't do much more than that, and it consumes a lot of hard disk space;
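as a quick way to check where those gigabytes actually go, here is a minimal standalone C++ sketch that sums the size of a cache folder; the path below is hypothetical, so point it at your project's actual DerivedDataCache directory (its exact location varies by engine version and configuration):

```cpp
// Minimal sketch: measure how much disk space a cache folder uses.
// The path is a placeholder, not the engine's guaranteed cache location.
#include <cstdint>
#include <cstdio>
#include <filesystem>
#include <system_error>

int main()
{
    namespace fs = std::filesystem;
    const fs::path cacheDir = "C:/MyProject/DerivedDataCache"; // hypothetical path

    std::uintmax_t totalBytes = 0;
    std::error_code ec;
    for (const auto& entry : fs::recursive_directory_iterator(cacheDir, ec))
    {
        std::error_code fileEc;
        if (entry.is_regular_file(fileEc))
        {
            const std::uintmax_t size = entry.file_size(fileEc);
            if (!fileEc)
                totalBytes += size;
        }
    }

    std::printf("cache size: %.2f GB\n",
                totalBytes / (1024.0 * 1024.0 * 1024.0));
    return 0;
}
```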

hereby I would like to propose:

  • using advanced graphics computation techniques from the demoscene to achieve this;
  • adopting compactly encoded, serialized, and compressed formats for storing pre-cooked lighting, shading, and geometry data, for the sake of reducing HDD usage (a compression sketch follows this list);
  • using that data to feed dynamically allocated buffer memory on the graphics card;
  • furthermore, combining optimization strategies for lighting and pre-rendering, so that data is split between the HDD and the graphics card, through voxel, lighting, and shading serialization, concatenation, clustering, and dynamic propagation and storage via non-linear diffusion, perhaps using encoded states based on cellular automata and string theory;
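on the compression point, here is a minimal sketch of what compressing a baked lighting buffer before it hits the disk could look like; plain zlib is used for portability, and the buffer contents and sizes are placeholders, not anything the engine actually stores:

```cpp
// Sketch: compress a (placeholder) baked lighting buffer before writing it
// to disk, trading CPU time for a smaller on-disk footprint.
#include <zlib.h>
#include <cstdint>
#include <cstdio>
#include <vector>

int main()
{
    // Placeholder stand-in for a block of baked lightmap data.
    std::vector<uint8_t> baked(1 << 20, 0x7F);

    // Worst-case compressed size for this input, per zlib.
    uLongf compressedSize = compressBound(static_cast<uLong>(baked.size()));
    std::vector<uint8_t> compressed(compressedSize);

    // Z_BEST_COMPRESSION favors the smallest output, matching the
    // "save HDD space" goal of this request.
    if (compress2(compressed.data(), &compressedSize,
                  baked.data(), static_cast<uLong>(baked.size()),
                  Z_BEST_COMPRESSION) != Z_OK)
    {
        std::fprintf(stderr, "compression failed\n");
        return 1;
    }
    compressed.resize(compressedSize);

    std::printf("baked: %zu bytes, compressed for disk: %lu bytes\n",
                baked.size(), static_cast<unsigned long>(compressedSize));
    return 0;
}
```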

without further ado

Get a better computer/hard drive
Your request is actually unreasonable. (Which is rare, mind you.)

The ins and outs of how light computation is done while you work inherently require large amounts of disk space.
Even more space goes to the compiled shaders and baked textures later on, when you build lighting.

You can’t expect the editor you work in to compress your files/data, because that data has to be readily usable while you actually work.
Just imagine if everything looked like a quality-2 JPG all the time…

The buffer on the card is already dynamic.
UE5 performance is just trash, and that has nothing to do with buffer sizes, with how you internally use the streaming pool, or with the various scalability settings, which are all configurable in a finished project anyway (see the sketch below).
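For reference, a rough sketch of adjusting one of those settings from project code; r.Streaming.PoolSize is an existing engine console variable (value in MB), while the wrapper function and where you call it from are up to you:

```cpp
// Rough UE5 sketch: adjust the texture streaming pool at runtime.
// r.Streaming.PoolSize is a real engine cvar; this wrapper is illustrative.
#include "HAL/IConsoleManager.h"

void SetStreamingPoolSizeMB(int32 PoolSizeMB)
{
    if (IConsoleVariable* PoolSize =
            IConsoleManager::Get().FindConsoleVariable(TEXT("r.Streaming.PoolSize")))
    {
        // Value is interpreted in megabytes by the engine.
        PoolSize->Set(PoolSizeMB);
    }
}
```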

Believe me. I’m the last one to ever defend Epic on their BS excuse of an engine.

If you can actually do the last part you are talking about, you should probably just apply to work at Epic.
It reads to me more like something out of a science-fiction show full of BS than something actionable, but I could be wrong.

That said, Epic isn’t even capable of fixing performance on .27 after a year and a half and a billion bug reports/complaints, so I very much doubt they would be able to implement sci-fi-level gobbledygook :stuck_out_tongue_winking_eye:

Good luck :wink:

maybe you are right that it is difficult to implement every single feature in the engine related to performance, caching, baking, and pre-cooking of ray-traced, ray-marched, and path-traced lighting and shaders. but there are strategies for massive compression of data, and there are means to optimize data massively, and so on.

but trust me: ever since I saw, maybe in 2002 or 2003, people running a single 4 KB-64 KB program that could decompress itself and generate fairly complex graphics for the time (maybe at the level of Quake 3 or Quake 4), using advanced strategies for generating and compressing shaders, I honestly believe that if Epic wanted to, and if they had the means to do so, they could turn Unreal Engine into an editor occupying perhaps 200-800 MB that generates all project-related data through the exact same techniques the demoscene uses (see the sketch below).

and no, I actually trust Epic and believe in their work, but I also believe that the graphics the engine is producing could run on an AMD R9 290. this is not an insult, but trust me, I know what I am talking about. so bearing this in mind, please don't take it too personally; that's just my opinion. looking forward. have a nice day
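to illustrate the demoscene idea I mean, here is a small standalone sketch that synthesizes texture data procedurally at load time instead of shipping it as an asset; the hash-based value noise and the 256x256 resolution are arbitrary choices for illustration, not anything Epic uses:

```cpp
// Standalone sketch of the demoscene approach: generate texture data
// procedurally at load time instead of storing it on disk.
#include <cstdint>
#include <cstdio>
#include <vector>

// Cheap hash-based value noise; 4K/64K intros use variants of this idea
// to synthesize textures from a few bytes of code instead of stored assets.
static float ValueNoise(int x, int y)
{
    uint32_t n = static_cast<uint32_t>(x) * 374761393u
               + static_cast<uint32_t>(y) * 668265263u;
    n = (n ^ (n >> 13)) * 1274126177u;
    return (n & 0xFFFF) / 65535.0f;
}

int main()
{
    const int kSize = 256;
    std::vector<uint8_t> texture(kSize * kSize);

    // A few octaves of noise at different scales: kilobytes of code
    // standing in for megabytes of baked texture data on disk.
    for (int y = 0; y < kSize; ++y)
        for (int x = 0; x < kSize; ++x)
        {
            float v = 0.0f, amp = 0.5f;
            for (int octave = 0; octave < 4; ++octave)
            {
                int step = 1 << (4 - octave); // coarser to finer detail
                v += amp * ValueNoise(x / step, y / step);
                amp *= 0.5f;
            }
            texture[y * kSize + x] = static_cast<uint8_t>(v * 255.0f);
        }

    std::printf("generated %zu bytes of texture from zero bytes of assets\n",
                texture.size());
    return 0;
}
```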