While painting vertices, it would be great if I could select more than one mesh and paint across all of them.
The reason this would be great is that I have these platforms that I build using modular parts. I create an actor BP with the platform, then I move the actor into the map and want to paint vertices for texture blending. The way it works now, I have to select a part, paint, select another part, paint.
Here is an image to illustrate.
Some really serious work on groom asset performance and memory costs would be great. I’ve completely deleted them from my MetaHuman AI characters for now because of the insane cost, even when not rendered: the vertex buffer uses about 2.5 GB without hair, and up to 10 times that if the groom is present in the BP (even if not rendered at all). Spawn times are also horrific with them.
But from a fidelity standpoint it’s wonderful. Just not practical to use currently, unless your game consists of one character and lots of bald people.
Also, if Epic could figure out why child Blueprints using structs randomly reset to default values when a struct variable is added or changed, that would be grand.
I think those would make a really huge difference! I’ve been digging through ue5-main on GitHub, and I discovered they’re implementing support for hardware rasterization (Nanite previously ran mostly in software). Not only could that bring performance benefits, but they mentioned (from what I read) potentially re-engineering Nanite to support WPO and masked materials (such as deformable foliage).
The core problem, as I understand it, is that Nanite depends on breaking a mesh into clusters for the hyper-efficient streaming and culling that make it so powerful. In practice, this means the geometry is rendered separately from the materials, and WPO is a material behavior, so the mesh has no way of knowing whether it should be moved. The same problem applies to masked materials and knowing what should be drawn ‘behind’ the material, as Nanite can’t know which parts of it are masked.
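To make the conflict concrete, here’s a toy sketch of cluster culling against static bounds. This is my own illustration, not actual Nanite code, and every name in it is made up:

```cpp
// Toy illustration of the cluster-culling vs. WPO conflict -- not Nanite code.
#include <vector>

struct FBounds  { float Center[3]; float Radius; };  // baked offline from the static mesh
struct FCluster { FBounds StaticBounds; /* ...triangle/vertex data... */ };

// Stand-in for a frustum/occlusion test against the cluster's *static* bounds.
bool IsVisible(const FBounds& Bounds)
{
    return Bounds.Radius > 0.0f; // placeholder
}

void CullClusters(const std::vector<FCluster>& Clusters)
{
    for (const FCluster& Cluster : Clusters)
    {
        // The culling pass only ever sees the bounds baked at import time.
        // World Position Offset runs later, in the material's vertex stage,
        // so if WPO pushes vertices outside StaticBounds, the cluster can be
        // culled while still on screen (or kept while fully off screen).
        if (IsVisible(Cluster.StaticBounds))
        {
            // ...submit the cluster for rasterization
        }
    }
}
```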
I think they’re working on solving these problems, and if they do, Nanite would probably become the default path for everything besides transparent objects. Tessellation would be a good thing to bring back just because of how much legacy content depends on it, but I can understand Epic’s reasoning. Yes on the water plugin; it’s currently being a headache for me as well, but I think improvements are in the works.
And for performance improvements? I got the newest build of UE5 working just a week ago, and aside from the (expected) bugs, it’s entirely shippable in my opinion. Performance is great, and the developer visualizations and features (such as a checkbox for emissive meshes so the engine knows not to cull them) all make it a really usable engine. It’s still prone to substantial hitches and crashes under certain circumstances, but 90% of the time it’s excellent. I think you’ll get your wishes.
Oh, one thing I forgot to add that I would be extremely grateful for: the path tracer is an amazing tool, and the denoiser vastly increases image quality at lower (~400 SPP) sample counts, but the denoiser tends to knock out a lot of fine surface detail and normal information. For example, when testing out the Realistic Rendering scene, the denoiser removed most of the high-frequency detail in the plaster and the leather, leading to an almost plastic-y look that didn’t match the real-time scene.
If possible, I would love it if the path tracer had the option to introduce textures (including normals) after the diffuse lighting has already been denoised (Quake 2 RTX and other real-time path tracers did this). I know they used a Lambertian BRDF for diffuse and cut other corners for speed over physical accuracy, but I would personally prefer a more accurate but physically imperfect ground truth over having to wait for 10,000+ SPP to converge.
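For anyone unfamiliar, the idea (often called albedo demodulation) looks roughly like the sketch below. This is my own illustration, not engine code, and all the names are made up:

```cpp
// Illustrative sketch of albedo demodulation: the denoiser filters only the
// lighting signal; textures are multiplied back in afterward, so fine
// albedo/normal detail survives the filtering.
struct FColor3 { float R, G, B; };

static FColor3 Div(const FColor3& A, const FColor3& B)
{
    const float E = 1e-4f; // avoid division by zero on black albedo
    return { A.R / (B.R + E), A.G / (B.G + E), A.B / (B.B + E) };
}

static FColor3 Mul(const FColor3& A, const FColor3& B)
{
    return { A.R * B.R, A.G * B.G, A.B * B.B };
}

// Placeholder for any spatial/temporal filter (SVGF, OIDN, etc.).
static FColor3 Denoise(const FColor3& Irradiance) { return Irradiance; }

FColor3 ResolvePixel(const FColor3& PathTracedRadiance, const FColor3& Albedo)
{
    const FColor3 Irradiance = Div(PathTracedRadiance, Albedo); // demodulate
    const FColor3 Filtered   = Denoise(Irradiance);             // denoise lighting only
    return Mul(Filtered, Albedo);                               // remodulate textures
}
```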
Also, and I know this is a big ask and almost certainly not a 5.1 thing: some sort of GPU-based denoiser similar to A-SVGF or ReBLUR (might be proprietary, though). NVRTX didn’t feature any support for real-time path tracing, despite the fact that there is code and there are denoisers that could be used for it, powering other systems like RTXDI. Moreover, my main issue with the Open Image Denoise (CPU) denoiser is that it causes my computer to hitch massively whenever it runs, which can be a little frustrating during quick iteration. Still, very good for final shots.
I have another important request: could UE5’s menus be torn off into separate windows? It’s very tiring to repeatedly click through the menus to find commands; all of Autodesk’s tools let you detach menus independently, which makes clicking those commands very convenient.
In addition, in the World Outliner, many folders get created, yet there is no option to collapse them all at once. It’s very troublesome to collapse folders with many files one by one.
I just found that Lumen is heavily affected by the “Effects” scalability setting. This is really bad, because some games expose this option so players can lower Niagara or Cascade effects; now if I want to set it to Low or Medium, Lumen stops working correctly and I get all kinds of new issues.
As you can see in this comparison, high-quality effects have a direct impact on lighting, so in order to use Lumen you are forced to also have high-quality effects, and vice versa.
Lumen (lights) should have its own scalability setting, in my humble opinion.
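In case it helps anyone until Epic splits this out, here’s a rough workaround sketch: lower the Effects bucket, then re-assert the Lumen method cvars afterward. This assumes a build where `r.DynamicGlobalIlluminationMethod` and `r.ReflectionMethod` exist, and I haven’t verified it restores every Lumen-related override the Effects bucket touches:

```cpp
// Rough workaround sketch (assumes these cvars exist in your build): lower
// the Effects bucket, then force the GI/reflection method back to Lumen.
#include "HAL/IConsoleManager.h"

static void SetEffectsQualityKeepLumen(int32 EffectsLevel)
{
    IConsoleManager& CVarManager = IConsoleManager::Get();

    if (IConsoleVariable* Effects = CVarManager.FindConsoleVariable(TEXT("sg.EffectsQuality")))
    {
        Effects->Set(EffectsLevel, ECVF_SetByGameSetting); // 0 = Low ... 3 = Epic
    }

    // 1 = Lumen for both of these methods.
    if (IConsoleVariable* GIMethod = CVarManager.FindConsoleVariable(TEXT("r.DynamicGlobalIlluminationMethod")))
    {
        GIMethod->Set(1, ECVF_SetByGameSetting);
    }
    if (IConsoleVariable* ReflMethod = CVarManager.FindConsoleVariable(TEXT("r.ReflectionMethod")))
    {
        ReflMethod->Set(1, ECVF_SetByGameSetting);
    }
}
```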
I’m not entirely sure if this is a wishlist item or a bug report, but I’ve noticed that even in newer builds of Lumen (two weeks old or so), GI is not visible in reflections at all if the higher-quality ‘hit lighting’ mode is chosen. This leads to some fairly strong discontinuities between the lower and higher quality settings, including some unusual darkening of scenes when screen traces are not available.
I understand both the computational cost and the pipeline discontinuities that would come from it, but some support for genuine multi-bounce reflection would be extremely welcome. I’ve found that even a single additional bounce can vastly improve fidelity, although I know it’s prohibitively expensive in many games.
Reflections on SingleLayerWater also appear to be either broken or not exposed through the usual CVars (not even SSR), but that’s less of a priority for my work.
This is again not a wishlist item, but something that was on my wishlist and Epic has begun fulfilling: thank you for beginning to add support for SkyAtmosphere and ExponentialHeightFog in the path tracer (discovered this on GitHub, less than 48 hours old I believe).
I had previously figured out a rather arduous workaround, where I used the MovieRenderQueue to bake my path-traced scene, set it to pick up real-time fog in a separate pass, and then composited them in post. This will vastly simplify my workflow and allow me to stay in-engine to do my work. Thank you!
The Chaos engine’s automatic fracturing is decent, but I would like to be able to make the first-level fractures manually, and then use the fracture tool to automatically fracture my hand-made level-1 pieces into level 2.
Take a tree, for example: if I want it to break, I could make the branches and wood parts strong, give the leaves ‘no collision’ or something so they fall through the ground, and make a huge deletion zone underground to remove them.
Right now, when I fracture a tree I get 600 pieces at level 1, which really destroys the framerate, and I have no clue how to do it better. It also grabs the whole collision area, which includes a lot of empty air, and that makes the result really bad.
With that ‘manual’ fracturing, you could create box-collision-like shapes inside a tree asset from the viewport, then highlight the shapes you made and run the automatic fracture on them.
Unfortunately, many existing resources do not support UE5. They only support UE4.27, or even older engine versions.
The viaduct resources, for example. I hope resource authors can keep up with the new version as soon as possible. Thank you.
Highly detailed documentation for Chaos, especially things like rewinding and resimulating the simulation with FRewindData.
Better documentation on how ticks work, and I mean in terms of everything: when and where physics ticks are called, how and where async ticks sync with the main game loop, etc.
An option for extrapolated prediction of character movement (predicting where the character probably is on their local machine, not what the character’s next move will be; see the sketch below)
and for GAS animations (the Animation Montage, when triggered remotely, gets fast-forwarded by the time the RPC took to arrive on the server / other client). It would make fast-paced networked melee games much fairer and feel much more rollback-like.
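To be clear about the movement point, here’s a rough sketch of what I mean by extrapolation. It’s illustrative only; the caching and smoothing details are hand-waved:

```cpp
// Dead-reckon a remote character forward from its last replicated state to
// estimate where it probably is *right now* on its owning machine.
#include "GameFramework/Character.h"

static FVector ExtrapolateRemoteLocation(const ACharacter& Remote, float SecondsSinceLastUpdate)
{
    // In a real setup you'd cache these when the replicated update arrives.
    const FVector LastKnownLocation = Remote.GetActorLocation();
    const FVector LastKnownVelocity = Remote.GetVelocity();

    // Project forward; a production version would clamp the extrapolation
    // window and blend toward the next authoritative update on arrival.
    return LastKnownLocation + LastKnownVelocity * SecondsSinceLastUpdate;
}
```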
Dream Wishlist:
Better documentation on how to serialize things like animation montages (for example, the current mesh pose and the behaviors that happen after “On Notify Start” is called)
Bigger support for other netcode models, like rollback/lockstep (so letting the engine run fully deterministically in terms of physics, actor tick order, and floating-point optimizations, as those vary across different architectures and/or from run to run)
*And not having to modify the engine to implement those things, as they’re VERY tied to the engine core, and the engine core is documented POORLY, to say the least.*
Hmm my wishlist is a little less ambitious than a lot of others here xD
Improved Asset Importer that handles all coordinate systems and scale systems correctly, most importantly when importing skeletons.
Hierarchy import options, so you can import assets with their children nested like they are in your DCC of choice, for example for correctly positioned doors in their frames, etc.
More settings for texture compression; for hand-painted art the standard compression always produces banding and artifacts.
No post-processing and auto-exposure by default; the standard settings there seriously distort any artwork and lighting you set up, and should be introduced later.
Mhm. Looks like we may be stuck with this, though. I’m hoping there will be a Nanite-based solution one day, but for the moment it’s quite frustrating (and really limiting for games using heightmap-generated landscapes).
The MovieRenderQueue lets one mask out specific pieces of geometry, as well as mark certain items/lights as cinematic, so they will or will not show in renders. It’s a very useful thing!
Yes, I entirely agree. No tessellation on static meshes I can live with, but the landscape virtual heightfield is so unintuitive and frustrating that I still haven’t figured out how to make it work. It’s resulted in situations where ultra-high-detail Nanite meshes clash with low-res landscapes and leave an altogether dissonant visual image.