Unreal Engine 5.6 Preview

I see UE5.6 now includes the latest OpenXR 1.1.46 headers. I tried to integrate the Logitech MX Ink stylus, which requires the “XR_LOGITECH_mx_ink_stylus_interaction” extension and the “/interaction_profiles/logitech/mx_ink_stylus_logitech” interaction profile, but when I hold it, the interaction profile for the right/left hand is always null. Any clue?
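Not a fix, but for anyone debugging the same symptom, here's a minimal raw-OpenXR sanity check (outside Unreal's input wrappers; `instance` and `session` are assumed to already exist, and the extension must be enabled at instance creation) to confirm whether the runtime ever binds the MX Ink profile to a hand:

```cpp
// Hedged sketch: after suggesting bindings for the MX Ink profile, attaching action
// sets and calling xrSyncActions, ask the runtime which profile is active per hand.
XrPath leftHand = XR_NULL_PATH, stylusProfile = XR_NULL_PATH;
xrStringToPath(instance, "/user/hand/left", &leftHand);
xrStringToPath(instance, "/interaction_profiles/logitech/mx_ink_stylus_logitech", &stylusProfile);

XrInteractionProfileState state{XR_TYPE_INTERACTION_PROFILE_STATE};
if (XR_SUCCEEDED(xrGetCurrentInteractionProfile(session, leftHand, &state)))
{
    if (state.interactionProfile == XR_NULL_PATH)
    {
        // Runtime has not bound any profile to this hand -- same symptom as above.
    }
    else if (state.interactionProfile == stylusProfile)
    {
        // The MX Ink profile is active for the left hand.
    }
}
```

If this still reports XR_NULL_PATH outside Unreal too, the problem is likely in the runtime/bindings rather than in the 5.6 plugin.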

D3D11 SM5 or D3D12 SM5?

1 Like

Just feedback: I DO like the preview button added to the material nodes; if there was any one feature we would want a shortcut for, it's this one. THANK YOU!!

1 Like

VLM data is static and lives in the persistent level; it doesn’t get streamed with sublevels. You should always check how the VLM grid is positioned relative to the level’s geometry: ideally all samples should sit in empty or solid space, as border cases may cause issues. The default grid size of 200 is usually too large for small levels (indoor scenarios), so you should reduce it; a grid that is too sparse will cause light to bleed through walls due to interpolation, while a grid that is too dense puts stress on VRAM, since VLMs are preloaded there.
Pitch-black probes may be the result of samples sitting under the landscape (even if the landscape component is hidden or made invisible with the ‘visibility’ tool); there is a setting to disable this “optimization” in the project config. Also, if you use sublevels, they may all be loaded into Lightmass at once, causing incorrect shadows from ‘nonexistent’ objects.
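In case it helps, here's a small editor-side sketch of where that grid size lives (assuming the standard FLightmassWorldInfoSettings field; the same value is exposed in World Settings → Lightmass → Volumetric Lightmap Detail Cell Size):

```cpp
// Hedged sketch: shrinking the Volumetric Lightmap grid for a small indoor level.
// Assumes editor-time C++ with access to the level's world settings.
#include "GameFramework/WorldSettings.h"

void ShrinkVolumetricLightmapGrid(UWorld* World)
{
    if (AWorldSettings* Settings = World ? World->GetWorldSettings() : nullptr)
    {
        // Default is 200; smaller values mean a denser grid
        // (less bleeding through walls, but more VRAM).
        Settings->LightmassSettings.VolumetricLightmapDetailCellSize = 100.0f;
        Settings->MarkPackageDirty();
    }
}
```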

First of all, what would anyone get out of lying about Blueprint performance, especially people who use Blueprints?


Second, Blueprint performance issues were solved with Blueprint nativization. Although nativization to C++ had its fair share of issues, removing the feature entirely instead of fixing it was kind of sad. They didn’t remove subobjects and are instead fixing them, so why not do the same for Blueprint nativization?

https://youtu.be/S2olUc9zcB8

Third, Epic themselves are aware of the Blueprint performance overhead; you likely won’t notice it in lightweight systems such as an inventory.

  • The issue is that there is overhead per node and per switch from C++ into the Blueprint VM and back, which is why having multiple BP actors with an empty Tick node can cause lag: the cost of entering Blueprint VM land is paid even though the Tick does nothing.

Epic band-aided this in recent Unreal versions by checking whether the Tick event has any nodes and, if not, skipping the call into the Blueprint actor’s Tick, to prevent people from accidentally lagging their project with empty Ticks.
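For completeness, a hedged C++ sketch of the manual version of that fix: if an actor really has no per-frame work, don't tick it at all, so the VM entry is never paid (class name is just an example):

```cpp
// Example sketch (AMyPickup is hypothetical): opt out of ticking entirely so the
// per-frame entry into the Blueprint VM never happens for this actor class.
#include "GameFramework/Actor.h"
#include "MyPickup.generated.h"

UCLASS()
class AMyPickup : public AActor
{
    GENERATED_BODY()

public:
    AMyPickup()
    {
        // No empty Tick event needed: the actor never ticks at all.
        // Blueprints can toggle this at runtime with Set Actor Tick Enabled.
        PrimaryActorTick.bCanEverTick = false;
    }
};
```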

2 Likes

Honestly… you forgot this:

So the result is this:

Let’s face it: there’s no point in optimizing menus and math functions that waste a millisecond when you have a menu open, the game paused, and UMG up. It’s really about what the developer knows and what kind of low-level design you have for the project. :wink:

As someone with dysgraphia, I can say for myself that Blueprints are one of the best things UE has received, especially after UnrealScript, which was difficult.

4 Likes

True! Actually, with the number of bugs and the lack of optimization in the engine code, it barely matters if you measure in FPS.

[Bug report] Bad widget performance : Slate : ProcessMouseMove

BUT, once you start creating large systems of your own (you deal with your own code, less with engine code), then you do get benefits in C++, plus access to code and concepts that Blueprints cannot implement.

Then, besides measurable performance in the end product and possibilities for software design, I also refer to my older post: measuring performance as “how quick is it to implement, maintain, alter”, etc. Blueprint might only score high on quick implementation in the short run; after a prototype phase with a small Blueprint test, you get to rewrite it in C++ anyway.

I’m not saying fck no to Blueprints (until there’s a better alternative), because they’re useful for people who have zero interest in programming but still have to script now and then (widget designers creating animations, or material creators who have no interest in HLSL, etc.). The same goes for the Animation Blueprint system, as you can immediately see results. However, especially in the latter case, a complex system like the ALS animation blueprint easily reaches 30 MB! in a Git commit, which is unreasonable. And the moment one UASSET file corrupts (and you only notice some time later), you’re fcked as well.

I have more arguments like that which make a graph comparison of, say, FPS for BP vs C++ largely irrelevant to what I call “performance” of the whole.

What I truly hope for is that there will be a better alternative soon. Maybe if AI improves on the programming side, or when accessibility settings in text-editing software prove to be enough. For some people who have trouble with writing, reading, or numbers, colors or a large font size are enough to help, but everyone needs something different at different complexity levels.

As a C++ programmer, it’s hard to take a 24/7 Blueprint programmer seriously if they have the ability to move on to C++ and don’t do it (why??). Maybe Blueprints are useful for getting kids into programming, but at some point they become a tiny cage.

1 Like

I appreciate your valuable arguments :slight_smile:

You’re totally right (sarcasm).

Can we get someone on the graphics team to give a response about this? This should never have been removed.

1 Like

My performance tanks! :frowning:

This other guy seems to be getting great results with the Dark Ruins sample.

I have a scene with just a landscape, Nanite grass, and UDS. It works decently well in 5.5.4:


In 5.6, InitViews just spikes:

UNACCOUNTED! :P

Scene is not so costly material-wise:

Material

Hopefully this is something that gets picked up before GA. I’d like to see what kind of performance benefit I stand to gain.

I had a similar problem after upgrading to 5.6, and it somehow resolved itself. But I found that it’s the surface cache updates showing up as unaccounted. Try r.LumenScene.SurfaceCache.Freeze 1 to see if it’s the same problem. If it is, then tweaking cvars such as r.LumenScene.SurfaceCache.CardMinResolution, r.LumenScene.SurfaceCache.MeshCardsMinSize, and the radiosity-related ones may help you.
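If anyone wants to try the same experiment quickly, here's a sketch for ConsoleVariables.ini (or the editor console, with a space instead of '='). The numbers are placeholders to experiment with, not recommendations:

```
; Diagnostic only: if the "unaccounted" time drops with this on,
; surface cache updates are where the cost is going.
r.LumenScene.SurfaceCache.Freeze=1

; Knobs to experiment with afterwards (remember to turn Freeze back off):
r.LumenScene.SurfaceCache.CardMinResolution=8
r.LumenScene.SurfaceCache.MeshCardsMinSize=20
```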

When using the landscape tool: importing a 4033x4033 heightmap tries to create 4096 components by default, and also does so if you press the Fit Data button. I don’t know if this is intentional or not, but 1024 is usually the recommended maximum.
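For reference, the arithmetic behind the usually recommended layout at that resolution (assuming the standard 63-quad sections with 2x2 sections per component):

```
Quads per component side:  63 * 2 = 126
Components per side:       (4033 - 1) / 126 = 32
Total components:          32 * 32 = 1024
```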

I vote for this too.

Sounds a lot like what was proposed in the request-for-LTS post.

Awesome! I’m testing it on Linux.

I work with VR. I wish they would improve VR too.

3 Likes

Is that 5.3’s material shader cost view mode? The view mode is broken after 5.3 and shows green on everything unless the material has a really bad issue (so bad that red means pixel shading cost is killing you). You also need to view the shaders with Nanite disabled in that view mode. Besides, your Nanite base pass isn’t the bottleneck; you’re killing the visbuffer with overdraw.

Also, that “other guy” messed with something in the benchmark: Lumen was not configured the same between his tests, and a since-removed comment explaining it was saved in this post.

5.6 cannot cast shadows in forward rendering.

Never mind, forward rendering has a bunch of new issues: I had RT systems enabled, and the engine no longer accounts for forward mode being on (bad).

Feature request: Could you add a simple boolean in the post process settings to skip auto exposure interpolation (linear and exponential) and instantly jump to the target auto exposure?

Setting the Speed Up / Speed Down to ridiculous values is still not instant for larger exposure differences.
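In the meantime, here is a hedged sketch of that workaround using the standard FPostProcessSettings fields (the function name, volume pointer, and values are just examples):

```cpp
// Workaround sketch until such a boolean exists: push the eye adaptation speeds very
// high so auto exposure converges almost immediately. Values are examples only.
#include "Engine/PostProcessVolume.h"

void MakeAutoExposureNearlyInstant(APostProcessVolume* Volume)
{
    if (!Volume)
    {
        return;
    }

    FPostProcessSettings& Settings = Volume->Settings;
    Settings.bOverride_AutoExposureSpeedUp = true;
    Settings.bOverride_AutoExposureSpeedDown = true;

    // As noted above, even very large values are not truly instant for big exposure jumps.
    Settings.AutoExposureSpeedUp = 100.0f;
    Settings.AutoExposureSpeedDown = 100.0f;
}
```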

Best regards.

That goes into Feedback…

No one reads that.