UE 4.11 to be dedicated to "speed and performance" optimization...

[=Daniel.Wenograd;428668]
LPV, DFGI (so slow in 4.9+ that it’s not really worth it right now though), VXGI, Landscape GI. For most purposes, we’re pretty much covered in that regard. Improvements would be great on that front though; there isn’t really a solution that just works yet. LPV only works with the directional sun light, so it’s useless for interiors, DFGI is unusable in current engine versions, VXGI is great but doesn’t run nearly as well as it needs to on older systems, and Landscape GI again only works for exteriors.

Hopefully when they finish “speed and performance optimization” there will be the opportunity to either work on their own solution that actually supports point, directional, and spot lights, or work with Nvidia to make VXGI run faster (like how Crytek managed to make SVOGI fast enough to actually start using in CE 3.8 and forward). It’s really unfortunate that things have gone from fully dynamic with SVOGI all the way down to pretty much abandoning the concept as of late. Even diffuse lighting from reflection captures is broken, which could have been a tolerable alternative to tide things over until hardware catches up to solutions like VXGI.
[/]

QFA! I’d like to see them work on dynamic GI that works for both indoor and outdoor scenes.

PC optimization!?

**I have a PC that goes with me, and when there is time “UE4” is my hobby on it…

…SOO, Please, Please, Please, Do optimize it for low spec. PCs too!!! :smiley:

…Thanks!!!**

Any news on Improvements regarding DX12?

I would like to see low-level improvements. Not 100% sure of the best way to do this.

OpenCL? Async Shaders? C++ AMP? Raw AVX instructions?

Hmm.

I tend to feel like either OpenCL 2.0 or C++ AMP would be ideal for using the vector capabilities of not only current Bulldozer+ and Sandy Bridge+ parts but also Skylake and Steamroller iGPUs going forward, with shared virtual memory support. Greenland looks like it’s going to completely dwarf current APUs, and I tend to favor OpenCL 2.0 because it’s just more mature; it also works on mobile, it’s a fully open standard, and it’s better supported outside of Windows and better documented.

Eventually NVLink will enable algorithms that require fine-grained shared memory to run on GPGPUs, but until then I don’t think we should include discrete GPUs in this at all, and even when they can, it might be better to leave them to handle rendering and async shaders unabridged. This is only a suggestion for GPGPUs with fine-grained shared virtual memory support, first found in Kaveri and Skylake iGPUs. One obvious place for this would be PhysX.

Aside from increasing performance, something like this would allow games using UE4 to run on much older hardware, or run at much better quality, than they would if we’re still basically just targeting the ISAs of Thuban and Westmere. Bulldozer and Sandy Bridge will be very, very old hat some day; the longer games can be sold to those users, the better for us. Besides, this would raise the bar for Z170, X99 and 990FX platforms too, and good OpenCL 2.0 code can be vectorized for SSE4a and SSE4.1, or in the absolute worst scenario even just plain old ARMv7 or x86 anyway.

Does anyone think this may help some!? :slight_smile:

Just Surfing the net and came about this:

Any new info. on this??? :smiley:

UE 4.11 Preview 1 is out!!!

Well folks what do you have to say!? :slight_smile:

Hey, will these performance optimizations make it into 4.11? Would be great. What’s the status of the “squeezing the last 20-30% out of many rendering systems” awesomeness? :slight_smile:

Ok, version 4.11 Preview 5 is amazing: it is using all 32 of my cores when loading one of my massive World Composition levels (6400 sq km). Nice performance enhancements, Epic!

[=ajbombadill;478494]
Ok, version 4.11 Preview 5 is amazing: it is using all 32 of my cores when loading one of my massive World Composition levels (6400 sq km). Nice performance enhancements, Epic!
[/]

Not to derail the thread, but how did you make a world that large? Many World Machine worlds with the edges flattened? I’m looking to create an Earth-sized world with all the open-ocean, full-water sectors in the same level, so I’m going to need major amounts of tiles, and the thought of tiling thousands of worlds together makes my head blow up.

RE: performance…

Anyone that’s tried coding in C++, even with hot-reloading, an SSD and an i7, knows how incredibly slow it can be: even a few hundred lines of code take 15-45 seconds to compile just to see if a small code change succeeded or failed.

[=Nsomnia;479188]
Anyone that’s tried coding in C++, even with hot-reloading, an SSD and an i7, knows how incredibly slow it can be: even a few hundred lines of code take 15-45 seconds to compile just to see if a small code change succeeded or failed.
[/]

Usually in the 15-30s range with 50k+ lines here. The big offender is the first build after starting the computer; oh boy, it can take minutes. Not sure whether Visual Studio is to blame, or UE4.

On the other hand, I don’t know any game engine that offers a 30s rebuild + ingame reload, and once you leave the wonderful realm of PC development, those 30s sure look nice.

[= XaVIeR;427820]
They should focus on documentation for some 6 months instead.
[/]

I second this.

Edit: So does my Mom.