Great posts - I love reading these, as I like optimization work, but I'm too lazy to mess with it early in the process of making my game, since there's so much gameplay work to be done (I'm working on the Dungeon Survival project; I commented on your blog before).
"In practice, I don't think there are compilers out there that perform any significant optimizations for all kinds of const objects, but for objects that are primitive types (ints, chars, etc.) I think that compilers can be quite aggressive in optimizing the use of those items."
"There is [only] one case where saying "const" can really mean something, and that is when objects are made const at the point they are defined. In that case, the compiler can often successfully put such "really const" objects into read-only memory […]."
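To illustrate what that quote describes, here's a minimal sketch (the names are mine, not from the quote): a const object defined with a constant initializer can be placed in read-only memory and its value folded straight into the code, while a const view of someone else's object promises nothing about the object itself.

```cpp
// "Really const": defined const with a constant initializer. The compiler
// can put it in read-only memory (.rodata) and fold the value into code.
const int kMaxEntries = 128;

int lookup_limit()
{
    return kMaxEntries * 2;   // typically compiles to 'return 256;'
}

// By contrast, a const reference parameter only promises that *this*
// function won't modify the object; the object may still change through
// another alias, so the compiler must actually load it each time.
int observe(const int& value)
{
    return value;
}
```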
I just can't get over the feeling that const-ness should be beneficial to the compiler. Yet in so many threads I read people saying it doesn't yield a significant improvement. Still, my gut says it ought to help.
I am hoping to hear from Pablo about any measurements he's done on this subject, and from anyone else who has done empirical testing on "const-ness beneficence" (has a nice ring to it, doesn't it?).
As mentioned in that StackOverflow question, there isn't really any improvement. In optimized builds the compiler already does whatever it can based on data-dependency analysis, so the generated assembly is the same (at least on Microsoft's compiler and on others I have used, such as SNC). The reason to use const correctness isn't performance; it's being explicit about the code's intent with respect to data changes. A more important type qualifier in terms of performance is restrict rather than const (where applicable).
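To make that concrete, here's a minimal sketch (function names are mine, and __restrict is a compiler extension accepted by MSVC, GCC, and Clang rather than standard C++). The const on the input pointer only documents intent; the optimizer must still assume the two pointers might alias. The restrict qualifier is what actually changes what the compiler may assume.

```cpp
#include <cstddef>

// const says this function won't write through 'in', but the optimizer
// must still assume 'out' and 'in' may alias, so it reloads in[i]
// conservatively; codegen is typically identical with or without const.
void scale(float* out, const float* in, float factor, std::size_t n)
{
    for (std::size_t i = 0; i < n; ++i)
        out[i] = in[i] * factor;
}

// __restrict promises the pointers don't alias; that promise (not const)
// is what lets the compiler hoist loads and vectorize more aggressively.
void scale_restrict(float* __restrict out, const float* __restrict in,
                    float factor, std::size_t n)
{
    for (std::size_t i = 0; i < n; ++i)
        out[i] = in[i] * factor;
}
```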
Hi Pablo, I need your help to understand what issue my UE4 has.
These are my CPU consumption numbers:
- 4.3.0 Preview, new empty map: 22-26%
- 4.3.0 Preview, The Cave level with all settings on Epic: 16-18%
- 4.3 Release, while compiling shaders: 98-100%
- 4.3 Release, in background mode: 0.5-1%
- 4.3 Release, The Cave level with all settings on Epic: 29-31%
Now, my PC is not a workstation, but this big difference in consumption seems strange to me. I don't care about quality; if it's possible to use much less CPU, that's a good compromise.
Thx for your help.
What you are using to measure isn't that good. You would need to profile under the same conditions and with a better tool. Ideally you would use VTune or some other high-quality profiler, but you can try Very Sleepy, which I'm told is pretty good.
Ok, I have downloaded the VTune free trial and this is a result from the analysis; sadly this version of the program doesn't have the option to save the project.
I have the project folder (in zip format) if you want to take a look at it, or you can tell me what you need and I can submit screenshots if they are useful to you.
Thx again for the help.
You would actually have to reproduce the exact same process, for the same amount of time, on both versions, and profile both to compare the data and see where performance changed on your machine. In my case I haven't noticed any particular change when profiling. In any case, higher CPU usage isn't necessarily worse if whatever was happening on the previous version is now done in less wall-clock time. I think you would need some specific performance-optimization knowledge to tackle this.
What you may want to do is ask Epic whether it seems reasonable to have some kind of editor benchmark to keep track of performance across versions. I asked a question related to that on a Twitch stream and here is the answer: http://www.youtube.com/watch?v=6i9Q1cWO8O4&feature=youtu.be&t=1h2m10s I still think it would make sense to have an automated process to track performance regressions, but I don't know enough about the editor to implement that myself, and I already have some other stuff on my plate right now.