Hi, I’m a beginner programmer and I was looking around some forum discussions about the uses of const, and someone said it could possibly help with optimizations. I thought const correctness was mostly for readability and fool-proofing the code; plus, I assumed that if there were any optimizations, they would only speed up compilation rather than the code itself. But it sparked my interest when someone said it can be helpful when reading and processing large amounts of data, like when using FVectors or custom structs. One person in particular mentioned a CppCon talk by Jason Turner. Could someone elaborate on this? Can using const noticeably improve the performance of code?
Applying const can produce optimizations: it gives the compiler more information about how something is used, so the compiler might be able to do interesting things like reorder instructions or move things between registers. But these would be run-time optimizations; it won’t make compilation go faster, or at least that’s not a thing I’ve ever heard of.
const-correctness is a slightly different concept. I agree that its primary goal is error-proofing the code and providing compiler enforcement for what you can and cannot do with const instances. But that is different from just applying const. A type can be const-correct, but it doesn’t really matter if you never declare instances/references to that type as const.
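To make that concrete, here’s a toy sketch (the type and functions are invented for illustration):

```cpp
// A const-correct type: read-only operations are marked const,
// mutating operations are not.
struct FHealthPool
{
    float GetHealth() const { return Health; }             // callable on const instances
    void  ApplyDamage(float Amount) { Health -= Amount; }  // mutating, so non-const

private:
    float Health = 100.0f;
};

// The type being const-correct only pays off when callers actually
// take const references/instances:
float ReadOnlyUse(const FHealthPool& Pool)
{
    // Pool.ApplyDamage(10.0f);  // would not compile: the enforcement kicks in here
    return Pool.GetHealth();
}
```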
As with a lot of programming discussions, this one gets complicated really fast once you start taking more language features into account. It might be okay to always apply const to basic types or pointers, but you can cause yourself problems with more complicated types. For example, making a complicated type (such as a TArray) const might mean that returning it becomes very expensive because move semantics can’t be employed (move semantics would let just the internal pointers be copied, “moving” the array to the new instance instead of making an actual copy of the array’s contents).
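Here’s a minimal sketch of that effect using std::vector (the same reasoning applies to TArray; the function names are invented):

```cpp
#include <vector>

// With two return paths, the compiler generally can't elide the copy for both,
// so it falls back to moving (non-const) or copying (const) the local.
std::vector<int> PickNonConst(bool bFirst)
{
    std::vector<int> A(1'000'000, 1);
    std::vector<int> B(1'000'000, 2);
    if (bFirst) { return A; } // implicitly moved: cheap pointer handoff
    return B;
}

std::vector<int> PickConst(bool bFirst)
{
    const std::vector<int> A(1'000'000, 1);
    const std::vector<int> B(1'000'000, 2);
    if (bFirst) { return A; } // const blocks the implicit move: full element-wise copy
    return B;
}
```

With the non-const locals the return is essentially a pointer handoff; marking them const forces a full copy of a million elements.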
As with any other tool, changes made specifically for optimization purposes should always be driven by, and validated against, profiling information. Applying const because it lets you write less error-prone code is only an upside (until, perhaps, profiling tells you to remove it).
One final thing: as with anything else, it can be taken too far. Some people and IDEs try to get you to apply const so blindly that it gets put in really dumb places (in my opinion, at least). I’m all for making function locals const, or function reference parameters const. But making function value parameters const has always seemed silly to me because it just adds noise to the function declaration. It doesn’t change how the caller uses the function (it still makes a copy either way), so it leaks an implementation detail out into the API. If I could make the declaration be non-const and the definition use const, that would be kinda interesting.
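As far as I know, top-level const on a by-value parameter isn’t part of the function’s signature, so something like this sketch should actually compile (ScaleDamage is just an invented name):

```cpp
// Header / declaration -- no const noise for the caller:
int ScaleDamage(int BaseDamage, float Multiplier);

// Definition -- the local copies are const inside the function body only:
int ScaleDamage(const int BaseDamage, const float Multiplier)
{
    return static_cast<int>(BaseDamage * Multiplier);
}
```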
Ah, so contrary to my initial assumption, const can help with run-time optimizations.
it still makes a copy either way
Yeah, I agree. Making a function value parameter const is a bit silly XD
Right. BUT be careful about applying it strictly as an optimization. If you’re applying it as part of an overall effort towards const-correctness, that’s one thing. If you’re applying it (or removing it) strictly as an optimization, you’d better have profiling data that tells you the change is actually an improvement.
I see. Just curious: if you were to get profiling data, would you just use QUICK_SCOPE_CYCLE_COUNTER and look at the Inclusive Time for two different versions of the code? Like profiling one version, commenting it out, and profiling the newly “optimized” version? Or are there better ways to visualize it with some graphing feature?
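Something like this is what I had in mind (a rough sketch; the actor, function, and stat names are just placeholders):

```cpp
void AMyActor::ProcessBigData()
{
    // Scoped cycle counter; the timing shows up under STATGROUP_Quick
    // in the stats profiler / Session Frontend.
    QUICK_SCOPE_CYCLE_COUNTER(STAT_AMyActor_ProcessBigData);

    // ... the code being measured ...
}
```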
Yeah, that’s basically what I’d probably do.
Depending on the function, maybe I’d write a specific benchmarking test instead, if the function isn’t high frequency enough or only gets called a lot in a situation that’s a pain to replicate for the test.
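For example, a bare-bones automation test roughly along these lines (just a sketch; exact flags depend on engine version, and FMyFunctionBenchmark / MyFunctionUnderTest are made-up names):

```cpp
#include "CoreMinimal.h"
#include "Misc/AutomationTest.h"
#include "HAL/PlatformTime.h"

IMPLEMENT_SIMPLE_AUTOMATION_TEST(FMyFunctionBenchmark, "Project.Benchmarks.MyFunction",
    EAutomationTestFlags::ApplicationContextMask | EAutomationTestFlags::ProductFilter)

bool FMyFunctionBenchmark::RunTest(const FString& Parameters)
{
    const double StartSeconds = FPlatformTime::Seconds();

    for (int32 Index = 0; Index < 10000; ++Index)
    {
        MyFunctionUnderTest(); // the code whose const/non-const variants you're comparing
    }

    const double ElapsedSeconds = FPlatformTime::Seconds() - StartSeconds;
    UE_LOG(LogTemp, Display, TEXT("MyFunctionUnderTest x10000: %f seconds"), ElapsedSeconds);
    return true;
}
```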
Nice. I should look into techniques for writing benchmark tests :)