First, code duplication should be eliminated in almost all cases. You have three if/else branches doing the same thing, but on different fields of the structure. That logic belongs in a separate function or lambda. Duplication breeds bugs and increases maintenance costs.
Also, the cmath library has a round() function, which rounds a float/double value to the nearest integer.
Your code does the same thing as this built-in function, so you're reinventing the wheel, which is bad practice.
Using round(), you can replace convoluted if/else with
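The original snippet isn't shown in this post, but the idea can be sketched as follows. A minimal illustration, assuming a hypothetical Vec3 struct with three float fields standing in for the structure being discussed:

```cpp
#include <cmath>

// Hypothetical struct standing in for the original code's vector-like type.
struct Vec3 { float x, y, z; };

// One helper replaces the three duplicated if/else branches:
// std::round already picks the nearest integer for us.
inline float roundComponent(float v) {
    return std::round(v);
}

Vec3 roundVector(const Vec3& in) {
    return { roundComponent(in.x), roundComponent(in.y), roundComponent(in.z) };
}
```

Each branch collapses into one call, so a future fix applies to all three components at once.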
I thought the same thing about having a single function calculate the term. However, there is normally the push/pop/etc. cost associated with a call. I don't know how auto influences the assembly output. Have you looked at it?
In general, especially for applications running in resource-rich environments or for functions that aren't called too frequently, modularizing as much as possible is good programming practice. For this specific problem, though, I wouldn't incur the cost of the extra function calls. From a knee-jerk evaluation, most of the statements can be executed in a linear fashion on the FPU. If auto carries no cost in stack and execution management, then I would use your approach.
If I had time I’d analyze and benchmark the two but I’m busy with other things at the moment.
It's advisable to use the UE4 equivalents, though: FMath::Round or FMath::GridSnap, for example. They are optimized for the engine and also provide the necessary platform independence (see FPlatformMath).
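For readers unfamiliar with GridSnap, its behavior (snapping a value to the nearest multiple of a grid size) can be sketched in plain C++; this is an illustration of the semantics, not the engine's implementation:

```cpp
#include <cmath>

// Plain-C++ sketch of what grid snapping does for one component:
// round the value to the nearest multiple of GridSize.
float GridSnapSketch(float Value, float GridSize) {
    if (GridSize == 0.0f) return Value;      // degenerate grid: no snapping
    return GridSize * std::round(Value / GridSize);
}
```

Inside the engine you would call FMath::GridSnap instead, which dispatches to the platform-specific rounding in FPlatformMath.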
By default you don't know what the compiler will do with your code; there are absolutely no guarantees. Different platforms and different compilers will produce different code. You don't know whether it will inline a method, unroll loops, turn references into pointers or treat them as aliases for variables, and so on. The C++ standard provides no guarantees there.
Thinking about low-level performance while writing code is a big red flag that indicates premature optimization. Code is written for other humans to read, and the most important things are readability and reducing the chance of future bugs (avoiding code duplication is one such technique). "Profile before optimizing." Failing to follow this rule wastes a lot of time: say you make a function run 4x faster, except it takes 0.1 ns to complete and is called 5 times total during the program's lifetime. There's no point worrying about optimizing this function unless it is called 20 million times per second, and a 233 MHz CPU could do that easily back in the 1990s-2000s. Also, algorithmic optimization often has a significantly larger impact than low-level fine-tuning.
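"Profile before optimizing" can start as simply as timing the candidate over many calls before deciding it deserves hand-tuning. A minimal sketch using std::chrono (the helper name and iteration count are my own, not from the thread):

```cpp
#include <chrono>

// Time a candidate function over many iterations; only functions that
// show up as a meaningful fraction of total runtime deserve tuning.
template <typename F>
double measureMillis(F&& fn, int iterations) {
    auto start = std::chrono::steady_clock::now();
    for (int i = 0; i < iterations; ++i) fn();
    auto end = std::chrono::steady_clock::now();
    return std::chrono::duration<double, std::milli>(end - start).count();
}
```

A real profiler (or the engine's stat commands) gives far better attribution, but even this crude measurement beats guessing.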
If you're concerned about low-level performance, I'd like to point out that conversion from float to int can be quite slow. The compiler will emit calls to _ftol for that, and if you have a few billion of them, _ftol will become the bottleneck.
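The _ftol cost comes from C++'s cast semantics: a plain cast must truncate toward zero regardless of the FPU's rounding mode, which is what historically required the helper on x86. std::lrint, by contrast, converts using the current rounding mode (round-to-nearest by default) and typically maps to a single conversion instruction on modern hardware. A small sketch of the difference:

```cpp
#include <cmath>

// Plain cast: truncates toward zero, the semantics that required _ftol.
int truncating(float f) { return static_cast<int>(f); }

// std::lrintf: rounds using the current FPU rounding mode
// (round-to-nearest by default), usually a single instruction today.
long nearest(float f) { return std::lrintf(f); }
```

Whether this matters at all is, of course, exactly what profiling should decide first.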
However, you shouldn’t be concerned about performance unless you have profiling results on your hand.
Game studios do not use the STL; at least, I think most of them have moved away from it.
A while ago I saw a GDC talk about "Common C++ Performance Mistakes in Games", and they pretty much bash the STL all the way through…
For example, EA has its own 'EASTL', Epic Games has its own, and so on.
NegInfinity, are you kidding? That's why we have so much bad code as it is. For the most part I'm not disagreeing with your other evaluations, but what you're espousing doesn't negate what I've done over the years. You may well be shielded from the different architectures, but that doesn't apply to everyone and every application. You apparently missed my point.
The STL is slow, and if you don't know how to use it correctly it gets even worse. I've seen it firsthand numerous times. The fact that game studios felt it was good to use in the first place is a clear indication that people there do err in judgement, so much so that a talk was given at GDC.
I got to get off this thread. . .I have work to do.
STL stands for Standard Template Library. There is no "STL" anymore, only the standard library, and the mathematical functions are not templated.
I'm not sure what your point was supposed to be, so you might want to state it clearly. For me, "bad code" is code that is harder to read and has premature optimization applied. Speaking of "doesn't negate": technologies aren't set in stone and keep changing over time, so something that was a good idea in the past isn't necessarily still a good idea today.
And if we're talking about standard containers, those are usually slow in debug builds because, for example, MSVC applies a lot of sanity checks and warnings in debug builds only. Now, if you want precise memory management and, say, no dynamic memory allocations, that's a different story.
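On the "no dynamic allocations" point: often the concern isn't the container itself but the reallocations it performs while growing, and reserving capacity up front removes those entirely. A small sketch (the function name and sizes are illustrative, not from the thread):

```cpp
#include <vector>

// Reserving up front means one allocation instead of a reallocation
// (and element move) every time the vector outgrows its capacity.
std::vector<int> buildSamples(int count) {
    std::vector<int> samples;
    samples.reserve(count);
    for (int i = 0; i < count; ++i)
        samples.push_back(i * i);
    return samples;
}
```

For hard real-time budgets a fixed-capacity or pool-backed container is the next step, which is exactly the niche EASTL and the engine containers target.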
I have to ask: what is a premature optimization? Reducing the number of statements is an optimization. Keeping functionality centralized is an optimization. I guess in your mind these are premature too.
So use RoundToFloat (which, admittedly, is just RoundToInt with a cast). Are you going to ignore most of FMath and use your own thing instead, or are you going to heed your own advice and keep the code clean and readable for other devs by doing it the UE4 way?
(Note: I know the comma placement is weird; that's just my style. Use yours.)
Also, I changed the pass-by-reference to a returned value (if that's possible with Blueprints; I haven't checked yet), since you're not modifying the original values of the referenced variable, only the pass-by-value FVector Vector.
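The shape of that change can be sketched in plain C++ (FVector is stood in for by a hypothetical struct here, and the snapping logic is illustrative): returning by value makes it obvious at the call site that the argument is untouched, and return-value optimization elides the copy anyway.

```cpp
#include <cmath>

// Stand-in for FVector; in engine code you would use FVector itself.
struct FVectorSketch { float X, Y, Z; };

// Takes the vector by value and returns the result, instead of writing
// through a non-const reference parameter.
FVectorSketch Snapped(FVectorSketch V, float Grid) {
    auto snap = [Grid](float c) { return Grid * std::round(c / Grid); };
    return { snap(V.X), snap(V.Y), snap(V.Z) };
}
```

The caller's original vector is never aliased, so there is no hidden mutation to reason about.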
As mentioned above, the separate if-statement is for readability, which I agree with, especially when working in teams and improving maintainability.