That is a bad idea when it comes to C++ code.
By default you don’t know what the compiler will do with your code; there are absolutely no guarantees. Different platforms and different compilers will produce different code. You don’t know whether it will inline some method, whether it will unroll loops, whether it will turn references into pointers or treat them as aliases to variables, and so on. The C++ standard does not provide any guarantees there.
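As a rough illustration (the function names below are mine, not anything from a real codebase), whether `sum_ref` gets inlined into the loop, whether the loop gets unrolled, and whether the references survive as actual pointers is entirely up to the compiler and the optimization flags; the only way to find out is to inspect the generated assembly (e.g. `g++ -O2 -S`, or an online compiler explorer):

```cpp
// Two ways to write the "same" addition. Nothing in the C++ standard says
// whether sum_ref will be inlined, whether the loop will be unrolled, or
// whether the references are lowered to pointers or optimized away entirely.
#include <cstddef>

int sum_ref(const int& a, const int& b) { return a + b; }
int sum_val(int a, int b)               { return a + b; }

int total(const int* data, std::size_t n) {
    int acc = 0;
    for (std::size_t i = 0; i < n; ++i)   // may or may not be unrolled/vectorized
        acc = sum_ref(acc, data[i]);      // may or may not be inlined
    return acc;
}
```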
Thinking about low-level performance optimizations while writing code is a big red flag that indicates premature optimization. Code is written for other humans to read, and the most important things are readability and reducing the chance of bugs appearing in the future (avoiding code duplication is one such technique). “Profile before optimizing.” Failing to follow this rule results in a lot of wasted time (say, you make a function run 4x faster, except it takes 0.1 ns to complete and is called 5 times total during the program’s lifetime). There’s no point in worrying about optimizing such a function unless it is being called 20 million times per second, and a 233 MHz CPU could easily handle that back in the 1990s…2000s. Also, algorithmic optimization often produces a significantly larger impact than low-level fine-tuning.
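As a rough sketch of “measure first” (here `do_work` is just a hypothetical stand-in for whatever function you suspect is slow; a real profiler such as perf or VTune will give you a far better picture), a quick `std::chrono` timing is often enough to show a function isn’t worth optimizing at all:

```cpp
// Minimal "measure before you optimize" sketch: time a suspected hotspot
// with std::chrono before spending any effort on it.
#include <chrono>
#include <cstdio>

static long do_work(long n) {           // hypothetical candidate for optimization
    long acc = 0;
    for (long i = 0; i < n; ++i) acc += i % 7;
    return acc;
}

int main() {
    using clock = std::chrono::steady_clock;
    auto t0 = clock::now();
    long result = do_work(10'000'000);
    auto t1 = clock::now();
    auto us = std::chrono::duration_cast<std::chrono::microseconds>(t1 - t0).count();
    std::printf("result=%ld, took %lld us\n", result, static_cast<long long>(us));
    return 0;
}
```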
If you’re concerned about low-level performance, I’d like to point out that conversion from float to int can be quite slow. The compiler will generate a call to the _ftol helper for that, and if you have a few billion of those calls, _ftol will turn into a bottleneck.
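A minimal sketch of where this bites (the `quantize` function is my own example): a truncating cast inside a hot loop. On old 32-bit MSVC/x87 builds each such cast turned into a _ftol call; with SSE2 code generation, or with `std::lrint` (which rounds to nearest instead of truncating toward zero, so it is not a drop-in replacement), the conversion becomes a single instruction:

```cpp
// Float-to-int conversion in a hot loop: the place where _ftol used to hurt.
#include <cmath>
#include <cstddef>
#include <vector>

void quantize(const std::vector<float>& in, std::vector<int>& out) {
    out.resize(in.size());
    for (std::size_t i = 0; i < in.size(); ++i) {
        out[i] = static_cast<int>(in[i]);      // truncating cast: may call _ftol on x87 targets
        // out[i] = static_cast<int>(std::lrint(in[i]));  // rounds instead of truncating
    }
}
```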
However, you shouldn’t be concerned about performance unless you have profiling results in hand.