I’m using Unreal 5.3.2 for plugin compatibility reasons, but I haven’t found any posts about anyone else having this issue on Reddit or Google. I also already made a Reddit post, but they just told me that the results I’m getting shouldn’t be possible.
As an explanation of this spaghetti bowl of a blueprint: it’s supposed to take scientific notation and normalize it, so if it’s passed 600x10^3 it should convert it to 6x10^5. As you can see, we’re taking A and Y from Ax10^Y and operating on them as if we were doing the math on paper. Right now the value being calculated is 0.000091x10^28, which should be converted to 9.1x10^23, but instead the blueprint evaluates 0.000091 as greater than one but less than ten, so it passes the values unchanged to the output node. I’m just kinda baffled.
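For reference, here’s roughly the math the blueprint is meant to do, sketched in plain C++ (the variable names are just mine for illustration, this isn’t the actual blueprint):

```cpp
#include <cmath>
#include <cstdio>

int main()
{
    // The failing input, in the same A x 10^Y form the blueprint uses:
    double A = 0.000091; // mantissa
    int    Y = 28;       // power of ten

    // Shift A into [1, 10) and move the difference into Y -- the "on paper" math.
    // (A == 0 would need its own branch; skipped here.)
    int Shift = (int)std::floor(std::log10(std::fabs(A))); // -5 for this input
    A /= std::pow(10.0, Shift);                            // 0.000091 -> 9.1
    Y += Shift;                                            // 28 + (-5) = 23

    std::printf("%gx10^%d\n", A, Y); // should print 9.1x10^23
}
```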
Seems like the issue might come from before this function is called. Are you parsing the scientific notation from a string? Maybe there’s an issue with the parsing part.
Add some Print Strings to figure out what’s going on. Print the Mantissa, print the Power, print the result of the comparison…
So it looks like I misunderstood how the debugger works, because with a Print String the comparison does evaluate to true. That just means there’s an error somewhere, either in my math or in the plumbing.
The error is in your thinking, and maybe also in some laziness IF you neglected to look up what floating-point precision is/means in this context.
To support scientific-notation levels of precision you need an engine that is capable of it, i.e. not Unreal, and particularly not when using Blueprint.
You can get around it by creating your own C++ class with your own variable definitions, likely using double for the variables. AND: you cannot use any of the Unreal Kismet stuff, since those functions constantly cast up or down as they see fit (and you’d run into the very same issues).
The only math you’d be able to do is whatever you re-write in C++: additions, subtractions, multiplications, and divisions.
Past that, you’d need to write your own functions for anything else (powers, square roots, etc.). Luckily, we’re some 40 years past actually needing to write any of that from scratch; if you just google something like "code to do square root in C++" you’ll likely get functions to copy and paste into your C++ class.
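Very rough sketch of the kind of thing I mean, in plain C++ (nothing Unreal-specific so you can compile it anywhere; all the names are made up, adapt as needed):

```cpp
#include <cmath>
#include <cstdio>

// Plain struct holding the number as mantissa + exponent, with the basic
// operations written by hand so nothing ever gets cast down to float.
struct SciNumber
{
    double Mantissa = 0.0; // A in A x 10^Y
    int    Exponent = 0;   // Y in A x 10^Y

    void Normalize()
    {
        if (Mantissa == 0.0) { Exponent = 0; return; }
        while (std::fabs(Mantissa) >= 10.0) { Mantissa /= 10.0; ++Exponent; }
        while (std::fabs(Mantissa) <  1.0)  { Mantissa *= 10.0; --Exponent; }
    }
};

// Multiplication: multiply the mantissas, add the exponents, renormalize.
SciNumber Multiply(SciNumber A, SciNumber B)
{
    SciNumber R;
    R.Mantissa = A.Mantissa * B.Mantissa;
    R.Exponent = A.Exponent + B.Exponent;
    R.Normalize();
    return R;
}

// Addition: bring B to A's exponent first, then add the mantissas.
// (If the exponents are far apart this loses precision -- same story as always.)
SciNumber Add(SciNumber A, SciNumber B)
{
    SciNumber R;
    R.Mantissa = A.Mantissa + B.Mantissa * std::pow(10.0, B.Exponent - A.Exponent);
    R.Exponent = A.Exponent;
    R.Normalize();
    return R;
}

int main()
{
    SciNumber X{0.000091, 28};
    X.Normalize();
    std::printf("%gx10^%d\n", X.Mantissa, X.Exponent); // roughly 9.1x10^23
}
```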
Even if you take care to make sure there is absolutely NO casting of the variable between operations, you’ll still eventually encounter some significant floating-point error.
Look up precisely how it works. You’d think that fewer decimal places = less error, and you’d be 100% wrong. It’s got to do with binary and how the numbers are actually represented in the variables themselves.
OR just run a quick test: add 0.1 and 0.2. You WILL NOT get 0.3 (unless you truncate or something).
Generally speaking, that’s why the Kismet library does the casting and the trimming.
It’s also why you often want to use >= or <= rather than == when dealing with float values.
In the case of that example, the value will be >= 0.3, but never == 0.3…
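Quick standalone version of that 0.1 + 0.2 test, if you want to see it for yourself (plain C++, nothing Unreal-specific):

```cpp
#include <cstdio>

int main()
{
    double Sum = 0.1 + 0.2;

    std::printf("%.17g\n", Sum);                  // 0.30000000000000004
    std::printf("Sum == 0.3 ? %d\n", Sum == 0.3); // 0 -- exact equality fails
    std::printf("Sum >= 0.3 ? %d\n", Sum >= 0.3); // 1 -- >= still holds
}
```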