Update from a space maths-er in 2022; UE 5.0.3 seems to have enabled double-precision for all floats used in blueprints, which is great! I have large numbers everywhere and planetary-scale math happening.
However, there is still a BP-UI limitation on how small a float literal you can enter: 10^-6. If you try to input anything smaller than 0.000001, the UI changes it to 0.0. This kinda stinks, because the only way to construct smaller doubles for use in blueprints is to divide larger numbers by some constant at runtime, so you can’t use them in default values (in structs, for example).
I’m getting around this by:
- Creating a struct representing simulation constants
- Having the small constants be “documented” as being in units of, for example, 10^-12
- Setting their defaults to a value 10^12 times greater than desired
- Giving my game state this struct as a private variable
- In the game state constructor, for each small constant:
  - Reading the unscaled default from the just-initialized struct
  - Updating the struct with a scaled value (ex, dividing it by the 1000000000000 value that blueprint literals let me express)
I also have a convenience setter function for each small constant that does this division, in case I need to update their value with a literal float from a blueprint somewhere.
This fully works around UE5’s blueprint limitation on expressing small double-precision literals, and my default gravitational constant, entered into the blueprint UI as 66.743, can be pulled from the game state and Just Works in calculations.
Of course, the UI also similarly refuses to display these small floating point values as anything other than 0.00000, which threw me off during breakpoint debugging sessions, but things seem to be actually functioning fine if you make peace with the notion that that’s how the engine displays small values.