Hello UDN!
As stated above, I have found FRichCurve::Eval does not always return correct data. The variance in data seems to be dependent on what platform we’re playing on. My specific use case is having a CurveFloat help decide the precise orientation of a bone over a given time. When the time has accumulated to a specified threshold (that threshold is represented by the final keyframe at that precise time in the curve), I want to just sit and continue returning the value at that key frame until we need to play again.
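For context, here is a minimal sketch of that accumulate-and-clamp pattern in plain C++. EvalCurve and FBoneCurveDriver are hypothetical stand-ins (a linear ramp up to the (X=1, Y=60) final key), not my actual code or the engine API:

```cpp
#include <algorithm>

// Hypothetical stand-in for the FRichCurve::Eval call: a linear ramp
// from (0, 0) to the final keyframe (X=1, Y=60) described below.
static double EvalCurve(double InTime)
{
    return 60.0 * std::min(std::max(InTime, 0.0), 1.0);
}

// Accumulate time, clamp at the final-keyframe threshold, and keep
// returning the final key's value until we need to play again.
struct FBoneCurveDriver
{
    double AccumulatedTime = 0.0;
    static constexpr double Threshold = 1.0; // X of the final keyframe

    double Tick(double DeltaTime)
    {
        AccumulatedTime = std::min(AccumulatedTime + DeltaTime, Threshold);
        return EvalCurve(AccumulatedTime);
    }
};
```

The whole pattern relies on Eval returning the final key's value once time is clamped at the last key, which is exactly where the bug below bites.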
The final keyframe in my curve is set to (X=1, Y=60). I would expect FRichCurve::Eval to return 60 for InTime == 1.0. However, on platform, I am seeing that the value returned is 0.0! I dug quite a bit into this and actually found another post describing the exact issue I’m seeing, but it has since been closed with an unresolved help ticket:
The Ticket:
Thanks to this post, I was able to follow the lead that the issue is related to the keyframe having a weighted tangent. I stepped through FRichCurve::Eval, then UE::Curves::EvalForTwoKeys, and finally UE::Curves::WeightedEvalForTwoKeys. Inside UE::Curves::WeightedEvalForTwoKeys, UE::Curves::SolveCubic is called, which in turn executes the platform math function pow. I get very similar Results between PC and the platform I’m using, but I notice that the one result we want (1.0) is 1.0 on PC, yet 1.000000000004 on platform (may not be the exact number of 0s, but you get the idea). Below is the snippet that handles these Results from the cubic solve, and you will see why this imprecision across platforms creates a problem.
From UE::Curves::WeightedEvalForTwoKeys:
const int32 NumResults = SolveCubic(Coeff, Results);
double NewInterp = Alpha;
if (NumResults == 1)
{
	NewInterp = Results[0];
}
else
{
	NewInterp = TNumericLimits<double>::Lowest(); //just need to be out of range
	for (double Result : Results)
	{
		if ((Result >= 0.0) && (Result <= 1.0))
		{
			if (NewInterp < 0.0 || Result > NewInterp)
			{
				NewInterp = Result;
			}
		}
	}
	if (NewInterp == TNumericLimits<double>::Lowest())
	{
		NewInterp = 0.0;
	}
}
Because this else block checks whether each Result lies between 0.0 and 1.0 without any tolerance, I believe the imprecision from the hardware’s fast math makes this function unpredictably return incorrect data: a root of 1.000000000004 fails the range check, no root qualifies, and NewInterp falls back to 0.0.
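To make the failure concrete, here is a standalone plain-C++ reduction of the multi-root selection path (std::numeric_limits in place of TNumericLimits). The root values below are illustrative, not actual SolveCubic output:

```cpp
#include <limits>

// Mirrors the root-selection loop in UE::Curves::WeightedEvalForTwoKeys:
// pick the largest root in [0, 1], falling back to 0.0 if none qualifies.
static double SelectRoot(const double* Results, int NumResults)
{
    double NewInterp = std::numeric_limits<double>::lowest(); // out of range
    for (int i = 0; i < NumResults; ++i)
    {
        const double Result = Results[i];
        if ((Result >= 0.0) && (Result <= 1.0))
        {
            if (NewInterp < 0.0 || Result > NewInterp)
            {
                NewInterp = Result;
            }
        }
    }
    return (NewInterp == std::numeric_limits<double>::lowest()) ? 0.0 : NewInterp;
}
```

With the PC-style roots the exact 1.0 is accepted; with the platform-style roots the slightly-over-1.0 value is rejected along with the others, and the function returns 0.0.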
It would be great if this ticket could be investigated further! It’s been over a year now since the original post was acknowledged. The fix seems as simple as adding a small tolerance when range-checking the doubles produced by the cubic calculation.
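A minimal sketch of the kind of tolerance I have in mind, in standalone plain C++ (the Epsilon value is my guess, not an engine constant): accept roots slightly outside [0, 1] and clamp them back into range instead of discarding them.

```cpp
#include <algorithm>
#include <limits>

// Tolerance-aware version of the root selection: a root within Epsilon of
// the [0, 1] range is clamped and accepted rather than discarded, so
// platform fast-math jitter like 1.000000000004 still resolves to 1.0.
static double SelectRootWithTolerance(const double* Results, int NumResults)
{
    constexpr double Epsilon = 1e-9; // illustrative tolerance
    double NewInterp = std::numeric_limits<double>::lowest();
    for (int i = 0; i < NumResults; ++i)
    {
        const double Result = Results[i];
        if ((Result >= -Epsilon) && (Result <= 1.0 + Epsilon))
        {
            const double Clamped = std::min(std::max(Result, 0.0), 1.0);
            if (NewInterp < 0.0 || Clamped > NewInterp)
            {
                NewInterp = Clamped;
            }
        }
    }
    return (NewInterp == std::numeric_limits<double>::lowest()) ? 0.0 : NewInterp;
}
```

With this change the platform root that lands a few trillionths above 1.0 is clamped to 1.0 and selected, so Eval would return the final key's value on both platforms.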
Thanks for reading!